Dataset schema:
repo_id: string (length 15–89)
file_path: string (length 27–180)
content: string (length 1–2.23M)
__index_level_0__: int64 (min 0, max 0)
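Each record below is one row of this schema: a repo_id, a file_path, the raw file content, and the index value. As a rough, illustrative sketch of how such a dump could be loaded and iterated, assuming the `datasets` library and a placeholder file name `data.parquet` (neither appears in the dump itself):

```python
# Hypothetical sketch: iterate rows of a dump with the schema above.
# "data.parquet" is a placeholder file name, not taken from this document.
from datasets import load_dataset

ds = load_dataset("parquet", data_files="data.parquet", split="train")

for row in ds.select(range(3)):        # peek at the first few records
    print(row["repo_id"])              # e.g. "hf_public_repos/transformers/tests"
    print(row["file_path"])            # path of the file inside the repo
    print(row["content"][:80])         # raw file contents (can be very large)
```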
hf_public_repos/transformers/tests
hf_public_repos/transformers/tests/fixtures/input.txt
Who was Jim Henson ? ||| Jim Henson was a puppeteer
0
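The input.txt fixture above encodes a question and an answer separated by " ||| ". A minimal parsing sketch, assuming only what the sample line shows:

```python
# Split the "question ||| answer" line format of tests/fixtures/input.txt.
line = "Who was Jim Henson ? ||| Jim Henson was a puppeteer"
question, answer = (part.strip() for part in line.split("|||"))
assert question == "Who was Jim Henson ?"
assert answer == "Jim Henson was a puppeteer"
```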
hf_public_repos/transformers/tests
hf_public_repos/transformers/tests/fixtures/test_entity_vocab.json
{"[MASK]": 0, "[UNK]": 1, "[PAD]": 2, "DUMMY": 3, "DUMMY2": 4, "[MASK2]": 5}
0
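The entity vocabulary above is a plain token-to-ID mapping. A small sketch of how a test might load it and look entities up, with the local file name taken from the file_path above and the fallback-to-[UNK] behaviour an assumption:

```python
# Load the toy entity vocabulary and map entities to integer IDs.
import json

with open("test_entity_vocab.json", encoding="utf-8") as f:
    entity_vocab = json.load(f)            # {"[MASK]": 0, "[UNK]": 1, ...}

unk_id = entity_vocab["[UNK]"]             # 1
print(entity_vocab.get("DUMMY", unk_id))   # 3
print(entity_vocab.get("unseen entity", unk_id))  # falls back to [UNK]: 1
```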
hf_public_repos/transformers/tests
hf_public_repos/transformers/tests/fixtures/sample_text.txt
This text is included to make sure Unicode is handled properly: ๅŠ›ๅŠ ๅ‹ๅŒ—ๅŒบแดตแดบแต€แตƒเฆ›เฆœเฆŸเฆกเฆฃเฆค Text should be one-sentence-per-line, with empty lines between documents. This sample text is public domain and was randomly selected from Project Guttenberg.

The rain had only ceased with the gray streaks of morning at Blazing Star, and the settlement awoke to a moral sense of cleanliness, and the finding of forgotten knives, tin cups, and smaller camp utensils, where the heavy showers had washed away the debris and dust heaps before the cabin doors. Indeed, it was recorded in Blazing Star that a fortunate early riser had once picked up on the highway a solid chunk of gold quartz which the rain had freed from its incumbering soil, and washed into immediate and glittering popularity. Possibly this may have been the reason why early risers in that locality, during the rainy season, adopted a thoughtful habit of body, and seldom lifted their eyes to the rifted or india-ink washed skies above them. "Cass" Beard had risen early that morning, but not with a view to discovery. A leak in his cabin roof,--quite consistent with his careless, improvident habits,--had roused him at 4 A. M., with a flooded "bunk" and wet blankets. The chips from his wood pile refused to kindle a fire to dry his bed-clothes, and he had recourse to a more provident neighbor's to supply the deficiency. This was nearly opposite. Mr. Cassius crossed the highway, and stopped suddenly. Something glittered in the nearest red pool before him. Gold, surely! But, wonderful to relate, not an irregular, shapeless fragment of crude ore, fresh from Nature's crucible, but a bit of jeweler's handicraft in the form of a plain gold ring. Looking at it more attentively, he saw that it bore the inscription, "May to Cass." Like most of his fellow gold-seekers, Cass was superstitious.

The fountain of classic wisdom, Hypatia herself. As the ancient sage--the name is unimportant to a monk--pumped water nightly that he might study by day, so I, the guardian of cloaks and parasols, at the sacred doors of her lecture-room, imbibe celestial knowledge. From my youth I felt in me a soul above the matter-entangled herd. She revealed to me the glorious fact, that I am a spark of Divinity itself. A fallen star, I am, sir!' continued he, pensively, stroking his lean stomach--'a fallen star!--fallen, if the dignity of philosophy will allow of the simile, among the hogs of the lower world--indeed, even into the hog-bucket itself. Well, after all, I will show you the way to the Archbishop's. There is a philosophic pleasure in opening one's treasures to the modest young. Perhaps you will assist me by carrying this basket of fruit?' And the little man jumped up, put his basket on Philammon's head, and trotted off up a neighbouring street. Philammon followed, half contemptuous, half wondering at what this philosophy might be, which could feed the self-conceit of anything so abject as his ragged little apish guide; but the novel roar and whirl of the street, the perpetual stream of busy faces, the line of curricles, palanquins, laden asses, camels, elephants, which met and passed him, and squeezed him up steps and into doorways, as they threaded their way through the great Moon-gate into the ample street beyond, drove everything from his mind but wondering curiosity, and a vague, helpless dread of that great living wilderness, more terrible than any dead wilderness of sand which he had left behind.
Already he longed for the repose, the silence of the Laura--for faces which knew him and smiled upon him; but it was too late to turn back now. His guide held on for more than a mile up the great main street, crossed in the centre of the city, at right angles, by one equally magnificent, at each end of which, miles away, appeared, dim and distant over the heads of the living stream of passengers, the yellow sand-hills of the desert; while at the end of the vista in front of them gleamed the blue harbour, through a network of countless masts. At last they reached the quay at the opposite end of the street; and there burst on Philammon's astonished eyes a vast semicircle of blue sea, ringed with palaces and towers. He stopped involuntarily; and his little guide stopped also, and looked askance at the young monk, to watch the effect which that grand panorama should produce on him.
0
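sample_text.txt describes its own layout: one sentence per line, with empty lines between documents. A sketch of reading such a corpus into per-document sentence lists; the file name comes from the file_path above, and nothing else is taken from the dump:

```python
# Group a one-sentence-per-line file into documents separated by blank lines.
documents, current = [], []
with open("sample_text.txt", encoding="utf-8") as f:
    for raw in f:
        line = raw.strip()
        if line:
            current.append(line)       # one sentence per non-empty line
        elif current:
            documents.append(current)  # a blank line closes the document
            current = []
if current:                            # flush the trailing document
    documents.append(current)
print(len(documents), sum(len(d) for d in documents))
```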
hf_public_repos/transformers/tests
hf_public_repos/transformers/tests/fixtures/dummy-config.json
{ "model_type": "roberta" }
0
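dummy-config.json holds a single key, model_type, which transformers uses to select a configuration class. A hedged sketch: plain json plus an optional AutoConfig call; the surrounding usage is an assumption, not part of the fixture:

```python
# Read the minimal config and, optionally, build a config object from it.
import json

with open("dummy-config.json", encoding="utf-8") as f:
    cfg = json.load(f)
print(cfg["model_type"])  # "roberta"

# With transformers installed, model_type can drive AutoConfig, e.g.:
# from transformers import AutoConfig
# config = AutoConfig.for_model(cfg["model_type"])
```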
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/MRPC/train.csv
label,sentence1,sentence2
equivalent,He said the foodservice pie business doesn 't fit the company 's long-term growth strategy .,""" The foodservice pie business does not fit our long-term growth strategy ."
not_equivalent,Magnarelli said Racicot hated the Iraqi regime and looked forward to using his long years of training in the war .,"His wife said he was "" 100 percent behind George Bush "" and looked forward to using his years of training in the war ."
not_equivalent,"The dollar was at 116.92 yen against the yen , flat on the session , and at 1.2891 against the Swiss franc , also flat .","The dollar was at 116.78 yen JPY = , virtually flat on the session , and at 1.2871 against the Swiss franc CHF = , down 0.1 percent ."
equivalent,The AFL-CIO is waiting until October to decide if it will endorse a candidate .,The AFL-CIO announced Wednesday that it will decide in October whether to endorse a candidate before the primaries .
not_equivalent,No dates have been set for the civil or the criminal trial .,"No dates have been set for the criminal or civil cases , but Shanley has pleaded not guilty ."
equivalent,Wal-Mart said it would check all of its million-plus domestic workers to ensure they were legally employed .,It has also said it would review all of its domestic employees more than 1 million to ensure they have legal status .
0
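The MRPC CSV fixtures use a label,sentence1,sentence2 header. A parsing sketch with the standard library; the file name comes from the file_path above, everything else is illustrative:

```python
# Read MRPC-style CSV rows into (label, sentence1, sentence2) triples.
import csv

with open("train.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        label = row["label"]                      # "equivalent" or "not_equivalent"
        pair = (row["sentence1"], row["sentence2"])
        print(label, pair[0][:40])
```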
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/MRPC/dev.tsv
Quality	#1 ID	#2 ID	#1 String	#2 String
1	1355540	1355592	He said the foodservice pie business doesn 't fit the company 's long-term growth strategy .	" The foodservice pie business does not fit our long-term growth strategy .
0	2029631	2029565	Magnarelli said Racicot hated the Iraqi regime and looked forward to using his long years of training in the war .	His wife said he was " 100 percent behind George Bush " and looked forward to using his years of training in the war .
0	487993	487952	The dollar was at 116.92 yen against the yen , flat on the session , and at 1.2891 against the Swiss franc , also flat .	The dollar was at 116.78 yen JPY = , virtually flat on the session , and at 1.2871 against the Swiss franc CHF = , down 0.1 percent .
1	1989515	1989458	The AFL-CIO is waiting until October to decide if it will endorse a candidate .	The AFL-CIO announced Wednesday that it will decide in October whether to endorse a candidate before the primaries .
0	1783137	1782659	No dates have been set for the civil or the criminal trial .	No dates have been set for the criminal or civil cases , but Shanley has pleaded not guilty .
1	3039165	3039036	Wal-Mart said it would check all of its million-plus domestic workers to ensure they were legally employed .	It has also said it would review all of its domestic employees more than 1 million to ensure they have legal status .
0
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/MRPC/train.tsv
Quality	#1 ID	#2 ID	#1 String	#2 String
1	1355540	1355592	He said the foodservice pie business doesn 't fit the company 's long-term growth strategy .	" The foodservice pie business does not fit our long-term growth strategy .
0	2029631	2029565	Magnarelli said Racicot hated the Iraqi regime and looked forward to using his long years of training in the war .	His wife said he was " 100 percent behind George Bush " and looked forward to using his years of training in the war .
0	487993	487952	The dollar was at 116.92 yen against the yen , flat on the session , and at 1.2891 against the Swiss franc , also flat .	The dollar was at 116.78 yen JPY = , virtually flat on the session , and at 1.2871 against the Swiss franc CHF = , down 0.1 percent .
1	1989515	1989458	The AFL-CIO is waiting until October to decide if it will endorse a candidate .	The AFL-CIO announced Wednesday that it will decide in October whether to endorse a candidate before the primaries .
0	1783137	1782659	No dates have been set for the civil or the criminal trial .	No dates have been set for the criminal or civil cases , but Shanley has pleaded not guilty .
1	3039165	3039036	Wal-Mart said it would check all of its million-plus domestic workers to ensure they were legally employed .	It has also said it would review all of its domestic employees more than 1 million to ensure they have legal status .
0
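The .tsv variants are tab-separated, begin with a byte-order mark before the Quality header, and contain unescaped quote characters, so quoting should be disabled when parsing. A sketch, assuming only what the fixture shows:

```python
# Read MRPC-style TSV rows; utf-8-sig drops the leading BOM.
import csv

with open("train.tsv", newline="", encoding="utf-8-sig") as f:
    reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
    for row in reader:
        is_paraphrase = row["Quality"] == "1"
        print(is_paraphrase, row["#1 String"][:40])
```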
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/MRPC/dev.csv
label,sentence1,sentence2
equivalent,He said the foodservice pie business doesn 't fit the company 's long-term growth strategy .,""" The foodservice pie business does not fit our long-term growth strategy ."
not_equivalent,Magnarelli said Racicot hated the Iraqi regime and looked forward to using his long years of training in the war .,"His wife said he was "" 100 percent behind George Bush "" and looked forward to using his years of training in the war ."
not_equivalent,"The dollar was at 116.92 yen against the yen , flat on the session , and at 1.2891 against the Swiss franc , also flat .","The dollar was at 116.78 yen JPY = , virtually flat on the session , and at 1.2871 against the Swiss franc CHF = , down 0.1 percent ."
equivalent,The AFL-CIO is waiting until October to decide if it will endorse a candidate .,The AFL-CIO announced Wednesday that it will decide in October whether to endorse a candidate before the primaries .
not_equivalent,No dates have been set for the civil or the criminal trial .,"No dates have been set for the criminal or civil cases , but Shanley has pleaded not guilty ."
equivalent,Wal-Mart said it would check all of its million-plus domestic workers to ensure they were legally employed .,It has also said it would review all of its domestic employees more than 1 million to ensure they have legal status .
0
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/wiki_text/wiki_00
<doc id="12" url="https://en.wikipedia.org/wiki?curid=12" title="Anarchism"> Anarchism Anarchism is a political philosophy and movement that rejects all involuntary, coercive forms of hierarchy. It radically calls for the abolition of the state which it holds to be undesirable, unnecessary, and harmful. The history of anarchism stretches back to prehistory, when humans lived in anarchistic societies long before the establishment of formal states, realms or empires. With the rise of organised hierarchical bodies, skepticism toward authority also rose, but it was not until the 19th century that a self-conscious political movement emerged. During the latter half of the 19th and the first decades of the 20th century, the anarchist movement flourished in most parts of the world and had a significant role in worker's struggles for emancipation. Various anarchist schools of thought formed during this period. Anarchists took part in several revolutions, most notably in the Spanish Civil War, where they were crushed along with the alliance to restore the Second Republic by the fascist forces of the Nationalist faction and its foreign allies in Nazi Germany, Fascist Italy, Portuguese Dictatorship and the Catholic Church in 1939, marking the end of the classical era of anarchism. In the last decades of the 20th century and into the 21st century, the anarchist movement has been resurgent once more. Anarchism employs various tactics in order to meet its ideal ends; these can be broadly separated into revolutionary and evolutionary tactics. There is significant overlap between the two, which are merely descriptive. Revolutionary tactics aim to bring down authority and state, and have taken a violent turn in the past. Evolutionary tactics aim to prefigure what an anarchist society would be like. Anarchist thought, criticism, and praxis has played a part in diverse areas of human society. The etymological origin of "anarchism" is from the Ancient Greek "anarkhia", meaning "without a ruler", composed of the prefix "an-" (i.e. "without") and the word "arkhos" (i.e. "leader" or "ruler"). The suffix "-ism" denotes the ideological current that favours anarchy. "Anarchism" appears in English from 1642 as "anarchisme" and "anarchy" from 1539. Various factions within the French Revolution labelled their opponents as "anarchists", although few such accused shared many views with later anarchists. Many revolutionaries of the 19th century such as William Godwin (1756โ€“1836) and Wilhelm Weitling (1808โ€“1871) would contribute to the anarchist doctrines of the next generation, but they did not use "anarchist" or "anarchism" in describing themselves or their beliefs. The first political philosopher to call himself an "anarchist" () was Pierre-Joseph Proudhon (1809โ€“1865), marking the formal birth of anarchism in the mid-19th century. Since the 1890s and beginning in France, "libertarianism" has often been used as a synonym for anarchism and its use as a synonym is still common outside the United States. On the other hand, some use "libertarianism" to refer to individualistic free-market philosophy only, referring to free-market anarchism as "libertarian anarchism". While opposition to the state is central to anarchist thought, defining anarchism is not an easy task as there is a lot of discussion among scholars and anarchists on the matter and various currents perceive anarchism slightly differently. 
Hence, it might be true to say that anarchism is a cluster of political philosophies opposing authority and hierarchical organization (including the state, capitalism, nationalism and all associated institutions) in the conduct of all human relations in favour of a society based on voluntary association, on freedom and on decentralisation, but this definition has the same shortcomings as the definition based on etymology (which is simply a negation of a ruler), or based on anti-statism (anarchism is much more than that) or even the anti-authoritarian (which is an "a posteriori" conclusion). Nonetheless, major elements of the definition of anarchism include the following: the will for a non-coercive society, the rejection of the state apparatus, the belief that human nature allows humans to exist in or progress toward such a non-coercive society, and a suggestion on how to act to pursue the ideal of anarchy. During the prehistoric era of mankind, an established authority did not exist. It was after the creation of towns and cities that institutions of authority were established and anarchistic ideas espoused as a reaction. Most notable precursors to anarchism in the ancient world were in China and Greece. In China, philosophical anarchism (i.e. the discussion on the legitimacy of the state) was delineated by Taoist philosophers Zhuang Zhou and Laozi. Likewise, anarchic attitudes were articulated by tragedians and philosophers in Greece. Aeschylus and Sophocles used the myth of Antigone to illustrate the conflict between rules set by the state and personal autonomy. Socrates questioned Athenian authorities constantly and insisted on the right of individual freedom of conscience. Cynics dismissed human law ("nomos") and associated authorities while trying to live according to nature ("physis"). Stoics were supportive of a society based on unofficial and friendly relations among its citizens without the presence of a state. During the Middle Ages, there was no anarchistic activity except some ascetic religious movements in the Muslim world or in Christian Europe. This kind of tradition later gave birth to religious anarchism. In the Sasanian Empire, Mazdak called for an egalitarian society and the abolition of monarchy, only to be soon executed by Emperor Kavad I. In Basra, religious sects preached against the state. In Europe, various sects developed anti-state and libertarian tendencies. Libertarian ideas further emerged during the Renaissance with the spread of reasoning and humanism through Europe. Novelists fictionalised ideal societies that were based not on coercion but voluntarism. The Enlightenment further pushed towards anarchism with the optimism for social progress. During the French Revolution, partisan groups such as the Enragés and the sans-culottes saw a turning point in the fermentation of anti-state and federalist sentiments. The first anarchist currents developed throughout the 18th century—William Godwin espoused philosophical anarchism in England, morally delegitimizing the state, Max Stirner's thinking paved the way to individualism, and Pierre-Joseph Proudhon's theory of mutualism found fertile soil in France. This era of classical anarchism lasted until the end of the Spanish Civil War of 1936 and is considered the golden age of anarchism. Drawing from mutualism, Mikhail Bakunin founded collectivist anarchism and entered the International Workingmen's Association, a class worker union later known as the First International that formed in 1864 to unite diverse revolutionary currents. The International became a significant political force, with Karl Marx being a leading figure and a member of its General Council.
Bakunin's faction (the Jura Federation) and Proudhon's followers (the mutualists) opposed Marxist state socialism, advocating political abstentionism and small property holdings. After bitter disputes, the Bakuninists were expelled from the International by the Marxists at the 1872 Hague Congress. Bakunin famously predicted that if revolutionaries gained power by Marx's terms, they would end up the new tyrants of workers. After being expelled, anarchists formed the St. Imier International. Under the influence of Peter Kropotkin, a Russian philosopher and scientist, anarcho-communism overlapped with collectivism. Anarcho-communists, who drew inspiration from the 1871 Paris Commune, advocated for free federation and for the distribution of goods according to one's needs. At the turn of the century, anarchism had spread all over the world. In China, small groups of students imported the humanistic pro-science version of anarcho-communism. Tokyo was a hotspot for rebellious youth from countries of the far east, travelling to the Japanese capital to study. In Latin America, Argentina was a stronghold for anarcho-syndicalism, where it became the most prominent left-wing ideology. During this time, a minority of anarchists adopted tactics of revolutionary political violence. This strategy became known as propaganda of the deed. The dismemberment of the French socialist movement into many groups, and the execution and exile of many Communards to penal colonies following the suppression of the Paris Commune, favoured individualist political expression and acts. Even though many anarchists distanced themselves from these terrorist acts, infamy came upon the movement. Illegalism was another strategy which some anarchists adopted during this period. Anarchists enthusiastically participated in the Russian Revolutionโ€”despite concernsโ€”in opposition to the Whites. However, they met harsh suppression after the Bolshevik government was stabilized. Several anarchists from Petrograd and Moscow fled to Ukraine, notably leading to the Kronstadt rebellion and Nestor Makhno's struggle in the Free Territory. With the anarchists being crushed in Russia, two new antithetical currents emerged, namely platformism and synthesis anarchism. The former sought to create a coherent group that would push for revolution while the latter were against anything that would resemble a political party. Seeing the victories of the Bolsheviks in the October Revolution and the resulting Russian Civil War, many workers and activists turned to communist parties, which grew at the expense of anarchism and other socialist movements. In France and the United States, members of major syndicalist movements, the General Confederation of Labour and Industrial Workers of the World, left their organisations and joined the Communist International. In the Spanish Civil War, anarchists and syndicalists (CNT and FAI) once again allied themselves with various currents of leftists. A long tradition of Spanish anarchism led to anarchists playing a pivotal role in the war. In response to the army rebellion, an anarchist-inspired movement of peasants and workers, supported by armed militias, took control of Barcelona and of large areas of rural Spain, where they collectivised the land. The Soviet Union provided some limited assistance at the beginning of the war, but the result was a bitter fight among communists and anarchists at a series of events named May Days as Joseph Stalin tried to seize control of the Republicans. 
At the end of World War II, the anarchist movement was severely weakened. However, the 1960s witnessed a revival of anarchism likely caused by a perceived failure of Marxismโ€“Leninism and tensions built by the Cold War. During this time, anarchism took root in other movements critical towards both the state and capitalism, such as the anti-nuclear, environmental and pacifist movements, the New Left, and the counterculture of the 1960s. Anarchism became associated with punk subculture, as exemplified by bands such as Crass and the Sex Pistols, and the established feminist tendencies of anarcha-feminism returned with vigour during the second wave of feminism. Around the turn of the 21st century, anarchism grew in popularity and influence within anti-war, anti-capitalist, and anti-globalisation movements. Anarchists became known for their involvement in protests against the World Trade Organization, the Group of Eight and the World Economic Forum. During the protests, "ad hoc" leaderless anonymous cadres known as black blocs engaged in rioting, property destruction, and violent confrontations with the police. Other organisational tactics pioneered in this time include security culture, affinity groups, and the use of decentralised technologies such as the internet. A significant event of this period was the confrontations at the WTO conference in Seattle in 1999. Anarchist ideas have been influential in the development of the Zapatistas in Mexico and the Democratic Federation of Northern Syria, more commonly known as Rojava, a "de facto" autonomous region in northern Syria. Anarchist schools of thought have been generally grouped into two main historical traditions, social anarchism and individualist anarchism, owing to their different origins, values and evolution. The individualist current emphasises negative liberty in opposing restraints upon the free individual, while the social current emphasises positive liberty in aiming to achieve the free potential of society through equality and social ownership. In a chronological sense, anarchism can be segmented by the classical currents of the late 19th century, and the post-classical currents (such as anarcha-feminism, green anarchism and post-anarchism) developed thereafter. Beyond the specific factions of anarchist movements which constitute political anarchism lies philosophical anarchism, which holds that the state lacks moral legitimacy, without necessarily accepting the imperative of revolution to eliminate it. A component especially of individualist anarchism, philosophical anarchism may tolerate the existence of a minimal state, but argues that citizens have no moral obligation to obey government when it conflicts with individual autonomy. Anarchism pays significant attention to moral arguments since ethics have a central role in anarchist philosophy. One reaction against sectarianism within the anarchist milieu was anarchism without adjectives, a call for toleration and unity among anarchists first adopted by Fernando Tarrida del Mรกrmol in 1889 in response to the bitter debates of anarchist theory at the time. Despite separation, the various anarchist schools of thought are not seen as distinct entities, but as tendencies that intermingle. Anarchism is usually placed on the far-left of the political spectrum. 
Much of its economics and legal philosophy reflect anti-authoritarian, anti-statist, and libertarian interpretations of the radical left-wing and socialist politics of collectivism, communism, individualism, mutualism, and syndicalism, among other libertarian socialist economic theories. As anarchism does not offer a fixed body of doctrine from a single particular worldview, many anarchist types and traditions exist, and varieties of anarchy diverge widely. Inceptive currents among classical anarchist currents were mutualism and individualism. They were followed by the major currents of social anarchism (collectivist, communist, and syndicalist). They differ on organizational and economic aspects of their ideal society. Mutualism is an 18th-century economic theory that was developed into anarchist theory by Pierre-Joseph Proudhon. Its aims include reciprocity, free association, voluntary contract, federation, and credit and currency reform that would be regulated by a bank of the people. Mutualism has been retrospectively characterised as ideologically situated between individualist and collectivist forms of anarchism. Proudhon first characterised his goal as a "third form of society, the synthesis of communism and property". Collectivist anarchism, also known as anarchist collectivism or anarcho-collectivism, is a revolutionary socialist form of anarchism commonly associated with Mikhail Bakunin. Collectivist anarchists advocate collective ownership of the means of production, theorised to be achieved through violent revolution, and that workers be paid according to time worked, rather than goods being distributed according to need as in communism. Collectivist anarchism arose alongside Marxism, but rejected the dictatorship of the proletariat despite the stated Marxist goal of a collectivist stateless society. Anarcho-communism, also known as anarchist-communism, communist anarchism, and libertarian communism, is a theory of anarchism that advocates a communist society with common ownership of the means of production, direct democracy, and a horizontal network of voluntary associations and workers' councils with production and consumption based on the guiding principle: "From each according to his ability, to each according to his need". Anarcho-communism developed from radical socialist currents after the French Revolution, but it was first formulated as such in the Italian section of the First International. It was later expanded upon in the theoretical work of Peter Kropotkin. Anarcho-syndicalism, also referred to as revolutionary syndicalism, is a branch of anarchism that views labour syndicates as a potential force for revolutionary social change, replacing capitalism and the state with a new society democratically self-managed by workers. The basic principles of anarcho-syndicalism are workers' solidarity, direct action, and workers' self-management. Individualist anarchism refers to several traditions of thought within the anarchist movement that emphasise the individual and their will over any kinds of external determinants. Early influences on individualist forms of anarchism include William Godwin, Max Stirner and Henry David Thoreau. Through many countries, individualist anarchism attracted a small yet diverse following of Bohemian artists and intellectuals as well as young anarchist outlaws in what became known as illegalism and individual reclamation. Anarchist principles undergird contemporary radical social movements of the left. 
Interest in the anarchist movement developed alongside momentum in the anti-globalization movement, whose leading activist networks were anarchist in orientation. As the movement shaped 21st century radicalism, wider embrace of anarchist principles signaled a revival of interest. Contemporary news coverage which emphasizes black bloc demonstrations has reinforced anarchism's historical association with chaos and violence, although its publicity has also led more scholars to engage with the anarchist movement. Anarchism has continued to generate many philosophies and movements—at times eclectic, drawing upon various sources, and syncretic, combining disparate concepts to create new philosophical approaches. The anti-capitalist tradition of classical anarchism has remained prominent within contemporary currents. Various anarchist groups, tendencies, and schools of thought exist today, making it difficult to describe the contemporary anarchist movement. While theorists and activists have established "relatively stable constellations of anarchist principles", there is no consensus on which principles are core. As a result, commentators describe multiple "anarchisms" (rather than a singular "anarchism") in which common principles are shared between schools of anarchism while each group prioritizes those principles differently. For example, gender equality can be a common principle but ranks as a higher priority to anarcha-feminists than to anarchist communists. Anarchists are generally committed against coercive authority in all forms, namely "all centralized and hierarchical forms of government (e.g., monarchy, representative democracy, state socialism, etc.), economic class systems (e.g., capitalism, Bolshevism, feudalism, slavery, etc.), autocratic religions (e.g., fundamentalist Islam, Roman Catholicism, etc.), patriarchy, heterosexism, white supremacy, and imperialism". However, anarchist schools disagree on the methods by which these forms should be opposed. Anarchists' tactics take various forms but in general serve two major goals—first, to oppose the Establishment; and second, to promote anarchist ethics and reflect an anarchist vision of society, illustrating the unity of means and ends. A broad categorization can be made between aims to destroy oppressive states and institutions by revolutionary means, and aims to change society through evolutionary means. Evolutionary tactics reject violence and take a gradual approach to anarchist aims, though there is significant overlap between the two. Anarchist tactics have shifted during the course of the last century. Anarchists during the early 20th century focused more on strikes and militancy, while contemporary anarchists use a broader array of approaches. During the classical era, anarchists had a militant tendency. Not only did they confront state armed forces (as in Spain and Ukraine) but some of them also employed terrorism as propaganda of the deed. Assassination attempts were carried out against heads of state, some of which were successful. Anarchists also took part in revolutions. Anarchist perspectives towards violence have always been perplexing and controversial. On one hand, anarcho-pacifists point out the unity of means and ends. On the other hand, other anarchist groups advocate direct action, a tactic which can include acts of sabotage or even acts of terrorism.
This attitude was quite prominent a century ago; seeing the state as a tyrant, some anarchists believed that they had every right to oppose its oppression by any means possible. Emma Goldman and Errico Malatesta, who were proponents of limited use of violence, argued that violence is merely a reaction to state violence as a necessary evil. Anarchists took an active role in strikes, although they tended to be antipathetic to formal syndicalism, seeing it as reformist. They saw it as a part of the movement which sought to overthrow the state and capitalism. Anarchists also reinforced their propaganda within the arts, some of whom practiced nudism. They also built communities which were based on friendship. They were also involved in the press. In the current era, Italian anarchist Alfredo Bonanno, a proponent of insurrectionary anarchism, has reinstated the debate on violence by rejecting the nonviolence tactic adopted since the late 19th century by Kropotkin and other prominent anarchists afterwards. Both Bonanno and the French group The Invisible Committee advocate for small, informal affiliation groups, where each member is responsible for their own actions but works together to bring down oppression utilizing sabotage and other violent means against the state, capitalism and other enemies. Members of The Invisible Committee were arrested in 2008 on various charges, terrorism included. Overall, today's anarchists are much less violent and militant than their ideological ancestors. They mostly engage in confronting the police during demonstrations and riots, especially in countries like Canada, Mexico or Greece. Militant black bloc protest groups are known for clashing with the police. However, anarchists not only clash with state operators; they also engage in the struggle against fascists and racists, taking anti-fascist action and mobilizing to prevent hate rallies from happening. Anarchists commonly employ direct action. This can take the form of disrupting and protesting against unjust hierarchy, or the form of self-managing their lives through the creation of counter-institutions such as communes and non-hierarchical collectives. Often, decision-making is handled in an anti-authoritarian way, with everyone having equal say in each decision, an approach known as horizontalism. Contemporary-era anarchists have been engaging with various grassroots movements that are not explicitly anarchist but are more or less based on horizontalism, respecting personal autonomy, and participating in mass activism such as strikes and demonstrations. The newly coined term "small-a anarchism", in contrast with the "big-A anarchism" of the classical era, signals their tendency not to base their thoughts and actions on classical-era anarchism or to refer to Kropotkin or Proudhon to justify their opinions. They would rather base their thought and praxis on their own experience, which they will later theorize. The decision-making process of small affinity anarchist groups plays a significant tactical role. Anarchists have employed various methods in order to build a rough consensus among members of their group, without the need of a leader or a leading group. One way is for an individual from the group to play the role of facilitator to help achieve a consensus without taking part in the discussion themselves or promoting a specific point. Minorities usually accept rough consensus, except when they feel the proposal contradicts anarchist goals, values, or ethics.
Anarchists usually form small groups (5–20 individuals) to enhance autonomy and friendships among their members. These kinds of groups more often than not interconnect with each other, forming larger networks. Anarchists still support and participate in strikes, especially wildcat strikes; these are leaderless strikes not organised centrally by a syndicate. Anarchists have gone online to spread their message. As in the past, newspapers and journals are used; however, because of distributional and other difficulties, anarchists have found it easier to create websites, hosting electronic libraries and other portals. Anarchists were also involved in developing various software that is available for free. The way these hacktivists work to develop and distribute resembles the anarchist ideals, especially when it comes to preserving users' privacy from state surveillance. Anarchists organize themselves to squat and reclaim public spaces. During important events such as protests and when spaces are being occupied, they are often called Temporary Autonomous Zones (TAZ), spaces where surrealism, poetry and art are blended to display the anarchist ideal. As seen by anarchists, squatting is a way to regain urban space from the capitalist market, serving pragmatic needs, and is also seen as an exemplary direct action. Acquiring space enables anarchists to experiment with their ideas and build social bonds. These tactics, along with various forms of protest at highly symbolic events, and bearing in mind that not all anarchists share the same attitudes towards them, make up a carnivalesque atmosphere that is part of contemporary anarchist vividity. As anarchism is a philosophy that embodies many diverse attitudes, tendencies, and schools of thought, and disagreement over questions of values, ideology, and tactics is common, its diversity has led to widely different uses of identical terms among different anarchist traditions, which has created a number of definitional concerns in anarchist theory. For instance, the compatibility of capitalism, nationalism and religion with anarchism is widely disputed. Similarly, anarchism enjoys complex relationships with ideologies such as Marxism, communism, collectivism and trade unionism. Anarchists may be motivated by humanism, divine authority, enlightened self-interest, veganism, or any number of alternative ethical doctrines. Phenomena such as civilisation, technology (e.g. within anarcho-primitivism) and the democratic process may be sharply criticised within some anarchist tendencies and simultaneously lauded in others. Gender and sexuality carry with them dynamics of hierarchy; anarchism is obliged to address, analyse and oppose the suppression of one's autonomy because of the dynamics that gender roles traditionally impose. A historical current that arose and flourished between 1890 and 1920 within anarchism was free love; in contemporary anarchism, this current survives as a tendency to support polyamory and queer anarchism. Free love advocates were against marriage, which they saw as a way of men imposing authority over women, largely because marriage law greatly favoured the power of men. The notion of free love, though, was much broader; it included critique of the established order that limited women's sexual freedom and pleasure. Such free love movements contributed to the establishment of communal houses, where large groups of travelers, anarchists, and other activists slept in beds together.
Free love had roots both in Europe and the United States. Some anarchists, however, struggled with the jealousy that arose from free love. Anarchist feminists were advocates of free love, were against marriage, were pro-choice (to use a contemporary term), and had a similar agenda. Anarchist and non-anarchist feminists differed on suffrage, but were nonetheless supportive of one another. During the second half of the 20th century, anarchism intermingled with the second wave of feminism, radicalizing some currents of the feminist movement (and being influenced as well). By the latest decades of the 20th century, anarchists and feminists were advocating for the rights and autonomy of women, gays, queers and other marginalized groups, with some feminist thinkers suggesting a fusion of the two currents. With the third wave of feminism, sexual identity and compulsory heterosexuality became a subject of study for anarchists, which yielded a post-structuralist critique of sexual normality. However, some anarchists distanced themselves from this line of thinking, suggesting that it leaned towards individualism and was, therefore, dropping the cause of social liberation. The interest of anarchists in education stretches back to the first emergence of classical anarchism. Anarchists consider 'proper' education, which sets the foundations of the future autonomy of the individual and the society, to be an act of mutual aid. Anarchist writers such as William Godwin and Max Stirner attacked both state education and private education as another means by which the ruling class replicate their privileges. In 1901, Catalan anarchist and free thinker Francisco Ferrer established the Escuela Moderna in Barcelona as an opposition to the established education system, which was dictated largely by the Catholic Church. Ferrer's approach was secular, rejecting both state and church involvement in the educational process, and gave pupils large amounts of autonomy in planning their work and attendance. Ferrer aimed to educate the working class and explicitly sought to foster class consciousness among students. The school closed after constant harassment by the state and Ferrer was later arrested. His ideas, however, formed the inspiration for a series of modern schools around the world. Christian anarchist Leo Tolstoy also established a similar school, with its founding principle, according to Tolstoy, being that "for education to be effective it had to be free". In a similar vein, A. S. Neill founded what became Summerhill School in 1921, also declaring it to be free from coercion. Anarchist education is based largely on the idea that a child's right to develop freely, without manipulation, ought to be respected, and that rationality will lead children to morally good conclusions. However, there has been little consensus among anarchist figures as to what constitutes manipulation; Ferrer, for example, believed that moral indoctrination was necessary and explicitly taught pupils that equality, liberty, and social justice were not possible under capitalism (along with other critiques of nationalism and government). Late 20th century and contemporary anarchist writers (such as Colin Ward, Herbert Read and Paul Goodman) intensified and expanded the anarchist critique of state education, largely focusing on the need for a system that focuses on children's creativity rather than on their ability to attain a career or participate in consumer society.
Contemporary anarchists, such as Colin Ward, have further argued that state education serves to perpetuate socio-economic inequality. While few anarchist education institutions have survived to the modern day, major tenets of anarchist schools, such as respect for child autonomy and relying on reasoning rather than indoctrination as a teaching method, have spread among mainstream educational institutions. Objection to the state and its institutions is a "sine qua non" of anarchism. Anarchists consider the state as a tool of domination and believe it to be illegitimate regardless of its political tendencies. Instead of people being able to control the aspects of their life, major decisions are taken by a small elite. Authority ultimately rests solely on power, regardless of whether that power is open or transparent, as it still has the ability to coerce people. Another anarchist argument against states is that the people constituting a government, even the most altruistic among officials, will unavoidably seek to gain more power, leading to corruption. Anarchists consider the idea that the state is the collective will of the people to be an unachievable fiction, due to the fact that the ruling class is distinct from the rest of society. The connection between anarchism and art was quite profound during the classical era of anarchism, especially among artistic currents that were developing during that era, such as futurists, surrealists, and others, while in literature anarchism was mostly associated with the New Apocalyptics and the Neo-romanticism movement. In music, anarchism has been associated with music scenes such as Punk. Anarchists such as Leo Tolstoy and Herbert Read argued that the border between the artist and the non-artist, what separates art from a daily act, is a construct produced by the alienation caused by capitalism, and it prevents humans from living a joyful life. Other anarchists advocated for or used art as a means to achieve anarchist ends. In his book Breaking the Spell: A History of Anarchist Filmmakers, Videotape Guerrillas, and Digital Ninjas, Chris Robé claims that "anarchist-inflected practices have increasingly structured movement-based video activism." Three overlapping properties made art useful to anarchists: It could depict a critique of existing society and hierarchies; it could serve as a prefigurative tool to reflect the anarchist ideal society, and also it could turn into a means of direct action, in protests for example. As it appeals to both emotion and reason, art could appeal to the "whole human" and have a powerful effect. Philosophy lecturer Andrew G. Fiala has listed five main arguments against anarchism. Firstly, he notes that anarchism is related to violence and destruction, not only in the pragmatic world (i.e. at protests) but in the world of ethics as well. The second argument is that it is impossible for a society to function without a state or something like a state, acting to protect citizens from criminality. Fiala takes "Leviathan" from Thomas Hobbes and the night-watchman state from philosopher Robert Nozick as examples. Thirdly, anarchism is evaluated as unfeasible or utopian since the state cannot be defeated practically; this line of argument most often calls for political action within the system to reform it. The fourth argument is that anarchism is self-contradictory: while it advocates that no one should rule ("archein", in Greek), if accepted by the many, anarchism will turn into the ruling political theory.
Along the same line of criticism comes the self-contradiction that anarchism calls for collective action while endorsing the autonomy of the individual, hence no collective action can be taken. Lastly, Fiala mentions a critique of philosophical anarchism as ineffective (all talk and thought) while, in the meantime, capitalism and the bourgeois class remain strong. Philosophical anarchism has met criticism from members of academia, following the release of pro-anarchist books such as A. John Simmons' "Moral Principles and Political Obligations" (1979). Law professor William A. Edmundson authored an essay arguing against three major philosophical anarchist principles, which he finds fallacious; Edmundson claims that while the individual does not owe a normal state a duty of obedience, this does not imply that anarchism is the inevitable conclusion, and the state is still morally legitimate. </doc> <doc id="25" url="https://en.wikipedia.org/wiki?curid=25" title="Autism"> Autism Autism is a developmental disorder characterized by difficulties with social interaction and communication, and by restricted and repetitive behavior. Parents often notice signs during the first three years of their child's life. These signs often develop gradually, though some children with autism experience worsening in their communication and social skills after reaching developmental milestones at a normal pace. Autism is associated with a combination of genetic and environmental factors. Risk factors during pregnancy include certain infections, such as rubella, toxins including valproic acid, alcohol, cocaine, pesticides, lead, and air pollution, fetal growth restriction, and autoimmune diseases. Controversies surround other proposed environmental causes; for example, the vaccine hypothesis, which has been disproven. Autism affects information processing in the brain and how nerve cells and their synapses connect and organize; how this occurs is not well understood. The Diagnostic and Statistical Manual of Mental Disorders (DSM-5) combines autism and less severe forms of the condition, including Asperger syndrome and pervasive developmental disorder not otherwise specified (PDD-NOS), into the diagnosis of autism spectrum disorder (ASD). Early behavioral interventions or speech therapy can help children with autism gain self-care, social, and communication skills. Although there is no known cure, there have been cases of children who recovered. Some autistic adults are unable to live independently. An autistic culture has developed, with some individuals seeking a cure and others believing autism should be accepted as a difference to be accommodated instead of cured. Globally, autism is estimated to affect 24.8 million people. In the 2000s, the number of people affected was estimated at 1–2 per 1,000 people worldwide. In developed countries, about 1.5% of children are diagnosed with ASD, up from 0.7% in 2000 in the United States. It occurs four-to-five times more often in males than females. The number of people diagnosed has increased dramatically since the 1960s, which may be partly due to changes in diagnostic practice. The question of whether actual rates have increased is unresolved. Autism is a highly variable neurodevelopmental disorder whose symptoms first appear during infancy or childhood and which generally follows a steady course without remission. People with autism may be severely impaired in some respects but average, or even superior, in others.
Overt symptoms gradually begin after the age of six months, become established by age two or three years and tend to continue through adulthood, although often in more muted form. It is distinguished by a characteristic triad of symptoms: impairments in social interaction, impairments in communication, and repetitive behavior. Other aspects, such as atypical eating, are also common but are not essential for diagnosis. Individual symptoms of autism occur in the general population and appear not to associate highly, without a sharp line separating pathologically severe from common traits. Social deficits distinguish autism and the related autism spectrum disorders (ASD; see Classification) from other developmental disorders. People with autism have social impairments and often lack the intuition about others that many people take for granted. Noted autistic Temple Grandin described her inability to understand the social communication of neurotypicals, or people with typical neural development, as leaving her feeling "like an anthropologist on Mars". Unusual social development becomes apparent early in childhood. Autistic infants show less attention to social stimuli, smile and look at others less often, and respond less to their own name. Autistic toddlers differ more strikingly from social norms; for example, they have less eye contact and turn-taking, and do not have the ability to use simple movements to express themselves, such as pointing at things. Three- to five-year-old children with autism are less likely to exhibit social understanding, approach others spontaneously, imitate and respond to emotions, communicate nonverbally, and take turns with others. However, they do form attachments to their primary caregivers. Most children with autism display moderately less attachment security than neurotypical children, although this difference disappears in children with higher mental development or less pronounced autistic traits. Older children and adults with ASD perform worse on tests of face and emotion recognition, although this may be partly due to a lower ability to define one's own emotions. Children with high-functioning autism have more intense and frequent loneliness compared to non-autistic peers, despite the common belief that children with autism prefer to be alone. Making and maintaining friendships often proves to be difficult for those with autism. For them, the quality of friendships, not the number of friends, predicts how lonely they feel. Functional friendships, such as those resulting in invitations to parties, may affect the quality of life more deeply. There are many anecdotal reports, but few systematic studies, of aggression and violence in individuals with ASD. The limited data suggest that, in children with intellectual disability, autism is associated with aggression, destruction of property, and meltdowns. About a third to a half of individuals with autism do not develop enough natural speech to meet their daily communication needs. Differences in communication may be present from the first year of life, and may include delayed onset of babbling, unusual gestures, diminished responsiveness, and vocal patterns that are not synchronized with the caregiver. In the second and third years, children with autism have less frequent and less diverse babbling, consonants, words, and word combinations; their gestures are less often integrated with words.
Children with autism are less likely to make requests or share experiences, and are more likely to simply repeat others' words (echolalia) or reverse pronouns. Joint attention seems to be necessary for functional speech, and deficits in joint attention seem to distinguish infants with ASD. For example, they may look at a pointing hand instead of the pointed-at object, and they consistently fail to point at objects in order to comment on or share an experience. Children with autism may have difficulty with imaginative play and with developing symbols into language. In a pair of studies, high-functioning children with autism aged 8–15 performed equally well as, and as adults better than, individually matched controls at basic language tasks involving vocabulary and spelling. Both autistic groups performed worse than controls at complex language tasks such as figurative language, comprehension and inference. As people are often sized up initially from their basic language skills, these studies suggest that people speaking to autistic individuals are more likely to overestimate what their audience comprehends. Autistic individuals can display many forms of repetitive or restricted behavior, which the Repetitive Behavior Scale-Revised (RBS-R) categorizes into stereotyped, compulsive, ritualistic, sameness, restricted, and self-injurious behavior. No single repetitive or self-injurious behavior seems to be specific to autism, but autism appears to have an elevated pattern of occurrence and severity of these behaviors. Autistic individuals may have symptoms that are independent of the diagnosis, but that can affect the individual or the family. An estimated 0.5% to 10% of individuals with ASD show unusual abilities, ranging from splinter skills such as the memorization of trivia to the extraordinarily rare talents of prodigious autistic savants. Many individuals with ASD show superior skills in perception and attention, relative to the general population. Sensory abnormalities are found in over 90% of those with autism, and are considered core features by some, although there is no good evidence that sensory symptoms differentiate autism from other developmental disorders. Differences are greater for under-responsivity (for example, walking into things) than for over-responsivity (for example, distress from loud noises) or for sensation seeking (for example, rhythmic movements). An estimated 60–80% of autistic people have motor signs that include poor muscle tone, poor motor planning, and toe walking; deficits in motor coordination are pervasive across ASD and are greater in autism proper. Unusual eating behavior occurs in about three-quarters of children with ASD, to the extent that it was formerly a diagnostic indicator. Selectivity is the most common problem, although eating rituals and food refusal also occur. There is tentative evidence that autism occurs more frequently in people with gender dysphoria. Gastrointestinal problems are one of the most commonly associated medical disorders in people with autism. These are linked to greater social impairment, irritability, behavior and sleep problems, language impairments and mood changes. Parents of children with ASD have higher levels of stress. Siblings of children with ASD report greater admiration of and less conflict with the affected sibling than siblings of unaffected children and were similar to siblings of children with Down syndrome in these aspects of the sibling relationship.
However, they reported lower levels of closeness and intimacy than siblings of children with Down syndrome; siblings of individuals with ASD have greater risk of negative well-being and poorer sibling relationships as adults. It has long been presumed that there is a common cause at the genetic, cognitive, and neural levels for autism's characteristic triad of symptoms. However, there is increasing suspicion that autism is instead a complex disorder whose core aspects have distinct causes that often co-occur. Autism has a strong genetic basis, although the genetics of autism are complex and it is unclear whether ASD is explained more by rare mutations with major effects, or by rare multigene interactions of common genetic variants. Complexity arises due to interactions among multiple genes, the environment, and epigenetic factors which do not change the DNA sequence but are heritable and influence gene expression. Many genes have been associated with autism through sequencing the genomes of affected individuals and their parents. Studies of twins suggest that heritability is 0.7 for autism and as high as 0.9 for ASD, and siblings of those with autism are about 25 times more likely to be autistic than the general population. However, most of the mutations that increase autism risk have not been identified. Typically, autism cannot be traced to a Mendelian (single-gene) mutation or to a single chromosome abnormality, and none of the genetic syndromes associated with ASDs have been shown to selectively cause ASD. Numerous candidate genes have been located, with only small effects attributable to any particular gene. Most loci individually explain less than 1% of cases of autism. The large number of autistic individuals with unaffected family members may result from spontaneous structural variation—such as deletions, duplications or inversions in genetic material during meiosis. Hence, a substantial fraction of autism cases may be traceable to genetic causes that are highly heritable but not inherited: that is, the mutation that causes the autism is not present in the parental genome. Autism may be underdiagnosed in women and girls due to an assumption that it is primarily a male condition, but genetic phenomena such as imprinting and X linkage have the ability to raise the frequency and severity of conditions in males, and theories have been put forward for a genetic reason why males are diagnosed more often, such as the imprinted brain theory and the extreme male brain theory. Maternal nutrition and inflammation during preconception and pregnancy influence fetal neurodevelopment. Intrauterine growth restriction is associated with ASD, in both term and preterm infants. Maternal inflammatory and autoimmune diseases may damage fetal tissues, aggravating a genetic problem or damaging the nervous system. Exposure to air pollution during pregnancy, especially heavy metals and particulates, may increase the risk of autism. Environmental factors that have been claimed without evidence to contribute to or exacerbate autism include certain foods, infectious diseases, solvents, PCBs, phthalates and phenols used in plastic products, pesticides, brominated flame retardants, alcohol, smoking, illicit drugs, vaccines, and prenatal stress. Some, such as the MMR vaccine, have been completely disproven. Parents may first become aware of autistic symptoms in their child around the time of a routine vaccination.
This has led to unsupported theories blaming vaccine "overload", a vaccine preservative, or the MMR vaccine for causing autism. The latter theory was supported by a litigation-funded study that has since been shown to have been "an elaborate fraud". Although these theories lack convincing scientific evidence and are biologically implausible, parental concern about a potential vaccine link with autism has led to lower rates of childhood immunizations, outbreaks of previously controlled childhood diseases in some countries, and the preventable deaths of several children. Autism's symptoms result from maturation-related changes in various systems of the brain. How autism occurs is not well understood. Its mechanism can be divided into two areas: the pathophysiology of brain structures and processes associated with autism, and the neuropsychological linkages between brain structures and behaviors. The behaviors appear to have multiple pathophysiologies. There is evidence that gut–brain axis abnormalities may be involved. A 2015 review proposed that immune dysregulation, gastrointestinal inflammation, malfunction of the autonomic nervous system, gut flora alterations, and food metabolites may cause brain neuroinflammation and dysfunction. A 2016 review concluded that enteric nervous system abnormalities might play a role in neurological disorders such as autism. Neural connections and the immune system are a pathway that may allow diseases originating in the intestine to spread to the brain. Several lines of evidence point to synaptic dysfunction as a cause of autism. Some rare mutations may lead to autism by disrupting some synaptic pathways, such as those involved with cell adhesion. Gene replacement studies in mice suggest that autistic symptoms are closely related to later developmental steps that depend on activity in synapses and on activity-dependent changes. All known teratogens (agents that cause birth defects) related to the risk of autism appear to act during the first eight weeks from conception, and though this does not exclude the possibility that autism can be initiated or affected later, there is strong evidence that autism arises very early in development. Diagnosis is based on behavior, not cause or mechanism. Under the DSM-5, autism is characterized by persistent deficits in social communication and interaction across multiple contexts, as well as restricted, repetitive patterns of behavior, interests, or activities. These deficits are present in early childhood, typically before age three, and lead to clinically significant functional impairment. Sample symptoms include lack of social or emotional reciprocity, stereotyped and repetitive use of language or idiosyncratic language, and persistent preoccupation with unusual objects. The disturbance must not be better accounted for by Rett syndrome, intellectual disability or global developmental delay. ICD-10 uses essentially the same definition. Several diagnostic instruments are available. Two are commonly used in autism research: the Autism Diagnostic Interview-Revised (ADI-R) is a semistructured parent interview, and the Autism Diagnostic Observation Schedule (ADOS) uses observation and interaction with the child. The Childhood Autism Rating Scale (CARS) is used widely in clinical environments to assess severity of autism based on observation of children. The Diagnostic Interview for Social and Communication Disorders (DISCO) may also be used. 
A pediatrician commonly performs a preliminary investigation by taking developmental history and physically examining the child. If warranted, diagnosis and evaluations are conducted with help from ASD specialists, observing and assessing cognitive, communication, family, and other factors using standardized tools, and taking into account any associated medical conditions. A pediatric neuropsychologist is often asked to assess behavior and cognitive skills, both to aid diagnosis and to help recommend educational interventions. A differential diagnosis for ASD at this stage might also consider intellectual disability, hearing impairment, and a specific language impairment such as Landau–Kleffner syndrome. The presence of autism can make it harder to diagnose coexisting psychiatric disorders such as depression. Clinical genetics evaluations are often done once ASD is diagnosed, particularly when other symptoms already suggest a genetic cause. Although genetic technology allows clinical geneticists to link an estimated 40% of cases to genetic causes, consensus guidelines in the US and UK are limited to high-resolution chromosome and fragile X testing. A genotype-first model of diagnosis has been proposed, which would routinely assess the genome's copy number variations. As new genetic tests are developed, several ethical, legal, and social issues will emerge. Commercial availability of tests may precede adequate understanding of how to use test results, given the complexity of autism's genetics. Metabolic and neuroimaging tests are sometimes helpful, but are not routine. ASD can sometimes be diagnosed by age 14 months, although diagnosis becomes increasingly stable over the first three years of life: for example, a one-year-old who meets diagnostic criteria for ASD is less likely than a three-year-old to continue to do so a few years later. In the UK, the National Autism Plan for Children recommends at most 30 weeks from first concern to completed diagnosis and assessment, though few cases are handled that quickly in practice. Although the symptoms of autism and ASD begin early in childhood, they are sometimes missed; years later, adults may seek diagnoses to help them or their friends and family understand themselves, to help their employers make adjustments, or in some locations to claim disability living allowances or other benefits. Girls are often diagnosed later than boys. Underdiagnosis and overdiagnosis are problems in marginal cases, and much of the recent increase in the number of reported ASD cases is likely due to changes in diagnostic practices. The increasing popularity of drug treatment options and the expansion of benefits have given providers incentives to diagnose ASD, resulting in some overdiagnosis of children with uncertain symptoms. Conversely, the cost of screening and diagnosis and the challenge of obtaining payment can inhibit or delay diagnosis. It is particularly hard to diagnose autism among the visually impaired, partly because some of its diagnostic criteria depend on vision, and partly because autistic symptoms overlap with those of common blindness syndromes or blindisms. Autism is one of the five pervasive developmental disorders (PDD), which are characterized by widespread abnormalities of social interactions and communication, and severely restricted interests and highly repetitive behavior. These symptoms do not imply sickness, fragility, or emotional disturbance. 
Of the five PDD forms, Asperger syndrome is closest to autism in signs and likely causes; Rett syndrome and childhood disintegrative disorder share several signs with autism, but may have unrelated causes; PDD not otherwise specified (PDD-NOS; also called "atypical autism") is diagnosed when the criteria are not met for a more specific disorder. Unlike those with autism, people with Asperger syndrome have no substantial delay in language development. The terminology of autism can be bewildering, with autism, Asperger syndrome and PDD-NOS often called the "autism spectrum disorders" (ASD) or sometimes the "autistic disorders", whereas autism itself is often called "autistic disorder", "childhood autism", or "infantile autism". In this article, "autism" refers to the classic autistic disorder; in clinical practice, though, "autism", "ASD", and "PDD" are often used interchangeably. ASD, in turn, is a subset of the broader autism phenotype, which describes individuals who may not have ASD but do have autistic-like traits, such as avoiding eye contact. Autism can also be divided into syndromal and non-syndromal autism; syndromal autism is associated with severe or profound intellectual disability or a congenital syndrome with physical symptoms, such as tuberous sclerosis. Although individuals with Asperger syndrome tend to perform better cognitively than those with autism, the extent of the overlap between Asperger syndrome, high-functioning autism (HFA), and non-syndromal autism is unclear. Some studies have reported diagnoses of autism in children due to a loss of language or social skills, as opposed to a failure to make progress, typically from 15 to 30 months of age. The validity of this distinction remains controversial; it is possible that regressive autism is a specific subtype, or that there is a continuum of behaviors between autism with and without regression. Research into causes has been hampered by the inability to identify biologically meaningful subgroups within the autistic population and by the traditional boundaries between the disciplines of psychiatry, psychology, neurology and pediatrics. Newer technologies such as fMRI and diffusion tensor imaging can help identify biologically relevant phenotypes (observable traits) that can be viewed on brain scans, to help further neurogenetic studies of autism; one example is lowered activity in the fusiform face area of the brain, which is associated with impaired perception of people versus objects. It has been proposed to classify autism using genetics as well as behavior. Autism has long been thought to cover a wide spectrum, ranging from individuals with severe impairments—who may be silent, developmentally disabled, and prone to frequent repetitive behavior such as hand flapping and rocking—to high functioning individuals who may have active but distinctly odd social approaches, narrowly focused interests, and verbose, pedantic communication. Because the behavior spectrum is continuous, boundaries between diagnostic categories are necessarily somewhat arbitrary. Sometimes the syndrome is divided into low-, medium- or high-functioning autism (LFA, MFA, and HFA), based on IQ thresholds. Some people have called for an end to the terms "high-functioning" and "low-functioning" due to lack of nuance and the potential for a person's needs or abilities to be overlooked. About half of parents of children with ASD notice their child's unusual behaviors by age 18 months, and about four-fifths notice by age 24 months. 
According to an article, failure to meet any of several developmental milestones (babbling by 12 months, gesturing such as pointing or waving by 12 months, single words by 16 months, and two-word spontaneous phrases by 24 months), or any loss of language or social skills at any age, "is an absolute indication to proceed with further evaluations. Delay in referral for such testing may delay early diagnosis and treatment and affect the long-term outcome". The United States Preventive Services Task Force in 2016 found it was unclear whether screening was beneficial or harmful among children in whom there are no concerns. The Japanese practice is to screen all children for ASD at 18 and 24 months, using autism-specific formal screening tests. In contrast, in the UK, children whose families or doctors recognize possible signs of autism are screened. It is not known which approach is more effective. Screening tools include the Modified Checklist for Autism in Toddlers (M-CHAT), the Early Screening of Autistic Traits Questionnaire, and the First Year Inventory; initial data on M-CHAT and its predecessor, the Checklist for Autism in Toddlers (CHAT), on children aged 18–30 months suggest that it is best used in a clinical setting and that it has low sensitivity (many false negatives) but good specificity (few false positives). It may be more accurate to precede these tests with a broadband screener that does not distinguish ASD from other developmental disorders. Screening tools designed for one culture's norms for behaviors like eye contact may be inappropriate for a different culture. Although genetic screening for autism is generally still impractical, it can be considered in some cases, such as children with neurological symptoms and dysmorphic features. While infection with rubella during pregnancy causes fewer than 1% of cases of autism, vaccination against rubella can prevent many of those cases. The main goals when treating children with autism are to lessen associated deficits and family distress, and to increase quality of life and functional independence. In general, higher IQs are correlated with greater responsiveness to treatment and improved treatment outcomes. No single treatment is best and treatment is typically tailored to the child's needs. Families and the educational system are the main resources for treatment. Services should be carried out by behavior analysts, special education teachers, speech pathologists, and licensed psychologists. Studies of interventions have methodological problems that prevent definitive conclusions about efficacy. However, the development of evidence-based interventions has advanced in recent years. Although many psychosocial interventions have some positive evidence, suggesting that some form of treatment is preferable to no treatment, the methodological quality of systematic reviews of these studies has generally been poor, their clinical results are mostly tentative, and there is little evidence for the relative effectiveness of treatment options. Intensive, sustained special education programs and behavior therapy early in life can help children acquire self-care, communication, and job skills, and often improve functioning and decrease symptom severity and maladaptive behaviors; claims that intervention by around age three years is crucial are not substantiated. While medications have not been found to help with core symptoms, they may be used for associated symptoms, such as irritability, inattention, or repetitive behavior patterns. Educational interventions often used include applied behavior analysis (ABA), developmental models, structured teaching, speech and language therapy, social skills therapy, and occupational therapy. 
Among these approaches, interventions either treat autistic features comprehensively, or focus treatment on a specific area of deficit. The quality of research for early intensive behavioral intervention (EIBI)—a treatment procedure incorporating over thirty hours per week of the structured type of ABA that is carried out with very young children—is currently low, and more rigorous research designs with larger sample sizes are needed. Two theoretical frameworks outlined for early childhood intervention include structured and naturalistic ABA interventions, and developmental social pragmatic models (DSP). One interventional strategy utilizes a parent training model, which teaches parents how to implement various ABA and DSP techniques, allowing parents to deliver the interventions themselves. Various DSP programs have been developed to explicitly deliver intervention systems through at-home parent implementation. Although parent training models are a recent development, these interventions have demonstrated effectiveness in numerous studies and are evaluated as a probably efficacious mode of treatment. Early, intensive ABA therapy has demonstrated effectiveness in enhancing communication and adaptive functioning in preschool children; it is also well established for improving the intellectual performance of that age group. Similarly, a teacher-implemented intervention that utilizes a more naturalistic form of ABA combined with a developmental social pragmatic approach has been found to be beneficial in improving social-communication skills in young children, although there is less evidence in its treatment of global symptoms. Neuropsychological reports are often poorly communicated to educators, resulting in a gap between what a report recommends and what education is provided. It is not known whether treatment programs for children lead to significant improvements after the children grow up, and the limited research on the effectiveness of adult residential programs shows mixed results. The appropriateness of including children with varying severity of autism spectrum disorders in the general education population is a subject of current debate among educators and researchers. Medications may be used to treat ASD symptoms that interfere with integrating a child into home or school when behavioral treatment fails. They may also be used for associated health problems, such as ADHD or anxiety. More than half of US children diagnosed with ASD are prescribed psychoactive drugs or anticonvulsants, with the most common drug classes being antidepressants, stimulants, and antipsychotics. The atypical antipsychotic drugs risperidone and aripiprazole are FDA-approved for treating associated aggressive and self-injurious behaviors. However, their side effects must be weighed against their potential benefits, and people with autism may respond atypically. Side effects, for example, may include weight gain, tiredness, drooling, and aggression. SSRI antidepressants, such as fluoxetine and fluvoxamine, have been shown to be effective in reducing repetitive and ritualistic behaviors, while the stimulant medication methylphenidate is beneficial for some children with co-morbid inattentiveness or hyperactivity. There is scant reliable research about the effectiveness or safety of drug treatments for adolescents and adults with ASD. No known medication relieves autism's core symptoms of social and communication impairments. 
Experiments in mice have reversed or reduced some symptoms related to autism by replacing or modulating gene function, suggesting the possibility of targeting therapies to specific rare mutations known to cause autism. Although many alternative therapies and interventions are available, few are supported by scientific studies. Treatment approaches have little empirical support in quality-of-life contexts, and many programs focus on success measures that lack predictive validity and real-world relevance. Some alternative treatments may place the child at risk. The preference that children with autism have for unconventional foods can lead to reduced bone cortical thickness, an effect that is greater in those on casein-free diets and stems from low intake of calcium and vitamin D; however, suboptimal bone development in ASD has also been associated with lack of exercise and gastrointestinal disorders. In 2005, botched chelation therapy killed a five-year-old child with autism. Chelation is not recommended for people with ASD since the associated risks outweigh any potential benefits. Another alternative medicine practice with no evidence is CEASE therapy, a mixture of homeopathy, supplements, and 'vaccine detoxing'. Although popularly used as an alternative treatment for people with autism, as of 2018 there is no good evidence to recommend a gluten- and casein-free diet as a standard treatment. A 2018 review concluded that it may be a therapeutic option for specific groups of children with autism, such as those with known food intolerances or allergies, or with food intolerance markers. The authors analyzed the prospective trials conducted to date that studied the efficacy of the gluten- and casein-free diet in children with ASD (4 in total). All of them compared a gluten- and casein-free diet against a normal diet with a control group (2 double-blind randomized controlled trials, 1 double-blind crossover trial, 1 single-blind trial). In two of the studies, whose duration was 12 and 24 months, a significant improvement in ASD symptoms (efficacy rate 50%) was identified. In the other two studies, whose duration was 3 months, no significant effect was observed. The authors concluded that a longer duration of the diet may be necessary to achieve improvement in ASD symptoms. Other problems documented in the trials include lapses in the diet, small sample sizes, the heterogeneity of the participants, and the possibility of a placebo effect. In the subset of people who have gluten sensitivity, there is limited evidence suggesting that a gluten-free diet may improve some autistic behaviors. There is tentative evidence that music therapy may improve social interactions, verbal communication, and non-verbal communication skills. There has been early research looking at hyperbaric treatments in children with autism. Studies on pet therapy have shown positive effects. There is no known cure. The degree of symptoms can decrease, occasionally to the extent that people lose their diagnosis of ASD; this occurs sometimes after intensive treatment and sometimes not. It is not known how often recovery happens; reported rates in unselected samples have ranged from 3% to 25%. Most children with autism acquire language by age five or younger, though a few have developed communication skills in later years. Many children with autism lack social support, future employment opportunities or self-determination. 
Although core difficulties tend to persist, symptoms often become less severe with age. Few high-quality studies address long-term prognosis. Some adults show modest improvement in communication skills, but a few decline; no study has focused on autism after midlife. Acquiring language before age six, having an IQ above 50, and having a marketable skill all predict better outcomes; independent living is unlikely with severe autism. Many individuals with autism face significant obstacles in transitioning to adulthood. Compared to the general population, individuals with autism are more likely to be unemployed and to have never had a job. About half of people in their 20s with autism are not employed. Most recent reviews tend to estimate a prevalence of 1–2 per 1,000 for autism and close to 6 per 1,000 for ASD as of 2007. A 2016 survey in the United States reported a rate of 25 per 1,000 children for ASD. Globally, autism affects an estimated 24.8 million people, while Asperger syndrome affects a further 37.2 million. In 2012, the NHS estimated that the overall prevalence of autism among adults aged 18 years and over in the UK was 1.1%. Rates of PDD-NOS have been estimated at 3.7 per 1,000, Asperger syndrome at roughly 0.6 per 1,000, and childhood disintegrative disorder at 0.02 per 1,000. The CDC estimated a rate of about 1 in 59 children (1.7%) for 2014, an increase from 1 in 68 (1.5%) for 2010. The number of reported cases of autism increased dramatically in the 1990s and early 2000s. This increase is largely attributable to changes in diagnostic practices, referral patterns, availability of services, age at diagnosis, and public awareness, though unidentified environmental risk factors cannot be ruled out. The available evidence does not rule out the possibility that autism's true prevalence has increased; a real increase would suggest directing more attention and funding toward changing environmental factors instead of continuing to focus on genetics. Boys are at higher risk for ASD than girls. The sex ratio averages 4.3:1 and is greatly modified by cognitive impairment: it may be close to 2:1 with intellectual disability and more than 5.5:1 without. Several theories about the higher prevalence in males have been investigated, but the cause of the difference is unconfirmed; one theory is that females are underdiagnosed. Although the evidence does not implicate any single pregnancy-related risk factor as a cause of autism, the risk of autism is associated with advanced age in either parent, and with diabetes, bleeding, and use of psychiatric drugs in the mother during pregnancy. The risk is greater with older fathers than with older mothers; two potential explanations are the known increase in mutation burden in older sperm, and the hypothesis that men marry later if they carry genetic liability and show some signs of autism. Most professionals believe that race, ethnicity, and socioeconomic background do not affect the occurrence of autism. Several other conditions are common in children with autism, including genetic disorders, intellectual disability, anxiety disorders, epilepsy, metabolic defects such as phenylketonuria, minor physical anomalies, and sleep problems. A few examples of autistic symptoms and treatments were described long before autism was named. The "Table Talk" of Martin Luther, compiled by his notetaker, Mathesius, contains the story of a 12-year-old boy who may have been severely autistic. Luther reportedly thought the boy was a soulless mass of flesh possessed by the devil, and suggested that he be suffocated, although a later critic has cast doubt on the veracity of this report. 
The earliest well-documented case of autism is that of Hugh Blair of Borgue, as detailed in a 1747 court case in which his brother successfully petitioned to annul Blair's marriage to gain Blair's inheritance. The Wild Boy of Aveyron, a feral child caught in 1798, showed several signs of autism; the medical student Jean Itard treated him with a behavioral program designed to help him form social attachments and to induce speech via imitation. The New Latin word "autismus" (English translation "autism") was coined by the Swiss psychiatrist Eugen Bleuler in 1910 as he was defining symptoms of schizophrenia. He derived it from the Greek word "autós" (αὐτός, meaning "self"), and used it to mean morbid self-admiration, referring to "autistic withdrawal of the patient to his fantasies, against which any influence from outside becomes an intolerable disturbance". The Soviet child psychiatrist Grunya Sukhareva described a similar syndrome in a paper published in Russian in 1925 and in German in 1926. The word "autism" first took its modern sense in 1938 when Hans Asperger of the Vienna University Hospital adopted Bleuler's terminology "autistic psychopaths" in a lecture in German about child psychology. Asperger was investigating an ASD now known as Asperger syndrome, though for various reasons it was not widely recognized as a separate diagnosis until 1981. Leo Kanner of the Johns Hopkins Hospital first used "autism" in its modern sense in English when he introduced the label "early infantile autism" in a 1943 report of 11 children with striking behavioral similarities. Almost all the characteristics described in Kanner's first paper on the subject, notably "autistic aloneness" and "insistence on sameness", are still regarded as typical of the autistic spectrum of disorders. It is not known whether Kanner derived the term independently of Asperger. Donald Triplett was the first person diagnosed with autism. He was diagnosed by Kanner after being first examined in 1938, and was labeled as "case 1". Triplett was noted for his savant abilities, particularly being able to name musical notes played on a piano and to mentally multiply numbers. His father, Oliver, described him as socially withdrawn but interested in number patterns, music notes, letters of the alphabet, and pictures of U.S. presidents. By the age of 2, he could recite the 23rd Psalm and had memorized 25 questions and answers from the Presbyterian catechism. He was also interested in creating musical chords. Kanner's reuse of "autism" led to decades of confused terminology like "infantile schizophrenia", and child psychiatry's focus on maternal deprivation led to misconceptions of autism as an infant's response to "refrigerator mothers". Starting in the late 1960s autism was established as a separate syndrome. As late as the mid-1970s there was little evidence of a genetic role in autism, but by 2007 it was believed to be one of the most heritable psychiatric conditions. Although the rise of parent organizations and the destigmatization of childhood ASD have affected how ASD is viewed, parents continue to feel social stigma in situations where their child's autistic behavior is perceived negatively, and many primary care physicians and medical specialists express some beliefs consistent with outdated autism research. It took until 1980 for the DSM-III to differentiate autism from childhood schizophrenia. In 1987, the DSM-III-R provided a checklist for diagnosing autism. 
In May 2013, the DSM-5 was released, updating the classification for pervasive developmental disorders. The grouping of disorders, including PDD-NOS, autism, Asperger syndrome, Rett syndrome, and CDD, has been removed and replaced with the general term autism spectrum disorder. The two categories that exist are impaired social communication and/or interaction, and restricted and/or repetitive behaviors. The Internet has helped autistic individuals bypass nonverbal cues and emotional sharing that they find difficult to deal with, and has given them a way to form online communities and work remotely. Societal and cultural aspects of autism have developed: some in the community seek a cure, while others believe that autism is simply another way of being. An autistic culture has emerged, accompanied by the autistic rights and neurodiversity movements. Events include World Autism Awareness Day, Autism Sunday, Autistic Pride Day, Autreat, and others. Organizations dedicated to promoting awareness of autism include Autistic Self Advocacy Network, Aspies For Freedom, Autism National Committee, and Autism Society of America. At the same time, some organizations, including Autism Speaks, have been condemned by disability rights organizations for failing to support autistic people. Social-science scholars study those with autism in hopes of learning more about "autism as a culture, transcultural comparisons... and research on social movements." While most autistic individuals do not have savant skills, many have been successful in their fields. The autism rights movement is a social movement within the context of disability rights that emphasizes the concept of neurodiversity, viewing the autism spectrum as a result of natural variations in the human brain rather than a disorder to be cured. The autism rights movement advocates for greater acceptance of autistic behaviors; therapies that focus on coping skills rather than on imitating the behaviors of those without autism; and the recognition of the autistic community as a minority group. Autism rights or neurodiversity advocates believe that the autism spectrum is genetic and should be accepted as a natural expression of the human genome. This perspective is distinct from two other views: the medical perspective, that autism is caused by a genetic defect and should be addressed by targeting the autism gene(s), and fringe theories that autism is caused by environmental factors such as vaccines. A common criticism of autistic activists is that the majority of them are "high-functioning" or have Asperger syndrome and do not represent the views of "low-functioning" autistic people. About half of autistics are unemployed, and one third of those with graduate degrees may be unemployed. Among autistics who find work, most are employed in sheltered settings working for wages below the national minimum. While employers state hiring concerns about productivity and supervision, experienced employers of autistics give positive reports of above-average memory and detail orientation as well as a high regard for rules and procedure in autistic employees. A majority of the economic burden of autism is caused by decreased earnings in the job market. Some studies also find decreased earnings among parents who care for autistic children. </doc>
0
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/SQUAD/sample.json
{ "version": 2.0, "data": [ { "id": "56ddde6b9a695914005b9628", "question": "In what country is Normandy located?", "context": "The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their name to Normandy, a region in France. They were descended from Norse (\"Norman\" comes from \"Norseman\") raiders and pirates from Denmark, Iceland and Norway who, under their leader Rollo, agreed to swear fealty to King Charles III of West Francia. Through generations of assimilation and mixing with the native Frankish and Roman-Gaulish populations, their descendants would gradually merge with the Carolingian-based cultures of West Francia. The distinct cultural and ethnic identity of the Normans emerged initially in the first half of the 10th century, and it continued to evolve over the succeeding centuries.", "answers": { "answer_start": [ 159, 159, 159, 159 ], "text": [ "France", "France", "France", "France" ] } }, { "id": "56ddde6b9a695914005b9629", "question": "When were the Normans in Normandy?", "context": "The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their name to Normandy, a region in France. They were descended from Norse (\"Norman\" comes from \"Norseman\") raiders and pirates from Denmark, Iceland and Norway who, under their leader Rollo, agreed to swear fealty to King Charles III of West Francia. Through generations of assimilation and mixing with the native Frankish and Roman-Gaulish populations, their descendants would gradually merge with the Carolingian-based cultures of West Francia. The distinct cultural and ethnic identity of the Normans emerged initially in the first half of the 10th century, and it continued to evolve over the succeeding centuries.", "answers": { "answer_start": [ 94, 87, 94, 94 ], "text": [ "10th and 11th centuries", "in the 10th and 11th centuries", "10th and 11th centuries", "10th and 11th centuries" ] } }, { "id": "56ddde6b9a695914005b962a", "question": "From which countries did the Norse originate?", "context": "The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their name to Normandy, a region in France. They were descended from Norse (\"Norman\" comes from \"Norseman\") raiders and pirates from Denmark, Iceland and Norway who, under their leader Rollo, agreed to swear fealty to King Charles III of West Francia. Through generations of assimilation and mixing with the native Frankish and Roman-Gaulish populations, their descendants would gradually merge with the Carolingian-based cultures of West Francia. The distinct cultural and ethnic identity of the Normans emerged initially in the first half of the 10th century, and it continued to evolve over the succeeding centuries.", "answers": { "answer_start": [ 256, 256, 256, 256 ], "text": [ "Denmark, Iceland and Norway", "Denmark, Iceland and Norway", "Denmark, Iceland and Norway", "Denmark, Iceland and Norway" ] } }, { "id": "5ad39d53604f3c001a3fe8d3", "question": "Who did King Charles III swear fealty to?", "context": "The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their name to Normandy, a region in France. They were descended from Norse (\"Norman\" comes from \"Norseman\") raiders and pirates from Denmark, Iceland and Norway who, under their leader Rollo, agreed to swear fealty to King Charles III of West Francia. 
Through generations of assimilation and mixing with the native Frankish and Roman-Gaulish populations, their descendants would gradually merge with the Carolingian-based cultures of West Francia. The distinct cultural and ethnic identity of the Normans emerged initially in the first half of the 10th century, and it continued to evolve over the succeeding centuries.", "answers": { "answer_start": [], "text": [] } }, { "id": "5ad39d53604f3c001a3fe8d4", "question": "When did the Frankish identity emerge?", "context": "The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their name to Normandy, a region in France. They were descended from Norse (\"Norman\" comes from \"Norseman\") raiders and pirates from Denmark, Iceland and Norway who, under their leader Rollo, agreed to swear fealty to King Charles III of West Francia. Through generations of assimilation and mixing with the native Frankish and Roman-Gaulish populations, their descendants would gradually merge with the Carolingian-based cultures of West Francia. The distinct cultural and ethnic identity of the Normans emerged initially in the first half of the 10th century, and it continued to evolve over the succeeding centuries.", "answers": { "answer_start": [], "text": [] } }, { "id": "56dddf4066d3e219004dad5f", "question": "Who was the duke in the battle of Hastings?", "context": "The Norman dynasty had a major political, cultural and military impact on medieval Europe and even the Near East. The Normans were famed for their martial spirit and eventually for their Christian piety, becoming exponents of the Catholic orthodoxy into which they assimilated. They adopted the Gallo-Romance language of the Frankish land they settled, their dialect becoming known as Norman, Normaund or Norman French, an important literary language. The Duchy of Normandy, which they formed by treaty with the French crown, was a great fief of medieval France, and under Richard I of Normandy was forged into a cohesive and formidable principality in feudal tenure. The Normans are noted both for their culture, such as their unique Romanesque architecture and musical traditions, and for their significant military accomplishments and innovations. Norman adventurers founded the Kingdom of Sicily under Roger II after conquering southern Italy on the Saracens and Byzantines, and an expedition on behalf of their duke, William the Conqueror, led to the Norman conquest of England at the Battle of Hastings in 1066. Norman cultural and military influence spread from these new European centres to the Crusader states of the Near East, where their prince Bohemond I founded the Principality of Antioch in the Levant, to Scotland and Wales in Great Britain, to Ireland, and to the coasts of north Africa and the Canary Islands.", "answers": { "answer_start": [ 1022, 1022, 1022 ], "text": [ "William the Conqueror", "William the Conqueror", "William the Conqueror" ] } }, { "id": "5ad3a266604f3c001a3fea2b", "question": "What principality did William the conquerer found?", "context": "The Norman dynasty had a major political, cultural and military impact on medieval Europe and even the Near East. The Normans were famed for their martial spirit and eventually for their Christian piety, becoming exponents of the Catholic orthodoxy into which they assimilated. 
They adopted the Gallo-Romance language of the Frankish land they settled, their dialect becoming known as Norman, Normaund or Norman French, an important literary language. The Duchy of Normandy, which they formed by treaty with the French crown, was a great fief of medieval France, and under Richard I of Normandy was forged into a cohesive and formidable principality in feudal tenure. The Normans are noted both for their culture, such as their unique Romanesque architecture and musical traditions, and for their significant military accomplishments and innovations. Norman adventurers founded the Kingdom of Sicily under Roger II after conquering southern Italy on the Saracens and Byzantines, and an expedition on behalf of their duke, William the Conqueror, led to the Norman conquest of England at the Battle of Hastings in 1066. Norman cultural and military influence spread from these new European centres to the Crusader states of the Near East, where their prince Bohemond I founded the Principality of Antioch in the Levant, to Scotland and Wales in Great Britain, to Ireland, and to the coasts of north Africa and the Canary Islands.", "answers": { "answer_start": [], "text": [] } }, { "id": "56e16182e3433e1400422e28", "question": "What branch of theoretical computer science deals with broadly classifying computational problems by difficulty and class of relationship?", "context": "Computational complexity theory is a branch of the theory of computation in theoretical computer science that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm.", "answers": { "answer_start": [ 0, 0, 0 ], "text": [ "Computational complexity theory", "Computational complexity theory", "Computational complexity theory" ] } }, { "id": "5ad5316b5b96ef001a10ab76", "question": "What is a manual application of mathematical steps?", "context": "Computational complexity theory is a branch of the theory of computation in theoretical computer science that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm.", "answers": { "answer_start": [], "text": [] } }, { "id": "56e16839cd28a01900c67887", "question": "What measure of a computational problem broadly defines the inherent difficulty of the solution?", "context": "A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). 
One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do.", "answers": { "answer_start": [ 46, 49, 46 ], "text": [ "if its solution requires significant resources", "its solution requires significant resources", "if its solution requires significant resources" ] } }, { "id": "56e16839cd28a01900c67888", "question": "What method is used to intuitively assess or quantify the amount of resources required to solve a computational problem?", "context": "A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do.", "answers": { "answer_start": [ 176, 176, 176 ], "text": [ "mathematical models of computation", "mathematical models of computation", "mathematical models of computation" ] } }, { "id": "56e16839cd28a01900c67889", "question": "What are two basic primary resources used to guage complexity?", "context": "A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do.", "answers": { "answer_start": [ 305, 305, 305 ], "text": [ "time and storage", "time and storage", "time and storage" ] } }, { "id": "5ad532575b96ef001a10ab7f", "question": "What unit is measured to determine circuit simplicity?", "context": "A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do.", "answers": { "answer_start": [], "text": [] } }, { "id": "5ad532575b96ef001a10ab80", "question": "What number is used in perpendicular computing?", "context": "A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. 
The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do.", "answers": { "answer_start": [], "text": [] } } ] }
0
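The SQuAD-style fixture above keeps each example flat under "data", with parallel "answer_start" and "text" lists; following SQuAD v2 conventions, unanswerable questions carry empty lists. Below is a minimal sketch of how such a file could be read; the filename "sample.json" and the offset check are illustrative assumptions, not part of the fixture.

```python
import json

# Minimal sketch: walk a SQuAD-v2-style file shaped like the fixture above.
with open("sample.json", encoding="utf-8") as f:
    squad = json.load(f)

for example in squad["data"]:
    starts = example["answers"]["answer_start"]
    texts = example["answers"]["text"]
    if not texts:
        # SQuAD v2 marks unanswerable questions with empty answer lists.
        print(example["id"], "unanswerable")
        continue
    for start, text in zip(starts, texts):
        # Each answer string should occur in the context at its stated offset.
        ok = example["context"][start:start + len(text)] == text
        print(example["id"], repr(text), "at", start, "offset ok:", ok)
```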
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/swag/sample.json
{"ending0": "passes by walking down the street playing their instruments.", "ending1": "has heard approaching them.", "ending2": "arrives and they're outside dancing and asleep.", "ending3": "turns the lead singer watches the performance.", "label": 0, "sent1": "Members of the procession walk down the street holding small horn brass instruments.", "sent2": "A drum line"} {"ending0": "are playing ping pong and celebrating one left each in quick.", "ending1": "wait slowly towards the cadets.", "ending2": "continues to play as well along the crowd along with the band being interviewed.", "ending3": "continue to play marching, interspersed.", "label": 3, "sent1": "A drum line passes by walking down the street playing their instruments.", "sent2": "Members of the procession"} {"ending0": "pay the other coaches to cheer as people this chatter dips in lawn sheets.", "ending1": "walk down the street holding small horn brass instruments.", "ending2": "is seen in the background.", "ending3": "are talking a couple of people playing a game of tug of war.", "label": 1, "sent1": "A group of members in green uniforms walks waving flags.", "sent2": "Members of the procession"} {"ending0": "are playing ping pong and celebrating one left each in quick.", "ending1": "wait slowly towards the cadets.", "ending2": "makes a square call and ends by jumping down into snowy streets where fans begin to take their positions.", "ending3": "play and go back and forth hitting the drums while the audience claps for them.", "label": 3, "sent1": "A drum line passes by walking down the street playing their instruments.", "sent2": "Members of the procession"} {"ending0": "finishes the song and lowers the instrument.", "ending1": "hits the saxophone and demonstrates how to properly use the racquet.", "ending2": "finishes massage the instrument again and continues.", "ending3": "continues dancing while the man gore the music outside while drums.", "label": 0, "sent1": "The person plays a song on the violin.", "sent2": "The man"} {"ending0": "finishes playing then marches their tenderly.", "ending1": "walks in frame and rubs on his hands, and then walks into a room.", "ending2": "continues playing guitar while moving from the camera.", "ending3": "plays a song on the violin.", "label": 3, "sent1": "The person holds up the violin to his chin and gets ready.", "sent2": "The person"} {"ending0": "examines the instrument in his hand.", "ending1": "stops playing the drums and waves over the other boys.", "ending2": "lights the cigarette and sticks his head in.", "ending3": "drags off the vacuum.", "label": 0, "sent1": "A person retrieves an instrument from a closet.", "sent2": "The man"} {"ending0": "studies a picture of the man playing the violin.", "ending1": "holds up the violin to his chin and gets ready.", "ending2": "stops to speak to the camera again.", "ending3": "puts his arm around the man and backs away.", "label": 1, "sent1": "The man examines the instrument in his hand.", "sent2": "The person"} {"ending0": "hands her another phone.", "ending1": "takes the drink, then holds it.", "ending2": "looks off then looks at someone.", "ending3": "stares blearily down at the floor.", "label": 3, "sent1": "Someone walks over to the radio.", "sent2": "Someone"} {"ending0": "looks off then looks at someone.", "ending1": "hands her another phone.", "ending2": "takes the drink, then holds it.", "ending3": "turns on a monitor.", "label": 3, "sent1": "Someone walks over to the radio.", "sent2": "Someone"}
0
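Each SWAG-style record above pairs a context sentence ("sent1") with four candidate continuations built from "sent2" plus one of the four endings, and "label" indexes the correct ending. Here is a sketch of expanding the records into the sequences a multiple-choice model would score; the filename is an assumption for illustration.

```python
import json

# Minimal sketch: expand SWAG-style JSONL records (one JSON object per line)
# into the four (context, candidate) pairs a multiple-choice model scores.
with open("sample.json", encoding="utf-8") as f:
    for raw in f:
        record = json.loads(raw)
        context = record["sent1"]
        # Candidate i is the start of the second sentence plus ending i.
        candidates = [record["sent2"] + " " + record[f"ending{i}"] for i in range(4)]
        gold = candidates[record["label"]]
        print(context, "->", gold)
```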
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/COCO/coco_panoptic_annotations.txt
[{"id": 8222595, "category_id": 17, "iscrowd": 0, "bbox": [18, 54, 301, 415], "area": 53306}, {"id": 8225432, "category_id": 17, "iscrowd": 0, "bbox": [349, 26, 291, 343], "area": 59627}, {"id": 8798150, "category_id": 63, "iscrowd": 0, "bbox": [1, 0, 639, 474], "area": 174579}, {"id": 14466198, "category_id": 75, "iscrowd": 0, "bbox": [42, 74, 133, 45], "area": 4068}, {"id": 12821912, "category_id": 75, "iscrowd": 0, "bbox": [333, 80, 38, 106], "area": 2118}, {"id": 10898909, "category_id": 93, "iscrowd": 0, "bbox": [0, 0, 640, 480], "area": 2750}]
0
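In the panoptic annotations above, "bbox" is [x, y, width, height] in pixels, and "id" doubles as the segment's color in the companion panoptic PNG, encoded as id = R + 256*G + 256**2*B. A sketch of decoding both fields, using two annotations copied from the fixture:

```python
# Minimal sketch: interpret COCO panoptic fields from the fixture above.
annotations = [
    {"id": 8222595, "category_id": 17, "bbox": [18, 54, 301, 415], "area": 53306},
    {"id": 8798150, "category_id": 63, "bbox": [1, 0, 639, 474], "area": 174579},
]

def id_to_rgb(segment_id):
    # Invert the panoptic color encoding id = R + 256*G + 256**2*B.
    r = segment_id % 256
    g = (segment_id // 256) % 256
    b = (segment_id // 256 ** 2) % 256
    return (r, g, b)

for ann in annotations:
    x, y, w, h = ann["bbox"]  # top-left corner plus width and height
    print(ann["category_id"], id_to_rgb(ann["id"]), (x, y, x + w, y + h))
```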
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/COCO/coco_annotations.txt
[{"segmentation": [[333.96, 175.14, 338.26, 134.33, 342.55, 95.67, 348.99, 79.57, 368.32, 80.64, 371.54, 91.38, 364.03, 106.41, 356.51, 145.07, 351.14, 166.55, 350.07, 184.8, 345.77, 185.88, 332.89, 178.36, 332.89, 172.99]], "area": 2120.991099999999, "iscrowd": 0, "image_id": 39769, "bbox": [332.89, 79.57, 38.65, 106.31], "category_id": 75, "id": 1108446}, {"segmentation": [[44.03, 86.01, 112.75, 74.2, 173.96, 77.42, 175.03, 89.23, 170.74, 98.9, 147.11, 102.12, 54.77, 119.3, 53.69, 119.3, 44.03, 113.93, 41.88, 94.6, 41.88, 94.6]], "area": 4052.607, "iscrowd": 0, "image_id": 39769, "bbox": [41.88, 74.2, 133.15, 45.1], "category_id": 75, "id": 1110067}, {"segmentation": [[1.08, 473.53, 633.17, 473.53, 557.66, 376.45, 535.01, 366.74, 489.71, 305.26, 470.29, 318.2, 456.27, 351.64, 413.12, 363.51, 376.45, 358.11, 348.4, 350.56, 363.51, 331.15, 357.03, 288.0, 353.8, 257.8, 344.09, 190.92, 333.3, 177.98, 345.17, 79.82, 284.76, 130.52, 265.35, 151.01, 308.49, 189.84, 317.12, 215.73, 293.39, 243.78, 269.66, 212.49, 235.15, 199.55, 214.65, 193.08, 187.69, 217.89, 159.64, 278.29, 135.91, 313.89, 169.35, 292.31, 203.87, 281.53, 220.04, 292.31, 220.04, 307.42, 175.82, 345.17, 155.33, 360.27, 105.71, 363.51, 85.21, 374.29, 74.43, 366.74, 70.11, 465.98, 42.07, 471.37, 33.44, 457.35, 34.52, 414.2, 29.12, 368.9, 9.71, 291.24, 46.38, 209.26, 99.24, 128.36, 131.6, 107.87, 50.7, 117.57, 40.99, 103.55, 40.99, 85.21, 60.4, 77.66, 141.3, 70.11, 173.66, 72.27, 174.74, 92.76, 204.94, 72.27, 225.44, 62.56, 262.11, 56.09, 292.31, 53.93, 282.61, 81.98, 298.79, 96.0, 310.65, 102.47, 348.4, 74.43, 373.21, 81.98, 430.38, 35.6, 484.31, 23.73, 540.4, 46.38, 593.26, 66.88, 638.56, 80.9, 632.09, 145.62, 581.39, 118.65, 543.64, 130.52, 533.93, 167.19, 512.36, 197.39, 498.34, 218.97, 529.62, 253.48, 549.03, 273.98, 584.63, 276.13, 587.87, 293.39, 566.29, 305.26, 531.78, 298.79, 549.03, 319.28, 576.0, 358.11, 560.9, 376.45, 639.64, 471.37, 639.64, 2.16, 1.08, 0.0]], "area": 176277.55269999994, "iscrowd": 0, "image_id": 39769, "bbox": [1.08, 0.0, 638.56, 473.53], "category_id": 63, "id": 1605237}, {"segmentation": [[1.07, 1.18, 640.0, 3.33, 638.93, 472.59, 4.3, 479.03]], "area": 301552.6694999999, "iscrowd": 0, "image_id": 39769, "bbox": [1.07, 1.18, 638.93, 477.85], "category_id": 65, "id": 1612051}, {"segmentation": [[138.75, 319.38, 148.75, 294.38, 165.0, 246.87, 197.5, 205.63, 247.5, 203.13, 268.75, 216.88, 280.0, 239.38, 293.75, 244.38, 303.75, 241.88, 307.5, 228.13, 318.75, 220.63, 315.0, 200.63, 291.25, 171.88, 265.0, 156.88, 258.75, 148.13, 262.5, 135.63, 282.5, 123.13, 292.5, 115.63, 311.25, 108.13, 313.75, 106.88, 296.25, 93.13, 282.5, 84.38, 292.5, 64.38, 288.75, 60.63, 266.25, 54.38, 232.5, 63.12, 206.25, 70.63, 170.0, 100.63, 136.25, 114.38, 101.25, 138.13, 56.25, 194.38, 27.5, 259.38, 17.5, 299.38, 32.5, 378.13, 31.25, 448.13, 41.25, 469.38, 66.25, 466.88, 70.0, 419.38, 71.25, 391.88, 77.5, 365.63, 113.75, 364.38, 145.0, 360.63, 168.75, 349.38, 191.25, 330.63, 212.5, 319.38, 223.75, 305.63, 206.25, 286.88, 172.5, 288.13]], "area": 53301.618749999994, "iscrowd": 0, "image_id": 39769, "bbox": [17.5, 54.38, 301.25, 415.0], "category_id": 17, "id": 2190839}, {"segmentation": [[543.75, 136.88, 570.0, 114.38, 591.25, 123.13, 616.25, 140.63, 640.0, 143.13, 636.25, 124.37, 605.0, 103.13, 640.0, 103.13, 633.75, 86.88, 587.5, 73.13, 548.75, 49.38, 505.0, 35.63, 462.5, 25.63, 405.0, 48.13, 362.5, 111.88, 347.5, 179.38, 355.0, 220.63, 356.25, 230.63, 365.0, 264.38, 358.75, 266.88, 358.75, 270.63, 356.25, 291.88, 356.25, 
325.63, 355.0, 338.13, 350.0, 348.13, 365.0, 354.38, 396.25, 351.88, 423.75, 355.63, 446.25, 350.63, 460.0, 345.63, 462.5, 321.88, 468.75, 306.88, 481.25, 299.38, 516.25, 341.88, 536.25, 368.13, 570.0, 369.38, 578.75, 359.38, 555.0, 330.63, 532.5, 298.13, 563.75, 299.38, 582.5, 298.13, 586.25, 286.88, 578.75, 278.13, 548.75, 269.38, 525.0, 256.88, 505.0, 206.88, 536.25, 161.88, 540.0, 149.38]], "area": 59700.95625, "iscrowd": 0, "image_id": 39769, "bbox": [347.5, 25.63, 292.5, 343.75], "category_id": 17, "id": 2190842}]
0
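The detection annotations above store "segmentation" as flat [x1, y1, x2, y2, ...] polygon lists next to a precomputed "area". COCO derives "area" from the rasterized mask, so a plain shoelace evaluation of the polygon should land close to, but not exactly on, the stored value; the snippet below checks the first polygon of the first annotation (stored area 2120.99).

```python
# Minimal sketch: approximate a COCO polygon's area with the shoelace formula.
def shoelace_area(poly):
    xs, ys = poly[0::2], poly[1::2]
    n = len(xs)
    twice = sum(xs[i] * ys[(i + 1) % n] - xs[(i + 1) % n] * ys[i] for i in range(n))
    return abs(twice) / 2.0

# First polygon of the first annotation above; its stored area is 2120.99.
poly = [333.96, 175.14, 338.26, 134.33, 342.55, 95.67, 348.99, 79.57,
        368.32, 80.64, 371.54, 91.38, 364.03, 106.41, 356.51, 145.07,
        351.14, 166.55, 350.07, 184.8, 345.77, 185.88, 332.89, 178.36,
        332.89, 172.99]
print(round(shoelace_area(poly), 2))  # ~2121, close to the stored value
```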
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/STS-B/dev.tsv
index genre filename year old_index source1 source2 sentence1 sentence2 score
0 main-captions MSRvid 2012test 0000 none none A man with a hard hat is dancing. A man wearing a hard hat is dancing. 5.000
1 main-captions MSRvid 2012test 0002 none none A young child is riding a horse. A child is riding a horse. 4.750
2 main-captions MSRvid 2012test 0003 none none A man is feeding a mouse to a snake. The man is feeding a mouse to the snake. 5.000
3 main-captions MSRvid 2012test 0007 none none A woman is playing the guitar. A man is playing guitar. 2.400
4 main-captions MSRvid 2012test 0008 none none A woman is playing the flute. A man is playing a flute. 2.750
5 main-captions MSRvid 2012test 0010 none none A woman is cutting an onion. A man is cutting onions. 2.615
6 main-captions MSRvid 2012test 0015 none none A man is erasing a chalk board. The man is erasing the chalk board. 5.000
7 main-captions MSRvid 2012test 0023 none none A woman is carrying a boy. A woman is carrying her baby. 2.333
8 main-captions MSRvid 2012test 0027 none none Three men are playing guitars. Three men are on stage playing guitars. 3.750
0
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/STS-B/train.tsv
index genre filename year old_index source1 source2 sentence1 sentence2 score
0 main-captions MSRvid 2012test 0001 none none A plane is taking off. An air plane is taking off. 5.000
1 main-captions MSRvid 2012test 0004 none none A man is playing a large flute. A man is playing a flute. 3.800
2 main-captions MSRvid 2012test 0005 none none A man is spreading shreded cheese on a pizza. A man is spreading shredded cheese on an uncooked pizza. 3.800
3 main-captions MSRvid 2012test 0006 none none Three men are playing chess. Two men are playing chess. 2.600
4 main-captions MSRvid 2012test 0009 none none A man is playing the cello. A man seated is playing the cello. 4.250
5 main-captions MSRvid 2012test 0011 none none Some men are fighting. Two men are fighting. 4.250
6 main-captions MSRvid 2012test 0012 none none A man is smoking. A man is skating. 0.500
7 main-captions MSRvid 2012test 0013 none none The man is playing the piano. The man is playing the guitar. 1.600
8 main-captions MSRvid 2012test 0014 none none A man is playing on a guitar and singing. A woman is playing an acoustic guitar and singing. 2.200
0
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/GermEval/dev.txt
Gleich O darauf O entwirft O er O seine O Selbstdarstellung O " O Ecce B-OTH homo I-OTH " O in O enger O Auseinandersetzung O mit O diesem O Bild O Jesu B-PER . O
1980 O kam O der O Crown B-OTH als O Versuch O von O Toyota B-ORG , O sich O in O der O Oberen O Mittelklasse O zu O etablieren O , O auch O nach O Deutschland B-LOC . O
– O 4:26 O # O Sometime B-OTH Ago/La I-OTH Fiesta I-OTH – O 23:18 O Alle O Stücke O wurden O von O Corea B-PER komponiert O mit O Ausnahme O der O einleitenden O Improvisation O zu O Sometime B-OTH Ago I-OTH . O
Bis O 2013 O steigen O die O Mittel O aus O dem O EU-Budget B-ORGpart auf O rund O 120 O Millionen O Euro B-OTH . O
Daraus O entwickelte O sich O im O Rokoko B-OTH die O Sitte O des O gemeinsamen O Weinens O im O Theater O , O das O die O Standesgrenzen O innerhalb O des O Publikums O überbrücken O sollte O . O
Die O Spinne O hatte O sie O mit O Seidenfäden O an O ihrem O Schwanz O gefesselt O und O nach O oben O gezogen O . O
In O Deutschland B-LOC ist O nach O StGB O eine O Anwerbung O für O die O Fremdenlegion O strafbar O . O
Am O Donnerstag O wird O sich O zeigen O , O ob O die O Idee O der O DLR-Forscher B-ORGpart funktioniert O . O
Der O sechste O Lauf O der O ADAC B-ORG GT I-ORG Masters I-ORG stand O ganz O klar O im O Mittelpunkt O des O Motorsport-Wochenendes O auf O dem O Eurospeedway B-ORG Lausitz I-ORG . O
Nach O den O schwächeren O Vorgaben O der O Wall B-ORG Street I-ORG vom O Vortag O setzten O die O deutschen B-LOCderiv Standardwerte O ihren O Konsolidierungskurs O fort O . O
Kolb B-PER war O seit O 1986 O im O Turnverein O als O Leiter O tätig O , O darunter O elf O Jahre O als O Hauptleiter O in O der O Männerriege O . O
0
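The GermEval fixtures pair each token with a BIO tag (B-/I- prefix plus LOC, ORG, OTH, PER and their -deriv/-part variants, or O for non-entities). A minimal reader sketch, assuming the on-disk layout is one token-tag pair per line with blank lines between sentences, which is the format the token-classification example scripts expect:

```python
def read_examples(path):
    """Yield sentences as lists of (token, tag) pairs from a token-per-line file."""
    sentence = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # blank line ends the current sentence
                if sentence:
                    yield sentence
                sentence = []
            else:
                parts = line.split()
                sentence.append((parts[0], parts[-1]))
    if sentence:  # file may not end with a blank line
        yield sentence

for tokens in read_examples("dev.txt"):
    print(tokens)
```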
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/GermEval/train.txt
Schartau B-PER sagte O dem O " O Tagesspiegel B-ORG " O vom O Freitag O , O Fischer B-PER sei O " O in O einer O Weise O aufgetreten O , O die O alles O andere O als O überzeugend O war O " O . O
Firmengründer O Wolf B-PER Peter I-PER Bree I-PER arbeitete O Anfang O der O siebziger O Jahre O als O Möbelvertreter O , O als O er O einen O fliegenden O Händler O aus O dem O Libanon B-LOC traf O . O
Ob O sie O dabei O nach O dem O Runden O Tisch O am O 23. O April O in O Berlin B-LOC durch O ein O pädagogisches O Konzept O unterstützt O wird O , O ist O allerdings O zu O bezweifeln O . O
Bayern B-ORG München I-ORG ist O wieder O alleiniger O Top- O Favorit O auf O den O Gewinn O der O deutschen B-LOCderiv Fußball-Meisterschaft O . O
Dabei O hätte O der O tapfere O Schlussmann O allen O Grund O gehabt O , O sich O viel O früher O aufzuregen O . O
ARD-Programmchef B-ORGpart Günter B-PER Struve I-PER war O wegen O eines O vierwöchigen O Urlaubs O für O eine O Stellungnahme O nicht O erreichbar O . O
Alternativ O sollten O sich O die O Restaurantbetreiber O aus O Sicht O der O Solingerin B-LOCderiv zu O längeren O Öffnungszeiten O verpflichten O , O um O wartende O Kunden O aufzunehmen O . O
Die O Deutsche B-ORG Flugsicherung I-ORG ( O DFS B-ORG ) O beschloss O ein O Flugverbot O für O alle O internationalen O Flughäfen O mit O Ausnahme O der O beiden O Berliner B-LOCderiv Flughäfen O bis O 2.00 O Uhr O nachts O . O
New O Small O Family O mit O E-Motor O : O Studie O E-Up O ! O
Eine O Schwachstelle O war O beispielsweise O der O Spiegelkasten O . O
Denn O durch O den O Einsatz O moderner O Fahrzeugtechnik O ( O Dieseltriebwagen O ) O und O schalldämmender O Fenster O entsteht O keine O Einschränkung O der O Wohnqualität O . O
0
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/GermEval/labels.txt
B-LOC
B-LOCderiv
B-LOCpart
B-ORG
B-ORGderiv
B-ORGpart
B-OTH
B-OTHderiv
B-OTHpart
B-PER
B-PERderiv
B-PERpart
I-LOC
I-LOCderiv
I-LOCpart
I-ORG
I-ORGderiv
I-ORGpart
I-OTH
I-OTHderiv
I-OTHpart
I-PER
I-PERderiv
I-PERpart
O
0
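labels.txt enumerates the complete tag inventory for the two GermEval files above. A sketch of the label-to-id mapping a token-classification model would build from it, assuming one label per line on disk:

```python
# Assumes one label per line, as the example NER scripts expect.
with open("labels.txt", encoding="utf-8") as f:
    labels = [line.strip() for line in f if line.strip()]

label2id = {label: i for i, label in enumerate(labels)}
id2label = {i: label for i, label in enumerate(labels)}
print(len(labels), "labels;", "B-PER ->", label2id["B-PER"])
```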
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/wmt_en_ro/val.json
{ "translation": { "en": "Brazil's Former Presidential Chief-of-Staff to Stand Trial A federal judge on Tuesday accepted the charges filed against Brazil's former presidential chief of staff for his alleged involvement in a massive corruption scheme at state-owned oil company Petrobras. The federal prosecutor's office said Jose Dirceu will face trial on the corruption, racketeering and money laundering charges filed earlier this month. Fourteen other people will also be tried, including Joao Vaccari Neto, the former treasurer of Brazil's governing Workers' Party and Renato de Souza Duque, Petrobras' former head of corporate services.", "ro": "Fostul ศ™ef al cabinetului prezidenศ›ial brazilian este adus รฎn faศ›a instanศ›ei Marศ›i, un judecฤƒtor federal a acceptat acuzaศ›iile aduse รฎmpotriva fostului ศ™ef al cabinetului prezidenศ›ial brazilian pentru presupusa implicare a acestuia รฎntr-o schemฤƒ masivฤƒ de corupศ›ie privind compania petrolierฤƒ de stat Petrobras. Biroul procurorului federal a declarat cฤƒ Jose Dirceu va fi trimis รฎn judecatฤƒ pentru acuzaศ›iile de corupศ›ie, รฎnศ™elฤƒtorie ศ™i spฤƒlare de bani aduse รฎn aceastฤƒ lunฤƒ. Alte paisprezece persoane vor fi judecate, printre acestea numฤƒrรขndu-se Joao Vaccari Neto, fostul trezorier al Partidului Muncitorilor, aflat la putere รฎn Brazilia, ศ™i Renato de Souza Duque, fostul preศ™edinte al serviciilor pentru รฎntreprinderi ale Petrobras." } } { "translation": { "en": "Dirceu is the most senior member of the ruling Workers' Party to be taken into custody in connection with the scheme. Dirceu served as former President Luiz Inacio Lula da Silva's chief of staff between 2003 and 2005. He was arrested early August in his home, where he already was under house arrest serving an 11-year sentence for his involvement in a cash-for-votes scheme in Congress more than 10 years ago. Prosecutors have said that Dirceu masterminded the kickback scheme at Petrobras, accepted bribes while in office and continued to receive payments from contractors after he was jailed in late 2013 for the vote-buying scandal.", "ro": "Dirceu este cel mai vechi membru al Partidului Muncitorilor aflat la guvernare luat รฎn custodie pentru legฤƒturile cu aceastฤƒ schemฤƒ. Dirceu a servit ca ศ™ef de cabinet al fostului preศ™edinte Luiz Inacio Lula da Silva รฎntre 2003 ศ™i 2005. A fost arestat la รฎnceputul lui august de acasฤƒ, unde deja se afla sub arest la domiciliu, cu o pedeapsฤƒ de 11 ani pentru implicarea รฎntr-o schemฤƒ de cumpฤƒrare a voturilor รฎn Congres cu peste 10 ani รฎn urmฤƒ. Procurorii au declarat cฤƒ Dirceu a dezvoltat schema de luare de mitฤƒ de la Petrobras, a acceptat mitฤƒ รฎn timp ce se afla รฎn funcศ›ie ศ™i a continuat sฤƒ primeascฤƒ plฤƒศ›i de la antreprenori dupฤƒ ce a fost รฎnchis la sfรขrศ™itul lui 2013 pentru scandalul voturilor cumpฤƒrate." } } { "translation": { "en": "According to prosecutors, the scheme at Petrobras involved roughly $2 billion in bribes and other illegal funds. Some of that money was allegedly funneled back to campaign coffers of the ruling party and its allies. It also allegedly included the payment of bribes to Petrobras executives in return for inflated contracts. 'Miraculous' recovery for Peshawar massacre schoolboy A teenager paralysed after being shot four times in Pakistan's deadliest terror attack has made a \"miraculous\" recovery following treatment in the UK. 
Muhammad Ibrahim Khan, 13, had been told by doctors in Pakistan that he would never walk again.", "ro": "Conform procurorilor, schema de la Petrobras a implicat aproximativ 2 miliarde de dolari sub formฤƒ de mitฤƒ ศ™i alte fonduri ilegale. O parte din acei bani s-ar fi รฎntors รฎn fondul de campanie al partidului aflat la guvernare ศ™i al aliaศ›ilor acestora. De asemenea, ar fi inclus mitฤƒ cฤƒtre directorii Petrobras รฎn schimbul unor contracte umflate. Recuperarea โ€žmiraculoasฤƒโ€ a unui elev supravieศ›uitor al masacrului de la Peshawar Un adolescent paralizat dupฤƒ ce fusese รฎmpuศ™cat de patru ori รฎn cel mai cumplit atac terorist din Pakistan a reuศ™it o recuperare โ€žmiraculoasฤƒโ€ dupฤƒ ce a urmat un tratament รฎn Regatul Unit. Lui Mohamed Ibrahim Khan, รฎn vรขrstฤƒ de 13 ani, doctorii din Pakistan รฎi spuseserฤƒ cฤƒ nu va mai putea sฤƒ meargฤƒ niciodatฤƒ." } } { "translation": { "en": "At least 140 people, mostly children, were killed when gunmen stormed Peshawar's Army Public School last December. Muhammad, who arrived in London last month for surgery, is being discharged from hospital later. Exactly nine months ago, on an ordinary Tuesday morning, Muhammad sat in his first aid class listening to his teachers intently. At the same time seven gunmen disguised in security uniforms were entering the Army Public School. They were strapped with explosives and had one simple mission in mind: Kill every man, woman and child they came across. \"I can't forget what happened that day,\" Muhammad says with a severe stare.", "ro": "Cel puศ›in 140 de persoane, majoritatea copii, au fost ucise cรขnd bฤƒrbaศ›i รฎnarmaศ›i au atacat ศ™coala publicฤƒ a armatei din Peshawar รฎn luna decembrie a anului trecut. Mohamed, care a sosit la Londra luna trecutฤƒ pentru operaศ›ie, va fi externat mai tรขrziu din spital. Exact cu nouฤƒ luni รฎn urmฤƒ, รฎntr-o dimineaศ›ฤƒ obiศ™nuitฤƒ de marศ›i, Mohamed stฤƒtea la ora de primul ajutor ศ™i รฎศ™i asculta atent profesorii. Chiar atunci, ศ™apte bฤƒrbaศ›i รฎnarmaศ›i deghizaศ›i รฎn uniformele agenศ›ilor de pazฤƒ intrau รฎn ศ™coala publicฤƒ a armatei. Purtau centuri cu explozivi ศ™i aveau de รฎndeplinit o misiune simplฤƒ: sฤƒ รฎi ucidฤƒ pe toศ›i bฤƒrbaศ›ii, femeile ศ™i copiii care le ieศ™eau รฎn cale. โ€žNu pot uita ce s-a รฎntรขmplat รฎn acea ziโ€, spune Mohamed cu o privire asprฤƒ." } } { "translation": { "en": "We were sitting in the auditorium, we were asking questions... and then we heard heavy gunfire outside. The terrorists moved inside and they started killing - our teacher was burned alive. Muhammad described pulling four other pupils out of the auditorium as the carnage unfolded. He said he then heard his friend, Hamza calling to him. He said, 'oh brother save me'. I held his hand. That's when I was shot in the back, and he was shot in the head. Most of the people killed in the attack were pupils Hamza died in Muhammad's arms. Muhammad recalled blacking out after that, and the next thing he knew he was in a hospital bed, paralysed from the waist down.", "ro": "Stฤƒteam รฎn amfiteatru, puneam รฎntrebฤƒri... apoi am auzit focuri de armฤƒ afarฤƒ. Teroriศ™tii au intrat รฎnฤƒuntru ศ™i au รฎnceput sฤƒ ucidฤƒ. Profesorul nostru a fost ars de viu. Mohamed descrie cum a scos patru elevi din amfiteatru รฎn timp ce se desfฤƒศ™ura carnagiul. Apoi spune cฤƒ ศ™i-a auzit prietenul, pe Hamza, strigรขndu-l. Spunea โ€žoh, frate, salveazฤƒ-mฤƒโ€. L-am ศ›inut de mรขnฤƒ. Atunci eu am fost รฎmpuศ™cat รฎn spate, iar el รฎn cap. 
Cei mai mulศ›i dintre cei uciศ™i รฎn atac erau elevi Hamza a murit รฎn braศ›ele lui Mohamed. Mohamed รฎศ™i aminteศ™te cฤƒ imediat dupฤƒ asta a leศ™inat ศ™i cฤƒ urmฤƒtorul lucru pe care l-a ศ™tiut a fost cฤƒ se afla pe un pat de spital, paralizat de la brรขu รฎn jos." } } { "translation": { "en": "Doctors in Peshawar in northern Pakistan, and then Rawalpindi, close to the capital, told his family there was no treatment, and he would never walk again. \"Seeing him I felt like my soul had left my body,\" says Muhammad's father, Sher Khan Those nine months were the hardest in my life. But Mr Khan and his wife, Sherbano, refused to believe that their cricket-mad son would never be able to use his legs again. They campaigned, and appealed for help on Pakistani TV, gaining the support of high profile people such as cricketer turned politician Imran Khan.", "ro": "Doctorii din Peshawar din nordul Pakistanului, apoi cei din Rawalpindi, aproape de capitalฤƒ, i-au spus familiei sale cฤƒ nu exista tratament ศ™i cฤƒ nu va mai putea merge niciodatฤƒ. โ€žCรขnd l-am vฤƒzut, am simศ›it cum รฎmi iese sufletulโ€, spune Sher Khan, tatฤƒl lui Mohamed. Acele nouฤƒ luni au fost cele mai grele din viaศ›a mea. รŽnsฤƒ Khan ศ™i soศ›ia lui, Sherbano, au refuzat sฤƒ creadฤƒ cฤƒ fiul lor atรขt de pasionat de crichet nu-ศ™i va mai putea folosi vreodatฤƒ picioarele. Au fฤƒcut o campanie ศ™i au cerut ajutor de la televiziunea pakistanezฤƒ, atrฤƒgรขnd sprijinul unor oameni faimoศ™i precum Imran Khan, jucฤƒtor de crichet devenit politician." } } { "translation": { "en": "Finally, they were able to raise the funds to bring Muhammad to the UK and provide him with treatment at London's private Harley Street Clinic. Consultant neurosurgeon Irfan Malik described Muhammad as \"terrified\" when he first arrived at the hospital. \"He'd spent the last [few] months lying on a bed, unable to move side to side,\" says Mr Malik. He was weak, he had a pressure sore on his back. He wasn't in great shape. A vertebra at the base of Muhammad's spine was destroyed Muhammad was shot in his shoulder, his hip, and his back during the attack, damaging his lower spine - leading to paralysis.", "ro": "รŽntr-un final, au reuศ™it sฤƒ strรขngฤƒ fonduri pentru a-l duce pe Mohamed รฎn Regatul Unit ศ™i a-i oferi tratament la clinica privatฤƒ Harley Street din Londra. Neurochirurgul consultant Irfan Malik l-a descris pe Mohamed drept โ€žรฎnspฤƒimรขntatโ€ cรขnd acesta a ajuns la spital. โ€žรŽศ™i petrecuse ultimele [cรขteva] luni zฤƒcรขnd รฎn pat, fฤƒrฤƒ sฤƒ se poatฤƒ miศ™ca de pe o parte pe alta, spune Malik. Era slฤƒbit, se pusese multฤƒ presiune pe spatele lui. Nu era รฎntr-o formฤƒ prea bunฤƒ. O vertebrฤƒ de la baza coloanei vertebrale a lui Mohamed fusese distrusฤƒ Mohamed fusese รฎmpuศ™cat รฎn umฤƒr, รฎn ศ™old ศ™i รฎn spate รฎn timpul atacului, iar coloana vertebralฤƒ inferioarฤƒ รฎi fusese distrusฤƒ, ducรขnd la paralizie." } } { "translation": { "en": "But during six hours of surgery, Mr Malik and his team were able to reattach nerve endings and reconstruct the damaged part of the spine. Even Mr Malik was surprised at what happened next. Exactly one week after the surgery Muhammad stood up and started taking steps and walking. We were not expecting to get that sort of excellent result. That was miraculous,\" he says. Less than two weeks after his operation, Muhammad is ready to leave hospital and start the long road to recovery. 
Muhammad has defied the odds and started to walk again He says he wants to build his strength and continue his education in the UK. But he says he is determined to return to Pakistan, join the army and help fight terrorism.", "ro": "รŽnsฤƒ, รฎn timpul unei operaศ›ii care a durat ศ™ase ore, Malik ศ™i echipa lui au reuศ™it sฤƒ lege din nou terminaศ›iile nervoase ศ™i sฤƒ reconstruiascฤƒ partea distrusฤƒ a coloanei. Chiar ศ™i Malik a fost surprins de ceea ce s-a รฎntรขmplat รฎn continuare. Exact la o sฤƒptฤƒmรขnฤƒ dupฤƒ operaศ›ie, Mohamed s-a ridicat ศ™i a รฎnceput sฤƒ facฤƒ paศ™i ศ™i sฤƒ meargฤƒ. Nu ne aศ™teptam la un rezultat atรขt de bun. A fost un miracolโ€, spune acesta. รŽn mai puศ›in de douฤƒ sฤƒptฤƒmรขni de la operaศ›ie, Mohamed este gata sฤƒ pฤƒrฤƒseascฤƒ spitalul ศ™i sฤƒ รฎnceapฤƒ procesul lung de recuperare. Mohamed a sfidat soarta ศ™i a รฎnceput sฤƒ meargฤƒ din nou Vrea sฤƒ devinฤƒ puternic ศ™i sฤƒ รฎศ™i continue studiile รฎn Regatul Unit. รŽnsฤƒ este hotฤƒrรขt sฤƒ revinฤƒ รฎn Pakistan, sฤƒ se รฎnroleze รฎn armatฤƒ ศ™i sฤƒ lupte รฎmpotriva terorismului." } } { "translation": { "en": "\"I feel like I have a second chance at life,\" he says as he shows off pictures he's drawn of guns scribbled out next to school books and pens Muhammad grows physically stronger every day but the psychological trauma he continues to endure is unimaginable. \"My anger is not diminishing\" he says. In my school little kids were killed. What was their crime? His mother, wiping a tear from her eye, caressed his head and said: \"I can see my son walking again.\" He'll be able to get on with his normal life. 'Super Voice' 4G service from Three offers better signal Three is making use of a lower frequency 4G spectrum that can travel more widely", "ro": "โ€žSimt cฤƒ am รฎncฤƒ o ศ™ansฤƒ la viaศ›ฤƒโ€ spune el, arฤƒtรขnd imaginile cu arme desenate de el lรขngฤƒ manuale ศ™colare ศ™i stilouri Fizic, Mohamed devine tot mai puternic รฎn fiecare zi, รฎnsฤƒ trauma psihologicฤƒ prin care trece ศ™i acum este de neimaginat. โ€žFuria mea nu a scฤƒzutโ€, mฤƒrturiseศ™te el. รŽn ศ™coala mea au fost uciศ™i copii mici. Ce crimฤƒ au comis ei? Mama lui รฎศ™i ศ™terge o lacrimฤƒ, รฎl mรขngรขie pe creศ™tet ศ™i spune: โ€žรŽmi vฤƒd fiul mergรขnd din nouโ€. Va putea sฤƒ-ศ™i continue firesc viaศ›a. Serviciul 4G โ€žSuper Voiceโ€ de la Three oferฤƒ semnal mai bun Three foloseศ™te un spectru 4G cu o frecvenศ›ฤƒ mai joasฤƒ, care poate acoperi o zonฤƒ mai extinsฤƒ" } } { "translation": { "en": "Mobile phone provider Three has launched a UK service it says will improve reception inside buildings and in rural black spots. Its 4G Super Voice enables customers to make calls and send texts using a lower frequency spectrum. Other networks are looking into introducing the technology, known as Voice Over Long-Term Evolution (VoLTE). It currently works on only the Samsung Galaxy S5, but recent iPhone handsets will be added in the coming months. Three said up to 5.5 million customers would have access to the service by 2017.", "ro": "Furnizorul de telefonie mobilฤƒ Three a lansat รฎn Regatul Unit un serviciu despre care spune cฤƒ va รฎmbunฤƒtฤƒศ›i recepศ›ia รฎn interiorul clฤƒdirilor ศ™i รฎn zonele rurale fฤƒrฤƒ semnal. Serviciul 4G Super Voice le permite clienศ›ilor sฤƒ efectueze apeluri ศ™i sฤƒ trimitฤƒ mesaje text folosind un spectru cu o frecvenศ›ฤƒ mai joasฤƒ. ศ˜i alte reศ›ele intenศ›ioneazฤƒ sฤƒ introducฤƒ aceeaศ™i tehnologie, cunoscutฤƒ ca โ€žVoice Over Long-Term Evolution (VoLTE)โ€. 
Aceasta funcศ›ioneazฤƒ momentan doar cu Samsung Galaxy S5, รฎnsฤƒ telefoanele iPhone recente vor beneficia de ea รฎn lunile urmฤƒtoare. Three menศ›ioneazฤƒ cฤƒ pรขnฤƒ la 5,5 milioane de clienศ›i vor avea acces la serviciu pรขnฤƒ รฎn 2017." } } { "translation": { "en": "Chief technology officer Bryn Jones said: \"By the end of the year, one million of our customers will have access to better indoor coverage and be able to use their phones in more places than ever before.\" Stars prepare for panto season Pantomime season is big business for theatres up and down the UK, with many getting ready for this year's season now. Some of the biggest names in showbusiness now take part in the yuletide theatre. Matthew Kelly and Hayley Mills will be appearing in Cinderella - one as an ugly sister, the other as fairy godmother. They reveal their panto secrets to BBC Breakfast. Steven Wilson: 'If I don't do anything, I feel this creeping guilt'", "ro": "Responsabilul ศ™ef pentru tehnologie, Bryn Jones a declarat: โ€žPรขnฤƒ la sfรขrศ™itul anului, un milion dintre clienศ›ii noศ™tri vor avea acces la o acoperire mai bunฤƒ รฎn interior ศ™i รฎศ™i vor putea folosi telefoanele รฎn mai multe locuri ca pรขnฤƒ acumโ€. Vedetele se pregฤƒtesc pentru stagiunea de pantomimฤƒ Stagiunea de pantomimฤƒ este foarte importantฤƒ pentru teatrele din tot Regatul Unit, multe dintre ele pregฤƒtindu-se acum pentru stagiunea din acest an. Acum, la teatrul de Crฤƒciun participฤƒ unele dintre numele cele mai mari din showbusiness. Matthew Kelly ศ™i Hayley Mills vor apฤƒrea รฎn Cenuศ™ฤƒreasa - primul รฎn rolul uneia dintre surorile rele, iar a doua รฎn rolul zรขnei. Aceศ™tia dezvฤƒluie secretele pantomimei lor la BBC Breakfast. Steven Wilson: โ€žDacฤƒ nu fac nimic, mฤƒ simt vinovatโ€" } } { "translation": { "en": "Steven Wilson was recently the big winner at the Progressive Music Awards Steven Wilson is often dubbed the hardest working musician in the world of progressive rock. The multi-talented musician won three prizes at this month's Progressive Music Awards in London, including album of the year for Hand. The Guardian's five-star review called it \"a smart, soulful and immersive work of art.\" Since the 1980s, Wilson has been the driving force in a number of musical projects, the best known of which is the rock band Porcupine Tree. Now, ahead of two sell-out shows at the Royal Albert Hall, Wilson is releasing a vinyl-only double LP, Transience, to showcase the \"more accessible\" side of his solo output.", "ro": "Steven Wilson a fost desemnat recent drept marele cรขศ™tigฤƒtor al Progressive Music Awards Steven Wilson a fost numit de multe ori drept cel mai muncitor muzician din lumea rockului progresiv. Talentatul muzician a cรขศ™tigat trei premii la Progressive Music Awards, care a avut loc luna aceasta la Londra, printre care ศ™i premiul pentru cel mai bun album al anului pentru Hand. รŽn recenzia sa de cinci stele, The Guardian a numit albumul โ€žo operฤƒ de artฤƒ inteligentฤƒ, expresivฤƒ ศ™i captivantฤƒโ€. รŽncฤƒ din anii 1980, Wilson este motorul mai multor proiecte muzicale, cel mai cunoscut dintre acestea fiind trupa de rock Porcupine Tree. Acum, รฎnainte de douฤƒ spectacole cu casa รฎnchisฤƒ la Royal Albert Hall, Wilson lanseazฤƒ un dublu LP doar รฎn format vinil, Transience, pentru a arฤƒta latura โ€žmai accesibilฤƒโ€ a activitฤƒศ›ii sale solo." } } { "translation": { "en": "He tells the BBC about his love of vinyl, his busy schedule and explains how comic actor Matt Berry came to be his support act. 
What does vinyl mean to you? I grew up at the very tail end of the vinyl era, and at the time, I remember, we couldn't wait for CD to come along because vinyl was so frustrating. You would buy the record, take it home, and it would have a scratch, and you would have to take it back again. I love CDs, and for some kinds of music - classical for example - it is better than vinyl. But the problem with the CD and digital downloads is that there's nothing you can really cherish or treasure. Owning vinyl is like having a beautiful painting hanging in your living room.", "ro": "A povestit pentru BBC despre dragostea lui pentru viniluri ศ™i despre programul sฤƒu รฎncฤƒrcat ศ™i a explicat cum a ajuns actorul de comedie Matt Berry sฤƒ รฎi deschidฤƒ spectacolele. Ce รฎnseamnฤƒ vinil pentru tine? Am crescut chiar รฎn perioada de sfรขrศ™it a erei vinilurilor ศ™i รฎmi amintesc cฤƒ atunci abia aศ™teptam apariศ›ia CD-ului, cฤƒci vinilul era atรขt de enervant. Cumpฤƒrai un disc, mergeai cu el acasฤƒ, avea o zgรขrieturฤƒ ศ™i trebuia sฤƒ รฎl aduci รฎnapoi. Iubesc CD-urile, iar pentru anumite tipuri de muzicฤƒ, de exemplu cea clasicฤƒ, sunt mai bune decรขt vinilurile. รŽnsฤƒ problema cu CD-urile ศ™i cu descฤƒrcฤƒrile digitale este aceea cฤƒ nu mai existฤƒ nimic pe care sฤƒ รฎl preศ›uieศ™ti cu adevฤƒrat. Sฤƒ ai un vinil e ca ศ™i cum ai avea un tablou frumos agฤƒศ›at รฎn sufragerie." } } { "translation": { "en": "It's something you can hold, pore over the lyrics and immerse yourself in the art work. I thought it was just a nostalgic thing, but it can't be if kids too young to remember vinyl are enjoying that kind of experience. Do you have a piece of vinyl that you treasure? The truth is I got rid of 100% of my vinyl in the 90s. All the vinyl I have is re-bought. I started off from the perspective that I wanted to recreate the collection I had when I was 15, but it's gone beyond that. The first record which I persuaded my parents to buy for me was Electric Light Orchestra's Out of the Blue.", "ro": "E ceva ce poศ›i ศ›ine รฎn mรขnฤƒ, รฎn timp ce te laศ™i absorbit de versuri ศ™i copleศ™it de actul artistic. Am crezut cฤƒ e doar o chestie nostalgicฤƒ, รฎnsฤƒ nu are cum sฤƒ fie aศ™a dacฤƒ unor puศ™ti prea tineri sฤƒ-ศ™i aminteascฤƒ de viniluri le place acest gen de experienศ›ฤƒ. Ai vreun vinil la care ศ›ii รฎn mod special? Recunosc cฤƒ am scฤƒpat de toate vinilurile รฎn anii '90. Toate vinilurile pe care le am sunt cumpฤƒrate din nou. Am pornit de la ideea de a reface colecศ›ia pe care o aveam la 15 ani, รฎnsฤƒ am trecut de limita aceea. Primul disc pe care mi-am convins pฤƒrinศ›ii sฤƒ mi-l cumpere a fost Out of the Blue de la Electric Light Orchestra." } } { "translation": { "en": "If I still had my original copy, it would have sentimental value, but, alas, it's in a charity shop somewhere. Steven Wilson hopes the album will be a doorway for potential new fans Why release your new compilation Transience on vinyl? It was originally conceived as an idea for Record Store Day, but we missed the boat on that. My record company had suggested I put together some of my shorter, more accessible songs. I got a bit obsessed by the idea to make something like \"an introduction to Steven Wilson,\" and I was committed to it being a vinyl-only release. Anyone who buys the vinyl does also get a high-resolution download.", "ro": "Dacฤƒ aศ™ mai fi avut รฎncฤƒ exemplarul iniศ›ial, acesta ar fi avut valoare sentimentalฤƒ, รฎnsฤƒ, din pฤƒcate, se aflฤƒ pe undeva printr-un magazin de caritate. 
Steven Wilson sperฤƒ cฤƒ albumul va fi o poartฤƒ cฤƒtre posibili fani noi De ce ศ›i-ai lansat noua compilaศ›ie Transience pe vinil? Aceasta a fost conceputฤƒ iniศ›ial ca idee pentru Ziua magazinelor de discuri, รฎnsฤƒ am ratat ocazia. Casa mea de discuri sugerase sฤƒ adun cรขteva dintre melodiile mele mai scurte ศ™i mai accesibile. Am ajuns sฤƒ fiu uศ™or obsedat de ideea de a face ceva gen โ€žintroducere รฎn muzica lui Steven Wilsonโ€ ศ™i am ศ›inut neapฤƒrat ca proiectul sฤƒ fie lansat doar pe vinil. Cine cumpฤƒrฤƒ vinilul primeศ™te, de asemenea, ศ™i o variantฤƒ descฤƒrcatฤƒ la rezoluศ›ie รฎnaltฤƒ." } } { "translation": { "en": "Do you have a concern that the album won't show your work in a true light?", "ro": "Eศ™ti รฎngrijorat cฤƒ albumul nu va arฤƒta muzica ta รฎn adevฤƒrata ei luminฤƒ?" } }
0
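The wmt_en_ro fixtures appear to be JSON Lines: one object per line with a single "translation" key mapping language codes to parallel sentences. A minimal loading sketch under that assumption; the path val.json matches the file named above:

```python
import json

def read_pairs(path, src="en", tgt="ro"):
    """Yield (source, target) sentence pairs from a JSON-lines translation file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            example = json.loads(line)["translation"]
            yield example[src], example[tgt]

for en, ro in read_pairs("val.json"):
    print(en[:60], "->", ro[:60])
```

The same reader applies unchanged to test.json and train.json below, which share the format.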
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/wmt_en_ro/test.json
{ "translation": { "en": "UN Chief Says There Is No Military Solution in Syria Secretary-General Ban Ki-moon says his response to Russia's stepped up military support for Syria is that \"there is no military solution\" to the nearly five-year conflict and more weapons will only worsen the violence and misery for millions of people. The U.N. chief again urged all parties, including the divided U.N. Security Council, to unite and support inclusive negotiations to find a political solution. Ban told a news conference Wednesday that he plans to meet with foreign ministers of the five permanent council nations - the U.S., Russia, China, Britain and France - on the sidelines of the General Assembly's ministerial session later this month to discuss Syria.", "ro": "ศ˜eful ONU declarฤƒ cฤƒ nu existฤƒ soluศ›ii militare รฎn Siria Secretarul General Ban Ki-moon afirmฤƒ cฤƒ rฤƒspunsul sฤƒu la suportul militar al Rusiei pentru Siria este cฤƒ โ€žnu existฤƒ o soluศ›ie militarฤƒโ€ la conflictul care dureazฤƒ de aproape cinci ani iar mai multe arme nu ar face decรขt sฤƒ agraveze violenศ›a ศ™i suferinศ›a a milioane de oameni. ศ˜eful ONU a solicitat din nou tuturor pฤƒrศ›ilor, inclusiv Consiliului de securitate ONU divizat sฤƒ se unifice ศ™i sฤƒ susศ›inฤƒ negocierile pentru a gฤƒsi o soluศ›ie politicฤƒ. Ban a declarat miercuri รฎn cadrul unei conferinศ›e cฤƒ intenศ›ioneazฤƒ sฤƒ se รฎntรขlneascฤƒ luna aceasta cu miniศ™trii de externe din cinci ศ›ฤƒri permanent prezente รฎn consiliu - SUA, Rusia, China, Anglia ศ™i Franศ›a - pe marginea sesiunii ministeriale a Adunฤƒrii Generale pentru a discuta despre Siria." } } { "translation": { "en": "He expressed regret that divisions in the council and among the Syrian people and regional powers \"made this situation unsolvable.\" Ban urged the five permanent members to show the solidarity and unity they did in achieving an Iran nuclear deal in addressing the Syria crisis. 8 Poll Numbers That Show Donald Trump Is For Real Some have tried to label him a flip-flopper. Others have dismissed him as a joke. And some are holding out for an implosion. But no matter how some Republicans are trying to drag Donald Trump down from atop the polls, it hasn't worked (yet).", "ro": "Ban ศ™i-a exprimat regretul cฤƒ divizฤƒrile รฎn consiliu ศ™i รฎntre poporul sirian ศ™i puterile regionale โ€žau fฤƒcut aceastฤƒ situaศ›ie de nerezolvatโ€. Ban le-a cerut celor cinci membri permanenศ›i sฤƒ dea dovadฤƒ de solidaritatea ศ™i unitatea arฤƒtate atunci cรขnd au reuศ™it sฤƒ รฎncheie un acord referitor la armele nucleare ale Iranului, abordรขnd astfel criza din Siria. 8 cifre din sondaje care aratฤƒ cฤƒ Donald Trump are ศ™anse reale Unii au รฎncercat sฤƒ รฎl eticheteze ca politician โ€žflip-flopโ€. Alศ›ii l-au numit o glumฤƒ. Iar alศ›ii aศ™teaptฤƒ implozia. รŽnsฤƒ indiferent de modul รฎn care unii republicani รฎncearcฤƒ sฤƒ รฎl dฤƒrรขme pe Donald Trump din vรขrful sondajelor, nu a funcศ›ionat (รฎncฤƒ)." } } { "translation": { "en": "Ten of the last 11 national polls have shown Donald Trump's lead at double digits, and some are starting to ask seriously what it means for the real estate mogul's nomination chances. Of course, it's still early in the election cycle. None of this is to say that Trump is likely to win the Republican nomination. Pundits point out that at this time in 2011, Rick Perry's lead was giving way to a rising Herman Cain, neither of whom won even one state in the nomination process. And there are many reasons he would struggle in a general election. 
But outside groups like Jeb Bush's Super PAC and the economic conservative group Club for Growth are recognizing Trump's staying power and beginning to unload their dollars to topple him.", "ro": "Zece din ultimele 11 sondaje naศ›ionale au arฤƒtat cฤƒ Donald Trump conduce cu un procent din douฤƒ cifre iar unele voci รฎncep sฤƒ se รฎntrebe serios ce รฎnseamnฤƒ acest lucru pentru ศ™ansele de numire ale mogulului imobiliar. Desigur, este รฎncฤƒ prematur. Nimic din toate acestea nu spune cฤƒ Trump va cรขศ™tiga cursa pentru nominalizarea republicanilor. Pundits aratฤƒ cฤƒ, รฎn aceeaศ™i perioadฤƒ a anului 2011, avansul lui Rick Perry รฎi fฤƒcea loc lui Herman Cain รฎn sondaje, dar niciunul dintre ei nu a cรขศ™tigat รฎn vreun stat รฎn cursa de nominalizare. Iar motivele pentru care s-ar lupta din greu la alegerile generale sunt numeroase. รŽnsฤƒ grupurile din exterior precum Super PAC al lui Jeb Bush ศ™i grupul conservator economic Club for Growth admit puterea lui Trump ศ™i รฎncep sฤƒ รฎl susศ›inฤƒ cu bani." } } { "translation": { "en": "Here are some recent poll numbers that suggest that the real estate mogul isn't just a passing phase: Trump's favorability ratings have turned 180 degrees. Right before Donald Trump announced his candidacy in mid-June, a Monmouth University poll showed only two in 10 Republicans had a positive view of the real estate mogul. By mid-July, it was 40 percent. In early August, it was 52 percent. Now, six in 10 Republicans have a favorable view of Donald Trump. Roughly three in 10 say they have a negative view. And these numbers hold up in early states. A Quinnipiac poll in Iowa last week found that 60 percent of Republicans there had a favorable view of Trump.", "ro": "รŽn continuare vฤƒ prezentฤƒm cรขteva cifre din sondaje recente care sugereazฤƒ cฤƒ mogulul imobiliar nu este doar ceva trecฤƒtor: Cifrele care indicฤƒ susศ›inerea faศ›ฤƒ de Trump s-au รฎntors la 180 grade. Chiar รฎnainte ca Donald Trump sฤƒ รฎศ™i anunศ›e candidatura, la mijlocul lui iunie, un sondaj realizat de Universitatea din Monmouth arฤƒta cฤƒ doar doi din 10 republicani aveau o pฤƒrere pozitivฤƒ despre mogulul imobiliar. Pรขnฤƒ la mijlocul lui iulie, procentul a urcat la 40%. La รฎnceputul lui august, era 52%. รŽn prezent, ศ™ase din 10 republicani au o pฤƒrere favorabilฤƒ despre Donald Trump. Aproximativ trei din 10 declarฤƒ cฤƒ au o pฤƒrere negativฤƒ. Aceste cifre se menศ›in. Un sondaj realizat sฤƒptฤƒmรขna trecutฤƒ de Quinnipiac รฎn Iowa a concluzionat cฤƒ 60% dintre republicanii din regiune au o pฤƒrere favorabilฤƒ despre Trump." } } { "translation": { "en": "Two-thirds of GOP voters would be happy with Trump as the nominee. In a CNN/ORC poll last week, 67 percent of Republicans said they would be either \"enthusiastic\" or \"satisfied\" if Trump were the nominee. Only two in 10 say they would be \"upset\" if he were the nominee. Only Ben Carson generates roughly the same level of enthusiasm as Trump (43 percent say they would be \"enthusiastic\" vs. 40 percent who say the same of Trump). The next closest in enthusiasm? Marco Rubio with only 21 percent.", "ro": "Douฤƒ treimi dintre alegฤƒtorii GOP ar fi fericiศ›i dacฤƒ Trump ar cรขศ™tiga cursa pentru nominalizare. รŽntr-un sondaj realizat sฤƒptฤƒmรขna trecutฤƒ de CNN/ORC, 67% dintre republicani au declarat cฤƒ ar fi โ€žentuziasmaศ›iโ€ sau โ€žmulศ›umiศ›iโ€ dacฤƒ Trump ar cรขศ™tiga cursa pentru nominalizare. Doar doi din 10 declarฤƒ cฤƒ ar fi โ€žsupฤƒraศ›iโ€ dacฤƒ Trump ar cรขศ™tiga cursa pentru nominalizare. 
Doar Ben Carson genereazฤƒ aproximativ acelaศ™i nivel de entuziasm ca Trump (43% declarฤƒ cฤƒ ar fi โ€žentuziasmaศ›iโ€ faศ›ฤƒ de 40% care declarฤƒ acelaศ™i lucru despre Trump). Cel mai aproape รฎn ceea ce priveศ™te entuziasmul? Marco Rubio, cu doar 21%." } } { "translation": { "en": "On the flip side, 47 percent of Republican voters say they would be \"dissatisfied\" or \"upset\" if establishment favorite Jeb Bush becomes the nominee. A majority of Republicans don't see Trump's temperament as a problem. While Donald Trump has been widely criticized for his bombast and insults, 52 percent of leaned Republican voters nationwide think that the real estate mogul has the right temperament to be president, according to Monday's ABC News/Washington Post poll. The same number holds in the first-in-the-nation caucus state of Iowa, where the same 52 percent of Republicans think he has the personality to be commander in chief, according to Quinnipiac last week.", "ro": "De partea cealaltฤƒ, 47% dintre alegฤƒtorii republicani afirmฤƒ cฤƒ ar fi โ€žnemulศ›umiศ›iโ€ sau โ€žsupฤƒraศ›iโ€ dacฤƒ favoritul Jeb Bush cรขศ™tigฤƒ cursa pentru nominalizare. Majoritatea republicanilor nu considerฤƒ temperamentul lui Trump o problemฤƒ. Deศ™i Donald Trump a fost puternic criticat pentru insultele aduse ศ™i stilul sฤƒu bombastic, 52% dintre alegฤƒtorii republicani la nivel naศ›ional considerฤƒ cฤƒ mogulul imobiliar are temperamentul potrivit pentru a fi preศ™edinte, conform sondajului realizat luni de ABC News/Washington Post. Regฤƒsim aceleaศ™i cifre รฎn statul Iowa, unde tot 52% dintre republicani cred cฤƒ Trump are personalitatea potrivitฤƒ pentru a fi conducฤƒtor, conform sondajului realizat sฤƒptฤƒmรขna trecutฤƒ de Quinnipiac." } } { "translation": { "en": "Still, 44 percent think he doesn't have the personality to serve effectively, and almost six in 10 independents say his temperament does not belong in the White House, according to ABC/Post. Republican voters are getting used to the idea. When they put on their pundit hats, Republican voters think Trump is for real. When asked who is most likely to win the GOP nomination, four in 10 said Trump was the best bet, according to a CNN/ORC poll out last week. That's a change from when four in 10 placed their money on Jeb Bush in late July. Full disclosure: GOP voters haven't had the clearest crystal ball in the past.", "ro": "Totuศ™i, 44% sunt de pฤƒrere cฤƒ nu are personalitatea necesarฤƒ pentru a acศ›iona eficient ศ™i aproape ศ™ase din 10 independenศ›i afirmฤƒ cฤƒ temperamentul sฤƒu nu are ce cฤƒuta la Casa Albฤƒ, conform ABC/Post. Alegฤƒtorii republicani se obiศ™nuiesc cu ideea. Atunci cรขnd iau atitudinea de intelectuali, alegฤƒtorii republicani considerฤƒ cฤƒ Trump este autentic. Conform unui sondaj realizat sฤƒptฤƒmรขna trecutฤƒ de CNN/ORC, la รฎntrebarea cine are cele mai multe ศ™anse sฤƒ cรขศ™tige cursa pentru nominalizare GOP, patru din 10 au declarat cฤƒ Trump. Situaศ›ia s-a schimbat faศ›ฤƒ de finalul lui iulie, cรขnd patru din 10 ar fi pariat pe Jeb Bush. Informare completฤƒ: รฎn trecut, alegฤƒtorii GOP nu au citit foarte bine viitorul." } } { "translation": { "en": "At this time last cycle, four in 10 Republicans picked Rick Perry to win the nomination, vs. only 28 percent for eventual nominee Mitt Romney. Still, it shows that a plurality of GOP voters see Trump's campaign as plausible. Even if Republicans rallied around another candidate, Trump still beats almost everyone. 
Some pundits point out that the splintered field is likely contributing to Trump's lead, while anti-Trump support is be spread diffusely among more than a dozen other candidates. But a Monmouth University poll in early September shows that, in a hypothetical head-to-head matchup between Trump and most other Republican candidates, Trump almost always garners majority support.", "ro": "รŽn aceeaศ™i perioadฤƒ a ultimelor alegeri, patru din 10 republicani l-au ales pe Rick Perry รฎn cursa pentru nominalizare, faศ›ฤƒ de doar 28% pentru Mitt Romney. รŽnsฤƒ, aceste cifre aratฤƒ cฤƒ majoritatea alegฤƒtorilor GOP considerฤƒ plauzibilฤƒ campania lui Trump. Chiar dacฤƒ republicanii sau repliat spre un alt candidat. Trump รฎncฤƒ se aflฤƒ รฎn fruntea tuturor. Unele voci spun cฤƒ situaศ›ia divizatฤƒ va contribui probabil la victoria lui Trump, รฎn timp ce susศ›inerea contra lui Trump se va รฎmpฤƒrศ›i la mai mult de doisprezece candidaศ›i. รŽnsฤƒ un sondaj derulat la รฎnceputul lui septembrie de Universitatea din Monmouth aratฤƒ cฤƒ, รฎn situaศ›ia ipoteticฤƒ a unei colaborฤƒri รฎntre Trump ศ™i majoritatea celorlalศ›i candidaศ›i republicani, aproape รฎntotdeauna Trump va beneficia de susศ›inerea majoritarฤƒ." } } { "translation": { "en": "He leads Carly Fiorina by 13 points, Marco Rubio by 14 points, Walker by 15 points, Jeb Bush by 19 points, and, finally, Rand Paul, John Kasich and Chris Christie by 33 points each. He's in a dead heat with Ted Cruz. The only candidate who beats him? Ben Carson would lead the businessman by a wide 19 points in a hypothetical head-to-head. A bare majority of Donald Trump's supporters say they've made up their minds. A new CBS/NYT poll out on Tuesday shows that just more than half of voters who support Trump say they have locked in their votes. Obviously, a lot can happen to change that, and no one can really say they would never change their mind.", "ro": "Trump se aflฤƒ la distanศ›ฤƒ de 13 puncte de Carly Fiorina, la 14 puncte de Marco Rubio, la 15 puncte de Walker, la 19 puncte de Jeb Bush ศ™i, รฎn cele din urmฤƒ, la cรขte 33 de puncte faศ›ฤƒ de Rand Paul, John Kasich ศ™i Chris Christie. Este aproape la egalitate cu Ted Cruz. Singurul candidat care รฎl รฎnvinge? Ben Carson l-ar รฎnvinge pe omul de afaceri cu 19 puncte รฎntr-o confruntare ipoteticฤƒ de unu la unu. Majoritatea susศ›inฤƒtorilor lui Donald Trump declarฤƒ cฤƒ s-au decis. Un nou sondaj realizat marศ›i de CBS/NYT aratฤƒ cฤƒ peste jumฤƒtate dintre alegฤƒtorii care รฎl susศ›in pe Trump declarฤƒ cฤƒ nu รฎศ™i schimbฤƒ opศ›iunea de vot. Evident, se pot รฎntรขmpla multe รฎn acest sens ศ™i nimeni nu poate spune cฤƒ aceศ™tia nu se vor rฤƒzgรขndi niciodatฤƒ." } } { "translation": { "en": "46 percent said they are leaving the door open to switching candidates. Still, Trump's strongest competition at the moment is from fellow outsider neurosurgeon Ben Carson, but voters who say they have made up their minds are twice as likely to go for Trump. Six in 10 Republicans say they agree with Trump on immigration. Even since Donald Trump called immigrants from Mexico \"rapists\" in his campaign announcement speech two months ago, immigration has been front and center in the 2016 conversation. Some are worried that Trump's bombast will drive crucial Hispanic voters away from the Republican Party and damage rebranding efforts.", "ro": "46% afirmฤƒ cฤƒ lasฤƒ portiศ›a deschisฤƒ posibilitฤƒศ›ii de a-ศ™i schimba opศ›iunea. 
Cu toate acestea, cel mai important adversar al lui Trump este รฎn prezent neurochirurgul Ben Carson, รฎnsฤƒ este de douฤƒ ori mai probabil ca alegฤƒtorii care declarฤƒ cฤƒ s-au decis sฤƒ voteze cu Trump. ศ˜ase din 10 republicani afirmฤƒ cฤƒ sunt de acord cu Trump รฎn problema imigrฤƒrii. De cรขnd Donald Trump i-a numit pe imigranศ›ii din Mexic โ€žviolatoriโ€ รฎn discursul de deschidere a campaniei sale, รฎn urmฤƒ cu douฤƒ luni, imigrarea a fost subiectul central รฎn campania pentru 2016. Unii sunt รฎngrijoraศ›i cฤƒ stilul bombastic al lui Trump va duce la o scindare รฎntre alegฤƒtorii hispanici importanศ›i ศ™i Partidul Republican ศ™i va prejudicia eforturile de rebranding." } } { "translation": { "en": "But according to Monday's new ABC/Post poll, six in 10 Republicans say they agree with Trump on immigration issues. So as long as immigration remains in the spotlight, it seems Donald Trump will remain too. Frustration with government is climbing to new highs. Donald Trump and Ben Carson now account for roughly half of the support from Republican voters, largely due to their outsider status. Six in 10 Republicans in Monday's new ABC/Post poll say they want a political outsider over someone with government experience. And they are angry at Washington, too.", "ro": "รŽnsฤƒ, conform sondajului realizat luni de ABC/Post, ศ™ase din 10 republicani afirmฤƒ cฤƒ sunt de acord cu Trump รฎn problema imigrฤƒrii. Aศ™a cฤƒ, se pare cฤƒ atรขta timp cรขt problema imigrฤƒrii rฤƒmรขne รฎn lumina reflectoarelor, la fel va rฤƒmรขne ศ™i Doland Trump. Frustrarea faศ›ฤƒ de autoritฤƒศ›i atinge noi culmi. Donald Trump ศ™i Ben Carson sunt acum susศ›inuศ›i de aproape jumฤƒtate dintre alegฤƒtorii republicani, รฎn mare parte datoritฤƒ statutului lor de outsideri. Conform sondajului realizat luni de ABC/Post, ศ™ase din 10 republicani afirmฤƒ cฤƒ preferฤƒ un outsider politic รฎn detrimentul cuiva cu experienศ›ฤƒ รฎn guvernare. Oamenii sunt de asemenea supฤƒraศ›i pe autoritฤƒศ›ile de la Washington." } } { "translation": { "en": "A Des Moines Register/Bloomberg poll in Iowa from two weeks ago shows that three in four Iowa Republicans are frustrated with Republicans in Congress, with 54 percent \"unsatisfied\" and 21 percent \"mad as hell.\" Jeremy Corbyn to make debut at Prime Minister's Questions Since his election, Mr Corbyn's debut at PMQs has been keenly awaited New Labour leader Jeremy Corbyn is to make his debut at Prime Minister's Questions later, taking on David Cameron for the first time.", "ro": "Un sondaj derulat รฎn urmฤƒ cu douฤƒ sฤƒptฤƒmรขni รฎn Iowa de cฤƒtre Des Moines Register/Bloomberg aratฤƒ cฤƒ trei din patru republicani din Iowa sunt frustraศ›i de prestaศ›ia republicanilor din COngres, 54% declarรขndu-se โ€žnemulศ›umiศ›iโ€ iar 21% โ€žnervoศ™i la culmeโ€. Jeremy Corbyn รฎศ™i face debutul la Prime Minister's Questions รŽncฤƒ de la alegerea sa, debutul domnului Corbyn la PMQs a fost รฎndelung aศ™teptat Noul lider al Partidului Laburist, Jeremy Corbyn, รฎศ™i va face mai tรขrziu debutul la Prime Minister's Questions, confruntรขndu-se pentru prima datฤƒ cu David Cameron." } } { "translation": { "en": "Mr Corbyn will rise to ask the first of his six allotted questions shortly after midday, with his performance likely to be closely scrutinised by the media and Labour MPs. He has called for \"less theatre and more facts\" at the weekly showpiece. He has also said he could skip some sessions, leaving them to colleagues. 
The encounter will be the first parliamentary test of Mr Corbyn's leadership, coming after his appointment of a shadow cabinet and his speech to the TUC annual congress on Tuesday.", "ro": "Dl Corbyn va adresa primele dintre cele ศ™ase รฎntrebฤƒri la care are dreptul la scurt timp dupฤƒ prรขnz; prestaศ›ia sa va fi probabil analizatฤƒ รฎndeaproape de mass-media ศ™i parlamentarii laburiศ™ti. รŽn cadrul apariศ›iilor sฤƒptฤƒmรขnale, el a cerut โ€žmai puศ›in teatru ศ™i mai multe fapteโ€. A declarat de asemenea cฤƒ poate renunศ›a la cรขteva participฤƒri ศ™i cฤƒ le cedeazฤƒ colegilor sฤƒi. Confruntarea va fi primul test parlamentar al Dl Corbyn รฎn poziศ›ie de lider, venind dupฤƒ ce a numit un โ€žcabinet fantomฤƒโ€ ศ™i dupฤƒ discursul pe care l-a ศ›inut marศ›i la congresul anual TUC." } } { "translation": { "en": "Meanwhile, the Labour leader's decision to stand in silence during the singing of the national anthem at a service on Tuesday to mark the 75th anniversary of the Battle of Britain has attracted criticism from a number of Tory MPs and is the focus of several front page stories in the newspapers. Mr Corbyn's decision not to sing the national anthem has attracted attention A spokesman for Mr Corbyn said he had \"stood in respectful silence\" and did recognise the \"heroism of the Royal Air Force in the Battle of Britain.\"", "ro": "รŽntre timp, decizia liderului Partidului laburist de a pฤƒstra tฤƒcerea la rostirea imnului naศ›ional รฎn cadrul unei slujbe ศ›inute marศ›i cu ocazia aniversฤƒrii a 75 de ani de la Bฤƒtฤƒlia Angliei a atras critici din partea unor parlamentari conservatori ศ™i a ศ›inut prima paginฤƒ a ziarelor. Decizia domnului Corbyn de a nu cรขnta imnul naศ›ional a atras atenศ›ia Un purtฤƒtor de cuvรขnt al Dl Corbyn a declarat cฤƒ acesta โ€ža pฤƒstrat tฤƒcerea รฎn mod respectuosโ€ ศ™i a recunoscut โ€žeroismul Forศ›elor aeriene britanice รฎn Bฤƒtฤƒlia Angliei.โ€" } } { "translation": { "en": "But a member of Mr Corbyn's shadow cabinet, Owen Smith, told BBC Two's Newsnight programme he would have advised the Labour leader to sing the national anthem \"irrespective\" of his belief that the monarchy should be abolished. Nearly a dozen shadow ministers have refused to serve in Mr Corbyn's top team, citing differences over the economy, defence and foreign affairs, while less than a sixth of the parliamentary party originally backed him as leader. BBC political correspondent Robin Brant says policy differences are also \"stacking up\" within Labour following Mr Corbyn's appointment over its position on the European Union and the government's cap on benefits.", "ro": "รŽnsฤƒ un membru al cabinetului fantomฤƒ al Dl Corbyn, Owen Smith, a declarat pentru emisiunea Two's Newsnight transmisฤƒ de BBC cฤƒ i-ar fi recomandat liderului laburist sฤƒ cรขnte imnul naศ›ional โ€žindiferentโ€ de credinศ›a sa cฤƒ monarhia ar trebui abolitฤƒ. รŽn jur de doisprezece miniศ™tri din cabinetul fantomฤƒ au refuzat sฤƒ facฤƒ parte din echipa de frunte a Dl Corbyn, argumentรขnd prin diferenศ›e de opinie legate de economie, apฤƒrare ศ™i externe, รฎn timp ce mai puศ›in de o ศ™esime din partidul parlamentar l-a susศ›inut ca lider. Corespondentul politic al BBC, Robin Brant, declarฤƒ cฤƒ diferenศ›ele de politicฤƒ โ€žse cumuleazฤƒโ€ รฎn Partidul Laburist dupฤƒ numirea domnului Corbyn referitor la poziศ›ia sa faศ›ฤƒ de Uniunea Europeanฤƒ ศ™i limita de beneficii." } } { "translation": { "en": "Mr Corbyn told the TUC conference Labour was putting forward amendments to remove the whole idea of a cap altogether. 
Hours later Mr Smith, the shadow work and pensions secretary, said the party was \"very clear\" that it was only opposing government plans to reduce the level of cap from ยฃ26,000 to ยฃ23,000. Mr Corbyn will be the fifth Labour leader that David Cameron has faced across the despatch box over the past decade since he became Tory leader. The Labour leader, who has promised a different approach to politics, says he has \"crowd sourced\" ideas for questions to ask Mr Cameron and has been given more than 30,000 suggestions.", "ro": "Dl Corbyn a declarat la conferinศ›a TUC cฤƒ Partidul Laburist va aduce modificฤƒri prin care se va elimina integral ideea limitฤƒrii. Cรขteva ore mai tรขrziu, Dl Smith, Ministrul Muncii ศ™i Pensiilor, a declarat cฤƒ partidul โ€žeste foarte clarโ€ รฎn opoziศ›ia exclusivฤƒ faศ›ฤƒ de planurile guvernului de a reduce nivelul โ€žcapโ€ de la 26.000 lire la 23.000 lire. Dl Corbyn va fi al cincilea lider laburist cu care se confruntฤƒ David Cameron la tribunฤƒ รฎn ultimul deceniu, de cรขnd a preluat conducerea Partidului Conservator. Liderul laburist, care a promis o abordare diferitฤƒ a politicii, spune cฤƒ are idei โ€ždin surse externeโ€ pentru รฎntrebฤƒri pe care sฤƒ i le adreseze Domnului Cameron ศ™i cฤƒ a primit peste 30.000 de sugestii." } } { "translation": { "en": "The Islington North MP has said PMQs is too confrontational and that he will refrain from both \"repartee\" and trading barbs, instead vowing to focus on serious issues such as poverty, inequality and the challenges facing young people. Mr Corbyn has said that Angela Eagle, the shadow business secretary, will deputise for him at PMQs when he does not attend - for instance when Mr Cameron is travelling abroad. He has also floated the idea of allowing other colleagues to take the floor on occasion, saying he had approached the Commons Speaker John Bercow to discuss the issue.", "ro": "Parlamentarul Islington North a afirmat cฤƒ PMQs implicฤƒ un nivel de confruntare prea รฎnalt ศ™i cฤƒ se va abศ›ine de la replici ศ™i atacuri, angajรขndu-se sฤƒ se concentreze รฎn schimb pe probleme serioase precum sฤƒrฤƒcia, inegalitatea ศ™i provocฤƒrile cu care se confruntฤƒ tinerii. Dl Corbyn a declarat cฤƒ Angela Eagle, Ministrul de finanศ›e, รฎi va ศ›ine locul la PMQs atunci cรขnd el nu poate participa - de exemplu atunci cรขnd Dl Cameron se deplaseazฤƒ รฎn strฤƒinฤƒtate. A exprimat de asemenea ideea cฤƒ va permite altor colegi sฤƒ ia cuvรขntul ocazional, spunรขnd cฤƒ l-a abordat pe Preศ™edintele Camerei Deputaศ›ilor, John Bercow, pentru a discuta acest aspect." } } { "translation": { "en": "When he became leader in 2005, Mr Cameron said he wanted to move away from the \"Punch and Judy\" style of politics often associated with PMQs but admitted some years later that he had failed. Since it was first televised in 1990, PMQs has been seen as a key barometer of a leader's judgement, their command of the Commons and their standing among their fellow MPs although critics have argued it has become a caricature and is in need of far-reaching reforms. 'Shot in Joburg': Homeless youth trained as photographers Downtown Johannesburg is a tough place to be homeless.", "ro": "รŽn 2005, cรขnd a preluat conducerea, Dl Cameron a declarat cฤƒ doreศ™te sฤƒ renunศ›e la stilul politic โ€žPunch and Judyโ€ asociat adesea cu PMQs รฎnsฤƒ a recunoscut cรขศ›iva ani mai tรขrziu cฤƒ nu a reuศ™it รฎn demersul sฤƒu. 
De la prima transmisie, รฎn 1990, PMQs a fost consideratฤƒ un barometru cheie al raศ›ionamentului unui lider, al modului รฎn care acesta conduce Camera Deputaศ›ilor ศ™i a poziศ›iei sale รฎn rรขndul colegilor parlamentari, deศ™i criticii afirmฤƒ a ca devenit o caricaturฤƒ ศ™i cฤƒ are nevoie de o reformare profundฤƒ. โ€žCadru รฎn Joburgโ€: Tineri fฤƒrฤƒ adฤƒpost beneficiazฤƒ de cursuri de fotografie Este dificil sฤƒ fii un om fฤƒrฤƒ adฤƒpost รฎn Johannesburg." } } { "translation": { "en": "But one group of former street children have found a way to learn a skill and make a living. \"I was shot in Joburg\" is a non-profit studio that teaches homeless youngsters how to take photographs of their neighbourhood and make a profit from it. BBC News went to meet one of the project's first graduates. JD Sports boss says higher wages could hurt expansion JD Sports Executive Chairman Peter Cowgill says a higher minimum wage for UK workers could mean \"more spending power in the pockets of potential consumers.\" But that spending power is unlikely to outweigh the higher labour costs at his firm, he says.", "ro": "รŽnsฤƒ un grup de oameni care au trฤƒit pe strฤƒzi รฎn copilฤƒrie au gฤƒsit un mod de a รฎnvฤƒศ›a o meserie ศ™i de a-ศ™i cรขศ™tiga traiul. โ€žI was shot รฎn Joburgโ€ este un studio non-profit care รฎi รฎnvaศ›ฤƒ pe tinerii fฤƒrฤƒ adฤƒpost sฤƒ facฤƒ fotografii ale zonelor รฎn care trฤƒiesc ศ™i sฤƒ cรขศ™tige bani din asta. BBC News s-a รฎntรขlnit cu unul dintre primii absolvenศ›i ai proiectului. ศ˜eful JD Sports spune cฤƒ salariile mai mari ar putea dฤƒuna extinderii Preศ™edintele JD Sports, Peter Cowgill, declarฤƒ cฤƒ o creศ™tere a salariului minim รฎn Marea Britanie ar putea รฎnsemna โ€žo putere de cumpฤƒrare mai mare รฎn buzunarele potenศ›ialilor consumatori.โ€ Este รฎnsฤƒ puศ›in probabil ca respectiva putere de cumpฤƒrare sฤƒ depฤƒศ™eascฤƒ costurile mai mari pentru forศ›a de muncฤƒ รฎn cadrul firmei, afirmฤƒ el." } } { "translation": { "en": "The costs could hit JD Sports' expansion plans, he added, which could mean fewer extra jobs. Thanasi Kokkinakis backed by Tennis Australia president Steve Healy Thanasi Kokkinakis deserves kudos rather than criticism for his behaviour. Thanasi Kokkinakis has been the collateral damage in the recent storm around his friend Nick Kyrgios and deserves kudos rather than criticism for his own behaviour, according to Tennis Australia president Steve Healy.", "ro": "Costurile ar putea avea impact asupra planurilor de extindere ale JD Sports, a adฤƒugat el, ceea ce ar putea รฎnsemna mai puศ›ine locuri de muncฤƒ noi. Thanasi Kokkinakis susศ›inut de preศ™edintele Tennis Australia, Steve Healy Thanasi Kokkinakis ar merita sฤƒ fie lฤƒudat ศ™i nu criticat pentru comportamentul sฤƒu. Thanasi Kokkinakis a fost victimฤƒ colateralฤƒ รฎn โ€žfurtunaโ€ creatฤƒ รฎn jurul prietenului sฤƒu, Nick Kyrgios, iar comportamentul sฤƒu meritฤƒ mai degrabฤƒ cuvinte de laudฤƒ ศ™i nu criticฤƒ, รฎn opinia preศ™edintelui Tennis Australia, Steve Healy." } }
0
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/wmt_en_ro/train.json
{ "translation": { "en": "Corrections to votes and voting intentions: see Minutes Assignment conferred on a Member: see Minutes Membership of committees and delegations: see Minutes Decisions concerning certain documents: see Minutes Forwarding of texts adopted during the sitting: see Minutes Dates for next sittings: see Minutes", "ro": "Corectฤƒrile voturilor ลŸi intenลฃiile de vot: a se vedea procesul-verbal Misiune รฎncredinลฃatฤƒ unui deputat: consultaลฃi procesul-verbal Componenลฃa comisiilor ลŸi a delegaลฃiilor: a se vedea procesul-verbal Decizii privind anumite documente: a se vedea procesul-verbal Transmiterea textelor adoptate รฎn cursul prezentei ลŸedinลฃe: a se vedea procesul-verbal Calendarul urmฤƒtoarelor ลŸedinลฃe: a se vedea procesul-verbal" } } { "translation": { "en": "Membership of Parliament: see Minutes Approval of Minutes of previous sitting: see Minutes Membership of Parliament: see Minutes Verification of credentials: see Minutes Documents received: see Minutes Written statements and oral questions (tabling): see Minutes Petitions: see Minutes Texts of agreements forwarded by the Council: see Minutes Action taken on Parliament's resolutions: see Minutes Agenda for next sitting: see Minutes Closure of sitting (The sitting was closed at 7.45 p.m.)", "ro": "Componenลฃa Parlamentului: a se vedea procesul-verbal Aprobarea procesului-verbal al ลŸedinลฃei precedente: a se vedea procesul-verbal Componenลฃa Parlamentului: a se vedea procesul-verbal Verificarea prerogativelor: a se vedea procesul-verbal Depunere de documente: a se vedea procesul-verbal Declaraลฃii scrise ลŸi รฎntrebฤƒri orale (depunere): consultaลฃi procesul-verbal Petiลฃii: a se vedea procesul-verbal Transmiterea de cฤƒtre Consiliu a textelor acordurilor: a se vedea procesul-verbal Cursul dat rezoluลฃiilor Parlamentului: a se vedea procesul-verbal Ordinea de zi a urmฤƒtoarei ลŸedinลฃe: a se vedea procesul-verbal Ridicarea ลŸedinลฃei (Se levanta la sesiรณn a las 19.45 horas)" } } { "translation": { "en": "Election of Vice-Presidents of the European Parliament (deadline for submitting nominations): see Minutes (The sitting was suspended at 12.40 p.m. and resumed at 3.00 p.m.) Election of Quaestors of the European Parliament (deadline for submitting nominations): see Minutes (The sitting was suspended at 3.25 p.m. and resumed at 6.00 p.m.) Agenda for next sitting: see Minutes Closure of sitting (The sitting was closed at 6.15 p.m.) Opening of the sitting (The sitting was opened at 9.35 a.m.) Documents received: see Minutes Approval of Minutes of previous sitting: see Minutes Membership of Parliament: see Minutes", "ro": "Alegerea vicepreลŸedinลฃilor Parlamentului European (termenul de depunere a candidaturilor): consultaลฃi procesul-verbal (Die Sitzung wird um 12.40 Uhr unterbrochen und um 15.00 Uhr wiederaufgenommen). Alegerea chestorilor Parlamentului European (termenul de depunere a candidaturilor): consultaลฃi procesul-verbal (Die Sitzung wird um 15.25 Uhr unterbrochen und um 18.00 Uhr wiederaufgenommen). Ordinea de zi a urmฤƒtoarei ลŸedinลฃe: a se vedea procesul-verbal Ridicarea ลŸedinลฃei (Die Sitzung wird um 18.15 Uhr geschlossen.) Deschiderea ลŸedinลฃei (Die Sitzung wird um 9.35 Uhr erรถffnet.) 
Depunerea documentelor: a se vedea procesul-verbal Aprobarea procesului-verbal al ลŸedinลฃei precedente: a se vedea procesul-verbal Componenลฃa Parlamentului: a se vedea procesul-verbal" } } { "translation": { "en": "Membership of committees (deadline for tabling amendments): see Minutes (The sitting was suspended at 7 p.m. and resumed at 9 p.m.) Agenda for next sitting: see Minutes Closure of sitting (The sitting was suspended at 23.25 p.m.) Documents received: see Minutes Communication of Council common positions: see Minutes (The sitting was suspended at 11.35 a.m. and resumed for voting time at noon) Approval of Minutes of previous sitting: see Minutes Committee of Inquiry into the crisis of the Equitable Life Assurance Society (extension of mandate): see Minutes", "ro": "Componenลฃa comisiilor (termenul de depunere a amendamentelor): consultaลฃi procesul-verbal (La seduta, sospesa alle 19.00, รจ ripresa alle 21.00) Ordinea de zi a urmฤƒtoarei ลŸedinลฃe: a se vedea procesul-verbal Ridicarea ลŸedinลฃei (Die Sitzung wird um 23.25 Uhr geschlossen.) Depunerea documentelor: a se vedea procesul-verbal Comunicarea poziลฃiilor comune ale Parlamentului: a se vedea procesul-verbal (La sรฉance, suspendue ร  11h35 dans l'attente de l'Heure des votes, est reprise ร  midi) Aprobarea procesului-verbal al ลŸedinลฃei precedente: a se vedea procesul-verbal Comisia de anchetฤƒ privind criza societฤƒลฃii de asigurฤƒri \"Equitable Lifeโ€ (prelungirea mandatului): consultaลฃi procesul-verbal" } } { "translation": { "en": "Announcement by the President: see Minutes 1. Membership of committees (vote) 2. Amendment of the ACP-EC Partnership Agreement (vote) 4. Certification of train drivers operating locomotives and trains on the railway system in the Community (vote) 6. Law applicable to non-contractual obligations (\"ROME II\") (vote) 8. Seventh and eighth annual reports on arms exports (vote) Corrections to votes and voting intentions: see Minutes Membership of committees and delegations: see Minutes Request for waiver of parliamentary immunity: see Minutes Decisions concerning certain documents: see Minutes", "ro": "Comunicarea PreลŸedintelui: consultaลฃi procesul-verbal 1. Componenลฃa comisiilor (vot) 2. Modificarea Acordului de parteneriat ACP-CE (\"Acordul de la Cotonouโ€) (vot) 4. Certificarea mecanicilor de locomotivฤƒ care conduc locomotive ลŸi trenuri รฎn sistemul feroviar comunitar (vot) 6. Legea aplicabilฤƒ obligaลฃiilor necontractuale (\"Roma IIโ€) (vot) 8. Al ลŸaptelea ลŸi al optulea raport anual privind exportul de armament (vot) Corectฤƒrile voturilor ลŸi intenลฃiile de vot: a se vedea procesul-verbal Componenลฃa comisiilor ลŸi a delegaลฃiilor: a se vedea procesul-verbal Cerere de ridicare a imunitฤƒลฃii parlamentare: consultaลฃi procesul-verbal Decizii privind anumite documente: a se vedea procesul-verbal" } } { "translation": { "en": "Written statements for entry", "ro": "Declaraลฃii scrise รฎnscrise" } } { "translation": { "en": "Written statements for entry in the register (Rule 116): see Minutes Forwarding of texts adopted during the sitting: see Minutes Dates for next sittings: see Minutes Adjournment of the session I declare the session of the European Parliament adjourned. (The sitting was closed at 1 p.m.) 
Approval of Minutes of previous sitting: see Minutes Membership of Parliament: see Minutes Request for the defence of parliamentary immunity: see Minutes Appointments to committees (proposal by the Conference of Presidents): see Minutes Documents received: see Minutes Texts of agreements forwarded by the Council: see Minutes", "ro": "Declaraลฃii scrise รฎnscrise รฎn registru (articolul 116 din Regulamentul de procedurฤƒ): a se vedea procesul-verbal Transmiterea textelor adoptate รฎn cursul prezentei ลŸedinลฃe: a se vedea procesul-verbal Calendarul urmฤƒtoarelor ลŸedinลฃe: a se vedea procesul-verbal รŽntreruperea sesiunii Dichiaro interrotta la sessione del Parlamento europeo. (La seduta รจ tolta alle 13.00) Aprobarea procesului-verbal al ลŸedinลฃei precedente: a se vedea procesul-verbal Componenลฃa Parlamentului: a se vedea procesul-verbal Cerere de apฤƒrare a imunitฤƒลฃii parlamentare: consultaลฃi procesul-verbal Numiri รฎn comisii (propunerea Conferinลฃei preลŸedinลฃilor): consultaลฃi procesul-verbal Depunerea documentelor: a se vedea procesul-verbal Transmiterea de cฤƒtre Consiliu a textelor acordurilor: a se vedea procesul-verbal" } } { "translation": { "en": "Action taken on Parliament's resolutions: see Minutes Oral questions and written statements (tabling): see Minutes Written statements (Rule 116): see Minutes Agenda: see Minutes 1. Appointments to parliamentary committees (vote): see Minutes Voting time Agenda for next sitting: see Minutes Closure of sitting (The sitting was closed at 12 midnight) Opening of the sitting (The sitting was opened at 09.05) Documents received: see Minutes Approval of Minutes of previous sitting: see Minutes 1. Protection of passengers against displaced luggage (vote) 2.", "ro": "Continuฤƒri ale rezoluลฃiilor Parlamentului: consultaลฃi procesul-verbal Declaraลฃii scrise ลŸi รฎntrebฤƒri orale (depunere): consultaลฃi procesul-verbal Declaraลฃii scrise (articolul 116 din Regulamentul de procedurฤƒ) Ordinea de zi: a se vedea procesul-verbal 1. Numiri รฎn comisiile parlamentare (vot): consultaลฃi procesul-verbal Timpul afectat votului Ordinea de zi a urmฤƒtoarei ลŸedinลฃe: a se vedea procesul-verbal Ridicarea ลŸedinลฃei (La seduta รจ tolta alle 24.00) Deschiderea ลŸedinลฃei (The sitting was opened at 09.05) Depunerea documentelor: a se vedea procesul-verbal Aprobarea procesului-verbal al ลŸedinลฃei precedente: a se vedea procesul-verbal 1. Protecลฃia pasagerilor รฎmpotriva deplasฤƒrii bagajelor (vot) 2." } } { "translation": { "en": "Approval of motor vehicles with regard to the forward field of vision of the driver (vote) 3. EC-Korea Agreement on scientific and technological cooperation (vote) 4. Mainstreaming sustainability in development cooperation policies (vote) 5. Draft Amending Budget No 1/2007 (vote) 7. EC-Gabon Fisheries Partnership (vote) 10. Limitation periods in cross-border disputes involving personal injuries and fatal accidents (vote) 12. Strategy for a strengthened partnership with the Pacific Islands (vote) 13. The European private company statute (vote) That concludes the vote.", "ro": "Omologarea vehiculelor cu motor cu privire la cรขmpul de vizibilitate รฎnainte al conducฤƒtorului auto (vot) 3. Acordul CE-Coreea de cooperare ลŸtiinลฃificฤƒ ลŸi tehnologicฤƒ (vot) 4. Integrarea durabilitฤƒลฃii รฎn politicile de cooperare pentru dezvoltare (vot) 5. Proiect de buget rectificativ nr.1/2007 (vot) 7. Acordul de parteneriat รฎn domeniul pescuitului รฎntre Comunitatea Europeanฤƒ ลŸi Republica Gabonezฤƒ (vot) 10. 
Termenele de prescripลฃie aplicabile รฎn cadrul litigiilor transfrontaliere cu privire la vฤƒtฤƒmฤƒrile corporale ลŸi accidentele mortale (vot) 12. Relaลฃiile UE cu insulele din Pacific: Strategie pentru un parteneriat consolidat (vot) 13. Statutul societฤƒลฃii private europene (vot) Damit ist die Abstimmungsstunde beendet." } } { "translation": { "en": "Corrections to votes and voting intentions: see Minutes Assignment conferred on a Member: see Minutes Membership of committees and delegations: see Minutes Decisions concerning certain documents: see Minutes Forwarding of texts adopted during the sitting: see Minutes Dates for next sittings: see Minutes", "ro": "Corectฤƒrile voturilor ลŸi intenลฃiile de vot: a se vedea procesul-verbal Misiune รฎncredinลฃatฤƒ unui deputat: consultaลฃi procesul-verbal Componenลฃa comisiilor ลŸi a delegaลฃiilor: a se vedea procesul-verbal Decizii privind anumite documente: a se vedea procesul-verbal Transmiterea textelor adoptate รฎn cursul prezentei ลŸedinลฃe: a se vedea procesul-verbal Calendarul urmฤƒtoarelor ลŸedinลฃe: a se vedea procesul-verbal" } } { "translation": { "en": "Written statements for entry", "ro": "Declaraลฃii scrise รฎnscrise" } }
0
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/xsum/sample.json
{"document": "The warning begins at 22:00 GMT on Saturday and ends at 10:00 on Sunday.\nThe ice could lead to difficult driving conditions on untreated roads and slippery conditions on pavements, the weather service warned.\nOnly the southernmost counties and parts of the most westerly counties are expected to escape.\nCounties expected to be affected are Carmarthenshire, Powys, Ceredigion, Pembrokeshire, Denbighshire, Gwynedd, Wrexham, Conwy, Flintshire, Anglesey, Monmouthshire, Blaenau Gwent, Caerphilly, Merthyr Tydfil, Neath Port Talbot, Rhondda Cynon Taff and Torfaen.", "summary": "The Met Office has issued a yellow weather warning for ice across most of Wales."} {"document": "You can see highlights of Sunderland v Arsenal on Match of the Day at 22:20 BST on Saturday on BBC One and the BBC Sport website.\nStoke and West Ham, for example, have started to climb away from the relegation zone but the biggest worry for Sunderland fans is that their side do not look remotely capable of doing the same.\nI know the Black Cats have got out of trouble before having found themselves in a similar situation but this time, after picking up only two points from their first nine games, things look really desperate for the only top-flight team without a win.\nAt least one element of their struggles seems to be self-inflicted, with everyone at the club feeling sorry for themselves - and not just because they have lost some players to injury and conceded some costly late goals.\nThere is a negative feeling about the place with the manager David Moyes and his players talking about how they have gone backwards since last season, when they should be searching for any kind of spark that could change things around.\nFrom the outside, looking at the way they play and their lack of creativity, it is hard to see what that spark might be or what could fundamentally change under Moyes until the January transfer window opens.\nIf they can get one win under their belt then they will get a bit of belief back but, the longer this winless run goes on, the more negativity there will be.\nMedia playback is not supported on this device\nSunderland finished last season on a high under Sam Allardyce, with a run of just one defeat in their last 11 games securing their safety.\nIn the space of five months, all of that confidence and momentum seems to have been sucked out of the club, despite them effectively having the same group of players who, not so long ago, looked inspired.\nThat is not all down to Moyes, but he has to take some responsibility for it.\nI am yet to see a defined style of play from Sunderland since he took charge at the end of July.\nThat is in contrast to Allardyce's time as manager, when they were resolute and difficult to beat and, at the end of his stint at the Stadium of Light, also played with a purpose when they went forward.\nOff the pitch, Moyes has not helped himself much either.\nThere was no need for him to be so pessimistic when he came out after the second game of the season and announced they would be in a relegation fight, which did not send out the right message to his players or the fans.\nWhen he took charge, he had actually started out by being unrealistically positive - talking about Sunderland becoming a club that regularly finished in the top half of the Premier League - but his expectations went downhill very quickly.\nI know you can argue that he has been proved right, because Sunderland are now battling the drop, but it meant there was a cloud over from them almost as soon as the 
season had started.\nIt seems to be a case that if you stop Jermain Defoe, you stop Sunderland. His statistics stand up well in comparison to last season, but the rest of their team are not doing enough in attack.\nThey were reliant on Defoe last season too, but others did chip in - in their first nine league games of 2015-16, five players found the net. This time around, only Defoe and Patrick van Aanholt have scored in the same period.\nIt is going to be a massive struggle for them to stay up from the position they are now in anyway, but they badly need a win and quickly. I don't see it coming at home to Arsenal on Saturday, though.\nDo they even look capable of holding out for a draw against the Gunners, the way another struggling team Middlesbrough did at Emirates Stadium last weekend? No.\nIf you struggle to make chances and score goals, as Sunderland do, that puts more pressure on your defence because you know if you concede then you are in big trouble.\nAnd the Black Cats have problems at the back as well - their only clean sheet in 12 matches under Moyes was against League One side Shrewsbury Town in the EFL Cup.\nIt does not bode well against an Arsenal side that are averaging more than two goals a game this season.\nIt is hard to find any positives from Sunderland's situation but at least they have not been cut adrift at the bottom - yet.\nUnless they win soon, that could happen. I think Hull are also in for a very tough season but when I look at the other two teams immediately above them, Boro and Swansea, they definitely have more about them than the Black Cats do.\nMedia playback is not supported on this device\nChanging manager has clearly not helped Sunderland and comparisons with his predecessor do not help Moyes much either.\nYou cannot tell me that, if Allardyce was still in charge, Sunderland would have only picked up two points so far. It just would not have happened.\nMoyes replaced him relatively late in the summer, which is difficult in itself, but he can only complain about the things that have gone against him up to a point. He should be doing much better than he is.\nHe is still the manager and he is capable of turning things around, so it is right there is no suggestion of him getting the sack.\nBut that will not last forever. This industry is results-driven and Moyes' results are not good enough.\nThat clearly has to change soon and, looking at Sunderland's next few fixtures, the one that stands out as a must-win is their home game against Hull on 19 November.\nIf they fail to beat Arsenal and Bournemouth, then the visit of the Tigers will be the game to define Moyes' tenure. 
If Sunderland are still without a win after that, things will become extremely difficult for him.\nChris Sutton was speaking to BBC Sport's Chris Bevan.", "summary": "We are exactly a quarter of the way through the Premier League season and some teams at the bottom of the table seem to be turning things around after making a bad start."} {"document": "The win keeps the Candystripes two points behind leaders Dundalk who won 2-0 away to Shamrock Rovers.\nFormer Plymouth striker Patterson scored his sixth goal of the season in the 14th minute at the Brandywell.\nHe shot into an empty net after the ball broke to him when keeper Dean Delany thwarted Barry McNamee.\nKurtis Byrne should have netted a speedy equaliser but the son of former Celtic player Paul Byrne completely missed his kick in front of goal.\nThat was the one big scare for Kenny Shiels' men on a night when both keepers had a quiet night.\nDerry City have won six and drawn two in the eight games they have played since losing to Finn Harps on the first day of the season.", "summary": "Rory Patterson's early goal proved enough to give second-placed Derry City a home victory over Bohemians in Friday night's Premier Division clash."} {"document": "The centre-right coalition led by Mr Passos Coelho won the most seats in the election on 4 October.\nBut Socialist leader Antonio Costa has been working to build a coalition with far-left parties.\nMany believe that Mr Passos Coelho will fail to pass the test of a vote of no confidence in Portugal's parliament.\nPresident Anibal Cavaco Silva would then be expected to ask the left to form a government.\nThere are fears that weeks of uncertainty could harm Portugal's economic recovery, more than a year after it exited the strict terms of its €78bn (£57bn) international bailout.\nEU officials have threatened to take action against Portugal for missing a 15 October deadline to present its draft 2016 budget.\nPortugal is still running one of the highest budget deficits in the eurozone.\n12%\nof the workforce is unemployed\n20%\nof people live below the poverty line\n485,000 emigrated from Portugal between 2011 and 2014\n125% debt to GDP - the second highest rate in the European Union\nMr Passos Coelho's Social Democrats have promised to present a budget, but the two left-wing parties campaigned strongly against his outgoing government's record of harsh austerity.\nThe Left Bloc is seen as allied to the anti-austerity Syriza party in Greece, which for months tried to renegotiate the terms of Greece's eurozone bailout.\nPortugal's Communist Party is regarded as anti-euro and anti-Nato, although it is thought to have moderated its eurozone policies in recent weeks.\nIf Mr Costa's Socialists are eventually chosen to lead a left-wing coalition, it would be the first time since the fall of Portugal's dictatorship in 1974 that a right-wing president appointed a government backed by communists.\nAfter his re-appointment as prime minister leading a right-of-centre coalition, Pedro Passos Coelho has 10 days to appoint ministers and secure parliamentary approval.\nThat may prove impossible, since his coalition lost its majority in the 4 October election and the Socialists have pledged to reject his programme if their talks with other parties succeed.\nTogether, the Socialists, Left Bloc and Communist Party have a majority. 
All wanted the president to appoint Mr Costa - arguing that anything else was a waste of time.\nIf Mr Passos Coelho does fail, the president could then appoint Mr Costa or keep the incumbent on as caretaker.\nFresh legislative elections may only take place from June, after voters have elected a new president early next year.", "summary": "The Portuguese president has invited incumbent Prime Minister Pedro Passos Coelho to form the next government, despite him having lost his majority."} {"document": "Nev Edwards scored an early try for Sale, before Castres' Florian Vialelle went over, but Julien Dumora's penalty put the hosts 10-7 ahead at the break.\nJoe Ford sent over a penalty before Castres' Marc-Antoine Rallier and Sales' Will Addison were sin-binned.\nJulien Caminati's late attempt to stop Charlie Ingall saw Sale awarded the decisive penalty try.\nThe win moves the English Premiership side to within one point of Pool Two leaders Newport Gwent Dragons after three games.\nSale got off to the ideal start, Edwards sprinting away for the game's opening points from an Andrei Ostrikov kick, but Castres heaped the pressure on in search of a reply, which came through Vialelle on eight minutes.\nSharks flanker Magnus Lund was forced off with a head injury before the television match official denied Castres a second try, with replays showing that the Sharks defence did enough to force full-back Caminati into touch.\nFord had a chance to put Sale ahead again, but his penalty on 27 minutes drifted wide. Dumora, however, made no mistake soon after, slotting over to give the French side the lead on 33 minutes.\nA combination of probing grubber kicks and scrappy play eventually led to Ford teeing up his second penalty attempt, with the fly-half this time booting the three points to make it 10-10.\nRallier's yellow card following a scuffle saw Ford opt for the posts soon after, but he was off target again before Sales' one-man advantage was lost as Addison was sin-binned.\nSharks pushed for the breakthrough as Ingall went close to touching down, and the video referee eventually gave the penalty try after deciding that Caminati's attempt to stop the winger was illegal.\nCastres: Caminati; Martial, Vialelle, Combezou, Decrop; Dumora, Dupont; Taumoepeau, Rallier, Montes; Samson, Moreaux, Caballero, Diarra, Beattie.\nReplacements: Beziat, Tichit, Martinez, Desroche, Babillot, Fontaine, Lamerat, Seron.\nSale: Arscott; Edwards, Addison, Jennings, Ingall; Ford, Mitchell, Lewis-Roberts, Briggs, Mujati, Mills, Ostrikov, Lund, Seymour (capt), Easter.\nReplacements: Taylor, Flynn, Parker, Beaumont, Neild, Jeffers, James, Haley.\nReferee: David Wilkinson (Ireland)", "summary": "A late penalty try gave Sale victory over Castres at Stade Pierre-Antoine in their European Challenge Cup clash."} {"document": "The 33-year-old was released by Norwich this summer after five years at the club, during which time he made 75 Canaries first-team appearances.\nTurner also had spells on loan at Fulham and Sheffield Wednesday during his time at Carrow Road.\nIn total, the centre-back has made 436 senior career appearances for eight different clubs.\nFind all the latest football transfers on our dedicated page.", "summary": "League One side Southend United have signed former Hull and Norwich defender Michael Turner on a one-year deal."} {"document": "United contacted St Johnstone this week with a view to speaking to 52-year-old Wright about the job but this approach was rejected by the Saints board.\nThe Tannadice club - bottom 
of the Premiership - are seeking to replace Jackie McNamara, who left last month.\nDave Bowman took the first team for Saturday's loss to Partick Thistle.\nThe Tangerines have won only once this season and prop up the table with five points from 10 games.\nFormer Northern Ireland goalkeeper Wright, who replaced Steve Lomas at McDiarmid Park in 2013, led St Johnstone to Scottish Cup success in his first season in charge.\nHe has also secured two successive top-six finishes for the Perth side and previously managed in his homeland.", "summary": "St Johnstone boss Tommy Wright is no longer under consideration for the Dundee United manager's job, BBC Scotland has learned."} {"document": "Media playback is unsupported on your device\n2 November 2014 Last updated at 17:20 GMT\nHomes and businesses were damaged in the storm, but weather experts were not able to confirm it was a tornado.\nNavtej Johal reports.", "summary": "Residents in Coalville in Leicestershire are cleaning up after high winds hit the town."} {"document": "5 August 2015 Last updated at 06:36 BST\nShe's now 84 and has been telling Newsround the inspiring story of her life before and after that devastating and world-changing event.\nThis animation contains some sad moments that you might find upsetting.\nYou can find out more about what happened in Hiroshima here.\nWatch 'Hiroshima: A Newsround Special' - Thursday 6 August at 5.30pm on the CBBC channel and on the Newsround website.", "summary": "Bun Hashizume was 14 years old and lived in Hiroshima, in Japan, when a nuclear bomb was dropped on the city 70 years ago, at the end of World War Two."} {"document": "But what has been your moment of the year?\nFrom Ben Stokes' 258 off 198 balls against South Africa to Stuart Broad's 6-17 against the same opponents, and Alastair Cook being the first Englishman to reach 10,000 Test runs, there are lots of highlights.\nOr perhaps you revelled in Australia being skittled for just 85? Or the dog that invaded the pitch at Vizag?\nThe cricket brains of BBC Sport and BBC Radio 5 live asked you to rank your top 10, and your shortlist will be revealed on Tuesday's Tuffers and Vaughan Cricket Show (20:30 GMT, BBC Radio 5 live and online).\nVotes will no longer count but you can still pick your top 10 and share with friends.\nWhat are your top 10 cricketing moments from this year?", "summary": "It's been topsy-turvy for the England side but eventful and entertaining nonetheless."}
0
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/conll/sample.json
{"words": ["He", "was", "the", "27th", "pitcher", "used", "by", "the", "Angels", "this", "season", ",", "tying", "a", "major-league", "record", "."], "ner": ["O", "O", "O", "O", "O", "O", "O", "O", "B-ORG", "O", "O", "O", "O", "O", "O", "O", "O"]} {"words": ["CHICAGO", "AT", "ATLANTA"], "ner": ["B-ORG", "O", "B-LOC"]} {"words": ["President", "Bill", "Clinton", "earlier", "this", "month", "invoked", "special", "powers", "to", "appoint", "Fowler", "during", "the", "congressional", "recess", "because", "the", "Senate", "delayed", "confirming", "his", "nomination", "."], "ner": ["O", "B-PER", "I-PER", "O", "O", "O", "O", "O", "O", "O", "O", "B-PER", "O", "O", "O", "O", "O", "O", "B-ORG", "O", "O", "O", "O", "O"]} {"words": ["goals", "for", ",", "goals", "against", ",", "points", ")", "."], "ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O"]} {"words": ["\"", "It", "is", "one", "step", "short", "of", "an", "emergency", "situation", ",", "\"", "a", "police", "spokesman", "said", "via", "telephone", "from", "a", "command", "post", "in", "the", "bush", "."], "ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"]} {"words": ["U.S.", "Ambassador", "Myles", "Frechette", "applauded", "the", "move", ",", "saying", "it", "could", "prompt", "the", "Clinton", "administration", "to", "remove", "Colombia", "from", "a", "list", "of", "outcast", "nations", "that", "have", "failed", "to", "cooperate", "in", "U.S.", "counternarcotics", "efforts", "."], "ner": ["B-LOC", "O", "B-PER", "I-PER", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PER", "O", "O", "O", "B-LOC", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-LOC", "O", "O", "O"]} {"words": ["Halftime"], "ner": ["O"]} {"words": ["It", "has", "manufacturing", "plants", "in", "San", "Diego", ";", "Creedmoor", ",", "N.C.", ";", "Hampshire", ",", "England", ";", "and", "Tijuana", ",", "Mexico", ",", "and", "distributes", "its", "prodcuts", "in", "more", "than", "120", "countries", "."], "ner": ["O", "O", "O", "O", "O", "B-LOC", "I-LOC", "O", "B-LOC", "O", "B-LOC", "O", "B-LOC", "O", "B-LOC", "O", "O", "B-LOC", "O", "B-LOC", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"]} {"words": ["Scotland", "manager", "Craig", "Brown", "said", "on", "Thursday", ":", "\"", "I", "'ve", "watched", "Duncan", "Ferguson", "in", "action", "twice", "recently", "and", "he", "'s", "bang", "in", "form", "."], "ner": ["B-LOC", "O", "B-PER", "I-PER", "O", "O", "O", "O", "O", "O", "O", "O", "B-PER", "I-PER", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"]} {"words": ["Clinton", "flew", "in", "by", "helicopter", "from", "Michigan", "City", ",", "Indiana", ",", "after", "ending", "a", "four-day", ",", "559-mile", "trip", "aboard", "a", "campaign", "train", "from", "Washington", "."], "ner": ["B-PER", "O", "O", "O", "O", "O", "B-LOC", "I-LOC", "O", "B-LOC", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-LOC", "O"]}
0
hf_public_repos/transformers/tests/fixtures/tests_samples
hf_public_repos/transformers/tests/fixtures/tests_samples/wmt16/sample.json
{"translation": {"en": "Membership of Parliament: see Minutes", "ro": "Componenลฃa Parlamentului: a se vedea procesul-verbal"}} {"translation": {"en": "Approval of Minutes of previous sitting: see Minutes", "ro": "Aprobarea procesului-verbal al ลŸedinลฃei precedente: a se vedea procesul-verbal"}} {"translation": {"en": "Membership of Parliament: see Minutes", "ro": "Componenลฃa Parlamentului: a se vedea procesul-verbal"}} {"translation": {"en": "Verification of credentials: see Minutes", "ro": "Verificarea prerogativelor: a se vedea procesul-verbal"}} {"translation": {"en": "Documents received: see Minutes", "ro": "Depunere de documente: a se vedea procesul-verbal"}} {"translation": {"en": "Written statements and oral questions (tabling): see Minutes", "ro": "Declaraลฃii scrise ลŸi รฎntrebฤƒri orale (depunere): consultaลฃi procesul-verbal"}} {"translation": {"en": "Petitions: see Minutes", "ro": "Petiลฃii: a se vedea procesul-verbal"}} {"translation": {"en": "Texts of agreements forwarded by the Council: see Minutes", "ro": "Transmiterea de cฤƒtre Consiliu a textelor acordurilor: a se vedea procesul-verbal"}} {"translation": {"en": "Action taken on Parliament's resolutions: see Minutes", "ro": "Cursul dat rezoluลฃiilor Parlamentului: a se vedea procesul-verbal"}} {"translation": {"en": "Agenda for next sitting: see Minutes", "ro": "Ordinea de zi a urmฤƒtoarei ลŸedinลฃe: a se vedea procesul-verbal"}}
0
hf_public_repos/transformers/tests
hf_public_repos/transformers/tests/optimization/test_optimization_tf.py
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import unittest

from transformers import is_tf_available
from transformers.testing_utils import require_tf


if is_tf_available():
    import tensorflow as tf
    from tensorflow.python.eager import context
    from tensorflow.python.framework import ops

    from transformers import GradientAccumulator, create_optimizer


@require_tf
class OptimizationFTest(unittest.TestCase):
    def assertListAlmostEqual(self, list1, list2, tol):
        self.assertEqual(len(list1), len(list2))
        for a, b in zip(list1, list2):
            self.assertAlmostEqual(a, b, delta=tol)

    def testGradientAccumulator(self):
        accumulator = GradientAccumulator()
        accumulator([tf.constant([1.0, 2.0])])
        accumulator([tf.constant([-2.0, 1.0])])
        accumulator([tf.constant([-1.0, 2.0])])
        with self.assertRaises(ValueError):
            accumulator([tf.constant([1.0, 1.0]), tf.constant([2.0, 2.0])])
        self.assertEqual(accumulator.step, 3)
        self.assertEqual(len(accumulator.gradients), 1)
        # The three calls above sum element-wise: [1-2-1, 2+1+2] == [-2, 5].
        self.assertListAlmostEqual(accumulator.gradients[0].numpy().tolist(), [-2.0, 5.0], tol=1e-2)
        accumulator.reset()
        self.assertEqual(accumulator.step, 0)
        self.assertListAlmostEqual(accumulator.gradients[0].numpy().tolist(), [0.0, 0.0], tol=1e-2)

    def testGradientAccumulatorDistributionStrategy(self):
        # Reset the eager context so a fresh distribution strategy can be created.
        context._context = None
        ops.enable_eager_execution_internal()
        physical_devices = tf.config.list_physical_devices("CPU")
        if len(physical_devices) == 1:
            # Expose two logical CPU devices so MirroredStrategy runs with two replicas.
            tf.config.set_logical_device_configuration(
                physical_devices[0], [tf.config.LogicalDeviceConfiguration(), tf.config.LogicalDeviceConfiguration()]
            )
        devices = tf.config.list_logical_devices(device_type="CPU")
        strategy = tf.distribute.MirroredStrategy(devices=devices[:2])

        with strategy.scope():
            accumulator = GradientAccumulator()
            variable = tf.Variable([4.0, 3.0])
            optimizer, _ = create_optimizer(5e-5, 10, 5)
            gradient_placeholder = tf.Variable([0.0, 0.0], trainable=False)

        def accumulate_on_replica(gradient):
            accumulator([gradient])

        def apply_on_replica():
            optimizer.apply_gradients(list(zip(accumulator.gradients, [variable])))

        @tf.function
        def accumulate(grad1, grad2):
            with strategy.scope():
                local_variables = strategy.experimental_local_results(gradient_placeholder)
                local_variables[0].assign(grad1)
                local_variables[1].assign(grad2)
                strategy.run(accumulate_on_replica, args=(gradient_placeholder,))

        @tf.function
        def apply_grad():
            with strategy.scope():
                strategy.run(apply_on_replica)

        def _check_local_values(grad1, grad2):
            # Gradients are accumulated per replica; check the local value on each of the two replicas.
            values = strategy.experimental_local_results(accumulator._gradients[0])
            self.assertListAlmostEqual(values[0].value(), grad1, tol=1e-2)
            self.assertListAlmostEqual(values[1].value(), grad2, tol=1e-2)

        accumulate([1.0, 2.0], [-1.0, 1.0])
        accumulate([3.0, -1.0], [-1.0, -1.0])
        accumulate([-2.0, 2.0], [3.0, -2.0])
        self.assertEqual(accumulator.step, 3)
        _check_local_values([2.0, 3.0], [1.0, -2.0])

        apply_grad()
        # The learning rate is tiny (5e-5 with warmup), so one step barely moves the variable.
        self.assertListAlmostEqual(variable.value(), [4.0, 3.0], tol=1e-2)

        accumulator.reset()
        self.assertEqual(accumulator.step, 0)
        _check_local_values([0.0, 0.0], [0.0, 0.0])
0
hf_public_repos/transformers/tests
hf_public_repos/transformers/tests/optimization/test_optimization.py
# coding=utf-8
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import tempfile
import unittest

from transformers import is_torch_available
from transformers.testing_utils import require_torch


if is_torch_available():
    import torch
    from torch import nn

    from transformers import (
        Adafactor,
        AdamW,
        get_constant_schedule,
        get_constant_schedule_with_warmup,
        get_cosine_schedule_with_warmup,
        get_cosine_with_hard_restarts_schedule_with_warmup,
        get_inverse_sqrt_schedule,
        get_linear_schedule_with_warmup,
        get_polynomial_decay_schedule_with_warmup,
    )


def unwrap_schedule(scheduler, num_steps=10):
    lrs = []
    for _ in range(num_steps):
        lrs.append(scheduler.get_lr()[0])
        scheduler.step()
    return lrs


def unwrap_and_save_reload_schedule(scheduler, num_steps=10):
    lrs = []
    for step in range(num_steps):
        lrs.append(scheduler.get_lr()[0])
        scheduler.step()
        if step == num_steps // 2:
            with tempfile.TemporaryDirectory() as tmpdirname:
                file_name = os.path.join(tmpdirname, "schedule.bin")
                torch.save(scheduler.state_dict(), file_name)

                state_dict = torch.load(file_name)
                scheduler.load_state_dict(state_dict)
    return lrs


@require_torch
class OptimizationTest(unittest.TestCase):
    def assertListAlmostEqual(self, list1, list2, tol):
        self.assertEqual(len(list1), len(list2))
        for a, b in zip(list1, list2):
            self.assertAlmostEqual(a, b, delta=tol)

    def test_adam_w(self):
        w = torch.tensor([0.1, -0.2, -0.1], requires_grad=True)
        target = torch.tensor([0.4, 0.2, -0.5])
        criterion = nn.MSELoss()
        # No warmup, constant schedule, no gradient clipping
        optimizer = AdamW(params=[w], lr=2e-1, weight_decay=0.0)
        for _ in range(100):
            loss = criterion(w, target)
            loss.backward()
            optimizer.step()
            w.grad.detach_()  # No zero_grad() function on simple tensors. We do it ourselves.
            w.grad.zero_()
        self.assertListAlmostEqual(w.tolist(), [0.4, 0.2, -0.5], tol=1e-2)

    def test_adafactor(self):
        w = torch.tensor([0.1, -0.2, -0.1], requires_grad=True)
        target = torch.tensor([0.4, 0.2, -0.5])
        criterion = nn.MSELoss()
        # No warmup, constant schedule, no gradient clipping
        optimizer = Adafactor(
            params=[w],
            lr=1e-2,
            eps=(1e-30, 1e-3),
            clip_threshold=1.0,
            decay_rate=-0.8,
            beta1=None,
            weight_decay=0.0,
            relative_step=False,
            scale_parameter=False,
            warmup_init=False,
        )
        for _ in range(1000):
            loss = criterion(w, target)
            loss.backward()
            optimizer.step()
            w.grad.detach_()  # No zero_grad() function on simple tensors. We do it ourselves.
            w.grad.zero_()
        self.assertListAlmostEqual(w.tolist(), [0.4, 0.2, -0.5], tol=1e-2)


@require_torch
class ScheduleInitTest(unittest.TestCase):
    m = nn.Linear(50, 50) if is_torch_available() else None
    optimizer = AdamW(m.parameters(), lr=10.0) if is_torch_available() else None
    num_steps = 10

    def assertListAlmostEqual(self, list1, list2, tol, msg=None):
        self.assertEqual(len(list1), len(list2))
        for a, b in zip(list1, list2):
            self.assertAlmostEqual(a, b, delta=tol, msg=msg)

    def test_schedulers(self):
        common_kwargs = {"num_warmup_steps": 2, "num_training_steps": 10}
        # schedulers dict format
        # function: (sched_args_dict, expected_learning_rates)
        scheds = {
            get_constant_schedule: ({}, [10.0] * self.num_steps),
            get_constant_schedule_with_warmup: (
                {"num_warmup_steps": 4},
                [0.0, 2.5, 5.0, 7.5, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0],
            ),
            get_linear_schedule_with_warmup: (
                {**common_kwargs},
                [0.0, 5.0, 10.0, 8.75, 7.5, 6.25, 5.0, 3.75, 2.5, 1.25],
            ),
            get_cosine_schedule_with_warmup: (
                {**common_kwargs},
                [0.0, 5.0, 10.0, 9.61, 8.53, 6.91, 5.0, 3.08, 1.46, 0.38],
            ),
            get_cosine_with_hard_restarts_schedule_with_warmup: (
                {**common_kwargs, "num_cycles": 2},
                [0.0, 5.0, 10.0, 8.53, 5.0, 1.46, 10.0, 8.53, 5.0, 1.46],
            ),
            get_polynomial_decay_schedule_with_warmup: (
                {**common_kwargs, "power": 2.0, "lr_end": 1e-7},
                [0.0, 5.0, 10.0, 7.656, 5.625, 3.906, 2.5, 1.406, 0.625, 0.156],
            ),
            get_inverse_sqrt_schedule: (
                {"num_warmup_steps": 2},
                [0.0, 5.0, 10.0, 8.165, 7.071, 6.325, 5.774, 5.345, 5.0, 4.714],
            ),
        }

        for scheduler_func, data in scheds.items():
            kwargs, expected_learning_rates = data

            scheduler = scheduler_func(self.optimizer, **kwargs)
            self.assertEqual(len([scheduler.get_lr()[0]]), 1)
            lrs_1 = unwrap_schedule(scheduler, self.num_steps)
            self.assertListAlmostEqual(
                lrs_1,
                expected_learning_rates,
                tol=1e-2,
                msg=f"failed for {scheduler_func} in normal scheduler",
            )

            scheduler = scheduler_func(self.optimizer, **kwargs)
            if scheduler_func.__name__ != "get_constant_schedule":
                LambdaScheduleWrapper.wrap_scheduler(scheduler)  # wrap to test picklability of the schedule
            lrs_2 = unwrap_and_save_reload_schedule(scheduler, self.num_steps)
            self.assertListEqual(lrs_1, lrs_2, msg=f"failed for {scheduler_func} in save and reload")


class LambdaScheduleWrapper:
    """See https://github.com/huggingface/transformers/issues/21689"""

    def __init__(self, fn):
        self.fn = fn

    def __call__(self, *args, **kwargs):
        return self.fn(*args, **kwargs)

    @classmethod
    def wrap_scheduler(self, scheduler):
        scheduler.lr_lambdas = list(map(self, scheduler.lr_lambdas))
0
hf_public_repos/transformers
hf_public_repos/transformers/notebooks/README.md
<!--- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # ๐Ÿค— Transformers Notebooks You can find here a list of the official notebooks provided by Hugging Face. Also, we would like to list here interesting content created by the community. If you wrote some notebook(s) leveraging ๐Ÿค— Transformers and would like to be listed here, please open a Pull Request so it can be included under the Community notebooks. ## Hugging Face's notebooks ๐Ÿค— ### Documentation notebooks You can open any page of the documentation as a notebook in Colab (there is a button directly on said pages) but they are also listed here if you need them: | Notebook | Description | | | |:----------|:-------------|:-------------|------:| | [Quicktour of the library](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/quicktour.ipynb) | A presentation of the various APIs in Transformers |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/quicktour.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/en/transformers_doc/quicktour.ipynb)| | [Summary of the tasks](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/task_summary.ipynb) | How to run the models of the Transformers library task by task |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/task_summary.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/task_summary.ipynb)| | [Preprocessing data](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/preprocessing.ipynb) | How to use a tokenizer to preprocess your data |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/preprocessing.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/preprocessing.ipynb)| | [Fine-tuning a pretrained model](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/training.ipynb) | How to use the Trainer to fine-tune a pretrained model |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/training.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/training.ipynb)| | [Summary of the 
tokenizers](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/tokenizer_summary.ipynb) | The differences between the tokenizers algorithm |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/tokenizer_summary.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/tokenizer_summary.ipynb)| | [Multilingual models](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/multilingual.ipynb) | How to use the multilingual models of the library |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/multilingual.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/multilingual.ipynb)| ### PyTorch Examples #### Natural Language Processing[[pytorch-nlp]] | Notebook | Description | | | |:----------|:-------------|:-------------|------:| | [Train your tokenizer](https://github.com/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb) | How to train and use your very own tokenizer |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb)| | [Train your language model](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch.ipynb) | How to easily start using transformers |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch.ipynb)| | [How to fine-tune a model on text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on any GLUE task. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb)| | [How to fine-tune a model on language modeling](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task. 
| [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)| | [How to fine-tune a model on token classification](https://github.com/huggingface/notebooks/blob/main/examples/token_classification.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS). | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb)| | [How to fine-tune a model on question answering](https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on SQUAD. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb)| | [How to fine-tune a model on multiple choice](https://github.com/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on SWAG. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb)| | [How to fine-tune a model on translation](https://github.com/huggingface/notebooks/blob/main/examples/translation.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on WMT. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/translation.ipynb)| | [How to fine-tune a model on summarization](https://github.com/huggingface/notebooks/blob/main/examples/summarization.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on XSUM. 
| [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/summarization.ipynb)| | [How to train a language model from scratch](https://github.com/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb)| Highlight all the steps to effectively train Transformer model on custom data | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb)| | [How to generate text](https://github.com/huggingface/blog/blob/main/notebooks/02_how_to_generate.ipynb)| How to use different decoding methods for language generation with transformers | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/02_how_to_generate.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/blog/blob/main/notebooks/02_how_to_generate.ipynb)| | [How to generate text (with constraints)](https://github.com/huggingface/blog/blob/main/notebooks/53_constrained_beam_search.ipynb)| How to guide language generation with user-provided constraints | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/53_constrained_beam_search.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/blog/blob/main/notebooks/53_constrained_beam_search.ipynb)| | [Reformer](https://github.com/huggingface/blog/blob/main/notebooks/03_reformer.ipynb)| How Reformer pushes the limits of language modeling | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/blog/blob/main/notebooks/03_reformer.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/patrickvonplaten/blog/blob/main/notebooks/03_reformer.ipynb)| #### Computer Vision[[pytorch-cv]] | Notebook | Description | | | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------:| | [How to fine-tune a model on image classification (Torchvision)](https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb) | Show how to preprocess the data using Torchvision and fine-tune any pretrained Vision model on Image Classification | [![Open in 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb)| | [How to fine-tune a model on image classification (Albumentations)](https://github.com/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb) | Show how to preprocess the data using Albumentations and fine-tune any pretrained Vision model on Image Classification | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb)| | [How to fine-tune a model on image classification (Kornia)](https://github.com/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb) | Show how to preprocess the data using Kornia and fine-tune any pretrained Vision model on Image Classification | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb)| | [How to perform zero-shot object detection with OWL-ViT](https://github.com/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb) | Show how to perform zero-shot object detection on images with text queries | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb)| | [How to fine-tune an image captioning model](https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_blip.ipynb) | Show how to fine-tune BLIP for image captioning on a custom dataset | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_captioning_blip.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_captioning_blip.ipynb)| | [How to build an image similarity system with Transformers](https://github.com/huggingface/notebooks/blob/main/examples/image_similarity.ipynb) | Show how to build an image similarity system | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_similarity.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_similarity.ipynb)| | [How to fine-tune a 
SegFormer model on semantic segmentation](https://github.com/huggingface/notebooks/blob/main/examples/semantic_segmentation.ipynb) | Show how to preprocess the data and fine-tune a pretrained SegFormer model on Semantic Segmentation | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/semantic_segmentation.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/semantic_segmentation.ipynb)| | [How to fine-tune a VideoMAE model on video classification](https://github.com/huggingface/notebooks/blob/main/examples/video_classification.ipynb) | Show how to preprocess the data and fine-tune a pretrained VideoMAE model on Video Classification | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/video_classification.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/video_classification.ipynb)| #### Audio[[pytorch-audio]] | Notebook | Description | | | |:----------|:-------------|:-------------|------:| | [How to fine-tune a speech recognition model in English](https://github.com/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb)| Show how to preprocess the data and fine-tune a pretrained Speech model on TIMIT | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb)| | [How to fine-tune a speech recognition model in any language](https://github.com/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb)| Show how to preprocess the data and fine-tune a multi-lingually pretrained speech model on Common Voice | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb)| | [How to fine-tune a model on audio classification](https://github.com/huggingface/notebooks/blob/main/examples/audio_classification.ipynb)| Show how to preprocess the data and fine-tune a pretrained Speech model on Keyword Spotting | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb)| #### Biological Sequences[[pytorch-bio]] | Notebook | Description | | | |:----------|:----------------------------------------------------------------------------------------|:-------------|------:| | [How to fine-tune a pre-trained protein 
model](https://github.com/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb) | See how to tokenize proteins and fine-tune a large pre-trained protein "language" model | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb) | | [How to generate protein folds](https://github.com/huggingface/notebooks/blob/main/examples/protein_folding.ipynb) | See how to go from protein sequence to a full protein model and PDB file | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_folding.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/protein_folding.ipynb) | | [How to fine-tune a Nucleotide Transformer model](https://github.com/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling.ipynb) | See how to tokenize DNA and fine-tune a large pre-trained DNA "language" model | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling.ipynb) | | [Fine-tune a Nucleotide Transformer model with LoRA](https://github.com/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling_with_peft.ipynb) | Train even larger DNA models in a memory-efficient way | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling_with_peft.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling_with_peft.ipynb) | #### Other modalities[[pytorch-other]] | Notebook | Description | | | |:----------|:----------------------------------------------------------------------------------------|:-------------|------:| | [Probabilistic Time Series Forecasting](https://github.com/huggingface/notebooks/blob/main/examples/time-series-transformers.ipynb) | See how to train Time Series Transformer on a custom dataset | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/time-series-transformers.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/time-series-transformers.ipynb) | #### Utility notebooks[[pytorch-utility]] | Notebook | Description | | | |:----------|:-------------|:-------------|------:| | [How to export model to ONNX](https://github.com/huggingface/notebooks/blob/main/examples/onnx-export.ipynb)| Highlight how to export and run 
inference workloads through ONNX | | [How to use Benchmarks](https://github.com/huggingface/notebooks/blob/main/examples/benchmark.ipynb)| How to benchmark models with transformers | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/benchmark.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/benchmark.ipynb)| ### TensorFlow Examples #### Natural Language Processing[[tensorflow-nlp]] | Notebook | Description | | | |:----------|:-------------|:-------------|------:| | [Train your tokenizer](https://github.com/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb) | How to train and use your very own tokenizer |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb)| | [Train your language model](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch-tf.ipynb) | How to easily start using transformers |[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch-tf.ipynb)| | [How to fine-tune a model on text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on any GLUE task. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb)| | [How to fine-tune a model on language modeling](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)| | [How to fine-tune a model on token classification](https://github.com/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS). 
| [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb)| | [How to fine-tune a model on question answering](https://github.com/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on SQUAD. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb)| | [How to fine-tune a model on multiple choice](https://github.com/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on SWAG. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb)| | [How to fine-tune a model on translation](https://github.com/huggingface/notebooks/blob/main/examples/translation-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on WMT. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb)| | [How to fine-tune a model on summarization](https://github.com/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on XSUM. 
| [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb)| #### Computer Vision[[tensorflow-cv]] | Notebook | Description | | | |:---------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------|:-------------|------:| | [How to fine-tune a model on image classification](https://github.com/huggingface/notebooks/blob/main/examples/image_classification-tf.ipynb) | Show how to preprocess the data and fine-tune any pretrained Vision model on Image Classification | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_classification-tf.ipynb)| | [How to fine-tune a SegFormer model on semantic segmentation](https://github.com/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb) | Show how to preprocess the data and fine-tune a pretrained SegFormer model on Semantic Segmentation | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb)| #### Biological Sequences[[tensorflow-bio]] | Notebook | Description | | | |:----------|:-------------|:-------------|------:| | [How to fine-tune a pre-trained protein model](https://github.com/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb) | See how to tokenize proteins and fine-tune a large pre-trained protein "language" model | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb) | #### Utility notebooks[[tensorflow-utility]] | Notebook | Description | | | |:----------|:-------------|:-------------|------:| | [How to train TF/Keras models on TPU](https://github.com/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) | See how to train at high speed on Google's TPU hardware | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) | ### Optimum notebooks ๐Ÿค— [Optimum](https://github.com/huggingface/optimum) is an extension of ๐Ÿค— Transformers, providing a 
set of performance optimization tools for training and running models on targeted hardware with maximum efficiency.

| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| [How to quantize a model with ONNX Runtime for text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification_quantization_ort.ipynb)| Show how to apply static and dynamic quantization on a model using [ONNX Runtime](https://github.com/microsoft/onnxruntime) for any GLUE task. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_ort.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_ort.ipynb)|
| [How to quantize a model with Intel Neural Compressor for text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification_quantization_inc.ipynb)| Show how to apply static, dynamic, and quantization-aware training quantization on a model using [Intel Neural Compressor (INC)](https://github.com/intel/neural-compressor) for any GLUE task. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_inc.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_inc.ipynb)|
| [How to fine-tune a model on text classification with ONNX Runtime](https://github.com/huggingface/notebooks/blob/main/examples/text_classification_ort.ipynb)| Show how to preprocess the data and fine-tune a model on any GLUE task using [ONNX Runtime](https://github.com/microsoft/onnxruntime). | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_ort.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification_ort.ipynb)|
| [How to fine-tune a model on summarization with ONNX Runtime](https://github.com/huggingface/notebooks/blob/main/examples/summarization_ort.ipynb)| Show how to preprocess the data and fine-tune a model on XSUM using [ONNX Runtime](https://github.com/microsoft/onnxruntime). | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization_ort.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/summarization_ort.ipynb)|

## Community notebooks

More notebooks developed by the community are available [here](https://hf.co/docs/transformers/community#community-notebooks).
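To give a flavor of what the Optimum quantization notebooks above walk through, here is a minimal sketch of dynamic quantization with ONNX Runtime. Treat it as an illustration rather than the notebooks' exact code: it assumes a recent `optimum[onnxruntime]` install, and details such as the `export=True` flag and the `avx512_vnni` preset may differ across versions.

```python
# Minimal sketch: dynamic quantization of a Transformers checkpoint with Optimum + ONNX Runtime.
# Assumes `pip install optimum[onnxruntime]`; exact kwargs may vary by optimum version.
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Export a fine-tuned checkpoint to ONNX on the fly
model = ORTModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english", export=True
)

# Dynamic (weights-only) quantization configuration for AVX512-VNNI CPUs
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)

quantizer = ORTQuantizer.from_pretrained(model)
quantizer.quantize(save_dir="distilbert-sst2-onnx-quantized", quantization_config=qconfig)
```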
0
hf_public_repos/transformers
hf_public_repos/transformers/docs/README.md
<!---
Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Generating the documentation

To generate the documentation, you first have to build it. Several packages are necessary to build the docs; you can install them with the following command, at the root of the code repository:

```bash
pip install -e ".[docs]"
```

Then you need to install our special tool that builds the documentation:

```bash
pip install git+https://github.com/huggingface/doc-builder
```

---
**NOTE**

You only need to generate the documentation to inspect it locally (for instance, if you're planning changes and want to check how they look before committing). You don't have to commit the built documentation.

---

## Building the documentation

Once you have set up the `doc-builder` and the additional packages, you can generate the documentation by typing the following command:

```bash
doc-builder build transformers docs/source/en/ --build_dir ~/tmp/test-build
```

You can adapt the `--build_dir` to set any temporary folder you prefer. This command will create it and generate the MDX files that will be rendered as the documentation on the main website. You can inspect them in your favorite Markdown editor.

## Previewing the documentation

To preview the docs, first install the `watchdog` module with:

```bash
pip install watchdog
```

Then run the following command:

```bash
doc-builder preview {package_name} {path_to_docs}
```

For example:

```bash
doc-builder preview transformers docs/source/en/
```

The docs will be viewable at [http://localhost:3000](http://localhost:3000). You can also preview the docs once you have opened a PR: a bot will add a comment with a link to the documentation built with your changes.

---
**NOTE**

The `preview` command only works with existing doc files. When you add a completely new file, you need to update `_toctree.yml` and restart the `preview` command (`ctrl-c` to stop it, then call `doc-builder preview ...` again).

---

## Adding a new element to the navigation bar

Accepted files are Markdown (.md). Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml) file.

## Renaming section headers and moving sections

It helps to keep the old links working when renaming a section header and/or moving sections from one document to another. This is because the old links are likely to be used in issues, forums, and social media, and it makes for a much better user experience if readers arriving months later can still easily navigate to the originally intended information.

Therefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.
So if you renamed a section from "Section A" to "Section B", you can add at the end of the file:

```
Sections that were moved:

[ <a href="#section-b">Section A</a><a id="section-a"></a> ]
```

and of course, if you moved it to another file:

```
Sections that were moved:

[ <a href="../new-file#section-b">Section A</a><a id="section-a"></a> ]
```

Use the relative style to link to the new file so that the versioned docs continue to work.

For an example of a rich moved-sections set, please see the very end of [the Trainer doc](https://github.com/huggingface/transformers/blob/main/docs/source/en/main_classes/trainer.md).

## Writing Documentation - Specification

The `huggingface/transformers` documentation follows the [Google documentation](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) style for docstrings, although we can write them directly in Markdown.

### Adding a new tutorial

Adding a new tutorial or section is done in two steps:

- Add a new Markdown (.md) file under `./source`.
- Link that file in `./source/_toctree.yml` on the correct toc-tree.

Make sure to put your new file under the proper section. It's unlikely to go in the first section (*Get Started*), so depending on the intended targets (beginners, more advanced users, or researchers) it should go in sections two, three, or four.

### Translating

When translating, refer to the guide at [./TRANSLATING.md](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md).

### Adding a new model

When adding a new model:

- Create a file `xxx.md` under `./source/model_doc` (don't hesitate to copy an existing file as a template).
- Link that file in `./source/_toctree.yml`.
- Write a short overview of the model:
  - Overview with paper & authors
  - Paper abstract
  - Tips and tricks and how to use it best
- Add the classes that should be linked in the model. This generally includes the configuration, the tokenizer, and every model of that class (the base model alongside models with additional heads), in both PyTorch and TensorFlow. The order is generally:
  - Configuration
  - Tokenizer
  - PyTorch base model
  - PyTorch head models
  - TensorFlow base model
  - TensorFlow head models
  - Flax base model
  - Flax head models

These classes should be added using our Markdown syntax. Usually as follows:

```
## XXXConfig

[[autodoc]] XXXConfig
```

This will include every public method of the configuration that is documented. If for some reason you wish for a method not to be displayed in the documentation, you can do so by specifying which methods should be in the docs:

```
## XXXTokenizer

[[autodoc]] XXXTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary
```

If you just want to add a method that is not documented (for instance magic methods like `__call__` are not documented by default), you can put the list of methods to add in a list that contains `all`:

```
## XXXTokenizer

[[autodoc]] XXXTokenizer
    - all
    - __call__
```

### Writing source documentation

Values that should be put in `code` should be surrounded by backticks: \`like so\`. Note that argument names and objects like True, None, or any strings should usually be put in `code`.

When mentioning a class, function, or method, it is recommended to use our syntax for internal links so that our tool adds a link to its documentation with this syntax: \[\`XXXClass\`\] or \[\`function\`\].
This requires the class or function to be in the main package.

If you want to create a link to some internal class or function, you need to provide its path. For instance: \[\`utils.ModelOutput\`\]. This will be converted into a link with `utils.ModelOutput` in the description. To get rid of the path and only keep the name of the object you are linking to in the description, add a ~: \[\`~utils.ModelOutput\`\] will generate a link with `ModelOutput` in the description.

The same works for methods, so you can use either \[\`XXXClass.method\`\] or \[\`~XXXClass.method\`\].

#### Defining arguments in a method

Arguments should be defined with the `Args:` (or `Arguments:` or `Parameters:`) prefix, followed by a line return and an indentation. The argument should be followed by its type, with its shape if it is a tensor, a colon, and its description:

```
    Args:
        n_layers (`int`): The number of layers of the model.
```

If the description is too long to fit in one line, another indentation is necessary before writing the description after the argument.

Here's an example showcasing everything so far:

```
    Args:
        input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
            Indices of input sequence tokens in the vocabulary.

            Indices can be obtained using [`AlbertTokenizer`]. See [`~PreTrainedTokenizer.encode`] and
            [`~PreTrainedTokenizer.__call__`] for details.

            [What are input IDs?](../glossary#input-ids)
```

For optional arguments or arguments with defaults, we use the following syntax: imagine we have a function with the following signature:

```
def my_function(x: str = None, a: float = 1):
```

then its documentation should look like this:

```
    Args:
        x (`str`, *optional*):
            This argument controls ...
        a (`float`, *optional*, defaults to 1):
            This argument is used to ...
```

Note that we always omit the "defaults to \`None\`" when None is the default for any argument. Also note that even if the first line describing your argument type and its default gets long, you can't break it into several lines. You can, however, write as many lines as you want in the indented description (see the example above with `input_ids`).

#### Writing a multi-line code block

Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks, as usual in Markdown:

````
```
# first line of code
# second line
# etc
```
````

We follow the [doctest](https://docs.python.org/3/library/doctest.html) syntax for the examples, to automatically test the results and stay consistent with the library.

#### Writing a return block

The return block should be introduced with the `Returns:` prefix, followed by a line return and an indentation. The first line should be the type of the return, followed by a line return. There is no need to indent further for the elements building the return.

Here's an example of a single value return:

```
    Returns:
        `List[int]`: A list of integers in the range [0, 1] --- 1 for a special token, 0 for a sequence token.
```

Here's an example of a tuple return, comprising several objects:

```
    Returns:
        `tuple(torch.FloatTensor)` comprising various elements depending on the configuration ([`BertConfig`]) and inputs:
        - **loss** (*optional*, returned when `masked_lm_labels` is provided) `torch.FloatTensor` of shape `(1,)` --
          Total loss is the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
        - **prediction_scores** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) --
          Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
```

#### Adding an image

Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh it down are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted `dataset`, like the ones hosted on [`hf-internal-testing`](https://huggingface.co/hf-internal-testing), in which to place these files and reference them by URL. We recommend putting them in the following dataset: [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images). If you are an external contributor, feel free to add the images to your PR and ask a Hugging Face member to migrate your images to this dataset.

## Styling the docstring

We have an automatic script running with the `make style` command that will make sure that:
- the docstrings fully take advantage of the line width
- all code examples are formatted using black, like the code of the Transformers library

This script may have some weird failures if you made a syntax mistake or if you uncover a bug. Therefore, it's recommended to commit your changes before running `make style`, so you can revert the changes done by that script easily.

# Testing documentation examples

Good documentation often comes with an example of how a specific function or class should be used. Each model class should contain at least one example showcasing how to use this model class in inference. *E.g.* the class [Wav2Vec2ForCTC](https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2ForCTC) includes an example of how to transcribe speech to text in the [docstring of its forward function](https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2ForCTC.forward).

## Writing documentation examples

The syntax for Example docstrings can look as follows:

````
Example:

```python
>>> from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)

>>> # transcribe speech
>>> transcription = processor.batch_decode(predicted_ids)
>>> transcription[0]
'MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL'
```
````

The docstring should give a minimal, clear example of how the respective model is to be used in inference, and also include the expected (ideally sensible) output. Often, readers will try out the example before even going through the function or class definitions. Therefore, it is of utmost importance that the example works as expected.

## Docstring testing

To do so, each example should be included in the doctests. We use pytest's [doctest integration](https://docs.pytest.org/doctest.html) to verify that all of our examples run correctly.
For Transformers, the doctests are run on a daily basis via GitHub Actions, as can be seen [here](https://github.com/huggingface/transformers/actions/workflows/doctests.yml).

### For Python files

Run all the tests in the docstrings of a given file with the following command; here is how we test the modeling file of Wav2Vec2, for instance:

```bash
pytest --doctest-modules src/transformers/models/wav2vec2/modeling_wav2vec2.py -sv --doctest-continue-on-failure
```

If you want to isolate a specific docstring, just add `::` after the file name and then type the whole path of the function/class/method whose docstring you want to test. For instance, here is how to test only the forward method of `Wav2Vec2ForCTC`:

```bash
pytest --doctest-modules src/transformers/models/wav2vec2/modeling_wav2vec2.py::transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC.forward -sv --doctest-continue-on-failure
```

### For Markdown files

You can test a given file locally with this command (here testing the quicktour):

```bash
pytest --doctest-modules docs/source/quicktour.md -sv --doctest-continue-on-failure --doctest-glob="*.md"
```

### Writing doctests

Here are a few tips to help you debug the doctests and make them pass:

- The outputs of the code need to match the expected output **exactly**, so make sure you have the same outputs. In particular, doctest will see a difference between single quotes and double quotes, or a missing parenthesis. The only exceptions to that rule are:
  * whitespace: one whitespace character (space, tabulation, new line) is equivalent to any number of whitespace characters, so you can add new lines where there are spaces to make your output more readable.
  * numerical values: you should never put more than 4 or 5 digits in expected results, as different setups or library versions might give you slightly different results. `doctest` is configured to ignore any difference lower than the precision to which you wrote (so 1e-4 if you write 4 digits).
- Don't leave a block of code that takes very long to execute. If you can't make it fast, either don't use the doctest syntax on it (so that it's ignored), or, if you want to use the doctest syntax to show the results, add a comment `# doctest: +SKIP` at the end of the lines that are too slow to execute.
- Each line of code that produces a result needs to have that result written below it. You can ignore an output if you don't want to show it in your code example by adding a comment ` # doctest: +IGNORE_RESULT` at the end of the line of code producing it.
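To make these rules concrete, here is a small, self-contained sketch of a doctest-style example using the directives above. The seeded values come from PyTorch; the long-running call at the end is hypothetical and only there to show `# doctest: +SKIP`:

````
```python
>>> import torch

>>> torch.manual_seed(0)  # doctest: +IGNORE_RESULT
>>> values = torch.rand(3).tolist()
>>> [round(v, 4) for v in values]  # keep 4-5 digits so small numerical drift still passes
[0.4963, 0.7682, 0.0885]
>>> train_for_three_days()  # hypothetical long-running call  # doctest: +SKIP
```
````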
0
hf_public_repos/transformers
hf_public_repos/transformers/docs/TRANSLATING.md
### Translating the Transformers documentation into your language

As part of our mission to democratize machine learning, we'd love to make the Transformers library available in many more languages! Follow the steps below if you want to help translate the documentation into your language 🙏.

**🗞️ Open an issue**

To get started, navigate to the [Issues](https://github.com/huggingface/transformers/issues) page of this repo and check if anyone else has opened an issue for your language. If not, open a new issue by selecting the "Translation template" from the "New issue" button.

Once an issue exists, post a comment to indicate which chapters you'd like to work on, and we'll add your name to the list.

**🍴 Fork the repository**

First, you'll need to [fork the Transformers repo](https://docs.github.com/en/get-started/quickstart/fork-a-repo). You can do this by clicking on the **Fork** button on the top-right corner of this repo's page.

Once you've forked the repo, you'll want to get the files on your local machine for editing. You can do that by cloning the fork with Git as follows:

```bash
git clone https://github.com/YOUR-USERNAME/transformers.git
```

**📋 Copy-paste the English version with a new language code**

The documentation files all live in one top-level directory:

- [`docs/source`](https://github.com/huggingface/transformers/tree/main/docs/source): All the documentation materials are organized here by language.

You'll only need to copy the files in the [`docs/source/en`](https://github.com/huggingface/transformers/tree/main/docs/source/en) directory, so first navigate to your fork of the repo and run the following:

```bash
cd ~/path/to/transformers/docs
cp -r source/en source/LANG-ID
```

Here, `LANG-ID` should be one of the ISO 639-1 or ISO 639-2 language codes -- see [here](https://www.loc.gov/standards/iso639-2/php/code_list.php) for a handy table.

**✍️ Start translating**

Now comes the fun part: translating the text! The first thing we recommend is translating the part of the `_toctree.yml` file that corresponds to your doc chapter. This file is used to render the table of contents on the website.

> 🙋 If the `_toctree.yml` file doesn't yet exist for your language, you can create one by copy-pasting from the English version and deleting the sections unrelated to your chapter. Just make sure it exists in the `docs/source/LANG-ID/` directory!

The fields you should add are `local` (with the name of the file containing the translation; e.g. `autoclass_tutorial`) and `title` (with the title of the doc in your language; e.g. `Load pretrained instances with an AutoClass`) -- as a reference, here is the `_toctree.yml` for [English](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml):

```yaml
- sections:
  - local: pipeline_tutorial # Do not change this! Use the same name for your .md file
    title: Pipelines for inference # Translate this!
  ...
  title: Tutorials # Translate this!
```

Once you have translated the `_toctree.yml` file, you can start translating the [MDX](https://mdxjs.com/) files associated with your docs chapter.

> 🙋 If you'd like others to help you with the translation, you should [open an issue](https://github.com/huggingface/transformers/issues) and tag @stevhliu and @MKhalusova.
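The guide stops at translating the files; when you are ready to contribute them back, a typical Git workflow might look like the following sketch. The branch name and commit message are placeholders, not a project requirement:

```bash
# From the root of your fork (placeholder names throughout)
git checkout -b add-LANG-ID-translation
git add docs/source/LANG-ID
git commit -m "Add LANG-ID translation"
git push origin add-LANG-ID-translation
# then open a pull request from your fork on GitHub
```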
0
hf_public_repos/transformers/docs
hf_public_repos/transformers/docs/source/_config.py
# docstyle-ignore
INSTALL_CONTENT = """
# Transformers installation
! pip install transformers datasets evaluate
# To install from source instead of the last release, comment the command above and uncomment the following one.
# ! pip install git+https://github.com/huggingface/transformers.git
"""

notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}]
black_avoid_patterns = {
    "{processor_class}": "FakeProcessorClass",
    "{model_class}": "FakeModelClass",
    "{object_class}": "FakeObjectClass",
}
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/preprocessing.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Preprocess

[[open-in-colab]]

Before you can use your data in a model, the data needs to be processed into a format the model accepts. A model does not understand raw text, images, or audio. These inputs need to be converted into numbers and assembled into tensors. In this guide, you will:

* Preprocess text data with a tokenizer.
* Preprocess image or audio data with a feature extractor.
* Preprocess data for a multimodal task with a processor.

## NLP

<Youtube id="Yffk5aydLzg"/>

The main tool for processing text data is a [tokenizer](main_classes/tokenizer). A tokenizer first splits text into *tokens* according to a set of rules. The tokens are converted into numbers, which are used to build tensors as input for a model. Any additional inputs a model requires are also added by the tokenizer.

<Tip>

If you plan on using a pretrained model, it's important to use the associated pretrained tokenizer. This ensures the text is split the same way as the pretraining corpus and uses the same corresponding token-to-index mapping (usually referred to as the *vocab*) as during pretraining.

</Tip>

To get started quickly, load a pretrained tokenizer with the [`AutoTokenizer`] class. This downloads the *vocab* used when a model was pretrained.

### Tokenize

Load a pretrained tokenizer with [`AutoTokenizer.from_pretrained`]:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
```

Then pass your sentence to the tokenizer:

```py
>>> encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.")
>>> print(encoded_input)
{'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102],
 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```

The tokenizer returns a dictionary with three important items:

* [input_ids](glossary#input-ids) are the indices corresponding to each token in the sentence.
* [attention_mask](glossary#attention-mask) indicates whether a token should be attended to or not.
* [token_type_ids](glossary#token-type-ids) identifies which sequence a token belongs to when there is more than one sequence.

You can decode the `input_ids` to return the original input:

```py
>>> tokenizer.decode(encoded_input["input_ids"])
'[CLS] Do not meddle in the affairs of wizards, for they are subtle and quick to anger. [SEP]'
```

As you can see, the tokenizer added two special tokens, `CLS` and `SEP` (classifier and separator), to the sentence. Not all models need special tokens, but if they do, the tokenizer automatically adds them for you.

If you want to process several sentences, pass them to the tokenizer as a list:

```py
>>> batch_sentences = [
...     "But what about second breakfast?",
...     "Don't think he knows about second breakfast, Pip.",
...     "What about elevensies?",
... ]
>>> encoded_inputs = tokenizer(batch_sentences)
>>> print(encoded_inputs)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102],
               [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
               [101, 1327, 1164, 5450, 23434, 136, 102]],
 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0]],
 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1]]}
```

### Pad

This brings us to an important topic. When you process a batch of sentences, they aren't always the same length. This is a problem because tensors, the model's input, need to have a uniform shape. Padding is a strategy for ensuring tensors are rectangular by adding a special *padding token* to sentences with fewer tokens.

Set the `padding` parameter to `True` to pad the shorter sequences in the batch so they match the longest sequence:

```py
>>> batch_sentences = [
...     "But what about second breakfast?",
...     "Don't think he knows about second breakfast, Pip.",
...     "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True)
>>> print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
               [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
               [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
```

Notice that the tokenizer padded the first and third sentences with `0`s because they are shorter!

### Truncation

At the other end of the spectrum, sometimes a sequence may be too long for a model. In this case, you need to truncate the sequence to a shorter length.

Set the `truncation` parameter to `True` to truncate a sequence to the maximum length accepted by the model:

```py
>>> batch_sentences = [
...     "But what about second breakfast?",
...     "Don't think he knows about second breakfast, Pip.",
...     "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True)
>>> print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
               [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
               [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
```

### Build tensors

Finally, you want the tokenizer to return the actual tensors that will be fed to the model.

Set the `return_tensors` parameter to either `pt` for PyTorch or `tf` for TensorFlow:

<frameworkcontent>
<pt>
```py
>>> batch_sentences = [
...     "But what about second breakfast?",
...     "Don't think he knows about second breakfast, Pip.",
...     "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
>>> print(encoded_input)
{'input_ids': tensor([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
                      [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
                      [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]]),
 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                           [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                           [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]),
 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                           [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                           [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]])}
```
</pt>
<tf>
```py
>>> batch_sentences = [
...     "But what about second breakfast?",
...     "Don't think he knows about second breakfast, Pip.",
...     "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf")
>>> print(encoded_input)
{'input_ids': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
       [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
       [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>,
 'token_type_ids': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>,
 'attention_mask': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
       [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>}
```
</tf>
</frameworkcontent>
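Once you have tensors, they can usually be passed straight to a model. As a minimal PyTorch sketch (the sequence-classification head here is an illustrative assumption, not part of this tutorial; its weights would be freshly initialized):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
# Hypothetical task head for illustration; pick the class that matches your task.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")

encoded_input = tokenizer(
    ["But what about second breakfast?", "What about elevensies?"],
    padding=True,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**encoded_input)
print(outputs.logits.shape)  # (batch_size, num_labels)
```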
## Audio

Audio inputs are preprocessed differently than text inputs, but the end goal remains the same: create numerical sequences the model can understand. A [feature extractor](main_classes/feature_extractor) is designed for the express purpose of extracting features from raw image or audio data and converting them into tensors.

Before you begin, install 🤗 Datasets to load an audio dataset to experiment with:

```bash
pip install datasets
```

Load the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset (see the 🤗 [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for more details on how to load a dataset):

```py
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```

Access the first element of the `audio` column to take a look at the input. Calling the `audio` column automatically loads and resamples the audio file:

```py
>>> dataset[0]["audio"]
{'array': array([ 0.        ,  0.00024414, -0.00024414, ..., -0.00024414,
         0.        ,  0.        ], dtype=float32),
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
 'sampling_rate': 8000}
```

This returns three items:

* `array` is the speech signal loaded (and potentially resampled) as a 1D array.
* `path` points to the location of the audio file.
* `sampling_rate` refers to how many data points in the speech signal are measured per second.

### Resample

For this tutorial, you will use the [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) model. As you can see from the model card, the Wav2Vec2 model is pretrained on speech audio sampled at 16kHz. It is important that your audio data's sampling rate matches the sampling rate of the dataset used to pretrain the model. If your data's sampling rate isn't the same, you need to resample your audio data.

For example, the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset has a sampling rate of 8kHz. To use the Wav2Vec2 model with this dataset, you need to upsample it to 16kHz:

```py
>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
>>> dataset[0]["audio"]
{'array': array([ 0.        ,  0.00024414, -0.00024414, ..., -0.00024414,
         0.        ,  0.        ], dtype=float32),
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
 'sampling_rate': 8000}
```

1. Use 🤗 Datasets' [`~datasets.Dataset.cast_column`] method to upsample the sampling rate to 16kHz:

```py
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
```

2. Load the audio file:

```py
>>> dataset[0]["audio"]
{'array': array([ 2.3443763e-05,  2.1729663e-04,  2.2145823e-04, ...,
         3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32),
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
 'sampling_rate': 16000}
```

As you can see, the `sampling_rate` is now 16kHz!

### Feature extractor

The next step is to load a feature extractor to normalize and pad the input. When padding text data, a `0` is added for shorter sequences. The same idea applies to audio data: the audio feature extractor adds a `0` (interpreted as silence) to `array`.

Load the feature extractor with [`AutoFeatureExtractor.from_pretrained`]:

```py
>>> from transformers import AutoFeatureExtractor

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
```

Pass the audio `array` to the feature extractor. We also recommend adding the `sampling_rate` argument in the feature extractor in order to better debug any silent errors that may occur:

```py
>>> audio_input = [dataset[0]["audio"]["array"]]
>>> feature_extractor(audio_input, sampling_rate=16000)
{'input_values': [array([ 3.8106556e-04,  2.7506407e-03,  2.8015103e-03, ...,
        5.6335266e-04,  4.6588284e-06, -1.7142107e-04], dtype=float32)]}
```

### Pad and truncate

Just like the tokenizer, you can apply padding or truncation to handle variable sequences in a batch. Take a look at the sequence length of these two audio samples:

```py
>>> dataset[0]["audio"]["array"].shape
(173398,)

>>> dataset[1]["audio"]["array"].shape
(106496,)
```

As you can see, the first sample has a longer sequence than the second sample. Let's create a function that will preprocess the dataset. Specify a maximum sample length, and the feature extractor will either pad or truncate the sequences to match it:

```py
>>> def preprocess_function(examples):
...     audio_arrays = [x["array"] for x in examples["audio"]]
...     inputs = feature_extractor(
...         audio_arrays,
...         sampling_rate=16000,
...         padding=True,
...         max_length=100000,
...         truncation=True,
...     )
...     return inputs
```

Apply the function to the first few examples in the dataset:

```py
>>> processed_dataset = preprocess_function(dataset[:5])
```

Now take another look at the processed sample lengths:

```py
>>> processed_dataset["input_values"][0].shape
(100000,)

>>> processed_dataset["input_values"][1].shape
(100000,)
```

The lengths of the first two samples now match the maximum length you specified.

## Vision

A feature extractor is also used to process images for vision tasks. Once again, the goal is to convert the raw image into a batch of tensors as input.

Let's load the [food101](https://huggingface.co/datasets/food101) dataset for this tutorial. Use the 🤗 Datasets `split` parameter to only load a small sample from the training split, since the dataset is quite large:

```py
>>> from datasets import load_dataset

>>> dataset = load_dataset("food101", split="train[:100]")
```

Next, take a look at the image with the 🤗 Datasets [Image](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=image#datasets.Image) feature:

```py
>>> dataset[0]["image"]
```

![vision-preprocess-tutorial.png](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vision-preprocess-tutorial.png)

### Feature extractor

Load the image processor with [`AutoImageProcessor.from_pretrained`]:

```py
>>> from transformers import AutoImageProcessor

>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
```

### Data augmentation

For vision tasks, it is common to add some type of data augmentation to the images as part of preprocessing. You can add augmentations with any library you'd like, but in this tutorial you will use torchvision's [`transforms`](https://pytorch.org/vision/stable/transforms.html) module.

1. Normalize the image and use [`Compose`](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html) to chain some transforms together, [`RandomResizedCrop`](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html) and [`ColorJitter`](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html):

```py
>>> from torchvision.transforms import Compose, Normalize, RandomResizedCrop, ColorJitter, ToTensor

>>> normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std)
>>> _transforms = Compose(
...     [RandomResizedCrop(image_processor.size["height"]), ColorJitter(brightness=0.5, hue=0.5), ToTensor(), normalize]
... )
```

2. The model accepts [`pixel_values`](model_doc/visionencoderdecoder#transformers.VisionEncoderDecoderModel.forward.pixel_values) as its input. This value is generated by the image processor. Create a function that generates `pixel_values` from the transforms:

```py
>>> def transforms(examples):
...     examples["pixel_values"] = [_transforms(image.convert("RGB")) for image in examples["image"]]
...     return examples
```

3. Then use 🤗 Datasets' [`set_transform`](https://huggingface.co/docs/datasets/process#format-transform) to apply the transforms on the fly:

```py
>>> dataset.set_transform(transforms)
```

4. Now when you access the example, you'll notice the image processor has added the model input `pixel_values`:

```py
>>> dataset[0]
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=384x512 at 0x7F1A7B0630D0>,
 'label': 6,
 'pixel_values': tensor([[[ 0.0353,  0.0745,  0.1216,  ..., -0.9922, -0.9922, -0.9922],
          [-0.0196,  0.0667,  0.1294,  ..., -0.9765, -0.9843, -0.9922],
          [ 0.0196,  0.0824,  0.1137,  ..., -0.9765, -0.9686, -0.8667],
          ...,
          [ 0.0275,  0.0745,  0.0510,  ..., -0.1137, -0.1216, -0.0824],
          [ 0.0667,  0.0824,  0.0667,  ..., -0.0588, -0.0745, -0.0980],
          [ 0.0353,  0.0353,  0.0431,  ..., -0.0039, -0.0039, -0.0588]],

         [[ 0.2078,  0.2471,  0.2863,  ..., -0.9451, -0.9373, -0.9451],
          [ 0.1608,  0.2471,  0.3098,  ..., -0.9373, -0.9451, -0.9373],
          [ 0.2078,  0.2706,  0.3020,  ..., -0.9608, -0.9373, -0.8275],
          ...,
          [-0.0353,  0.0118, -0.0039,  ..., -0.2392, -0.2471, -0.2078],
          [ 0.0196,  0.0353,  0.0196,  ..., -0.1843, -0.2000, -0.2235],
          [-0.0118, -0.0039, -0.0039,  ..., -0.0980, -0.0980, -0.1529]],

         [[ 0.3961,  0.4431,  0.4980,  ..., -0.9216, -0.9137, -0.9216],
          [ 0.3569,  0.4510,  0.5216,  ..., -0.9059, -0.9137, -0.9137],
          [ 0.4118,  0.4745,  0.5216,  ..., -0.9137, -0.8902, -0.7804],
          ...,
          [-0.2314, -0.1922, -0.2078,  ..., -0.4196, -0.4275, -0.3882],
          [-0.1843, -0.1686, -0.2000,  ..., -0.3647, -0.3804, -0.4039],
          [-0.1922, -0.1922, -0.1922,  ..., -0.2941, -0.2863, -0.3412]]])}
```

Here is what the image looks like after preprocessing. Just as you'd expect from the applied transforms, the image has been randomly cropped and its color properties are different.

```py
>>> import numpy as np
>>> import matplotlib.pyplot as plt

>>> img = dataset[0]["pixel_values"]
>>> plt.imshow(img.permute(1, 2, 0))
```

![preprocessed_image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/preprocessed_image.png)

## Multimodal

For multimodal tasks, you will use a combination of everything you've learned so far and apply your skills to an automatic speech recognition (ASR) task. This means you will need a:

* Feature extractor to preprocess the audio data.
* Tokenizer to process the text.

Let's return to the [LJ Speech](https://huggingface.co/datasets/lj_speech) dataset:

```py
>>> from datasets import load_dataset

>>> lj_speech = load_dataset("lj_speech", split="train")
```

Since you are mainly interested in the `audio` and `text` columns, remove the other columns:

```py
>>> lj_speech = lj_speech.map(remove_columns=["file", "id", "normalized_text"])
```

Now take a look at the `audio` and `text` columns:

```py
>>> lj_speech[0]["audio"]
{'array': array([-7.3242188e-04, -7.6293945e-04, -6.4086914e-04, ...,
         7.3242188e-04,  2.1362305e-04,  6.1035156e-05], dtype=float32),
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/917ece08c95cf0c4115e45294e3cd0dee724a1165b7fc11798369308a465bd26/LJSpeech-1.1/wavs/LJ001-0001.wav',
 'sampling_rate': 22050}

>>> lj_speech[0]["text"]
'Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition'
```

Remember from the earlier section on processing audio data: you should always [resample](preprocessing#audio) your audio data's sampling rate to match the sampling rate of the dataset used to pretrain a model:

```py
>>> lj_speech = lj_speech.cast_column("audio", Audio(sampling_rate=16_000))
```

### Processor

A processor combines a feature extractor and a tokenizer. Load a processor with [`AutoProcessor.from_pretrained`]:

```py
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
```

1. Create a function to process the audio data to `input_values` and tokenize the text to `labels`. These are your inputs to the model:

```py
>>> def prepare_dataset(example):
...     audio = example["audio"]
...     example.update(processor(audio=audio["array"], text=example["text"], sampling_rate=16000))
...     return example
```

2. Apply the `prepare_dataset` function to a sample:

```py
>>> prepare_dataset(lj_speech[0])
```

Notice that the processor has added `input_values` and `labels`. The sampling rate has also been correctly downsampled to 16kHz.

Great, you should now be able to preprocess data for any modality and even combine different modalities! In the next tutorial, learn how to fine-tune a model on your freshly preprocessed data.
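As a quick sanity check (a sketch, not part of the original tutorial), you can decode the `labels` the processor produced back into text. Assuming the tokenizer half of the processor encoded the transcription, decoding should roughly round-trip it:

```python
processed = prepare_dataset(lj_speech[0])
# Decoding the label ids should approximately recover the transcription
# (Wav2Vec2's tokenizer normalizes the text, e.g. to upper case).
print(processor.tokenizer.decode(processed["labels"], skip_special_tokens=True))
```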
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/model_sharing.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Share a model

The last two tutorials showed how you can fine-tune a model with PyTorch, Keras, and 🤗 Accelerate for distributed setups. The next step is to share your model with the community! At Hugging Face, we believe in openly sharing knowledge and resources to democratize artificial intelligence for everyone. We encourage you to share your model with the community to help others save time and resources.

In this tutorial, you will learn two methods for sharing a trained or fine-tuned model on the [Model Hub](https://huggingface.co/models):

- Programmatically push your files to the Hub.
- Drag-and-drop your files to the Hub with the web interface.

<iframe width="560" height="315" src="https://www.youtube.com/embed/XvSGPZFEjDY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

<Tip>

To share a model with the community, you need an account on [huggingface.co](https://huggingface.co/join). You can also join an existing organization or create a new one.

</Tip>

## Repository features

Each repository on the Model Hub behaves like a typical GitHub repository. Our repositories offer versioning, commit history, and the ability to visualize differences.

The Model Hub's built-in versioning is based on Git and [git-lfs](https://git-lfs.github.com/). In other words, you can treat one model as one repository, enabling better access control and scalability. Version control allows *revisions*, a method for pinning a specific version of a model with a commit hash, tag, or branch.

As a result, you can load a specific model version with the `revision` parameter:

```py
>>> model = AutoModel.from_pretrained(
...     "julien-c/EsperBERTo-small", revision="v2.0.1"  # tag name, or branch name, or commit hash
... )
```

Files are also easily edited in a repository, and you can view the commit history as well as the differences:

![vis_diff](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vis_diff.png)

## Setup

Before sharing a model to the Hub, you will need your Hugging Face credentials. If you have access to a terminal, run the following command in the virtual environment where 🤗 Transformers is installed.
This will store your access token in your Hugging Face cache folder (`~/.cache/` by default):

```bash
huggingface-cli login
```

If you are using a notebook like Jupyter or Colaboratory, make sure you have the [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) library installed. This library allows you to interact programmatically with the Hub.

```bash
pip install huggingface_hub
```

Then use `notebook_login` to sign in to the Hub, and follow the link [here](https://huggingface.co/settings/token) to generate a token to log in with:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Convert a model for all frameworks

To ensure your model can be used by someone working with a different framework, we recommend you convert and upload your model with both PyTorch and TensorFlow checkpoints. While users can still load your model from a different framework if you skip this step, it will be slower because 🤗 Transformers will need to convert the checkpoint on-the-fly.

Converting a checkpoint for another framework is easy. Make sure you have PyTorch and TensorFlow installed (see [here](installation) for installation instructions), and then find the specific model for your task in the other framework.

<frameworkcontent>
<pt>
Specify `from_tf=True` to convert a checkpoint from TensorFlow to PyTorch:

```py
>>> pt_model = DistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_tf=True)
>>> pt_model.save_pretrained("path/to/awesome-name-you-picked")
```
</pt>
<tf>
Specify `from_pt=True` to convert a checkpoint from PyTorch to TensorFlow:

```py
>>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_pt=True)
```

Then you can save your new TensorFlow model with its new checkpoint:

```py
>>> tf_model.save_pretrained("path/to/awesome-name-you-picked")
```
</tf>
<jax>
If a model is available in Flax, you can also convert a checkpoint from PyTorch to Flax:

```py
>>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained(
...     "path/to/awesome-name-you-picked", from_pt=True
... )
```
</jax>
</frameworkcontent>

## Push a model during training

<frameworkcontent>
<pt>
<Youtube id="Z1-XMy-GNLQ"/>

Sharing a model to the Hub is as simple as adding an extra parameter or callback. Remember from the [fine-tuning tutorial](training): the [`TrainingArguments`] class is where you specify hyperparameters and additional training options. One of these training options includes the ability to push a model directly to the Hub. Set `push_to_hub=True` in your [`TrainingArguments`]:

```py
>>> training_args = TrainingArguments(output_dir="my-awesome-model", push_to_hub=True)
```

Pass your training arguments as usual to [`Trainer`]:

```py
>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=small_train_dataset,
...     eval_dataset=small_eval_dataset,
...     compute_metrics=compute_metrics,
... )
```

After you fine-tune your model, call [`~transformers.Trainer.push_to_hub`] on [`Trainer`] to push the trained model to the Hub. 🤗 Transformers will even automatically add training hyperparameters, training results and framework versions to your model card!

```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
Share a model to the Hub with [`PushToHubCallback`]. In the [`PushToHubCallback`] function, add:

- An output directory for your model.
- A tokenizer.
- The `hub_model_id`, which is your Hub username and model name.

```py
>>> from transformers import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="./your_model_save_path", tokenizer=tokenizer, hub_model_id="your-username/my-awesome-model"
... )
```

Add the callback to [`fit`](https://keras.io/api/models/model_training_apis/), and 🤗 Transformers will push the trained model to the Hub:

```py
>>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback)
```
</tf>
</frameworkcontent>

## Use the `push_to_hub` function

You can also call `push_to_hub` directly on your model to upload it to the Hub. Specify your model name in `push_to_hub`:

```py
>>> pt_model.push_to_hub("my-awesome-model")
```

This creates a repository under your username with the model name `my-awesome-model`. Users can now load your model with the `from_pretrained` function:

```py
>>> from transformers import AutoModel

>>> model = AutoModel.from_pretrained("your_username/my-awesome-model")
```

If you belong to an organization and want to push your model under the organization name instead, just add it to the `repo_id`:

```py
>>> pt_model.push_to_hub("my-awesome-org/my-awesome-model")
```

The `push_to_hub` function can also be used to add other files to a model repository. For example, add a tokenizer to a model repository:

```py
>>> tokenizer.push_to_hub("my-awesome-model")
```

Or perhaps you'd like to add the TensorFlow version of your fine-tuned PyTorch model:

```py
>>> tf_model.push_to_hub("my-awesome-model")
```

Now when you navigate to your Hugging Face profile, you should see your newly created model repository. Clicking on the **Files** tab will display all the files you've uploaded to the repository.

For more details on how to create and upload files to a repository, refer to the Hub documentation [here](https://huggingface.co/docs/hub/how-to-upstream).

## Upload with the web interface

Users who prefer a no-code approach can upload a model through the Hub's web interface. Visit [huggingface.co/new](https://huggingface.co/new) to create a new repository:

![new_model_repo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/new_model_repo.png)

From here, add some information about your model:

- Select the **owner** of the repository. This can be yourself or any of the organizations you belong to.
- Pick a name for your model, which will also be the repository name.
- Choose whether your model is public or private.
- Specify the license usage for your model.
Now click on the **Files** tab and click on the **Add file** button to upload a new file to your repository. Then drag-and-drop a file to upload and add a commit message.

![upload_file](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/upload_file.png)

## Add a model card

To make sure users understand your model's capabilities, limitations, potential biases and ethical considerations, please add a model card to your repository. The model card is defined in the `README.md` file. You can add a model card by:

* Manually creating and uploading a `README.md` file.
* Clicking on the **Edit model card** button in your model repository.

Take a look at the DistilBert [model card](https://huggingface.co/distilbert-base-uncased) for a good example of the type of information a model card should include. For more details about other options you can configure in the `README.md` file, such as a model's carbon footprint or widget examples, refer to the documentation [here](https://huggingface.co/docs/hub/models-cards).
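A third, programmatic option is to build the `README.md` content in Python and push it with the [`huggingface_hub`](https://huggingface.co/docs/huggingface_hub) library rather than 🤗 Transformers itself. This is a minimal sketch; the repository id below is a hypothetical placeholder:

```py
>>> from huggingface_hub import ModelCard

>>> # the YAML front matter (between the `---` markers) holds the card metadata, e.g. the license
>>> content = """---
... license: apache-2.0
... ---
... # my-awesome-model
...
... Describe intended uses, limitations, potential biases and ethical considerations here.
... """
>>> card = ModelCard(content)
>>> card.push_to_hub("your-username/my-awesome-model")  # hypothetical repository id
```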
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/installation.md
<!--- Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Installation

Install 🤗 Transformers for whichever deep learning library you're working with, set up your cache, and optionally configure 🤗 Transformers to run offline.

🤗 Transformers is tested on Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+, and Flax. Follow the installation instructions below for the deep learning library you are using:

* [PyTorch](https://pytorch.org/get-started/locally/) installation instructions.
* [TensorFlow 2.0](https://www.tensorflow.org/install/pip) installation instructions.
* [Flax](https://flax.readthedocs.io/en/latest/) installation instructions.

## Install with pip

You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, take a look at this [guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies.

Start by creating a virtual environment in your project directory:

```bash
python -m venv .env
```

Activate the virtual environment. On Linux and macOS:

```bash
source .env/bin/activate
```

Activate the virtual environment on Windows:

```bash
.env/Scripts/activate
```

Now you can install 🤗 Transformers with the following command:

```bash
pip install transformers
```

For CPU-only support, you can conveniently install 🤗 Transformers and a deep learning library in one line. For example, install 🤗 Transformers and PyTorch with:

```bash
pip install transformers[torch]
```

🤗 Transformers and TensorFlow 2.0:

```bash
pip install transformers[tf-cpu]
```

🤗 Transformers and Flax:

```bash
pip install transformers[flax]
```

Finally, check whether 🤗 Transformers has been properly installed by running the following command. It will download a pretrained model:

```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
```

Then the label and score are printed:

```bash
[{'label': 'POSITIVE', 'score': 0.9998704791069031}]
```

## Install from source

Install 🤗 Transformers from source with the following command:

```bash
pip install git+https://github.com/huggingface/transformers
```

This command installs the current `main` version rather than the latest `stable` version. The `main` version is useful for staying up to date with the latest developments.
For instance, if a bug has been fixed since the last official release but a new release hasn't been rolled out yet. However, this also means the `main` version may not always be stable. We strive to keep the `main` version operational, and most issues are usually resolved within a few hours or a day. If you run into a problem, please open an [Issue](https://github.com/huggingface/transformers/issues) so we can fix it even sooner!

Check whether 🤗 Transformers has been properly installed by running the following command:

```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))"
```

## Editable install

You will need an editable install if you'd like to:

* Use the `main` version of the source code.
* Contribute to 🤗 Transformers and need to test changes in the code.

Clone the repository and install 🤗 Transformers with the following commands:

```bash
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e .
```

These commands link the folder you cloned the repository to with your Python library paths. Python will now search the folder you cloned to in addition to the normal library paths. For example, if your Python packages are typically installed in `~/anaconda3/envs/main/lib/python3.7/site-packages/`, Python will also search the folder you cloned to: `~/transformers/`.

<Tip warning={true}>

You must keep the `transformers` folder if you want to keep using the library.

</Tip>

Now you can easily update your clone to the latest version of 🤗 Transformers with the following command:

```bash
cd ~/transformers/
git pull
```

Your Python environment will find the `main` version of 🤗 Transformers on the next run.

## Install with conda

Install from the conda channel `huggingface`:

```bash
conda install -c huggingface transformers
```

## Cache setup

Pretrained models are downloaded and locally cached at `~/.cache/huggingface/hub`. This is the default directory given by the shell environment variable `TRANSFORMERS_CACHE`. On Windows, the default directory is given by `C:\Users\username\.cache\huggingface\hub`. You can change the shell environment variables shown below - in order of priority - to specify a different cache directory:

1. Shell environment variable (default): `HUGGINGFACE_HUB_CACHE` or `TRANSFORMERS_CACHE`.
2. Shell environment variable: `HF_HOME`.
3. Shell environment variable: `XDG_CACHE_HOME` + `/huggingface`.

<Tip>

🤗 Transformers will use the shell environment variables `PYTORCH_TRANSFORMERS_CACHE` or `PYTORCH_PRETRAINED_BERT_CACHE` if you are coming from an earlier iteration of this library and have set those environment variables, unless you specify the shell environment variable `TRANSFORMERS_CACHE`.

</Tip>

## Offline mode

🤗 Transformers is able to run in a firewalled or offline environment by only using local files. Set the environment variable `TRANSFORMERS_OFFLINE=1` to enable this behavior.

<Tip>

Add [🤗 Datasets](https://huggingface.co/docs/datasets/) to your offline training workflow by setting the environment variable `HF_DATASETS_OFFLINE=1`.
</Tip> So wรผrden Sie beispielsweise ein Programm in einem normalen Netzwerk mit einer Firewall fรผr externe Instanzen mit dem folgenden Befehl ausfรผhren: ```bash python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ... ``` Fรผhren Sie das gleiche Programm in einer Offline-Instanz mit aus: ```bash HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \ python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ... ``` Das Skript sollte nun laufen, ohne sich aufzuhรคngen oder eine Zeitรผberschreitung abzuwarten, da es weiรŸ, dass es nur nach lokalen Dateien suchen soll. ### Abrufen von Modellen und Tokenizern zur Offline-Verwendung Eine andere Mรถglichkeit, ๐Ÿค— Transformers offline zu verwenden, besteht darin, die Dateien im Voraus herunterzuladen und dann auf ihren lokalen Pfad zu verweisen, wenn Sie sie offline verwenden mรผssen. Es gibt drei Mรถglichkeiten, dies zu tun: * Laden Sie eine Datei รผber die Benutzeroberflรคche des [Model Hub](https://huggingface.co/models) herunter, indem Sie auf das โ†“-Symbol klicken. ![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/download-icon.png) * Verwenden Sie den [PreTrainedModel.from_pretrained] und [PreTrainedModel.save_pretrained] Workflow: 1. Laden Sie Ihre Dateien im Voraus mit [`PreTrainedModel.from_pretrained`] herunter: ```py >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B") >>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B") ``` 2. Speichern Sie Ihre Dateien in einem bestimmten Verzeichnis mit [`PreTrainedModel.save_pretrained`]: ```py >>> tokenizer.save_pretrained("./your/path/bigscience_t0") >>> model.save_pretrained("./your/path/bigscience_t0") ``` 3. Wenn Sie nun offline sind, laden Sie Ihre Dateien mit [`PreTrainedModel.from_pretrained`] aus dem bestimmten Verzeichnis: ```py >>> tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0") >>> model = AutoModel.from_pretrained("./your/path/bigscience_t0") ``` * Programmatisches Herunterladen von Dateien mit der [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub) Bibliothek: 1. Installieren Sie die "huggingface_hub"-Bibliothek in Ihrer virtuellen Umgebung: ```bash python -m pip install huggingface_hub ``` 2. Verwenden Sie die Funktion [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub), um eine Datei in einen bestimmten Pfad herunterzuladen. Der folgende Befehl lรคdt zum Beispiel die Datei "config.json" aus dem Modell [T0](https://huggingface.co/bigscience/T0_3B) in den gewรผnschten Pfad herunter: ```py >>> from huggingface_hub import hf_hub_download >>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./your/path/bigscience_t0") ``` Sobald Ihre Datei heruntergeladen und lokal zwischengespeichert ist, geben Sie den lokalen Pfad an, um sie zu laden und zu verwenden: ```py >>> from transformers import AutoConfig >>> config = AutoConfig.from_pretrained("./your/path/bigscience_t0/config.json") ``` <Tip> Weitere Informationen zum Herunterladen von Dateien, die auf dem Hub gespeichert sind, finden Sie im Abschnitt [Wie man Dateien vom Hub herunterlรคdt] (https://huggingface.co/docs/hub/how-to-downstream). </Tip>
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/pipeline_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Pipelines for inference

The [`pipeline`] makes it simple to use any model from the [Hub](https://huggingface.co/models) for inference on any language, computer vision, speech, and multimodal tasks. Even if you don't have experience with a specific modality or aren't familiar with the underlying code behind the models, you can still use them for inference with the [`pipeline`]! In this example, you will learn how to:

* Use a [`pipeline`] for inference.
* Use a specific tokenizer or model.
* Use a [`pipeline`] for audio, vision, and multimodal tasks.

<Tip>

Take a look at the [`pipeline`] documentation for a complete list of supported tasks and available parameters.

</Tip>

## Pipeline usage

While each task has an associated [`pipeline`], it is simpler to use the general [`pipeline`] abstraction, which contains all the task-specific pipelines. The [`pipeline`] automatically loads a default model and a preprocessing class capable of inference for your task.

1. Start by creating a [`pipeline`] and specify an inference task:

```py
>>> from transformers import pipeline

>>> generator = pipeline(task="text-generation")
```

2. Pass your input text to the [`pipeline`]:

```py
>>> generator(
...     "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone"
... )  # doctest: +SKIP
[{'generated_text': 'Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone, Seven for the Iron-priests at the door to the east, and thirteen for the Lord Kings at the end of the mountain'}]
```

If you have more than one input, pass the inputs as a list:

```py
>>> generator(
...     [
...         "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone",
...         "Nine for Mortal Men, doomed to die, One for the Dark Lord on his dark throne",
...     ]
... )  # doctest: +SKIP
```

Any additional parameters for your task can also be included in the [`pipeline`]. The `text-generation` task has a [`~generation.GenerationMixin.generate`] method with several parameters for controlling the output. For example, if you'd like to generate more than one output, set the `num_return_sequences` parameter:

```py
>>> generator(
...     "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone",
...     num_return_sequences=2,
... )  # doctest: +SKIP
```

### Choose a model and tokenizer

The [`pipeline`] accepts any model from the [Hub](https://huggingface.co/models).
There are tags on the Hub that allow you to filter for a model you'd like to use for your task. Once you've picked an appropriate model, load it with the corresponding `AutoModelFor` and [`AutoTokenizer`] class. For example, load the [`AutoModelForCausalLM`] class for a causal language modeling task:

```py
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
>>> model = AutoModelForCausalLM.from_pretrained("distilgpt2")
```

Create a [`pipeline`] for your task, and specify the model and tokenizer you've loaded:

```py
>>> from transformers import pipeline

>>> generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer)
```

Pass your input text to the [`pipeline`] to generate some text:

```py
>>> generator(
...     "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone"
... )  # doctest: +SKIP
[{'generated_text': 'Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone, Seven for the Dragon-lords (for them to rule in a world ruled by their rulers, and all who live within the realm'}]
```

## Audio pipeline

The [`pipeline`] also supports audio tasks like audio classification and automatic speech recognition. For example, let's classify the emotion in this audio clip:

```py
>>> from datasets import load_dataset
>>> import torch

>>> torch.manual_seed(42)  # doctest: +IGNORE_RESULT
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> audio_file = ds[0]["audio"]["path"]
```

Find an [audio classification](https://huggingface.co/models?pipeline_tag=audio-classification) model on the Model Hub for emotion recognition and load it in the [`pipeline`]:

```py
>>> from transformers import pipeline

>>> audio_classifier = pipeline(
...     task="audio-classification", model="ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"
... )
```

Pass the audio file to the [`pipeline`]:

```py
>>> preds = audio_classifier(audio_file)
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.1315, 'label': 'calm'}, {'score': 0.1307, 'label': 'neutral'}, {'score': 0.1274, 'label': 'sad'}, {'score': 0.1261, 'label': 'fearful'}, {'score': 0.1242, 'label': 'happy'}]
```

## Vision pipeline

Using a [`pipeline`] for vision tasks is practically identical. Specify your task and pass your image to the classifier. The image can be a link or a local path to the image. For example, what species of cat is shown below?

![pipeline-cat-chonk](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg)

```py
>>> from transformers import pipeline

>>> vision_classifier = pipeline(task="image-classification")
>>> preds = vision_classifier(
...     images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.4335, 'label': 'lynx, catamount'}, {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}, {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}, {'score': 0.0239, 'label': 'Egyptian cat'}, {'score': 0.0229, 'label': 'tiger cat'}]
```

## Multimodal pipeline

The [`pipeline`] supports more than one modality. For example, a visual question answering (VQA) task combines text and image. Feel free to use any image link you like and a question you want to ask about the image. The image can be a URL or a local path to the image.

For example, if you use the same image as in the vision pipeline above:

```py
>>> image = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
>>> question = "Where is the cat?"
```

Create a pipeline for `vqa` and pass it the image and the question:

```py
>>> from transformers import pipeline

>>> vqa = pipeline(task="vqa")
>>> preds = vqa(image=image, question=question)
>>> preds = [{"score": round(pred["score"], 4), "answer": pred["answer"]} for pred in preds]
>>> preds
[{'score': 0.9112, 'answer': 'snow'}, {'score': 0.8796, 'answer': 'in snow'}, {'score': 0.6717, 'answer': 'outside'}, {'score': 0.0291, 'answer': 'on ground'}, {'score': 0.027, 'answer': 'ground'}]
```
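As a closing aside - a minimal sketch rather than part of the examples above - a [`pipeline`] also accepts a `device` argument for GPU inference and a list of inputs for simple batching. `device=0` below assumes a CUDA-capable GPU is available:

```py
>>> from transformers import pipeline

>>> vision_classifier = pipeline(task="image-classification", device=0)  # device=0 selects the first GPU
>>> preds = vision_classifier(
...     images=[
...         "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg",
...         "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg",
...     ]
... )  # returns one list of predictions per input image
```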
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/add_tensorflow_model.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# How to convert a 🤗 Transformers model to TensorFlow?

Having multiple frameworks available to use with 🤗 Transformers gives you the flexibility to play to their strengths when designing your application, but it also means that compatibility must be added for every model individually. The good news is that adding TensorFlow compatibility to an existing model is easier than [adding a new model from scratch](add_new_model)! Whether you wish to gain a deeper understanding of large TensorFlow models, make a major open-source contribution, or enable TensorFlow for your model of choice, this guide is for you.

This guide empowers you, a member of our community, to contribute TensorFlow model weights and/or architectures to be used in 🤗 Transformers, with minimal supervision from the Hugging Face team. Writing a new model is no small feat, but hopefully this guide will make it less of a rollercoaster 🎢 and more of a walk in the park 🚶. Harnessing our collective experience is absolutely critical to making this process increasingly easier, so we highly encourage you to suggest improvements to this guide!

Before you dive deeper, we recommend you check out the following resources if you're new to 🤗 Transformers:

- [General overview of 🤗 Transformers](add_new_model#general-overview-of-transformers)
- [Hugging Face's TensorFlow philosophy](https://huggingface.co/blog/tensorflow-philosophy)

In the remainder of this guide, you will learn what's needed to add a new TensorFlow model architecture, the procedure to convert PyTorch into TensorFlow model weights, and how to efficiently debug mismatches across ML frameworks. Let's get started!

<Tip>

Are you unsure whether the model you wish to use already has a corresponding TensorFlow architecture?

Check the `model_type` field of the `config.json` of your model of choice ([example](https://huggingface.co/bert-base-uncased/blob/main/config.json#L14)). If the corresponding model folder in 🤗 Transformers has a file whose name starts with "modeling_tf", it means that it has a corresponding TensorFlow architecture ([example](https://github.com/huggingface/transformers/tree/main/src/transformers/models/bert)).

</Tip>

## Step-by-step guide to add TensorFlow model architecture code

There are many ways to design a large model architecture, and multiple ways of implementing that design.
However, you might remember from our [general overview of 🤗 Transformers](add_new_model#general-overview-of-transformers) that we are an opinionated bunch - the ease of use of 🤗 Transformers relies on consistent design choices. From experience, we can tell you a few important things about adding TensorFlow models:

- Don't reinvent the wheel! More often than not, there are at least two reference implementations you should check: the PyTorch equivalent of the model you are implementing, and other TensorFlow models for the same class of problems.
- Great model implementations survive the test of time. This doesn't happen because the code is pretty, but rather because the code is clear, easy to debug and build upon. If you make the life of the maintainers easy with your TensorFlow implementation, by replicating the same patterns as in other TensorFlow models and minimizing the mismatch to the PyTorch implementation, you ensure your contribution will be long-lived.
- Ask for help when you're stuck! The 🤗 Transformers team is here to help, and we've probably found solutions to the same problems you're facing.

Here's an overview of the steps needed to add a TensorFlow model architecture:

1. Select the model you wish to convert
2. Prepare the transformers dev environment
3. (Optional) Understand theoretical aspects and the existing implementation
4. Implement the model architecture
5. Implement model tests
6. Submit the pull request
7. (Optional) Build demos and share with the world

### 1.-3. Prepare your model contribution

**1. Select the model you wish to convert**

Let's start off with the basics: the first thing you need to know is the architecture you want to convert. If you don't have your eyes set on a specific architecture, asking the 🤗 Transformers team for suggestions is a great option - we will guide you towards the most prominent architectures that are missing on the TensorFlow side. If the specific model you want to use with TensorFlow already has a TensorFlow architecture implementation in 🤗 Transformers but is lacking weights, feel free to jump straight into the [weight conversion section](#adding-tensorflow-weights-to-hub) of this page.

For simplicity, the remainder of this guide assumes you've decided to contribute the TensorFlow version of *BrandNewBert* (the same example as in the [guide](add_new_model) to add a new model from scratch).

<Tip>

Before starting to work on a TensorFlow model architecture, double-check that there are no ongoing efforts in that direction. You can search for `BrandNewBert` on the [pull request GitHub page](https://github.com/huggingface/transformers/pulls?q=is%3Apr) to confirm that there is no TensorFlow-related pull request.

</Tip>

**2. Prepare transformers dev environment**

Having selected the model architecture, open a draft PR to signal your intention to work on it.
Follow the instructions below to set up your environment and open a draft PR.

1. Fork the [repository](https://github.com/huggingface/transformers) by clicking on the 'Fork' button on the repository's page. This creates a copy of the code under your GitHub user account.

2. Clone your `transformers` fork to your local disk, and add the base repository as a remote:

```bash
git clone https://github.com/[your Github handle]/transformers.git
cd transformers
git remote add upstream https://github.com/huggingface/transformers.git
```

3. Set up a development environment, for instance by running the following command:

```bash
python -m venv .env
source .env/bin/activate
pip install -e ".[dev]"
```

Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a failure with this command. If that's the case, make sure you install TensorFlow and then do:

```bash
pip install -e ".[quality]"
```

**Note:** You don't need to have CUDA installed. Making the new model work on CPU is sufficient.

4. Create a branch with a descriptive name from your main branch:

```bash
git checkout -b add_tf_brand_new_bert
```

5. Fetch and rebase to current main:

```bash
git fetch upstream
git rebase upstream/main
```

6. Add an empty `.py` file in `transformers/src/models/brandnewbert/` named `modeling_tf_brandnewbert.py`. This will be your TensorFlow model file.

7. Push the changes to your account using:

```bash
git add .
git commit -m "initial commit"
git push -u origin add_tf_brand_new_bert
```

8. Once you are satisfied, go to the webpage of your fork on GitHub. Click on "Pull request". Make sure to add the GitHub handle of some members of the Hugging Face team as reviewers, so that the Hugging Face team gets notified of future changes.

9. Change the PR into a draft by clicking on "Convert to draft" on the right of the GitHub pull request web page.

Now you have set up a development environment to port *BrandNewBert* to TensorFlow in 🤗 Transformers.

**3. (Optional) Understand theoretical aspects and the existing implementation**

You should take some time to read *BrandNewBert's* paper, if such descriptive work exists. There might be large sections of the paper that are difficult to understand. If this is the case, that's fine - don't worry! The goal is not to get a deep theoretical understanding of the paper, but to extract the information required to effectively re-implement the model in 🤗 Transformers using TensorFlow. That being said, you don't have to spend too much time on the theoretical aspects, but rather focus on the practical ones, namely the existing model documentation page (e.g. [model docs for BERT](model_doc/bert)).

After you've grasped the basics of the model you are about to implement, it's important to understand the existing implementation.
This is a great opportunity to confirm that a working implementation matches your expectations for the model, as well as to foresee technical challenges on the TensorFlow side.

It's perfectly natural to feel overwhelmed with the amount of information you've just absorbed. It is definitely not a requirement that you understand all facets of the model at this stage. Nevertheless, we highly encourage you to clear any pressing questions in our [forum](https://discuss.huggingface.co/).

### 4. Implement the model

It's now time to finally start coding. Our suggested starting point is the PyTorch file itself: copy the contents of `modeling_brand_new_bert.py` inside `src/transformers/models/brand_new_bert/` into `modeling_tf_brand_new_bert.py`. The goal of this section is to modify the file and update the import structure of 🤗 Transformers such that you can import `TFBrandNewBert` and `TFBrandNewBert.from_pretrained(model_repo, from_pt=True)` successfully loads a working TensorFlow *BrandNewBert* model.

Sadly, there is no prescription to convert a PyTorch model into TensorFlow. You can, however, follow our selection of tips to make the process as smooth as possible:

- Prepend `TF` to the name of all classes (e.g. `BrandNewBert` becomes `TFBrandNewBert`).
- Most PyTorch operations have a direct TensorFlow replacement. For example, `torch.nn.Linear` corresponds to `tf.keras.layers.Dense`, `torch.nn.Dropout` corresponds to `tf.keras.layers.Dropout`, etc. If you're not sure about a specific operation, you can use the [TensorFlow documentation](https://www.tensorflow.org/api_docs/python/tf) or the [PyTorch documentation](https://pytorch.org/docs/stable/).
- Look for patterns in the 🤗 Transformers codebase. If you come across a certain operation that doesn't have a direct replacement, the odds are that someone else already had the same problem.
- By default, keep the same variable names and structure as in PyTorch. This will make it easier to debug, to track down issues, and to apply fixes down the line.
- Some layers have different default values in each framework. A notable example is the batch normalization layer's epsilon (`1e-5` in [PyTorch](https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html#torch.nn.BatchNorm2d) and `1e-3` in [TensorFlow](https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization)). Double-check the documentation!
- PyTorch's `nn.Parameter` variables typically need to be initialized within TF Layer's `build()`. See the following example: [PyTorch](https://github.com/huggingface/transformers/blob/655f72a6896c0533b1bdee519ed65a059c2425ac/src/transformers/models/vit_mae/modeling_vit_mae.py#L212) / [TensorFlow](https://github.com/huggingface/transformers/blob/655f72a6896c0533b1bdee519ed65a059c2425ac/src/transformers/models/vit_mae/modeling_tf_vit_mae.py#L220)
- If the PyTorch model has a `#copied from ...` on top of a function, the odds are that your TensorFlow model can also borrow that function from the architecture it was copied from, assuming it has a TensorFlow architecture.
- Assigning the `name` attribute correctly in TensorFlow functions is critical to do the `from_pt=True` weight cross-loading. `name` is almost always the name of the corresponding variable in the PyTorch code. If `name` is not properly set, you will see it in the error message when loading the model weights.
- The logic of the base model class, `BrandNewBertModel`, will actually reside in `TFBrandNewBertMainLayer`, a Keras layer subclass ([example](https://github.com/huggingface/transformers/blob/4fd32a1f499e45f009c2c0dea4d81c321cba7e02/src/transformers/models/bert/modeling_tf_bert.py#L719)). `TFBrandNewBertModel` will simply be a wrapper around this layer.
- Keras models need to be built in order to load pretrained weights. For that reason, `TFBrandNewBertPreTrainedModel` will need to hold an example of inputs to the model, the `dummy_inputs` ([example](https://github.com/huggingface/transformers/blob/4fd32a1f499e45f009c2c0dea4d81c321cba7e02/src/transformers/models/bert/modeling_tf_bert.py#L916)).
- If you get stuck, ask for help - we're here to help you! 🤗

In addition to the model file itself, you will also need to add the pointers to the model classes and related documentation pages. You can complete this part entirely following the patterns in other PRs ([example](https://github.com/huggingface/transformers/pull/18020/files)). Here's a list of the needed manual changes:

- Include all public classes of *BrandNewBert* in `src/transformers/__init__.py`
- Add *BrandNewBert* classes to the corresponding Auto classes in `src/transformers/models/auto/modeling_tf_auto.py`
- Add the lazy loading classes related to *BrandNewBert* in `src/transformers/utils/dummy_tf_objects.py`
- Update the import structures for the public classes in `src/transformers/models/brand_new_bert/__init__.py`
- Add the documentation pointers to the public methods of *BrandNewBert* in `docs/source/de/model_doc/brand_new_bert.md`
- Add yourself to the list of contributors to *BrandNewBert* in `docs/source/de/model_doc/brand_new_bert.md`
- Finally, add a green tick ✅ to the TensorFlow column of *BrandNewBert* in `docs/source/de/index.md`

When you're happy with your implementation, run the following checklist to confirm that your model architecture is ready:

1. All layers that behave differently at train time (e.g. Dropout) are called with a `training` argument, which is propagated all the way from the top-level classes
2. You have used `#copied from ...` whenever possible
3. `TFBrandNewBertMainLayer` and all classes that use it have their `call` function decorated with `@unpack_inputs`
4. `TFBrandNewBertMainLayer` is decorated with `@keras_serializable`
5. A TensorFlow model can be loaded from PyTorch weights using `TFBrandNewBert.from_pretrained(model_repo, from_pt=True)`
6. You can call the TensorFlow model using the expected input format

### 5. Add model tests

Hurray, you've implemented a TensorFlow model! Now it's time to add tests to make sure that your model behaves as expected.
As in the previous section, we suggest you start by copying the `test_modeling_brand_new_bert.py` file in `tests/models/brand_new_bert/` into `test_modeling_tf_brand_new_bert.py`, and continue by making the necessary TensorFlow replacements. For now, in all `.from_pretrained()` calls, you should use the `from_pt=True` flag to load the existing PyTorch weights.

After you're done, it's time for the moment of truth: run the tests! 😬

```bash
NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \
py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py
```

The most likely outcome is that you'll see a bunch of errors. Don't worry, this is expected! Debugging ML models is notoriously hard, and the key ingredient for success is patience (and `breakpoint()`). In our experience, the hardest problems arise from subtle mismatches between ML frameworks, for which we have a few pointers at the end of this guide. In other cases, a general test might not be directly applicable to your model, in which case we suggest an override at the model test class level. Regardless of the issue, don't hesitate to ask for help in your draft pull request if you're stuck.

When all tests pass, congratulations, your model is ready to be added to the 🤗 Transformers library! 🎉

### 6.-7. Ensure everyone can use your model

**6. Submit the pull request**

Once you're done with the implementation and the tests, it's time to submit a pull request. Before pushing your code, run our code formatting utility, `make fixup` 🪄. This will automatically fix any formatting issues, which would otherwise cause our automatic checks to fail.

It's now time to convert your draft pull request into a real pull request. To do so, click on the "Ready for review" button and add Joao (`@gante`) and Matt (`@Rocketknight1`) as reviewers. A model pull request will need at least 3 reviewers, but they will take care of finding appropriate additional reviewers for your model.

After all reviewers are happy with the state of your PR, the final action point is to remove the `from_pt=True` flag in `.from_pretrained()` calls. Since there are no TensorFlow weights, you will have to add them! Check the section below for instructions on how to do it.

Finally, when the TensorFlow weights get merged, you have at least 3 reviewer approvals, and all CI checks are green, double-check the tests locally one last time

```bash
NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \
py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py
```

and we will merge your PR! Congratulations on the milestone 🎉

**7. (Optional) Build demos and share with the world**

One of the hardest parts about open-source is discoverability. How can the other users learn about the existence of your fabulous TensorFlow contribution? With proper communication, of course! 📣

There are two main ways to share your model with the community:
Dazu gehรถren Gradio-Demos, Notebooks und andere unterhaltsame Mรถglichkeiten, Ihr Modell vorzufรผhren. Wir raten Ihnen ermutigen Sie, ein Notizbuch zu unseren [community-driven demos](https://huggingface.co/docs/transformers/community) hinzuzufรผgen. - Teilen Sie Geschichten in sozialen Medien wie Twitter und LinkedIn. Sie sollten stolz auf Ihre Arbeit sein und sie mit der Ihre Leistung mit der Community teilen - Ihr Modell kann nun von Tausenden von Ingenieuren und Forschern auf der ganzen Welt genutzt werden der Welt genutzt werden ๐ŸŒ! Wir werden Ihre Beitrรคge gerne retweeten und Ihnen helfen, Ihre Arbeit mit der Community zu teilen. ## Hinzufรผgen von TensorFlow-Gewichten zum ๐Ÿค— Hub Unter der Annahme, dass die TensorFlow-Modellarchitektur in ๐Ÿค— Transformers verfรผgbar ist, ist die Umwandlung von PyTorch-Gewichten in TensorFlow-Gewichte ist ein Kinderspiel! Hier sehen Sie, wie es geht: 1. Stellen Sie sicher, dass Sie in Ihrem Terminal bei Ihrem Hugging Face Konto angemeldet sind. Sie kรถnnen sich mit dem folgenden Befehl anmelden `huggingface-cli login` (Ihre Zugangstoken finden Sie [hier](https://huggingface.co/settings/tokens)) 2. Fรผhren Sie `transformers-cli pt-to-tf --model-name foo/bar` aus, wobei `foo/bar` der Name des Modell-Repositorys ist ist, das die PyTorch-Gewichte enthรคlt, die Sie konvertieren mรถchten. 3. Markieren Sie `@joaogante` und `@Rocketknight1` in dem ๐Ÿค— Hub PR, den der obige Befehl gerade erstellt hat Das war's! ๐ŸŽ‰ ## Fehlersuche in verschiedenen ML-Frameworks ๐Ÿ› Irgendwann, wenn Sie eine neue Architektur hinzufรผgen oder TensorFlow-Gewichte fรผr eine bestehende Architektur erstellen, werden Sie stoรŸen Sie vielleicht auf Fehler, die sich รผber Unstimmigkeiten zwischen PyTorch und TensorFlow beschweren. Sie kรถnnten sich sogar dazu entschlieรŸen, den Modellarchitektur-Code fรผr die beiden Frameworks zu รถffnen, und stellen fest, dass sie identisch aussehen. Was ist denn da los? ๐Ÿค” Lassen Sie uns zunรคchst darรผber sprechen, warum es wichtig ist, diese Diskrepanzen zu verstehen. Viele Community-Mitglieder werden ๐Ÿค— Transformers-Modelle und vertrauen darauf, dass sich unsere Modelle wie erwartet verhalten. Wenn es eine groรŸe Diskrepanz gibt zwischen den beiden Frameworks auftritt, bedeutet dies, dass das Modell nicht der Referenzimplementierung fรผr mindestens eines der Frameworks folgt. der Frameworks folgt. Dies kann zu stillen Fehlern fรผhren, bei denen das Modell zwar lรคuft, aber eine schlechte Leistung aufweist. Dies ist wohl schlimmer als ein Modell, das รผberhaupt nicht lรคuft! Aus diesem Grund streben wir an, dass die Abweichung zwischen den Frameworks kleiner als 1e-5" in allen Phasen des Modells. Wie bei anderen numerischen Problemen auch, steckt der Teufel im Detail. Und wie bei jedem detailorientierten Handwerk ist die geheime Zutat hier Geduld. Hier ist unser Vorschlag fรผr den Arbeitsablauf, wenn Sie auf diese Art von Problemen stoรŸen: 1. Lokalisieren Sie die Quelle der Abweichungen. Das Modell, das Sie konvertieren, hat wahrscheinlich bis zu einem gewissen Punkt nahezu identische innere Variablen. bestimmten Punkt. Platzieren Sie `Breakpoint()`-Anweisungen in den Architekturen der beiden Frameworks und vergleichen Sie die Werte der numerischen Variablen von oben nach unten, bis Sie die Quelle der Probleme gefunden haben. 2. Nachdem Sie nun die Ursache des Problems gefunden haben, setzen Sie sich mit dem ๐Ÿค— Transformers-Team in Verbindung. 
Es ist mรถglich dass wir ein รคhnliches Problem schon einmal gesehen haben und umgehend eine Lรถsung anbieten kรถnnen. Als Ausweichmรถglichkeit kรถnnen Sie beliebte Seiten wie StackOverflow und GitHub-Probleme. 3. Wenn keine Lรถsung in Sicht ist, bedeutet das, dass Sie tiefer gehen mรผssen. Die gute Nachricht ist, dass Sie das Problem gefunden haben. Problem ausfindig gemacht haben, so dass Sie sich auf die problematische Anweisung konzentrieren und den Rest des Modells ausblenden kรถnnen! Die schlechte Nachricht ist dass Sie sich in die Quellimplementierung der besagten Anweisung einarbeiten mรผssen. In manchen Fรคllen finden Sie vielleicht ein Problem mit einer Referenzimplementierung - verzichten Sie nicht darauf, ein Problem im Upstream-Repository zu รถffnen. In einigen Fรคllen kรถnnen wir nach Rรผcksprache mit dem ๐Ÿค— Transformers-Team zu dem Schluss kommen, dass die Behebung der Abweichung nicht machbar ist. Wenn die Abweichung in den Ausgabeschichten des Modells sehr klein ist (aber mรถglicherweise groรŸ in den versteckten Zustรคnden), kรถnnen wir kรถnnten wir beschlieรŸen, sie zu ignorieren und das Modell zu verteilen. Die oben erwรคhnte CLI `pt-to-tf` hat ein `--max-error` Flag, um die Fehlermeldung bei der Gewichtskonvertierung zu รผberschreiben.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/transformers_agents.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Transformers Agents

<Tip warning={true}>

Transformers Agents is an experimental API that is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change.

</Tip>

Transformers version v4.29.0 introduces a natural-language API building on the concept of *tools* and *agents*. You can play with it in [this Colab](https://colab.research.google.com/drive/1c7MHD-T1forUPGcC_jlwsIptOzpG3hSj).

In short, it provides a natural language API on top of Transformers: we define a set of curated tools and design an agent to interpret natural language and to use these tools. It is extensible by design; we curated some relevant tools, but we'll show you how the system can be extended easily to use any tool developed by the community.

Let's start with a few examples of what can be achieved with this new API. It is particularly powerful when it comes to multimodal tasks, so let's take it for a spin to generate images and read text out loud.

```py
agent.run("Caption the following image", image=image)
```

| **Input** | **Output** |
|---|---|
| <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/beaver.png" width=200> | A beaver is swimming in the water |

---

```py
agent.run("Read the following text out loud", text=text)
```

| **Input** | **Output** |
|---|---|
| A beaver is swimming in the water | <audio controls><source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tts_example.wav" type="audio/wav"> your browser does not support the audio element. </audio> |

---

```py
agent.run(
    "In the following `document`, where will the TRRF Scientific Advisory Council Meeting take place?",
    document=document,
)
```

| **Input** | **Output** |
|---|---|
| <img src="https://datasets-server.huggingface.co/assets/hf-internal-testing/example-documents/--/hf-internal-testing--example-documents/test/0/image/image.jpg" width=200> | ballroom foyer |

## Quickstart

Before being able to use `agent.run`, you will need to instantiate an agent, which is a large language model (LLM). We provide support for OpenAI models as well as open-source alternatives from BigCode and OpenAssistant. The OpenAI models perform better (but require an OpenAI API key, so cannot be used for free); Hugging Face provides free access to endpoints for BigCode and OpenAssistant models.

To start with, please install the `agents` extras in order to install all default dependencies.

```bash
pip install transformers[agents]
```

To use OpenAI models, you instantiate an [`OpenAiAgent`] after installing the `openai` dependency:

```bash
pip install openai
```

```py
from transformers import OpenAiAgent

agent = OpenAiAgent(model="text-davinci-003", api_key="<your_api_key>")
```

To use BigCode or OpenAssistant, start by logging in to have access to the Inference API:

```py
from huggingface_hub import login

login("<YOUR_TOKEN>")
```

Then, instantiate the agent:

```py
from transformers import HfAgent

# Starcoder
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
# StarcoderBase
# agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoderbase")
# OpenAssistant
# agent = HfAgent(url_endpoint="https://api-inference.huggingface.co/models/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5")
```

This is using the Inference API that Hugging Face provides for free at the moment. If you have your own inference endpoint for this model (or another one), you can replace the URL above with your URL endpoint.

<Tip>

StarCoder and OpenAssistant are free to use and perform admirably well on simple tasks. However, the checkpoints don't hold up when handling more complex prompts. If you're facing such an issue, we recommend trying out the OpenAI model which, while sadly not open-source, performs better at this given time.

</Tip>

You're now good to go! Let's dive into the two APIs that you now have at your disposal.

### Single execution (run)

The single execution method is when using the [`~Agent.run`] method of the agent:

```py
agent.run("Draw me a picture of rivers and lakes.")
```

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" width=200>

It automatically selects the tool (or tools) appropriate for the task you want to perform and runs them appropriately. It can perform one or several tasks in the same instruction (though the more complex your instruction is, the more likely the agent is to fail).
```py agent.run("Draw me a picture of the sea then transform the picture to add an island") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/sea_and_island.png" width=200> <br/> Jede [`~Agent.run`] Operation ist unabhรคngig, so dass Sie sie mehrmals hintereinander mit unterschiedlichen Aufgaben ausfรผhren kรถnnen. Beachten Sie, dass Ihr `Agent` nur ein groรŸsprachiges Modell ist, so dass kleine Variationen in Ihrer Eingabeaufforderung vรถllig unterschiedliche Ergebnisse liefern kรถnnen. unterschiedliche Ergebnisse liefern. Es ist wichtig, dass Sie die Aufgabe, die Sie ausfรผhren mรถchten, so genau wie mรถglich erklรคren. Wir gehen noch weiter ins Detail wie man gute Prompts schreibt [hier](custom_tools#writing-good-user-inputs). Wenn Sie einen Status รผber Ausfรผhrungszeiten hinweg beibehalten oder dem Agenten Nicht-Text-Objekte รผbergeben mรถchten, kรถnnen Sie dies tun, indem Sie Variablen, die der Agent verwenden soll. Sie kรถnnten zum Beispiel das erste Bild von Flรผssen und Seen erzeugen, und das Modell bitten, dieses Bild zu aktualisieren und eine Insel hinzuzufรผgen, indem Sie Folgendes tun: ```python picture = agent.run("Generate a picture of rivers and lakes.") updated_picture = agent.run("Transform the image in `picture` to add an island to it.", picture=picture) ``` <Tip> Dies kann hilfreich sein, wenn das Modell Ihre Anfrage nicht verstehen kann und die Werkzeuge verwechselt. Ein Beispiel wรคre: ```py agent.run("Draw me the picture of a capybara swimming in the sea") ``` Hier kรถnnte das Modell auf zwei Arten interpretieren: - Die Funktion `Text-zu-Bild` erzeugt ein Wasserschwein, das im Meer schwimmt. - Oder Sie lassen das `Text-zu-Bild` ein Wasserschwein erzeugen und verwenden dann das Werkzeug `Bildtransformation`, um es im Meer schwimmen zu lassen. Falls Sie das erste Szenario erzwingen mรถchten, kรถnnen Sie dies tun, indem Sie die Eingabeaufforderung als Argument รผbergeben: ```py agent.run("Draw me a picture of the `prompt`", prompt="a capybara swimming in the sea") ``` </Tip> ### Chat-basierte Ausfรผhrung (Chat) Der Agent verfรผgt auch รผber einen Chat-basierten Ansatz, der die Methode [`~Agent.chat`] verwendet: ```py agent.chat("Generate a picture of rivers and lakes") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" width=200> ```py agent.chat("Transform the picture so that there is a rock in there") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes_and_beaver.png" width=200> <br/> Dies ist ein interessanter Ansatz, wenn Sie den Zustand รผber Anweisungen hinweg beibehalten mรถchten. Er ist besser fรผr Experimente geeignet, eignet sich aber eher fรผr einzelne Anweisungen als fรผr komplexe Anweisungen (die die [`~Agent.run`] Methode besser verarbeiten kann). Diese Methode kann auch Argumente entgegennehmen, wenn Sie Nicht-Text-Typen oder bestimmte Aufforderungen รผbergeben mรถchten. ### โš ๏ธ Fernausfรผhrung Zu Demonstrationszwecken und damit es mit allen Setups verwendet werden kann, haben wir Remote-Executors fรผr mehrere der Standard-Tools erstellt, auf die der Agent in dieser Version Zugriff hat. Diese werden erstellt mit [inference endpoints](https://huggingface.co/inference-endpoints). Wir haben diese vorerst deaktiviert, aber um zu sehen, wie Sie selbst Remote Executors Tools einrichten kรถnnen, empfehlen wir die Lektรผre des [custom tool guide](./custom_tools). 
### What's happening here? What are tools, and what are agents?

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/diagram.png">

#### Agents

The "agent" here is a large language model, and we're prompting it so that it has access to a specific set of tools.

LLMs are pretty good at generating small samples of code, and this API takes advantage of that by prompting the LLM to give a small sample of code performing a task with a set of tools. This prompt is then completed by the task you give your agent and the description of the tools you give it. This way it gets access to the documentation of the tools, especially their expected inputs and outputs, and can generate the relevant code.

#### Tools

Tools are very simple: they're a single function, with a name and a description. We then use these tools' descriptions to prompt the agent. Through the prompt, we show the agent how it would leverage tools to perform what was requested in the query.

This is using brand-new tools and not pipelines, because the agent writes better code with very atomic tools. Pipelines are more refactored and often combine several tasks in one. Tools are meant to be focused on one very simple task only.

#### Code-execution?!

This code is then executed with our small Python interpreter on the set of inputs passed along with your tools. We hear you screaming "Arbitrary code execution!" in the back, but let us explain why that is not the case.

The only functions that can be called are the tools you provided and the print function, so you're already limited in what can be executed. You should be safe if it's limited to Hugging Face tools.

Then, we don't allow any attribute lookup or imports (which shouldn't be needed anyway for passing along inputs/outputs to a small set of functions), so all the most obvious attacks (and you'd need to prompt the LLM to output them anyway) shouldn't be an issue. If you want to be on the super safe side, you can execute the run() method with the additional argument return_code=True, in which case the agent will just return the code to execute and you can decide whether to run it or not.

The execution will stop at any line trying to perform an illegal operation or if there is a regular Python error with the code generated by the agent.

### A curated set of tools

We have identified a set of tools that can empower such agents. Here is an updated list of the tools we have integrated in `transformers`:

- **Document question answering**: given a document (such as a PDF) in image format, answer a question on this document ([Donut](./model_doc/donut))
- **Text question answering**: given a long text and a question, answer the question in the text ([Flan-T5](./model_doc/flan-t5))
- **Unconditional image captioning**: caption the image! ([BLIP](./model_doc/blip))
- **Image question answering**: given an image, answer a question on this image ([VILT](./model_doc/vilt))
- **Image segmentation**: given an image and a prompt, output the segmentation mask of that prompt ([CLIPSeg](./model_doc/clipseg))
- **Speech to text**: given an audio recording of a person talking, transcribe the speech into text ([Whisper](./model_doc/whisper))
- **Text to speech**: convert text to speech ([SpeechT5](./model_doc/speecht5))
- **Zero-shot text classification**: given a text and a list of labels, identify to which label the text corresponds the most ([BART](./model_doc/bart))
- **Text summarization**: summarize a long text in one or a few sentences ([BART](./model_doc/bart))
- **Translation**: translate the text into a given language ([NLLB](./model_doc/nllb))

These tools have an integration in transformers, and can be used manually as well, for example:

```py
from transformers import load_tool

tool = load_tool("text-to-speech")
audio = tool("This is a text to speech tool")
```

### Custom tools

While we have identified a curated set of tools, we strongly believe that the main value of this implementation is the ability to quickly create and share custom tools.

By pushing the code of a tool to a Hugging Face Space or a model repository, you're then able to leverage the tool directly with the agent. We've added a few **transformers-agnostic** tools to the [`huggingface-tools` organization](https://huggingface.co/huggingface-tools):

- **Text downloader**: to download a text from a web URL
- **Text to image**: generate an image according to a prompt, leveraging stable diffusion
- **Image transformation**: modify an image given an initial image and a prompt, leveraging pix2pix stable diffusion
- **Text to video**: generate a small video according to a prompt, leveraging damo-vilab

The text-to-image tool we have been using since the beginning is a remote tool that lives in [*huggingface-tools/text-to-image*](https://huggingface.co/spaces/huggingface-tools/text-to-image)! We will continue releasing such tools on this and other organizations, to further supercharge this implementation.

The agents have by default access to the tools that reside on [*huggingface-tools*](https://huggingface.co/huggingface-tools). We explain how you can write and share your own tools, as well as leverage any custom tool that resides on the Hub, in the [following guide](custom_tools).

### Code generation

So far we have shown how to use the agents to perform actions for you. However, the agent only generates code that we then execute using a very restricted Python interpreter. In case you would like to use the generated code in a different setting, you can prompt the agent to return the code, along with the tool definition and accurate imports.
For example, the following instruction

```python
agent.run("Draw me a picture of rivers and lakes", return_code=True)
```

returns the following code

```python
from transformers import load_tool

image_generator = load_tool("huggingface-tools/text-to-image")

image = image_generator(prompt="rivers and lakes")
```

that you can then modify and execute yourself.
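To make that last step concrete, here is a minimal sketch of editing and re-running the returned snippet outside the agent's interpreter; the altered prompt and the `save` call are assumptions for illustration (we assume the tool returns a PIL-like image object):

```python
from transformers import load_tool

image_generator = load_tool("huggingface-tools/text-to-image")

# the same code the agent returned, with a hand-edited prompt
image = image_generator(prompt="rivers and lakes at sunset")
image.save("rivers_and_lakes_at_sunset.png")  # assumption: PIL-like image object
```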
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/_toctree.yml
- sections:
  - local: index
    title: 🤗 Transformers
  - local: quicktour
    title: Quick tour
  - local: installation
    title: Installation
  title: Get started
- sections:
  - local: pipeline_tutorial
    title: Pipelines for inference
  - local: autoclass_tutorial
    title: Load pretrained instances with an AutoClass
  - local: preprocessing
    title: Preprocessing
  - local: training
    title: Fine-tune a pretrained model
  - local: run_scripts
    title: Train with a script
  - local: accelerate
    title: Distributed training with 🤗 Accelerate
  - local: peft
    title: Load and train adapters with 🤗 PEFT
  - local: model_sharing
    title: Share a model
  - local: transformers_agents
    title: Agents
  - local: llm_tutorial
    title: Generation with LLMs
  title: Tutorials
- sections:
  - local: add_new_model
    title: How to add a model to 🤗 Transformers?
  - local: add_tensorflow_model
    title: How to convert a 🤗 Transformers model to TensorFlow?
  - local: add_new_pipeline
    title: How to add a pipeline to 🤗 Transformers?
  - local: testing
    title: Testing
  - local: pr_checks
    title: Checks on a Pull Request
  title: Contribute
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/add_new_model.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# How to add a model to 🤗 Transformers?

The 🤗 Transformers library is often able to offer new models thanks to community contributors. But this can be a challenging project and requires an in-depth knowledge of the 🤗 Transformers library and of the model to implement. At Hugging Face, we're trying to empower more of the community to actively add models, and we've put together this guide to walk you through the process of adding a PyTorch model (make sure you have [PyTorch installed](https://pytorch.org/get-started/locally/)).

<Tip>

If you're interested in implementing a TensorFlow model, take a look at the [How to convert a 🤗 Transformers model to TensorFlow](add_tensorflow_model) guide!

</Tip>

Along the way, you'll:

- get insights into open-source best practices
- understand the design principles behind one of the most popular deep learning libraries
- learn how to efficiently test large models
- learn how to integrate Python utilities like `black`, `ruff`, and `make fix-copies` to ensure clean and readable code

A Hugging Face team member will be available to help you along the way, so you'll never be alone. 🤗 ❤️

To get started, open a [New model addition](https://github.com/huggingface/transformers/issues/new?assignees=&labels=New+model&template=new-model-addition.yml) issue for the model you want to see in 🤗 Transformers. If you're not especially picky about contributing a specific model, you can filter by the [New model label](https://github.com/huggingface/transformers/labels/New%20model) to see if there are any unclaimed model requests and work on them.

Once you've opened a new model request, the first step is to get familiar with 🤗 Transformers if you aren't already!

## General overview of 🤗 Transformers

First, you should get a general overview of 🤗 Transformers. 🤗 Transformers is a very opinionated library, so there is a chance that you don't agree with some of the library's philosophies or design choices. From our experience, however, we found that the fundamental design choices and philosophies of the library are crucial to efficiently scaling 🤗 Transformers while keeping maintenance costs at a reasonable level.

A good first starting point to better understand the library is to read the [documentation of our philosophy](philosophy).
As a result of our way of working, there are some choices that we try to apply to all models:

- Composition is generally favored over abstraction
- Duplicating code is not always bad if it strongly improves the readability or accessibility of a model
- Model files are as self-contained as possible, so that when you read the code of a specific model, you ideally only have to look into the respective `modeling_....py` file.

In our opinion, the library's code is not just a means to provide a product, *e.g.* the ability to use BERT for inference, but also the very product that we want to improve. Hence, when adding a model, the user is not only the person who will use your model, but also everybody who will read, try to understand, and possibly tweak your code.

With this in mind, let's go a bit deeper into the general library design.

### Overview of models

To successfully add a model, it is important to understand the interaction between your model and its config, [`PreTrainedModel`], and [`PretrainedConfig`]. For exemplary purposes, we will call the model to be added to 🤗 Transformers `BrandNewBert`.

Let's take a look:

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_overview.png"/>

As you can see, we do make use of inheritance in 🤗 Transformers, but we keep the level of abstraction to an absolute minimum. There are never more than two levels of abstraction for any model in the library. `BrandNewBertModel` inherits from `BrandNewBertPreTrainedModel`, which in turn inherits from [`PreTrainedModel`], and that's it. As a general rule, we want to make sure that a new model only depends on [`PreTrainedModel`]. The important functionalities that are automatically provided to every new model are [`~PreTrainedModel.from_pretrained`] and [`~PreTrainedModel.save_pretrained`], which are used for serialization and deserialization. All of the other important functionalities, such as `BrandNewBertModel.forward`, should be completely defined in the new `modeling_brand_new_bert.py` script.

Next, we want to make sure that a model with a specific head layer, such as `BrandNewBertForMaskedLM`, does not inherit from `BrandNewBertModel`, but rather uses `BrandNewBertModel` as a component that can be called in the forward pass, to keep the level of abstraction low.

Every new model requires a configuration class, called `BrandNewBertConfig`. This configuration is always stored as an attribute in [`PreTrainedModel`], and can thus be accessed via the `config` attribute for all classes inheriting from `BrandNewBertPreTrainedModel`:

```python
model = BrandNewBertModel.from_pretrained("brandy/brand_new_bert")
model.config  # model has access to its config
```

Similar to the model, the configuration inherits basic serialization and deserialization functionalities from [`PretrainedConfig`]. Note that the configuration and the model are always serialized into two different formats - the model to a *pytorch_model.bin* file and the configuration to a *config.json* file.
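To make the two serialization formats concrete, here is a minimal sketch using an existing model as a stand-in for the not-yet-existing `BrandNewBert`; the output folder name is arbitrary, and recent library versions may write *model.safetensors* instead of *pytorch_model.bin*:

```python
from transformers import BertConfig, BertModel

# a randomly initialized model built from a default config
config = BertConfig()
model = BertModel(config)

# serializes the weights and the config into two separate files:
#   pytorch_model.bin  (the model)
#   config.json        (the configuration)
model.save_pretrained("saved_brand_new_bert")
```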
Calling [`~PreTrainedModel.save_pretrained`] will automatically call [`~PretrainedConfig.save_pretrained`], so that both the model and the configuration are saved.

### Code style

When coding your new model, keep in mind that Transformers is an opinionated library and we have a few quirks of our own regarding how code should be written :-)

1. The forward pass of your model should be fully written in the modeling file, while being fully independent of other models in the library. If you want to reuse a block from another model, copy the code and paste it with a `# Copied from` comment on top (see [here](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/roberta/modeling_roberta.py#L160) for a good example and [here](pr_checks#check-copies) for more documentation on Copied from).
2. The code should be fully understandable, even by a non-native English speaker. This means you should pick descriptive variable names and avoid abbreviations. As an example, `activation` is preferred over `act`. One-letter variable names are strongly discouraged unless it's an index in a for loop.
3. More generally, we prefer longer explicit code to short magical one.
4. Avoid subclassing `nn.Sequential` in PyTorch; subclass `nn.Module` and write the forward pass instead, so that anyone using your code can quickly debug it by adding print statements or breakpoints.
5. Your function signature should be type-annotated. For the rest, good variable names are way more readable and understandable than type annotations.

### Overview of tokenizers

Not quite ready yet :-( This section will be added soon!

## Step-by-step recipe to add a model to 🤗 Transformers

Everyone has different preferences of how to port a model, so it can be very helpful for you to take a look at summaries of how other contributors ported models to Hugging Face. Here is a list of community blog posts on how to port a model:

1. [Porting a GPT2 model](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) by [Thomas](https://huggingface.co/thomwolf)
2. [Porting the WMT19 MT model](https://huggingface.co/blog/porting-fsmt) by [Stas](https://huggingface.co/stas)

From experience, we can tell you that the most important things to keep in mind when adding a model are:

- Don't reinvent the wheel! Most parts of the code you will add for the new 🤗 Transformers model already exist somewhere in 🤗 Transformers. Take some time to find similar, already existing models and tokenizers you can copy from. [grep](https://www.gnu.org/software/grep/) and [rg](https://github.com/BurntSushi/ripgrep) are your friends. Note that it might very well happen that your model's tokenizer is based on one model implementation and your model's modeling code on another one. *E.g.* FSMT's modeling code is based on BART, while FSMT's tokenizer code is based on XLM.
- It's more of an engineering challenge than a scientific challenge. You should spend more time on creating an efficient debugging environment than trying to understand all the theoretical aspects of the model in the paper.
- Ask for help when you're stuck! Models are the core component of 🤗 Transformers, so we at Hugging Face are more than happy to help you at every step to add your model. Don't hesitate to ask if you notice you are not making progress.

In the following, we try to give you a general recipe that we found most useful when porting a model to 🤗 Transformers. The following list is a summary of everything that has to be done to add a model and can be used by you as a To-Do list:

☐ (Optional) Understood the model's theoretical aspects<br>
☐ Prepared the 🤗 Transformers dev environment<br>
☐ Set up the debugging environment of the original repository<br>
☐ Created a script that successfully runs the `forward()` pass using the original repository and checkpoint<br>
☐ Successfully added the model skeleton to 🤗 Transformers<br>
☐ Successfully converted the original checkpoint to the 🤗 Transformers checkpoint<br>
☐ Successfully ran the `forward()` pass in 🤗 Transformers that gives identical output to the original checkpoint<br>
☐ Finished model tests in 🤗 Transformers<br>
☐ Successfully added the tokenizer in 🤗 Transformers<br>
☐ Run end-to-end integration tests<br>
☐ Finished the docs<br>
☐ Uploaded the model weights to the Hub<br>
☐ Submitted the pull request<br>
☐ (Optional) Added a demo notebook

To begin with, we usually recommend starting by getting a good theoretical understanding of `BrandNewBert`. However, if you prefer to understand the theoretical aspects of the model *on-the-job*, then it is totally fine to directly dive into the `BrandNewBert` code base. This option might suit you better if your engineering skills are better than your theoretical skills, if you have trouble understanding `BrandNewBert`'s paper, or if you just enjoy programming much more than reading scientific papers.

### 1. (Optional) Theoretical aspects of BrandNewBert

You should take some time to read *BrandNewBert*'s paper, if such descriptive work exists. Maybe there are large sections of the paper that are difficult to understand. If this is the case, that's fine - don't worry! The goal is not to get a deep theoretical understanding of the paper, but to extract the necessary information required to effectively re-implement the model in 🤗 Transformers. That being said, you don't have to spend too much time on the theoretical aspects, but rather focus on the practical ones, namely:

- What type of model is *brand_new_bert*? A BERT-like encoder-only model? A GPT2-like decoder-only model? A BART-like encoder-decoder model? Take a look at the [model_summary](model_summary) if you're not familiar with the differences between those.
- What are the applications of *brand_new_bert*? Text classification? Text generation? Seq2Seq tasks, *e.g.,* summarization?
- What is the novel feature of the model that makes it different from BERT/GPT-2/BART?
- Which of the already existing [🤗 Transformers models](https://huggingface.co/transformers/#contents) is most similar to *brand_new_bert*?
- What type of tokenizer is used? A sentencepiece tokenizer? A word piece tokenizer? Is it the same tokenizer as used for BERT or BART?

After you feel like you have gotten a good overview of the model's architecture, you might want to write to the Hugging Face team with any questions you might have. This might include questions regarding the model's architecture, its attention layer, etc. We will be more than happy to help you.

### 2. Next prepare your environment

1. Fork the [repository](https://github.com/huggingface/transformers) by clicking on the 'Fork' button on the repository's page. This creates a copy of the code under your GitHub user account.

2. Clone your `transformers` fork to your local disk, and add the base repository as a remote:

   ```bash
   git clone https://github.com/[your Github handle]/transformers.git
   cd transformers
   git remote add upstream https://github.com/huggingface/transformers.git
   ```

3. Set up a development environment, for instance by running the following command:

   ```bash
   python -m venv .env
   source .env/bin/activate
   pip install -e ".[dev]"
   ```

   Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a failure with this command. If that's the case, make sure to install the Deep Learning framework you are working with (PyTorch, TensorFlow and/or Flax), then do:

   ```bash
   pip install -e ".[quality]"
   ```

   which should be enough for most use cases. You can then return to the parent directory:

   ```bash
   cd ..
   ```

4. We recommend adding the PyTorch version of *brand_new_bert* to Transformers. To install PyTorch, please follow the instructions on https://pytorch.org/get-started/locally/.

   **Note:** You don't need to have CUDA installed. Making the new model work on CPU is sufficient.

5. To port *brand_new_bert*, you will also need access to its original repository:

   ```bash
   git clone https://github.com/org_that_created_brand_new_bert_org/brand_new_bert.git
   cd brand_new_bert
   pip install -e .
   ```

Now you have set up a development environment to port *brand_new_bert* to 🤗 Transformers.

### 3.-4. Run a pretrained checkpoint using the original repository

At first, you will work on the original *brand_new_bert* repository. Often, the original implementation is very "researchy": documentation might be lacking and the code can be difficult to understand. But this should be exactly your motivation to reimplement *brand_new_bert*. At Hugging Face, one of our main goals is to *make people stand on the shoulders of giants*, which translates here very well into taking a working model and rewriting it to make it as **accessible, user-friendly, and beautiful** as possible. This is the number-one motivation to re-implement models into 🤗 Transformers - trying to make complex new NLP technology accessible to **everybody**.

You should start by diving into the original repository.

Successfully running the official pretrained model in the original repository is often **the most difficult** step. From our experience, it is very important to spend some time getting familiar with the original code base. You need to figure out the following:

- Where to find the pretrained weights?
- How to load the pretrained weights into the corresponding model?
- How to run the tokenizer independently from the model?
- Trace one forward pass so that you know which classes and functions are required for a simple forward pass. Usually, you only have to reimplement those functions.
- Be able to locate the important components of the model: Where is the model's class? Are there model sub-classes, *e.g.* EncoderModel, DecoderModel? Where is the self-attention layer? Are there multiple different attention layers, *e.g.* *self-attention*, *cross-attention*...?
- How can you debug the model in the original environment of the repo? Do you have to add *print* statements, can you work with an interactive debugger like *ipdb*, or should you use an efficient IDE to debug the model, like PyCharm?

It is very important that before you start the porting process, you can **efficiently** debug code in the original repository! Also, remember that you are working with an open-source library, so do not hesitate to open an issue, or even a pull request, in the original repository. The maintainers of that repository are most likely very happy about someone looking into their code!

At this point, it is really up to you which debugging environment and strategy you prefer to use to debug the original model. We strongly advise against setting up a costly GPU environment; instead, simply work on a CPU, both when starting to dive into the original repository and when starting to write the 🤗 Transformers implementation of the model. Only at the very end, when the model has already been successfully ported to 🤗 Transformers, should one verify that the model also works as expected on GPU.

In general, there are two possible debugging environments for running the original model:

- [Jupyter notebooks](https://jupyter.org/) / [google colab](https://colab.research.google.com/notebooks/intro.ipynb)
- Local Python scripts.

Jupyter notebooks have the advantage that they allow for cell-by-cell execution, which can be helpful to better split logical components from one another and to have faster debugging cycles, as intermediate results can be stored. Also, notebooks are often easier to share with other contributors, which might be very helpful if you want to ask the Hugging Face team for help. If you are familiar with Jupyter notebooks, we strongly recommend you work with them.

The obvious disadvantage of Jupyter notebooks is that if you are not used to working with them, you will have to spend some time adjusting to the new programming environment and you might not be able to use your known debugging tools anymore, like `ipdb`.

For each code base, a good first step is always to load a **small** pretrained checkpoint and to be able to reproduce a single forward pass using a dummy integer vector of input IDs as an input. Such a script could look like this (in pseudocode):

```python
model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/")
input_ids = [0, 4, 5, 2, 3, 7, 9]  # vector of input ids
original_output = model.predict(input_ids)
```

Next, regarding the debugging strategy, there are generally a few from which to choose:

- Decompose the original model into many small testable components and run a forward pass on each of those for verification
- Decompose the original model only into the original *tokenizer* and the original *model*, run a forward pass on those, and use intermediate print statements or breakpoints for verification

Again, it is up to you which strategy to choose. Often, one or the other is advantageous depending on the original code base.

If the original code base allows you to decompose the model into smaller sub-components, *e.g.* if the original code base can easily be run in eager mode, it is usually worth the effort to do so. There are some important advantages to taking the more difficult road in the beginning:

- at a later stage, when comparing the original model to the Hugging Face implementation, you can verify automatically for each component individually that the corresponding component of the 🤗 Transformers implementation matches, instead of relying on visual comparison via print statements
- it gives you some rope to decompose the big problem of porting a model into smaller problems of just porting individual components and thus structure your work better
- separating the model into logical meaningful components will help you to get a better overview of the model's design and thus to better understand the model
- at a later stage, those component-by-component tests help you to ensure that no regressions occur as you continue changing your code

[Lysandre's](https://gist.github.com/LysandreJik/db4c948f6b4483960de5cbac598ad4ed) integration tests for ELECTRA give a nice example of how this can be done.

However, if the original code base is very complex or only allows intermediate components to be run in a compiled mode, it might be too time-consuming or even impossible to separate the model into smaller testable sub-components. A good example is the [T5's MeshTensorFlow](https://github.com/tensorflow/mesh/tree/master/mesh_tensorflow) library, which is very complex and does not offer a simple way to decompose the model into its sub-components. For such libraries, one often relies on verifying print statements.

No matter which strategy you choose, the recommended procedure is often the same: you should start by debugging the starting layers first and the ending layers last.

It is recommended that you retrieve the output, either by print statements or sub-component functions, of the following layers in the following order:

1. Retrieve the input IDs passed to the model
2. Retrieve the word embeddings
3. Retrieve the input of the first Transformer layer
4. Retrieve the output of the first Transformer layer
5. Retrieve the output of the following n - 1 Transformer layers
6. Retrieve the output of the whole BrandNewBert model

The input IDs should thereby consist of an array of integers, *e.g.* `input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]`

The outputs of the following layers often consist of multi-dimensional float arrays and can look like this:

```
[[
 [-0.1465, -0.6501,  0.1993,  ...,  0.1451,  0.3430,  0.6024],
 [-0.4417, -0.5920,  0.3450,  ..., -0.3062,  0.6182,  0.7132],
 [-0.5009, -0.7122,  0.4548,  ..., -0.3662,  0.6091,  0.7648],
 ...,
 [-0.5613, -0.6332,  0.4324,  ..., -0.3792,  0.7372,  0.9288],
 [-0.5416, -0.6345,  0.4180,  ..., -0.3564,  0.6992,  0.9191],
 [-0.5334, -0.6403,  0.4271,  ..., -0.3339,  0.6533,  0.8694]]],
```

We expect that every model added to 🤗 Transformers passes a couple of integration tests, meaning that the original model and the reimplemented version in 🤗 Transformers have to give the exact same output up to a precision of 0.001! Since it is normal that the exact same model written in different libraries can give a slightly different output depending on the library framework, we accept an error tolerance of 1e-3 (0.001). It is not enough if the model gives nearly the same output; they have to be almost identical. Therefore, you will certainly compare the intermediate outputs of the 🤗 Transformers version multiple times against the intermediate outputs of the original implementation of *brand_new_bert*, in which case an **efficient** debugging environment of the original repository is absolutely important. Here is some advice to make your debugging environment as efficient as possible.

- Find the best way of debugging intermediate results. Is the original repository written in PyTorch? Then you should probably take the time to write a longer script that decomposes the original model into smaller sub-components to retrieve intermediate values. Is the original repository written in Tensorflow 1? Then you might have to rely on TensorFlow print operations like [tf.print](https://www.tensorflow.org/api_docs/python/tf/print) to output intermediate values. Is the original repository written in Jax? Then make sure that the model is **not jitted** when running the forward pass, *e.g.* check out [this link](https://github.com/google/jax/issues/196).
- Use the smallest pretrained checkpoint you can find. The smaller the checkpoint, the faster your debug cycle becomes. It is not efficient if your pretrained model is so big that the forward pass takes more than 10 seconds. In case only very large checkpoints are available, it might make more sense to create a dummy model in the new environment with randomly initialized weights and save those weights for comparison with the 🤗 Transformers version of your model.
- Make sure you are using the easiest way of calling a forward pass in the original repository. Ideally, you want to find the function in the original repository that **only** calls a single forward pass, *i.e.* that is often called `predict`, `evaluate`, `forward`, or `__call__`. You don't want to debug a function that calls `forward` multiple times, *e.g.* to generate text, like `autoregressive_sample` or `generate`.
- Try to separate the tokenization from the model's *forward* pass. If the original repository shows examples where you have to input a string, then try to find out where in the forward call the string input is changed to input IDs, and start from this point. This might mean that you have to possibly write a small script yourself, or change the original code so that you can directly input the IDs instead of an input string.
- Make sure that the model in your debugging setup is **not** in training mode, which often causes the model to yield random outputs due to the multiple dropout layers in the model. Make sure that the forward pass in your debugging environment is **deterministic**, so that the dropout layers are not used. Or use *transformers.utils.set_seed* if the old and new implementations are in the same framework.

The following section gives you more specific details/tips on how you can do this for *brand_new_bert*.

### 5.-14. Port BrandNewBert to 🤗 Transformers

Next, you can finally start adding new code to 🤗 Transformers. Go into the clone of your 🤗 Transformers fork:

```bash
cd transformers
```

In the special case that you are adding a model whose architecture exactly matches the model architecture of an existing model, you only have to add a conversion script as described in [this section](#write-a-conversion-script). In this case, you can just reuse the whole model architecture of the already existing model.

Otherwise, let's start generating a new model. You have two choices here:

- `transformers-cli add-new-model-like` to add a new model like an existing one
- `transformers-cli add-new-model` to add a new model from our template (will look like BERT or Bart depending on the type of model you select)

In both cases, you'll be prompted with a questionnaire to fill in the basic information of your model. The second command requires installing `cookiecutter`; you can find more information on it [here](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model).
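For illustration, a sketch of the first option; the command is interactive, and the example answers in the comments are placeholders:

```bash
# run from the root of your transformers clone; a questionnaire will ask,
# among other things, which existing model to start from (e.g. bert) and
# the name of your new model (e.g. brand_new_bert)
transformers-cli add-new-model-like
```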
**Erรถffnen Sie einen Pull Request auf dem Haupt-Repositorium huggingface/transformers** Bevor Sie mit der Anpassung des automatisch generierten Codes beginnen, ist es nun an der Zeit, einen "Work in progress (WIP)" Pull Anfrage, *z.B.* "[WIP] Add *brand_new_bert*", in ๐Ÿค— Transformers zu รถffnen, damit Sie und das Hugging Face Team Seite an Seite an der Integration des Modells in ๐Ÿค— Transformers arbeiten kรถnnen. Sie sollten Folgendes tun: 1. Erstellen Sie eine Verzweigung mit einem beschreibenden Namen von Ihrer Hauptverzweigung ```bash git checkout -b add_brand_new_bert ``` 2. Bestรคtigen Sie den automatisch generierten Code: ```bash git add . git commit ``` 3. Abrufen und zurรผcksetzen auf die aktuelle Haupt ```bash git fetch upstream git rebase upstream/main ``` 4. รœbertragen Sie die ร„nderungen auf Ihr Konto mit: ```bash git push -u origin a-descriptive-name-for-my-changes ``` 5. Wenn Sie zufrieden sind, gehen Sie auf die Webseite Ihrer Abspaltung auf GitHub. Klicken Sie auf "Pull request". Stellen Sie sicher, dass Sie das GitHub-Handle einiger Mitglieder des Hugging Face-Teams als Reviewer hinzuzufรผgen, damit das Hugging Face-Team รผber zukรผnftige ร„nderungen informiert wird. zukรผnftige ร„nderungen benachrichtigt wird. 6. ร„ndern Sie den PR in einen Entwurf, indem Sie auf der rechten Seite der GitHub-Pull-Request-Webseite auf "In Entwurf umwandeln" klicken. Vergessen Sie im Folgenden nicht, wenn Sie Fortschritte gemacht haben, Ihre Arbeit zu committen und in Ihr Konto zu pushen, damit sie in der Pull-Anfrage erscheint. damit sie in der Pull-Anfrage angezeigt wird. AuรŸerdem sollten Sie darauf achten, dass Sie Ihre Arbeit von Zeit zu Zeit mit dem aktuellen main von Zeit zu Zeit zu aktualisieren, indem Sie dies tun: ```bash git fetch upstream git merge upstream/main ``` Generell sollten Sie alle Fragen, die Sie in Bezug auf das Modell oder Ihre Implementierung haben, in Ihrem PR stellen und in der PR diskutiert/gelรถst werden. Auf diese Weise wird das Hugging Face Team immer benachrichtigt, wenn Sie neuen Code einreichen oder wenn Sie eine Frage haben. Es ist oft sehr hilfreich, das Hugging Face-Team auf Ihren hinzugefรผgten Code hinzuweisen, damit das Hugging Face-Team Ihr Problem oder Ihre Frage besser verstehen kann. Face-Team Ihr Problem oder Ihre Frage besser verstehen kann. Gehen Sie dazu auf die Registerkarte "Geรคnderte Dateien", auf der Sie alle Ihre ร„nderungen sehen, gehen Sie zu einer Zeile, zu der Sie eine Frage stellen mรถchten eine Frage stellen mรถchten, und klicken Sie auf das "+"-Symbol, um einen Kommentar hinzuzufรผgen. Wenn eine Frage oder ein Problem gelรถst wurde, kรถnnen Sie auf die Schaltflรคche "Lรถsen" des erstellten Kommentars klicken. Auf dieselbe Weise wird das Hugging Face-Team Kommentare รถffnen, wenn es Ihren Code รผberprรผft. Wir empfehlen, die meisten Fragen auf GitHub in Ihrem PR zu stellen. Fรผr einige sehr allgemeine Fragen, die fรผr die ร–ffentlichkeit nicht sehr nรผtzlich sind, kรถnnen Sie das Hugging Face Team per Slack oder E-Mail zu stellen. **5. Passen Sie den Code der generierten Modelle fรผr brand_new_bert** an. Zunรคchst werden wir uns nur auf das Modell selbst konzentrieren und uns nicht um den Tokenizer kรผmmern. Den gesamten relevanten Code sollten Sie finden Sie in den generierten Dateien `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` und `src/transformers/models/brand_new_bert/configuration_brand_new_bert.py`. Jetzt kรถnnen Sie endlich mit dem Programmieren beginnen :). 
The generated code in `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` will either have the same architecture as BERT, if it's an encoder-only model, or BART, if it's an encoder-decoder model. At this point, you should remind yourself of what you learned in the beginning about the theoretical aspects of the model: *How is the model different from BERT or BART?* Implement those changes, which often means changing the *self-attention* layer, the order of the normalization layer, etc. Again, it is often useful to look at the similar architecture of already existing models in Transformers to get a better feeling of how your model should be implemented.

**Note** that at this point, you don't have to be very sure that your code is fully correct or clean. Rather, it is advised to add a first *unclean*, copy-pasted version of the original code to `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` until you feel like all the necessary code has been added. From our experience, it is much more efficient to quickly add a first version of the required code and improve/correct the code iteratively with the conversion script, as described in the next section.

The only thing that has to work at this point is that you can instantiate the 🤗 Transformers implementation of *brand_new_bert*, *i.e.* the following command should work:

```python
from transformers import BrandNewBertModel, BrandNewBertConfig

model = BrandNewBertModel(BrandNewBertConfig())
```

The above command will create a model according to the default parameters as defined in `BrandNewBertConfig()`, with random weights, thus making sure that the `init()` methods of all components work.

Note that all random initialization should happen in the `_init_weights` method of your `BrandNewBertPreTrainedModel` class. It should initialize all leaf modules depending on the variables of the config. Here is an example with the BERT `_init_weights` method:

```py
def _init_weights(self, module):
    """Initialize the weights"""
    if isinstance(module, nn.Linear):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        if module.bias is not None:
            module.bias.data.zero_()
    elif isinstance(module, nn.Embedding):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        if module.padding_idx is not None:
            module.weight.data[module.padding_idx].zero_()
    elif isinstance(module, nn.LayerNorm):
        module.bias.data.zero_()
        module.weight.data.fill_(1.0)
```

You can have some more custom schemes if you need a special initialization for some modules. For instance, in `Wav2Vec2ForPreTraining`, the last two linear layers need to have the initialization of the regular PyTorch `nn.Linear`, but all the other ones should use an initialization as above.
This is coded like this:

```py
def _init_weights(self, module):
    """Initialize the weights"""
    if isinstance(module, Wav2Vec2ForPreTraining):
        module.project_hid.reset_parameters()
        module.project_q.reset_parameters()
        module.project_hid._is_hf_initialized = True
        module.project_q._is_hf_initialized = True
    elif isinstance(module, nn.Linear):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        if module.bias is not None:
            module.bias.data.zero_()
```

The `_is_hf_initialized` flag is used internally to make sure we only initialize a submodule once. By setting it to `True` for `module.project_q` and `module.project_hid`, we make sure the custom initialization we did is not overridden later on, i.e. the `_init_weights` function won't be applied to them.

**6. Write a conversion script**

Next, you should write a conversion script that lets you convert the checkpoint you used to debug *brand_new_bert* in the original repository to a checkpoint compatible with your just created 🤗 Transformers implementation of *brand_new_bert*. It is not advised to write the conversion script from scratch; rather, look through the already existing conversion scripts in 🤗 Transformers for one that has been used to convert a similar model that was written in the same framework as *brand_new_bert*. Usually, it is enough to copy an already existing conversion script and slightly adapt it for your use case. Don't hesitate to ask the Hugging Face team to point you to a similar, already existing conversion script for your model.

- If you're porting a model from TensorFlow to PyTorch, a good starting point is BERT's conversion script [here](https://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/models/bert/modeling_bert.py#L91)
- If you're porting a model from PyTorch to PyTorch, a good starting point is BART's conversion script [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py)

In the following, we'll quickly explain how PyTorch models store layer weights and define layer names. In PyTorch, the name of a layer is defined by the name of the class attribute you give the layer. Let's define a dummy model in PyTorch, called `SimpleModel`, as follows:

```python
from torch import nn


class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(10, 10)
        self.intermediate = nn.Linear(10, 10)
        self.layer_norm = nn.LayerNorm(10)
```

Now we can create an instance of this model definition, which will fill all weights: `dense`, `intermediate`, `layer_norm` with random weights. We can print the model to see its architecture:

```python
model = SimpleModel()

print(model)
```

This will print out the following:

```
SimpleModel(
  (dense): Linear(in_features=10, out_features=10, bias=True)
  (intermediate): Linear(in_features=10, out_features=10, bias=True)
  (layer_norm): LayerNorm((10,), eps=1e-05, elementwise_affine=True)
)
```

We can see that the layer names are defined by the name of the class attribute in PyTorch.
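As a small follow-up to the snippet above (not part of the original example): the same attribute names reappear as the keys of the model's parameters, which is exactly what a conversion script matches against checkpoint names:

```python
# the parameter names mirror the attribute names defined in SimpleModel
for name, param in model.named_parameters():
    print(name, tuple(param.shape))
# dense.weight (10, 10)
# dense.bias (10,)
# intermediate.weight (10, 10)
# intermediate.bias (10,)
# layer_norm.weight (10,)
# layer_norm.bias (10,)
```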
You can print out the weight values of a specific layer:

```python
print(model.dense.weight.data)
```

to see that the weights were randomly initialized:

```
tensor([[-0.0818,  0.2207, -0.0749, -0.0030,  0.0045, -0.1569, -0.1598,  0.0212, -0.2077,  0.2157],
        [ 0.1044,  0.0201,  0.0990,  0.2482,  0.3116,  0.2509,  0.2866, -0.2190,  0.2166, -0.0212],
        [-0.2000,  0.1107, -0.1999, -0.3119,  0.1559,  0.0993,  0.1776, -0.1950, -0.1023, -0.0447],
        [-0.0888, -0.1092,  0.2281,  0.0336,  0.1817, -0.0115,  0.2096,  0.1415, -0.1876, -0.2467],
        [ 0.2208, -0.2352, -0.1426, -0.2636, -0.2889, -0.2061, -0.2849, -0.0465,  0.2577,  0.0402],
        [ 0.1502,  0.2465,  0.2566,  0.0693,  0.2352, -0.0530,  0.1859, -0.0604,  0.2132,  0.1680],
        [ 0.1733, -0.2407, -0.1721,  0.1484,  0.0358, -0.0633, -0.0721, -0.0090,  0.2707, -0.2509],
        [-0.1173,  0.1561,  0.2945,  0.0595, -0.1996,  0.2988, -0.0802,  0.0407,  0.1829, -0.1568],
        [-0.1164, -0.2228, -0.0403,  0.0428,  0.1339,  0.0047,  0.1967,  0.2923,  0.0333, -0.0536],
        [-0.1492, -0.1616,  0.1057,  0.1950, -0.2807, -0.2710, -0.1586,  0.0739,  0.2220,  0.2358]])
```

In the conversion script, you should fill those randomly initialized weights with the exact weights of the corresponding layer in the checkpoint. *E.g.*:

```python
# retrieve matching layer weights, e.g. by
# recursive algorithm
layer_name = "dense"
pretrained_weight = array_of_dense_layer

model_pointer = getattr(model, "dense")

model_pointer.weight.data = torch.from_numpy(pretrained_weight)
```

While doing so, you must verify that each randomly initialized weight of your PyTorch model and its corresponding checkpoint weight exactly match in both **shape and name**. To do so, it is **necessary** to add assert statements for the shape and to print out the names of the checkpoint weights. You should add statements like:

```python
assert (
    model_pointer.weight.shape == pretrained_weight.shape
), f"Pointer shape of random weight {model_pointer.shape} and array shape of checkpoint weight {pretrained_weight.shape} mismatched"
```

Besides that, you should also print out the names of both weights to make sure they match, *e.g.*:

```python
logger.info(f"Initialize PyTorch weight {layer_name} from {pretrained_weight.name}")
```

If either the shape or the name doesn't match, you probably assigned the wrong checkpoint weight to a randomly initialized layer of the 🤗 Transformers implementation. An incorrect shape is most likely due to an incorrect setting of the config parameters in `BrandNewBertConfig()` that do not exactly match those used for the checkpoint you want to convert. However, it could also be that PyTorch's implementation of a layer requires the weight to be transposed beforehand.

Finally, you should also check that **all** required weights are initialized and print out all checkpoint weights that were not used for initialization to make sure the model is correctly converted. It is completely normal that the conversion trials fail with either a wrong shape statement or a wrong name assignment.
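To see how these pieces typically fit together before debugging individual failures, here is a minimal, purely illustrative sketch of a conversion loop. It assumes — only for the sake of the example — that the original weights have been exported as a plain dict mapping the layer names above to NumPy arrays; real conversion scripts are usually more involved:

```python
import torch


def load_original_weights(model, original_state_dict):
    """Illustrative only: copy each original weight into the matching layer of the HF model."""
    for layer_name, pretrained_weight in original_state_dict.items():
        # e.g. layer_name = "dense" points at model.dense
        model_pointer = getattr(model, layer_name)
        assert (
            model_pointer.weight.shape == pretrained_weight.shape
        ), f"Shape mismatch for {layer_name}: {model_pointer.weight.shape} vs {pretrained_weight.shape}"
        model_pointer.weight.data = torch.from_numpy(pretrained_weight)
        print(f"Initialized weight {layer_name} from checkpoint")
    return model
```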
Such failures are most likely because you used incorrect parameters in `BrandNewBertConfig()`, have a wrong architecture in the 🤗 Transformers implementation, have a bug in the `init()` functions of one of the components of the 🤗 Transformers implementation, or because you need to transpose one of the checkpoint weights.

This step should be iterated with the previous step until all weights of the checkpoint are correctly loaded into the Transformers model. Having correctly loaded the checkpoint into the 🤗 Transformers implementation, you can then save the model in a folder of your choice `/path/to/converted/checkpoint/folder`, which should then contain both a `pytorch_model.bin` file and a `config.json` file:

```python
model.save_pretrained("/path/to/converted/checkpoint/folder")
```

**7. Implement the forward pass**

Having managed to correctly load the pretrained weights into the 🤗 Transformers implementation, you should now make sure that the forward pass is correctly implemented. In [Get familiar with the original repository](#34-run-a-pretrained-checkpoint-using-the-original-repository), you already created a script that runs a forward pass of the model using the original repository. Now you should write an analogous script that uses the 🤗 Transformers implementation instead of the original one. It should look as follows:

```python
model = BrandNewBertModel.from_pretrained("/path/to/converted/checkpoint/folder")
input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]
output = model(input_ids).last_hidden_state
```

It is very likely that the 🤗 Transformers implementation and the original model implementation don't give the exact same output the very first time, or that the forward pass throws an error. Don't be disappointed - it's expected! First, you should make sure that the forward pass doesn't throw any errors. It often happens that the wrong dimensions are used, leading to a *dimensionality mismatch* error, or that the wrong data type is used, *e.g.* `torch.long` instead of `torch.float32`. Don't hesitate to ask the Hugging Face team for help if you can't solve certain errors.

To make sure the 🤗 Transformers implementation works correctly, you need to verify that the outputs are equivalent to a precision of `1e-3`. First, you should ensure that the output shapes are identical, *i.e.* `outputs.shape` should yield the same value for the script of the 🤗 Transformers implementation and the original implementation. Next, you should make sure that the output values are identical as well. This is one of the most difficult parts of adding a new model. Common mistakes why the outputs are not identical are:

- Some layers were not added, *i.e.* an *activation* layer was not added, or the residual connection was forgotten
- The word embedding matrix was not tied
- The wrong positional embeddings are used because the original implementation uses an offset
- Dropout is applied during the forward pass.
To fix the last point, make sure *model.training is False* and that no dropout layer is falsely activated during the forward pass, *i.e.* pass *self.training* to [PyTorch's functional dropout](https://pytorch.org/docs/stable/nn.functional.html?highlight=dropout#torch.nn.functional.dropout).

The best way to debug such problems is usually to look at the forward pass of the original implementation and the 🤗 Transformers implementation side-by-side and check whether there are any differences. Ideally, you should debug/print out intermediate outputs of both implementations of the forward pass to find the exact position in the network where the 🤗 Transformers implementation shows a different output than the original implementation. First, make sure that the hardcoded `input_ids` in both scripts are identical. Next, verify that the outputs of the first transformation of the `input_ids` (usually the word embeddings) are identical. Then work your way up to the very last layer of the network. At some point, you will notice a difference between the two implementations, which should point you to the bug in the 🤗 Transformers implementation. From our experience, a simple and efficient way is to add many print statements at the same positions in the network in both the original implementation and the 🤗 Transformers implementation, and to successively remove the print statements that show the same values for the intermediate representations.

When you're confident that both implementations yield the same output, verify the outputs with `torch.allclose(original_output, output, atol=1e-3)`, and you're done with the most difficult part! Congratulations - the work left to be done should be a cakewalk 😊.

**8. Adding all necessary model tests**

At this point, you have successfully added a new model. However, it is very much possible that the model does not yet fully comply with the required design. To make sure the implementation is fully compatible with 🤗 Transformers, all common tests should pass. The Cookiecutter should have automatically added a test file for your model, probably under `tests/models/brand_new_bert/test_modeling_brand_new_bert.py`. Run this test file to verify that all common tests pass:

```bash
pytest tests/models/brand_new_bert/test_modeling_brand_new_bert.py
```

Having fixed all common tests, it is now crucial to ensure that all the nice work you have done is well tested, so that

- a) the community can easily understand your work by looking at the specific tests of *brand_new_bert*
- b) future changes to your model will not break any important feature of the model.

First, integration tests should be added. Those integration tests essentially do the same as the debugging scripts you used earlier to implement the model in 🤗 Transformers. A template for those model tests, called `BrandNewBertModelIntegrationTests`, has already been added by the Cookiecutter and only needs to be filled out by you.
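Purely for orientation — this is not taken from the template itself — the body of such an integration test might look like the following sketch, where the expected values are placeholders that you would replace with reference outputs from the original implementation:

```python
import unittest

import torch

from transformers import BrandNewBertModel


class BrandNewBertModelIntegrationTests(unittest.TestCase):
    def test_inference_no_head(self):
        model = BrandNewBertModel.from_pretrained("/path/to/converted/checkpoint/folder")
        input_ids = torch.tensor([[0, 4, 4, 3, 2, 4, 1, 7, 19]])
        with torch.no_grad():
            output = model(input_ids).last_hidden_state
        # placeholder values - fill in a slice of the original implementation's output
        expected_slice = torch.tensor([[[0.0000, 0.0000, 0.0000]]])
        self.assertTrue(torch.allclose(output[:, :1, :3], expected_slice, atol=1e-3))
```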
To ensure that those tests pass, run

```bash
RUN_SLOW=1 pytest -sv tests/models/brand_new_bert/test_modeling_brand_new_bert.py::BrandNewBertModelIntegrationTests
```

<Tip>

In case you are using Windows, you should replace `RUN_SLOW=1` with `SET RUN_SLOW=1`.

</Tip>

Second, all features that are special to *brand_new_bert* should additionally be tested in a separate test under `BrandNewBertModelTester`/`BrandNewBertModelTest`. This part is often forgotten but is extremely useful in two ways:

- It helps to transfer the knowledge you acquired during the model addition to the community by showing how the special features of *brand_new_bert* should work.
- Future contributors can quickly test changes to the model by running those special tests.

**9. Implement the tokenizer**

Next, we should add the tokenizer of *brand_new_bert*. Usually, the tokenizer is equivalent to or very similar to an already existing tokenizer of 🤗 Transformers. It is very important to find/extract the original tokenizer file and to manage to load this file into the 🤗 Transformers implementation of the tokenizer. To ensure that the tokenizer works correctly, it is recommended to first create a script in the original repository that inputs a string and returns the `input_ids`. It could look similar to this (in pseudo-code):

```python
input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words."
model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/")
input_ids = model.tokenize(input_str)
```

You might have to take a deeper look into the original repository again to find the correct tokenizer function, or you might even have to make changes to your clone of the original repository to only output the `input_ids`. Having written a functional tokenization script that uses the original repository, an analogous script for 🤗 Transformers should be created. It should look similar to this:

```python
from transformers import BrandNewBertTokenizer

input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words."

tokenizer = BrandNewBertTokenizer.from_pretrained("/path/to/tokenizer/folder/")

input_ids = tokenizer(input_str).input_ids
```

When both `input_ids` yield the same values, a tokenizer test file should also be added as a final step. Analogous to the modeling test files of *brand_new_bert*, the tokenization test files of *brand_new_bert* should contain a couple of hardcoded integration tests.

**10. Run end-to-end integration tests**

Having added the tokenizer, you should also add a few end-to-end integration tests that use both the model and the tokenizer to `tests/models/brand_new_bert/test_modeling_brand_new_bert.py` in 🤗 Transformers. Such a test should show on a meaningful text-to-text sample that the 🤗 Transformers implementation works as expected. A meaningful text-to-text sample can be, *e.g.*, a source-to-target translation pair, an article-to-summary pair, or a question-to-answer pair.
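As a sketch only — the generation head class name below is hypothetical, since the actual head depends on your model, and the `"..."` strings are placeholders — an article-to-summary end-to-end test added to the `BrandNewBertModelIntegrationTests` class could look roughly like this:

```python
# inside BrandNewBertModelIntegrationTests; head class name is hypothetical
def test_summarization(self):
    tokenizer = BrandNewBertTokenizer.from_pretrained("/path/to/tokenizer/folder/")
    model = BrandNewBertForConditionalGeneration.from_pretrained("/path/to/converted/checkpoint/folder")

    article = "..."  # a meaningful input text
    input_ids = tokenizer(article, return_tensors="pt").input_ids
    generated_ids = model.generate(input_ids, max_length=50)
    summary = tokenizer.decode(generated_ids[0], skip_special_tokens=True)

    # the expected output should come from the original implementation
    self.assertEqual(summary, "...")
```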
If none of the ported checkpoints has been fine-tuned on a downstream task, it is enough to simply rely on the model tests. As a final step to ensure that the model is fully functional, you should also run all tests on GPU. It can happen that you forgot to add some `.to(self.device)` statements to internal tensors of the model, which would show up as an error in such a test. In case you have no access to a GPU, the Hugging Face team can take care of running those tests for you.

**11. Add docstrings**

Now, all the necessary functionality for *brand_new_bert* is added - you're almost done! The only thing left to add is a nice docstring and a doc page. The Cookiecutter should have added a template file called `docs/source/model_doc/brand_new_bert.md` that you should fill out. Users of your model will usually look at this page first before using your model. Hence, the documentation must be understandable and concise. It is very useful for the community to add some *Tips* that show how the model should be used. Don't hesitate to ping the Hugging Face team regarding the docstrings.

Next, make sure that the docstring added to `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` is correct and includes all necessary inputs and outputs. We have a detailed guide about writing documentation and our docstring format [here](writing-documentation). It is always good to remind oneself that documentation should be treated at least as carefully as the code in 🤗 Transformers, since the documentation is usually the community's first point of contact with the model.

**Code refactor**

Great, now you have added all the necessary code for *brand_new_bert*. At this point, you should correct some potential incorrect code style by running:

```bash
make style
```

and verify that your coding style passes the quality check:

```bash
make quality
```

There are a couple of other very strict design tests in 🤗 Transformers that might still be failing, which will show up in the tests of your pull request. This is often because of some missing information in the docstring or some incorrect naming. The Hugging Face team will surely help you if you're stuck here.

Lastly, it is always a good idea to refactor one's code after having made sure that it works correctly. With all tests passing, now is a good time to go over the added code again and do some refactoring.

You have now finished the coding part, congratulations! 🎉 You are awesome! 😎

**12. Upload the models to the model hub**

In this last part, you should convert and upload all checkpoints to the model hub and add a model card for each uploaded checkpoint. You can get familiar with the hub functionalities by reading our [Model sharing and uploading page](model_sharing).
Here you should work hand-in-hand with the Hugging Face team to decide on a fitting name for each checkpoint and to get the required access rights to be able to upload the model under the organization of the author of *brand_new_bert*. The `push_to_hub` method, present in all models in `transformers`, is a quick and efficient way to push your checkpoint to the hub. A little snippet is pasted below:

```python
brand_new_bert.push_to_hub("brand_new_bert")
# Uncomment the following line to push to an organization.
# brand_new_bert.push_to_hub("<organization>/brand_new_bert")
```

It is worth spending some time to create fitting model cards for each checkpoint. The model cards should highlight the specific characteristics of the particular checkpoint, *e.g.*, on which dataset was the checkpoint pretrained/fine-tuned? For which downstream task should the model be used? Also include some code on how to use the model correctly.

**13. (Optional) Add a notebook**

It is very helpful to add a notebook that shows in detail how *brand_new_bert* can be used for inference and/or fine-tuned on a downstream task. This is not mandatory to merge your PR, but very useful for the community.

**14. Submit your finished PR**

You're done programming now and can move on to the last step, which is getting your PR merged into main. Usually, the Hugging Face team will already have helped you at this point, but it is worth taking some time to give your finished PR a nice description and eventually add comments to your code if you want to point your reviewer to certain design decisions.

### Share your work!!

Now it's time to get some credit from the community for your work! Completing a model addition is a major contribution to Transformers and the whole NLP community. Your code and the ported pretrained models will certainly be used by hundreds and possibly even thousands of developers and researchers. You should be proud of your work and share your achievement with the community.

**You have made another model that is super easy to access for everyone in the community! 🤯**
hf_public_repos/transformers/docs/source/de/run_scripts.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Train with a script

Along with the 🤗 Transformers [notebooks](./noteboks/README), there are also example scripts that demonstrate how to train a model for a task with [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch), [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow), or [JAX/Flax](https://github.com/huggingface/transformers/tree/main/examples/flax).

You will also find scripts we've used in our [research projects](https://github.com/huggingface/transformers/tree/main/examples/research_projects) and [legacy examples](https://github.com/huggingface/transformers/tree/main/examples/legacy), which are mostly community contributed. These scripts are not actively maintained and require a specific version of 🤗 Transformers that will most likely be incompatible with the latest version of the library.

The example scripts are not expected to work out-of-the-box on every problem, and you may need to adapt the script to the problem you're trying to solve. To help you with this, most of the scripts fully expose how data is preprocessed, allowing you to edit it as necessary for your use case.

For any feature you'd like to implement in an example script, please discuss it on the [forum](https://discuss.huggingface.co/) or in an [issue](https://github.com/huggingface/transformers/issues) before submitting a pull request. While we welcome bug fixes, it is unlikely that we will merge a pull request that adds more functionality at the cost of readability.

This guide will show you how to run an example summarization training script in [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) and [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization). Unless specified otherwise, all examples are expected to work with both frameworks.

## Setup

To successfully run the latest version of the example scripts, you have to **install 🤗 Transformers from source** in a new virtual environment:

```bash
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
```
For older versions of the example scripts, click on the toggle below:

<details>
  <summary>Examples for older versions of 🤗 Transformers</summary>
	<ul>
		<li><a href="https://github.com/huggingface/transformers/tree/v4.5.1/examples">v4.5.1</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v4.4.2/examples">v4.4.2</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v4.3.3/examples">v4.3.3</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v4.2.2/examples">v4.2.2</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v4.1.1/examples">v4.1.1</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v4.0.1/examples">v4.0.1</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v3.5.1/examples">v3.5.1</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v3.4.0/examples">v3.4.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v3.3.1/examples">v3.3.1</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v3.2.0/examples">v3.2.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v3.1.0/examples">v3.1.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v3.0.2/examples">v3.0.2</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.11.0/examples">v2.11.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.10.0/examples">v2.10.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.9.1/examples">v2.9.1</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.8.0/examples">v2.8.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.7.0/examples">v2.7.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.6.0/examples">v2.6.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.5.1/examples">v2.5.1</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.4.0/examples">v2.4.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.3.0/examples">v2.3.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.2.0/examples">v2.2.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.1.0/examples">v2.1.1</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.0.0/examples">v2.0.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v1.2.0/examples">v1.2.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v1.1.0/examples">v1.1.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v1.0.0/examples">v1.0.0</a></li>
	</ul>
</details>

Then switch your current clone of 🤗 Transformers to a specific version, for example v3.5.1:

```bash
git checkout tags/v3.5.1
```

After you've set up the correct library version, navigate to the example folder of your choice and install the example-specific requirements:

```bash
pip install -r requirements.txt
```

## Run a script

<frameworkcontent>
<pt>
The example script downloads and preprocesses a dataset from the 🤗 [Datasets](https://huggingface.co/docs/datasets/) library. Then the script fine-tunes a model on the dataset with the [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) on an architecture that supports summarization.
The following example shows how to fine-tune [T5-small](https://huggingface.co/t5-small) on the [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) dataset. The T5 model requires an additional `source_prefix` argument due to how it was trained. This prompt lets T5 know this is a summarization task.

```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
</pt>
<tf>
The example script downloads and preprocesses a dataset from the 🤗 [Datasets](https://huggingface.co/docs/datasets/) library. Then the script fine-tunes a model on the dataset using Keras on an architecture that supports summarization. The following example shows how to fine-tune [T5-small](https://huggingface.co/t5-small) on the [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) dataset. The T5 model requires an additional `source_prefix` argument due to how it was trained. This prompt lets T5 know this is a summarization task.

```bash
python examples/tensorflow/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 16 \
    --num_train_epochs 3 \
    --do_train \
    --do_eval
```
</tf>
</frameworkcontent>

## Distributed training and mixed precision

The [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) supports distributed training and mixed precision, which means you can also use them in a script. To enable both of these features:

- Add the `fp16` argument to enable mixed precision.
- Set the number of GPUs to use with the `nproc_per_node` argument.

```bash
torchrun \
    --nproc_per_node 8 pytorch/summarization/run_summarization.py \
    --fp16 \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

TensorFlow scripts use a [`MirroredStrategy`](https://www.tensorflow.org/guide/distributed_training#mirroredstrategy) for distributed training, and you don't need to add any additional arguments to the training script. The TensorFlow script uses multiple GPUs by default if they are available.

## Run a script on a TPU

<frameworkcontent>
<pt>
Tensor Processing Units (TPUs) are specifically designed to accelerate performance. PyTorch supports TPUs with the [XLA](https://www.tensorflow.org/xla) deep learning compiler (see [here](https://github.com/pytorch/xla/blob/master/README.md) for more details). To use a TPU, launch the `xla_spawn.py` script and use the `num_cores` argument to set the number of TPU cores you want to use.
```bash
python xla_spawn.py --num_cores 8 \
    summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
</pt>
<tf>
Tensor Processing Units (TPUs) are specifically designed to accelerate performance. TensorFlow scripts use a [`TPUStrategy`](https://www.tensorflow.org/guide/distributed_training#tpustrategy) for training on TPUs. To use a TPU, pass the name of the TPU resource to the `tpu` argument.

```bash
python run_summarization.py \
    --tpu name_of_tpu_resource \
    --model_name_or_path t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 16 \
    --num_train_epochs 3 \
    --do_train \
    --do_eval
```
</tf>
</frameworkcontent>

## Run a script with 🤗 Accelerate

🤗 [Accelerate](https://huggingface.co/docs/accelerate) is a PyTorch-only library that offers a unified method for training a model on several types of setups (CPU-only, multiple GPUs, TPUs) while maintaining complete visibility into the PyTorch training loop. Make sure you have 🤗 Accelerate installed if you don't already have it:

> Note: As Accelerate is developing rapidly, the git version of Accelerate must be installed to run the scripts.

```bash
pip install git+https://github.com/huggingface/accelerate
```

Instead of the `run_summarization.py` script, you need to use the `run_summarization_no_trainer.py` script. Scripts supported by 🤗 Accelerate have a `task_no_trainer.py` file in their folder. Begin by running the following command to create and save a configuration file:

```bash
accelerate config
```

Test your setup to make sure it is configured correctly:

```bash
accelerate test
```

Now you are ready to launch the training:

```bash
accelerate launch run_summarization_no_trainer.py \
    --model_name_or_path t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir ~/tmp/tst-summarization
```

## Use a custom dataset

The summarization script supports custom datasets as long as they are a CSV or JSON Lines file. When you use your own dataset, you need to specify several additional arguments:

- `train_file` and `validation_file` specify the path to your training and validation files.
- `text_column` is the input text to summarize.
- `summary_column` is the target text to output.
A summarization script using a custom dataset would look like this:

```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --train_file path_to_csv_or_jsonlines_file \
    --validation_file path_to_csv_or_jsonlines_file \
    --text_column text_column_name \
    --summary_column summary_column_name \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --overwrite_output_dir \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --predict_with_generate
```

## Test a script

It is often a good idea to run your script on a smaller number of dataset examples to ensure everything works as expected before committing to an entire dataset, which could take hours to complete. Use the following arguments to truncate the dataset to a maximum number of samples:

- `max_train_samples`
- `max_eval_samples`
- `max_predict_samples`

```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --max_train_samples 50 \
    --max_eval_samples 50 \
    --max_predict_samples 50 \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

Not all example scripts support the `max_predict_samples` argument. If you aren't sure whether your script supports this argument, add the `-h` argument to check:

```bash
examples/pytorch/summarization/run_summarization.py -h
```

## Resume training from a checkpoint

Another helpful option is resuming training from a previous checkpoint. This ensures you can pick up where you left off without starting over if your training gets interrupted. There are two methods to resume training from a checkpoint.

The first method uses the `output_dir previous_output_dir` argument to resume training from the latest checkpoint stored in `output_dir`. In this case, you should remove `overwrite_output_dir`:

```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --output_dir previous_output_dir \
    --predict_with_generate
```

The second method uses the `resume_from_checkpoint path_to_specific_checkpoint` argument to resume training from a specific checkpoint folder.

```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --resume_from_checkpoint path_to_specific_checkpoint \
    --predict_with_generate
```

## Share your model

All scripts can upload your final model to the [Model Hub](https://huggingface.co/models).
Make sure you are logged into Hugging Face before you begin:

```bash
huggingface-cli login
```

Then add the `push_to_hub` argument to the script. This argument creates a repository with your Hugging Face username and the folder name specified in `output_dir`.

To give your repository a specific name, use the `push_to_hub_model_id` argument to add it. The repository will be automatically listed under your namespace.

The following example shows how to upload a model with a specific repository name:

```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --push_to_hub \
    --push_to_hub_model_id finetuned-t5-cnn_dailymail \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
hf_public_repos/transformers/docs/source/de/_config.py
# docstyle-ignore
INSTALL_CONTENT = """
# Transformers installation
! pip install transformers datasets
# To install from source instead of the last release, comment the command above and uncomment the following one.
# ! pip install git+https://github.com/huggingface/transformers.git
"""

notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}]
black_avoid_patterns = {
    "{processor_class}": "FakeProcessorClass",
    "{model_class}": "FakeModelClass",
    "{object_class}": "FakeObjectClass",
}
hf_public_repos/transformers/docs/source/de/autoclass_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Load pretrained instances with an AutoClass

With so many different Transformer architectures, it can be challenging to create one for your checkpoint. As part of the 🤗 Transformers core philosophy of making the library easy, simple, and flexible to use, an `AutoClass` automatically infers and loads the correct architecture from a given checkpoint. The `from_pretrained()` method lets you quickly load a pretrained model for any architecture, so you don't have to devote time and resources to training a model from scratch. Producing this type of checkpoint-agnostic code means that if your code works for one checkpoint, it will work with another checkpoint - as long as it was trained for a similar task - even if the architecture is different.

<Tip>

Remember, architecture refers to the skeleton of the model, and checkpoints are the weights for a given architecture. For example, [BERT](https://huggingface.co/bert-base-uncased) is an architecture, while `bert-base-uncased` is a checkpoint. Model is a general term that can mean either architecture or checkpoint.

</Tip>

In this tutorial, learn to:

* Load a pretrained tokenizer.
* Load a pretrained feature extractor.
* Load a pretrained processor.
* Load a pretrained model.

## AutoTokenizer

Nearly every NLP task begins with a tokenizer. A tokenizer converts your input into a format that can be processed by the model.

Load a tokenizer with [`AutoTokenizer.from_pretrained`]:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
```

Then tokenize your input as shown below:

```py
>>> sequence = "In a hole in the ground there lived a hobbit."
>>> print(tokenizer(sequence))
{'input_ids': [101, 1999, 1037, 4920, 1999, 1996, 2598, 2045, 2973, 1037, 7570, 10322, 4183, 1012, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```

## AutoFeatureExtractor

For audio and vision tasks, a feature extractor processes the audio signal or image into the correct input format.

Load a feature extractor with [`AutoFeatureExtractor.from_pretrained`]:

```py
>>> from transformers import AutoFeatureExtractor

>>> feature_extractor = AutoFeatureExtractor.from_pretrained(
...     "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"
... )
```

## AutoProcessor

Multimodal tasks require a processor that combines two types of preprocessing tools.
For example, the [LayoutLMV2](model_doc/layoutlmv2) model requires a feature extractor to handle images and a tokenizer to handle text; a processor combines both of them.

Load a processor with [`AutoProcessor.from_pretrained`]:

```py
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased")
```

## AutoModel

<frameworkcontent>
<pt>
Finally, the `AutoModelFor` classes let you load a pretrained model for a given task (see [here](model_doc/auto) for a complete list of available tasks). For example, load a model for sequence classification with [`AutoModelForSequenceClassification.from_pretrained`]:

```py
>>> from transformers import AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```

You can easily reuse the same checkpoint to load an architecture for a different task:

```py
>>> from transformers import AutoModelForTokenClassification

>>> model = AutoModelForTokenClassification.from_pretrained("distilbert-base-uncased")
```

<Tip warning={true}>

For PyTorch models, the `from_pretrained()` method uses `torch.load()`, which internally uses `pickle` and is known to be insecure. In general, never load a model that could have come from an untrusted source, or that could have been tampered with. This security risk is partially mitigated for public models hosted on the Hugging Face Hub, which are [scanned for malware](https://huggingface.co/docs/hub/security-malware) at each commit. See the [Hub documentation](https://huggingface.co/docs/hub/security) for best practices like [signed commit verification](https://huggingface.co/docs/hub/security-gpg#signing-commits-with-gpg) with GPG.

TensorFlow and Flax checkpoints are not affected, and can be loaded within PyTorch architectures using the `from_tf` and `from_flax` kwargs for the `from_pretrained` method to circumvent this issue.

</Tip>

Generally, we recommend using the `AutoTokenizer` class and the `AutoModelFor` class to load pretrained instances of models. This ensures you load the correct architecture every time. In the next [tutorial](preprocessing), learn how to use your newly loaded tokenizer, feature extractor, and processor to preprocess a dataset for fine-tuning.
</pt>
<tf>
Finally, the `TFAutoModelFor` classes let you load a pretrained model for a given task (see [here](model_doc/auto) for a complete list of available tasks). For example, load a model for sequence classification with [`TFAutoModelForSequenceClassification.from_pretrained`]:

```py
>>> from transformers import TFAutoModelForSequenceClassification

>>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```

You can easily reuse the same checkpoint to load an architecture for a different task:

```py
>>> from transformers import TFAutoModelForTokenClassification

>>> model = TFAutoModelForTokenClassification.from_pretrained("distilbert-base-uncased")
```

Generally, we recommend using the `AutoTokenizer` class and the `TFAutoModelFor` class to load pretrained instances of models.
This ensures you load the correct architecture every time. In the next [tutorial](preprocessing), learn how to use your newly loaded tokenizer, feature extractor, and processor to preprocess a dataset for fine-tuning.
</tf>
</frameworkcontent>
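To tie these pieces together, here is a small recap sketch (PyTorch, reusing the example checkpoint from above) that chains an `AutoTokenizer` and an `AutoModelFor` class for a single forward pass. Note that the classification head on top of the base checkpoint is randomly initialized, so the logits are not meaningful until the model is fine-tuned:

```py
>>> from transformers import AutoModelForSequenceClassification, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

>>> inputs = tokenizer("In a hole in the ground there lived a hobbit.", return_tensors="pt")
>>> outputs = model(**inputs)
>>> outputs.logits.shape  # (batch_size, num_labels) with the default num_labels=2
torch.Size([1, 2])
```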
hf_public_repos/transformers/docs/source/de/peft.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Load adapters with 🤗 PEFT

[[open-in-colab]]

[Parameter-Efficient Fine Tuning (PEFT)](https://huggingface.co/blog/peft) methods freeze the pretrained model parameters during fine-tuning and add a small number of trainable parameters (the adapters) on top. The adapters are trained to learn task-specific information. This approach has been shown to be very memory-efficient and to use less compute, while producing results comparable to a fully fine-tuned model.

Adapters trained with PEFT are usually an order of magnitude smaller than the full model as well, making them convenient to share, store, and load.

<div class="flex flex-col justify-center">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/PEFT-hub-screenshot.png"/>
  <figcaption class="text-center">The adapter weights for an OPTForCausalLM model stored on the Hub are only ~6MB, compared to the full size of the model weights, which can be ~700MB.</figcaption>
</div>

If you'd like to learn more about the 🤗 PEFT library, check out the [documentation](https://huggingface.co/docs/peft/index).

## Setup

Get started by installing 🤗 PEFT:

```bash
pip install peft
```

If you want to try out the brand new features, you might want to install the library from source:

```bash
pip install git+https://github.com/huggingface/peft.git
```

## Supported PEFT models

🤗 Transformers natively supports some PEFT methods, meaning you can load adapter weights stored locally or on the Hub and easily run or train them with a few lines of code. The following methods are supported:

- [Low Rank Adapters](https://huggingface.co/docs/peft/conceptual_guides/lora)
- [IA3](https://huggingface.co/docs/peft/conceptual_guides/ia3)
- [AdaLoRA](https://arxiv.org/abs/2303.10512)

If you want to use other PEFT methods, such as prompt learning or prompt tuning, or learn about the 🤗 PEFT library in general, please refer to the [documentation](https://huggingface.co/docs/peft/index).

## Load a PEFT adapter

To load and use a PEFT adapter model from 🤗 Transformers, make sure the Hub repository or local directory contains an `adapter_config.json` file and the adapter weights, as shown in the example image above. Then you can load the PEFT adapter model using the `AutoModelFor` class. For example, to load a PEFT adapter model for causal language modeling:

1. Specify the PEFT model id.
2. Pass it to the [`AutoModelForCausalLM`] class.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "ybelkada/opt-350m-lora"
model = AutoModelForCausalLM.from_pretrained(peft_model_id)
```

<Tip>

You can load a PEFT adapter with either an `AutoModelFor` class or the base model class like `OPTForCausalLM` or `LlamaForCausalLM`.

</Tip>

You can also load a PEFT adapter by calling the `load_adapter` method:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/opt-350m"
peft_model_id = "ybelkada/opt-350m-lora"

model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
```

## Load in 8bit or 4bit

The `bitsandbytes` integration supports 8bit and 4bit precision data types, which are useful for loading large models because they save memory (read the `bitsandbytes` integration [guide](./quantization#bitsandbytes-integration) to learn more). Add the `load_in_8bit` or `load_in_4bit` parameters to [`~PreTrainedModel.from_pretrained`] and set `device_map="auto"` to effectively distribute the model to your hardware:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "ybelkada/opt-350m-lora"
model = AutoModelForCausalLM.from_pretrained(peft_model_id, device_map="auto", load_in_8bit=True)
```

## Add a new adapter

You can use [`~peft.PeftModel.add_adapter`] to add a new adapter to a model with an existing adapter, as long as the new adapter is of the same type as the current one. For example, if you have an existing LoRA adapter attached to a model:

```py
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import LoraConfig

model_id = "facebook/opt-350m"
model = AutoModelForCausalLM.from_pretrained(model_id)

lora_config = LoraConfig(
    target_modules=["q_proj", "k_proj"],
    init_lora_weights=False
)

model.add_adapter(lora_config, adapter_name="adapter_1")
```

To add a new adapter:

```py
# attach new adapter with same config
model.add_adapter(lora_config, adapter_name="adapter_2")
```

Now you can use [`~peft.PeftModel.set_adapter`] to set which adapter to use:

```py
# use adapter_1
model.set_adapter("adapter_1")
output = model.generate(**inputs)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# use adapter_2
model.set_adapter("adapter_2")
output_enabled = model.generate(**inputs)
print(tokenizer.decode(output_enabled[0], skip_special_tokens=True))
```

## Enable and disable adapters

Once you've added an adapter to a model, you can enable or disable the adapter module.
To enable the adapter module:

```py
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import PeftConfig

model_id = "facebook/opt-350m"
adapter_model_id = "ybelkada/opt-350m-lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
text = "Hello"
inputs = tokenizer(text, return_tensors="pt")

model = AutoModelForCausalLM.from_pretrained(model_id)
peft_config = PeftConfig.from_pretrained(adapter_model_id)

# to initiate with random weights
peft_config.init_lora_weights = False

model.add_adapter(peft_config)
model.enable_adapters()
output = model.generate(**inputs)
```

To disable the adapter module:

```py
model.disable_adapters()
output = model.generate(**inputs)
```

## Train a PEFT adapter

PEFT adapters are supported by the [`Trainer`] class, so you can train an adapter for your specific use case. It only requires adding a few more lines of code. For example, to train a LoRA adapter:

<Tip>

If you aren't familiar with fine-tuning a model with [`Trainer`], take a look at the [Fine-tune a pretrained model](training) tutorial.

</Tip>

1. Define your adapter configuration with the task type and the hyperparameters (see [`~peft.LoraConfig`] for more details about what the hyperparameters do).

```py
from peft import LoraConfig

peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    task_type="CAUSAL_LM",
)
```

2. Add the adapter to the model.

```py
model.add_adapter(peft_config)
```

3. Now you can pass the model to [`Trainer`]!

```py
trainer = Trainer(model=model, ...)
trainer.train()
```

To save your trained adapter and load it back:

```py
model.save_pretrained(save_dir)
model = AutoModelForCausalLM.from_pretrained(save_dir)
```

<!-- TODO: (@younesbelkada @stevhliu)
-   Link to PEFT docs for further details
-   Trainer
-   8-bit / 4-bit examples ?
-->
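To make the three numbered steps above concrete, here is one possible end-to-end sketch. The dataset, tokenization settings, and training arguments are placeholders chosen purely for illustration and are not part of this guide:

```py
from datasets import load_dataset
from peft import LoraConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "facebook/opt-350m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# 1. define the adapter configuration
peft_config = LoraConfig(lora_alpha=16, lora_dropout=0.1, r=64, bias="none", task_type="CAUSAL_LM")
# 2. attach the adapter to the model
model.add_adapter(peft_config)

# placeholder dataset: any tokenized text dataset works here
dataset = load_dataset("imdb", split="train[:100]")
dataset = dataset.map(lambda x: tokenizer(x["text"], truncation=True, max_length=128), batched=True)

# 3. hand the model to Trainer; the collator creates the causal LM labels
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="opt-350m-lora-example", per_device_train_batch_size=2, max_steps=10),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

model.save_pretrained("opt-350m-lora-example")
```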
hf_public_repos/transformers/docs/source/de/accelerate.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Distributed training with 🤗 Accelerate

As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and accelerating training speed by several orders of magnitude. At Hugging Face, we created the [🤗 Accelerate](https://huggingface.co/docs/accelerate) library to help users easily train a 🤗 Transformers model on any type of distributed setup, whether it is multiple GPUs on one machine or multiple GPUs across several machines. In this tutorial, learn how to customize your native PyTorch training loop to enable training in a distributed environment.

## Setup

Get started by installing 🤗 Accelerate:

```bash
pip install accelerate
```

Then import and create an [`~accelerate.Accelerator`] object. The [`~accelerate.Accelerator`] automatically detects your type of distributed setup and initializes all the necessary components for training. You don't need to explicitly place your model on a device.

```py
>>> from accelerate import Accelerator

>>> accelerator = Accelerator()
```

## Prepare to accelerate

The next step is to pass all the relevant training objects to the [`~accelerate.Accelerator.prepare`] method. This includes your training and evaluation DataLoaders, a model, and an optimizer:

```py
>>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
...     train_dataloader, eval_dataloader, model, optimizer
... )
```

## Backward

The last addition is to replace the typical `loss.backward()` in your training loop with 🤗 Accelerate's [`~accelerate.Accelerator.backward`] method:

```py
>>> for epoch in range(num_epochs):
...     for batch in train_dataloader:
...         outputs = model(**batch)
...         loss = outputs.loss
...         accelerator.backward(loss)
...         optimizer.step()
...         lr_scheduler.step()
...         optimizer.zero_grad()
...         progress_bar.update(1)
```

As you can see in the following code, you only need to add four additional lines of code to your training loop to enable distributed training!
```diff
+ from accelerate import Accelerator
  from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler

+ accelerator = Accelerator()

  model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
  optimizer = AdamW(model.parameters(), lr=3e-5)

- device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
- model.to(device)

+ train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
+     train_dataloader, eval_dataloader, model, optimizer
+ )

  num_epochs = 3
  num_training_steps = num_epochs * len(train_dataloader)
  lr_scheduler = get_scheduler(
      "linear",
      optimizer=optimizer,
      num_warmup_steps=0,
      num_training_steps=num_training_steps
  )

  progress_bar = tqdm(range(num_training_steps))

  model.train()
  for epoch in range(num_epochs):
      for batch in train_dataloader:
-         batch = {k: v.to(device) for k, v in batch.items()}
          outputs = model(**batch)
          loss = outputs.loss
-         loss.backward()
+         accelerator.backward(loss)

          optimizer.step()
          lr_scheduler.step()
          optimizer.zero_grad()
          progress_bar.update(1)
```

## Train

Once you've added the relevant lines of code, launch your training in a script or a notebook like Colaboratory.

### Train with a script

If you are running your training from a script, run the following command to create and save a configuration file:

```bash
accelerate config
```

Then launch your training with:

```bash
accelerate launch train.py
```

### Train with a notebook

🤗 Accelerate can also run in a notebook if you're planning on using Colaboratory's TPUs. Wrap all the code responsible for training in a function, and pass it to [`~accelerate.notebook_launcher`]:

```py
>>> from accelerate import notebook_launcher

>>> notebook_launcher(training_function)
```

For more information about 🤗 Accelerate and its rich features, refer to the [documentation](https://huggingface.co/docs/accelerate).
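For illustration, here is a minimal, hedged sketch of what such a `training_function` could look like. The toy linear model and random tensors are stand-ins so the snippet stays self-contained; in practice you would build your model, dataloaders, optimizer, and scheduler exactly as in the training loop above:

```py
import torch
from accelerate import Accelerator, notebook_launcher
from torch.utils.data import DataLoader, TensorDataset


def training_function():
    # Everything is constructed inside the function so that each process
    # spawned by notebook_launcher builds its own copies.
    accelerator = Accelerator()

    # Toy stand-ins for the model and data used in this guide.
    model = torch.nn.Linear(10, 2)
    dataset = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
    train_dataloader = DataLoader(dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

    model, optimizer, train_dataloader = accelerator.prepare(model, optimizer, train_dataloader)

    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for epoch in range(3):
        for inputs, labels in train_dataloader:
            outputs = model(inputs)
            loss = loss_fn(outputs, labels)
            accelerator.backward(loss)
            optimizer.step()
            optimizer.zero_grad()


notebook_launcher(training_function)
```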
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/pr_checks.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Checks on a Pull Request

When you open a pull request on 🤗 Transformers, a fair number of checks are run to make sure the patch you are adding does not break anything existing. Those checks are of four types:
- regular tests
- documentation build
- code and documentation style
- general repository consistency

In this document, we will try to explain what those various checks are and how you can debug them locally if one of them fails on your PR.

Note that, ideally, they require you to have a dev install:

```bash
pip install transformers[dev]
```

or for an editable install:

```bash
pip install -e .[dev]
```

inside the Transformers repo. Since the number of optional dependencies of Transformers has grown a lot, it's possible you can't manage to get all of them. If the dev install fails, make sure to install the Deep Learning framework you are working with (PyTorch, TensorFlow and/or Flax), then do

```bash
pip install transformers[quality]
```

or for an editable install:

```bash
pip install -e .[quality]
```

## Tests

All jobs that begin with `ci/circleci: run_tests_` run parts of the Transformers testing suite. Each of those jobs focuses on a part of the library in a certain environment: for instance, `ci/circleci: run_tests_pipelines_tf` runs the pipelines tests in an environment where only TensorFlow is installed.

Note that to avoid running tests when there is no real change in the modules they are testing, only part of the test suite is run each time: a utility is run to determine the differences in the library between before and after the PR (what GitHub shows you in the "Files changes" tab) and picks the tests impacted by that diff. That utility can be run locally with:

```bash
python utils/tests_fetcher.py
```

from the root of the Transformers repo. It will:

1. Check for each file in the diff whether the changes are in the code or only in comments or docstrings. Only the files with real code changes are kept.
2. Build an internal map that gives, for each file of the source code of the library, all the files it recursively impacts. Module A is said to impact module B if module B imports module A. For the recursive impact, we need a chain of modules going from module A to module B in which each module imports the previous one.
3. Apply this map on the files gathered in step 1, which gives us the list of model files impacted by the PR.
4. Map each of those files to their corresponding test file(s) and get the list of tests to run.

When running the script locally, you should get the results of steps 1, 3 and 4 printed out and thus know which tests are run. The script will also create a file named `test_list.txt` which contains the list of tests to run, and you can run them locally with the following command:

```bash
python -m pytest -n 8 --dist=loadfile -rA -s $(cat test_list.txt)
```

Just in case anything slipped through the cracks, the full test suite is also run daily.

## Documentation build

The `build_pr_documentation` job builds and generates a preview of the documentation to make sure everything looks okay once your PR is merged. A bot will add a link to preview the documentation to your PR. Any changes you make to the PR are automatically updated in the preview. If the documentation fails to build, click on **Details** next to the failed job to see where things went wrong. Often, the error is as simple as a missing file in the `toctree`.

If you're interested in building or previewing the documentation locally, take a look at the [`README.md`](https://github.com/huggingface/transformers/tree/main/docs) in the docs folder.

## Code and documentation style

Code formatting is applied to all the source files, the examples and the tests using `black` and `ruff`. We also have a custom tool taking care of the formatting of docstrings and `rst` files (`utils/style_doc.py`), as well as of the order of the lazy imports performed in the Transformers `__init__.py` files (`utils/custom_init_isort.py`). All of this can be launched by executing

```bash
make style
```

The CI checks those have been applied inside the `ci/circleci: check_code_quality` check. It also runs `ruff`, which takes a basic look at your code and complains if it finds an undefined variable, or one that is not used. To run that check locally, use

```bash
make quality
```

This can take a lot of time, so to run the same thing on only the files you modified in the current branch, run

```bash
make fixup
```

This last command will also run all the additional checks for the repository consistency. Let's have a look at them.

## Repository consistency

This regroups all the tests to make sure your PR leaves the repository in a good state.
Sie kรถnnen diese Prรผfung lokal durchfรผhren, indem Sie Folgendes ausfรผhren: ```bash make repo-consistency ``` Dies รผberprรผft, ob: - Alle zum Init hinzugefรผgten Objekte sind dokumentiert (ausgefรผhrt von `utils/check_repo.py`) - Alle `__init__.py`-Dateien haben in ihren beiden Abschnitten den gleichen Inhalt (ausgefรผhrt von `utils/check_inits.py`) - Der gesamte Code, der als Kopie eines anderen Moduls identifiziert wurde, stimmt mit dem Original รผberein (ausgefรผhrt von `utils/check_copies.py`) - Alle Konfigurationsklassen haben mindestens einen gรผltigen Prรผfpunkt, der in ihren Dokumentationen erwรคhnt wird (ausgefรผhrt von `utils/check_config_docstrings.py`) - Alle Konfigurationsklassen enthalten nur Attribute, die in den entsprechenden Modellierungsdateien verwendet werden (ausgefรผhrt von `utils/check_config_attributes.py`) - Die รœbersetzungen der READMEs und der Index des Dokuments haben die gleiche Modellliste wie die Haupt-README (durchgefรผhrt von `utils/check_copies.py`) - Die automatisch generierten Tabellen in der Dokumentation sind auf dem neuesten Stand (ausgefรผhrt von `utils/check_table.py`) - Die Bibliothek verfรผgt รผber alle Objekte, auch wenn nicht alle optionalen Abhรคngigkeiten installiert sind (ausgefรผhrt von `utils/check_dummies.py`) Sollte diese Prรผfung fehlschlagen, mรผssen die ersten beiden Punkte manuell korrigiert werden, die letzten vier kรถnnen automatisch fรผr Sie korrigiert werden, indem Sie den Befehl ```bash make fix-copies ``` Zusรคtzliche Prรผfungen betreffen PRs, die neue Modelle hinzufรผgen, vor allem, dass: - Alle hinzugefรผgten Modelle befinden sich in einer Auto-Zuordnung (durchgefรผhrt von `utils/check_repo.py`) <!-- TODO Sylvain, add a check that makes sure the common tests are implemented.--> - Alle Modelle werden ordnungsgemรครŸ getestet (ausgefรผhrt von `utils/check_repo.py`) <!-- TODO Sylvain, add the following - All models are added to the main README, inside the main doc - All checkpoints used actually exist on the Hub --> ### Kopien prรผfen Da die Transformers-Bibliothek in Bezug auf den Modellcode sehr eigenwillig ist und jedes Modell vollstรคndig in einer einzigen Datei implementiert sein sollte, ohne sich auf andere Modelle zu stรผtzen, haben wir einen Mechanismus hinzugefรผgt, der รผberprรผft, ob eine Kopie des Codes einer Ebene eines bestimmten Modells mit dem Original รผbereinstimmt. Auf diese Weise kรถnnen wir bei einer Fehlerbehebung alle anderen betroffenen Modelle sehen und entscheiden, ob wir die ร„nderung weitergeben oder die Kopie zerstรถren. <Tip> Wenn eine Datei eine vollstรคndige Kopie einer anderen Datei ist, sollten Sie sie in der Konstante `FULL_COPIES` von `utils/check_copies.py` registrieren. </Tip> Dieser Mechanismus stรผtzt sich auf Kommentare der Form `# Kopiert von xxx`. Das `xxx` sollte den gesamten Pfad zu der Klasse der Funktion enthalten, die darunter kopiert wird. Zum Beispiel ist `RobertaSelfOutput` eine direkte Kopie der Klasse `BertSelfOutput`. Sie kรถnnen also [hier](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L289) sehen, dass sie einen Kommentar hat: ```py # Copied from transformers.models.bert.modeling_bert.BertSelfOutput ``` Beachten Sie, dass Sie dies nicht auf eine ganze Klasse anwenden, sondern auf die entsprechenden Methoden, von denen kopiert wird. 
For example, [here](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L598) you can see how `RobertaPreTrainedModel._init_weights` is copied from the same method in `BertPreTrainedModel` with the comment:

```py
# Copied from transformers.models.bert.modeling_bert.BertPreTrainedModel._init_weights
```

Sometimes the copy is exactly the same except for names: for instance, in `RobertaAttention` we use `RobertaSelfAttention` instead of `BertSelfAttention`, but other than that, the code is exactly the same. This is why `# Copied from` supports simple string replacements with the following syntax: `Copied from xxx with foo->bar`. This means the code is copied with all instances of `foo` replaced by `bar`. You can see how it is used [here](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L304C1-L304C86) in `RobertaAttention` with the comment:

```py
# Copied from transformers.models.bert.modeling_bert.BertAttention with Bert->Roberta
```

Note that there shouldn't be any spaces around the arrow (unless that space is part of the pattern to replace, of course).

You can add several patterns separated by a comma. For example, here `CamembertForMaskedLM` is a direct copy of `RobertaForMaskedLM` with two replacements: `Roberta` to `Camembert` and `ROBERTA` to `CAMEMBERT`. You can see [here](https://github.com/huggingface/transformers/blob/15082a9dc6950ecae63a0d3e5060b2fc7f15050a/src/transformers/models/camembert/modeling_camembert.py#L929) that this is done with the comment:

```py
# Copied from transformers.models.roberta.modeling_roberta.RobertaForMaskedLM with Roberta->Camembert, ROBERTA->CAMEMBERT
```

If the order matters (because one of the replacements might conflict with a previous one), the replacements are executed from left to right.

<Tip>

If the replacements change the formatting (if you replace a short name by a very long name, for instance), the copy is checked after applying the auto-formatter.

</Tip>

Another way, when the patterns are just different casings of the same replacement (with an uppercased and a lowercased variant), is to add the option `all-casing`. [Here](https://github.com/huggingface/transformers/blob/15082a9dc6950ecae63a0d3e5060b2fc7f15050a/src/transformers/models/mobilebert/modeling_mobilebert.py#L1237) is an example in `MobileBertForSequenceClassification` with the comment:

```py
# Copied from transformers.models.bert.modeling_bert.BertForSequenceClassification with Bert->MobileBert all-casing
```

In this case, the code is copied from `BertForSequenceClassification` by replacing:
- `Bert` by `MobileBert` (for instance, when using `MobileBertModel` in the init)
- `bert` by `mobilebert` (for instance, when defining `self.mobilebert`)
- `BERT` by `MOBILEBERT` (in the constant `MOBILEBERT_INPUTS_DOCSTRING`)
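To make the `all-casing` option concrete, here is a small, hedged toy demonstration of the three replacements it covers. This is not the actual logic of `utils/check_copies.py`, only a self-contained illustration:

```py
# Toy demonstration of what `Bert->MobileBert all-casing` expands to.
replacements = [
    ("Bert", "MobileBert"),  # CamelCase, e.g. class names
    ("bert", "mobilebert"),  # lowercase, e.g. attribute names
    ("BERT", "MOBILEBERT"),  # uppercase, e.g. docstring constants
]

source = 'self.bert = BertModel(config)  # see BERT_INPUTS_DOCSTRING'
for old, new in replacements:
    source = source.replace(old, new)

print(source)
# self.mobilebert = MobileBertModel(config)  # see MOBILEBERT_INPUTS_DOCSTRING
```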
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/add_new_pipeline.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# How to create a custom pipeline?

In this guide, we will see how to create a custom pipeline and share it on the [Hub](hf.co/models) or add it to the 🤗 Transformers library.

First and foremost, you need to decide the raw entries the pipeline will be able to take. It can be strings, raw bytes, dictionaries, or whatever seems to be the most likely desired input. Try to keep these inputs as pure Python as possible, as it makes compatibility easier (even through other languages via JSON). Those will be the inputs of the pipeline (`preprocess`).

Then define the `outputs`. Same policy as for the inputs. The simpler, the better. Those will be the outputs of the `postprocess` method.

Start by inheriting the base class `Pipeline` with the 4 methods needed to implement: `preprocess`, `_forward`, `postprocess`, and `_sanitize_parameters`.

```python
from transformers import Pipeline


class MyPipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        preprocess_kwargs = {}
        if "maybe_arg" in kwargs:
            preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"]
        return preprocess_kwargs, {}, {}

    def preprocess(self, inputs, maybe_arg=2):
        model_input = Tensor(inputs["input_ids"])
        return {"model_input": model_input}

    def _forward(self, model_inputs):
        # model_inputs == {"model_input": model_input}
        outputs = self.model(**model_inputs)
        # Maybe {"logits": Tensor(...)}
        return outputs

    def postprocess(self, model_outputs):
        best_class = model_outputs["logits"].softmax(-1)
        return best_class
```

The structure of this breakdown is meant to support relatively seamless CPU/GPU handling, while also supporting pre-/post-processing on the CPU on different threads.

`preprocess` will take the originally defined inputs and turn them into something feedable to the model. It might contain more information and is usually a `Dict`.

`_forward` is the implementation detail and is not meant to be called directly. `forward` is the preferred called method, as it contains safeguards to make sure everything works on the expected device. If anything is linked to a real model, it belongs in the `_forward` method; anything else goes in the preprocess/postprocess methods.

The `postprocess` method takes the output of `_forward` and turns it into the final output that was decided earlier.
The `_sanitize_parameters` method allows users to pass any parameters whenever they wish, be it at initialization time `pipeline(...., maybe_arg=4)` or at call time `pipe = pipeline(...); output = pipe(...., maybe_arg=4)`.

The returns of `_sanitize_parameters` are the 3 dicts of kwargs that will be passed directly to `preprocess`, `_forward`, and `postprocess`. Don't fill anything if the caller didn't call with any extra parameter. That allows keeping the default arguments in the function definition, which is always more "natural".

A classic example would be a `top_k` argument in the post processing of classification tasks.

```python
>>> pipe = pipeline("my-new-task")
>>> pipe("This is a test")
[{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}, {"label": "3-star", "score": 0.05},
{"label": "4-star", "score": 0.025}, {"label": "5-star", "score": 0.025}]

>>> pipe("This is a test", top_k=2)
[{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}]
```

In order to achieve that, we'll update our `postprocess` method with a default parameter to `5`, and edit `_sanitize_parameters` to allow this new parameter.

```python
def postprocess(self, model_outputs, top_k=5):
    best_class = model_outputs["logits"].softmax(-1)
    # Add logic to handle top_k
    return best_class


def _sanitize_parameters(self, **kwargs):
    preprocess_kwargs = {}
    if "maybe_arg" in kwargs:
        preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"]

    postprocess_kwargs = {}
    if "top_k" in kwargs:
        postprocess_kwargs["top_k"] = kwargs["top_k"]
    return preprocess_kwargs, {}, postprocess_kwargs
```

Try to keep the inputs/outputs very simple and ideally JSON-serializable, as it makes the pipeline very easy to use without requiring users to understand new kinds of objects. It's also relatively common to support many different types of arguments for ease of use (audio files, which can be filenames, URLs, or pure bytes).

## Adding it to the list of supported tasks

To register your `new-task` in the list of supported tasks, you have to add it to the `PIPELINE_REGISTRY`:

```python
from transformers.pipelines import PIPELINE_REGISTRY

PIPELINE_REGISTRY.register_pipeline(
    "new-task",
    pipeline_class=MyPipeline,
    pt_model=AutoModelForSequenceClassification,
)
```

You can specify a default model if you want, in which case it should come with a specific revision (which can be the name of a branch or a commit hash, here we took `"abcdef"`) as well as the type:

```python
PIPELINE_REGISTRY.register_pipeline(
    "new-task",
    pipeline_class=MyPipeline,
    pt_model=AutoModelForSequenceClassification,
    default={"pt": ("user/awesome_model", "abcdef")},
    type="text",  # current support type: text, audio, image, multimodal
)
```

## Share your pipeline on the Hub

To share your custom pipeline on the Hub, you just have to save the custom code of your `Pipeline` subclass in a Python file.
For instance, let's say we want to use a custom pipeline for sentence pair classification like this:

```py
import numpy as np

from transformers import Pipeline


def softmax(outputs):
    maxes = np.max(outputs, axis=-1, keepdims=True)
    shifted_exp = np.exp(outputs - maxes)
    return shifted_exp / shifted_exp.sum(axis=-1, keepdims=True)


class PairClassificationPipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        preprocess_kwargs = {}
        if "second_text" in kwargs:
            preprocess_kwargs["second_text"] = kwargs["second_text"]
        return preprocess_kwargs, {}, {}

    def preprocess(self, text, second_text=None):
        return self.tokenizer(text, text_pair=second_text, return_tensors=self.framework)

    def _forward(self, model_inputs):
        return self.model(**model_inputs)

    def postprocess(self, model_outputs):
        logits = model_outputs.logits[0].numpy()
        probabilities = softmax(logits)

        best_class = np.argmax(probabilities)
        label = self.model.config.id2label[best_class]
        score = probabilities[best_class].item()
        logits = logits.tolist()
        return {"label": label, "score": score, "logits": logits}
```

The implementation is framework agnostic and will work for PyTorch and TensorFlow models. If we have saved this in a file named `pair_classification.py`, we can then import it and register it like this:

```py
from pair_classification import PairClassificationPipeline
from transformers.pipelines import PIPELINE_REGISTRY
from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification

PIPELINE_REGISTRY.register_pipeline(
    "pair-classification",
    pipeline_class=PairClassificationPipeline,
    pt_model=AutoModelForSequenceClassification,
    tf_model=TFAutoModelForSequenceClassification,
)
```

Once this is done, we can use it with a pretrained model. For instance, `sgugger/finetuned-bert-mrpc` has been fine-tuned on the MRPC dataset, which classifies pairs of sentences as paraphrases or not.

```py
from transformers import pipeline

classifier = pipeline("pair-classification", model="sgugger/finetuned-bert-mrpc")
```

Then we can share it on the Hub by using the `save_pretrained` method in a `Repository`:

```py
from huggingface_hub import Repository

repo = Repository("test-dynamic-pipeline", clone_from="{your_username}/test-dynamic-pipeline")
classifier.save_pretrained("test-dynamic-pipeline")
repo.push_to_hub()
```

This will copy the file where you defined `PairClassificationPipeline` inside the folder `"test-dynamic-pipeline"`, along with saving the model and tokenizer of the pipeline, before pushing everything into the repository `{your_username}/test-dynamic-pipeline`. After that, anyone can use it as long as they provide the option `trust_remote_code=True`:

```py
from transformers import pipeline

classifier = pipeline(model="{your_username}/test-dynamic-pipeline", trust_remote_code=True)
```

## Add the pipeline to 🤗 Transformers

If you want to contribute your pipeline to 🤗 Transformers, you will need to add a new module in the `pipelines` submodule with the code of your pipeline, then add it to the list of tasks defined in `pipelines/__init__.py`.

Then you will need to add tests. Create a new file `tests/test_pipelines_MY_PIPELINE.py` with examples of the other tests.
The `run_pipeline_test` function will be very generic and run on small random models on every possible architecture, as defined by `model_mapping` and `tf_model_mapping`.

This is very important to test future compatibility, meaning if someone adds a new model for `XXXForQuestionAnswering`, then the pipeline test will attempt to run on it. Because the models are random, it's impossible to check for actual values; that's why there is a helper `ANY` that will simply attempt to match the TYPE of the pipeline output.

You also *need* to implement 2 (ideally 4) tests.

- `test_small_model_pt`: Define 1 small model for this pipeline (it doesn't matter if the results don't make sense) and test the pipeline outputs. The results should be the same as `test_small_model_tf`.
- `test_small_model_tf`: Define 1 small model for this pipeline (it doesn't matter if the results don't make sense) and test the pipeline outputs. The results should be the same as `test_small_model_pt`.
- `test_large_model_pt` (`optional`): Tests the pipeline on a real pipeline where the results are supposed to make sense. These tests are slow and should be marked as such. Here the goal is to showcase the pipeline and to make sure there is no drift in future releases.
- `test_large_model_tf` (`optional`): Tests the pipeline on a real pipeline where the results are supposed to make sense. These tests are slow and should be marked as such. Here the goal is to showcase the pipeline and to make sure there is no drift in future releases.
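For orientation, here is a hedged sketch of what such a test file could look like for the pair classification pipeline from this guide. The tiny checkpoint name and the exact assertions are illustrative assumptions, not an actual test from the repository:

```py
import unittest

from pair_classification import PairClassificationPipeline  # the file from the example above
from transformers import AutoModelForSequenceClassification, pipeline
from transformers.pipelines import PIPELINE_REGISTRY
from transformers.testing_utils import slow

# Register the custom task so that `pipeline("pair-classification")` resolves.
PIPELINE_REGISTRY.register_pipeline(
    "pair-classification",
    pipeline_class=PairClassificationPipeline,
    pt_model=AutoModelForSequenceClassification,
)


class PairClassificationPipelineTests(unittest.TestCase):
    def test_small_model_pt(self):
        # A tiny random checkpoint keeps the test fast; the scores are
        # meaningless, so only the output structure is checked.
        classifier = pipeline("pair-classification", model="hf-internal-testing/tiny-random-bert")
        outputs = classifier("I like you", second_text="I love you")
        self.assertEqual(sorted(outputs.keys()), ["label", "logits", "score"])

    @slow
    def test_large_model_pt(self):
        # A real fine-tuned checkpoint, marked as slow since it downloads weights.
        classifier = pipeline("pair-classification", model="sgugger/finetuned-bert-mrpc")
        outputs = classifier("I like you", second_text="I love you")
        self.assertIn(outputs["label"], classifier.model.config.id2label.values())
```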
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/training.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Fine-tune a pretrained model

[[open-in-colab]]

There are significant benefits to using a pretrained model. It reduces computation costs and your carbon footprint, and allows you to use state-of-the-art models without having to train one from scratch. 🤗 Transformers provides access to thousands of pretrained models for a wide range of tasks. When you use a pretrained model, you train it on a dataset specific to your task. This is known as fine-tuning, an incredibly powerful training technique. In this tutorial, you will fine-tune a pretrained model with the deep learning framework of your choice:

* Fine-tune a pretrained model with 🤗 Transformers [`Trainer`].
* Fine-tune a pretrained model in TensorFlow with Keras.
* Fine-tune a pretrained model in native PyTorch.

<a id='data-processing'></a>

## Prepare a dataset

<Youtube id="_BZearw7f0w"/>

Before you can fine-tune a pretrained model, download a dataset and prepare it for training. The previous tutorial showed you how to process data for training, and now you get an opportunity to put those skills to the test!

Begin by loading the [Yelp Reviews](https://huggingface.co/datasets/yelp_review_full) dataset:

```py
>>> from datasets import load_dataset

>>> dataset = load_dataset("yelp_review_full")
>>> dataset["train"][100]
{'label': 0, 'text': 'My expectations for McDonalds are t rarely high. But for one to still fail so spectacularly...that takes something special!\\nThe cashier took my friends\'s order, then promptly ignored me. I had to force myself in front of a cashier who opened his register to wait on the person BEHIND me. I waited over five minutes for a gigantic order that included precisely one kid\'s meal. After watching two people who ordered after me be handed their food, I asked where mine was. The manager started yelling at the cashiers for \\"serving off their orders\\" when they didn\'t have their food. But neither cashier was anywhere near those controls, and the manager was the one serving food to customers and clearing the boards.\\nThe manager was rude when giving me my order. She didn\'t make sure that I had everything ON MY RECEIPT, and never even had the decency to apologize that I felt I was getting poor service.\\nI\'ve eaten at various McDonalds restaurants for over 30 years. I\'ve worked at more than one location. I expect bad days, bad moods, and the occasional mistake. But I have yet to have a decent experience at this store. It will remain a place I avoid unless someone in my party needs to avoid illness from low blood sugar. Perhaps I should go back to the racially biased service of Steak n Shake instead!'}
```

As you now know, you need a tokenizer to process the text and include a padding and truncation strategy to handle any variable sequence lengths. To process your dataset in one step, use the 🤗 Datasets [`map`](https://huggingface.co/docs/datasets/process#map) method to apply a preprocessing function over the entire dataset:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")


>>> def tokenize_function(examples):
...     return tokenizer(examples["text"], padding="max_length", truncation=True)


>>> tokenized_datasets = dataset.map(tokenize_function, batched=True)
```

If you like, you can create a smaller subset of the full dataset to fine-tune on to reduce the time it takes:

```py
>>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
>>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
```

<a id='trainer'></a>

## Train

At this point, you should follow the section corresponding to the framework you want to use. You can use the links in the right sidebar to jump to the one you want - and if you want to hide all of the content for a given framework, just use the button at the top right of that framework's block!

<frameworkcontent>
<pt>
<Youtube id="nvBXf7s7vTI"/>

## Train with PyTorch Trainer

🤗 Transformers provides a [`Trainer`] class optimized for training 🤗 Transformers models, making it easier to start training without manually writing your own training loop. The [`Trainer`] API supports a wide range of training options and features such as logging, gradient accumulation, and mixed precision.

Start by loading your model and specifying the number of expected labels. From the Yelp Review [dataset card](https://huggingface.co/datasets/yelp_review_full#data-fields), you know there are five labels:

```py
>>> from transformers import AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)
```

<Tip>

You will see a warning that some of the pretrained weights are not being used and some weights are being randomly initialized. Don't worry, this is completely normal! The pretrained head of the BERT model is discarded and replaced with a randomly initialized classification head. You will fine-tune this new model head on your sequence classification task, transferring the knowledge of the pretrained model to it.

</Tip>

### Training hyperparameters

Next, create a [`TrainingArguments`] class which contains all the hyperparameters you can tune as well as flags for activating different training options. For this tutorial you can start with the default training [hyperparameters](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments), but feel free to experiment with these to find your optimal settings.
Specify where to save the checkpoints from your training:

```py
>>> from transformers import TrainingArguments

>>> training_args = TrainingArguments(output_dir="test_trainer")
```

### Evaluate

[`Trainer`] does not automatically evaluate model performance during training. You'll need to pass [`Trainer`] a function to compute and report metrics. The [🤗 Evaluate](https://huggingface.co/docs/evaluate/index) library provides a simple [`accuracy`](https://huggingface.co/spaces/evaluate-metric/accuracy) function you can load with the [`evaluate.load`] function (see this [quicktour](https://huggingface.co/docs/evaluate/a_quick_tour) for more information):

```py
>>> import numpy as np
>>> import evaluate

>>> metric = evaluate.load("accuracy")
```

Call [`~evaluate.compute`] on `metric` to calculate the accuracy of your predictions. Before passing your predictions to `compute`, you need to convert the logits to predictions (remember all 🤗 Transformers models return logits):

```py
>>> def compute_metrics(eval_pred):
...     logits, labels = eval_pred
...     predictions = np.argmax(logits, axis=-1)
...     return metric.compute(predictions=predictions, references=labels)
```

If you'd like to monitor your evaluation metrics during fine-tuning, specify the `evaluation_strategy` parameter in your training arguments to report the evaluation metric at the end of each epoch:

```py
>>> from transformers import TrainingArguments, Trainer

>>> training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch")
```

### Trainer

Create a [`Trainer`] object with your model, training arguments, training and test datasets, and evaluation function:

```py
>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=small_train_dataset,
...     eval_dataset=small_eval_dataset,
...     compute_metrics=compute_metrics,
... )
```

Then fine-tune your model by calling [`~transformers.Trainer.train`]:

```py
>>> trainer.train()
```

</pt>
<tf>
<a id='keras'></a>
<Youtube id="rnTGBy2ax1c"/>

## Train a TensorFlow model with Keras

You can also train 🤗 Transformers models in TensorFlow with the Keras API!

### Loading data for Keras

When you want to train a 🤗 Transformers model with the Keras API, you need to convert your dataset to a format that Keras understands. If your dataset is small, you can just convert the whole thing to NumPy arrays and pass it to Keras. Let's try that first before we do anything more complicated.

First, load a dataset. We'll use the CoLA dataset from the [GLUE benchmark](https://huggingface.co/datasets/glue), since it's a simple binary text classification task, and just take the training split for now.

```py
from datasets import load_dataset

dataset = load_dataset("glue", "cola")
dataset = dataset["train"]  # Just take the training split for now
```

Next, load a tokenizer and tokenize the data as NumPy arrays. Note that the labels are already a list of 0s and 1s, so we can just convert them directly to a NumPy array without tokenization!
```py
import numpy as np

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
# CoLA stores its text in the "sentence" column
tokenized_data = tokenizer(dataset["sentence"], return_tensors="np", padding=True)
# Tokenizer returns a BatchEncoding, but we convert that to a dict for Keras
tokenized_data = dict(tokenized_data)

labels = np.array(dataset["label"])  # Label is already an array of 0 and 1
```

Finally, load, [`compile`](https://keras.io/api/models/model_training_apis/#compile-method), and [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) the model:

```py
from transformers import TFAutoModelForSequenceClassification
from tensorflow.keras.optimizers import Adam

# Load and compile our model
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
# Lower learning rates are often better for fine-tuning transformers
model.compile(optimizer=Adam(3e-5))

model.fit(tokenized_data, labels)
```

<Tip>

You don't have to pass a loss argument to your models when you `compile()` them! Hugging Face models automatically choose a loss that is appropriate for their task and model architecture if this argument is left blank. You can always override this by specifying a loss yourself if you want to!

</Tip>

This approach works great for smaller datasets, but for larger datasets, you might find it starts to become a problem. Why? Because the tokenized array and labels would have to be fully loaded into memory, and because NumPy doesn't handle "jagged" arrays, every tokenized sample would have to be padded to the length of the longest sample in the whole dataset. That's going to make your array even bigger, and all those padding tokens will slow down training too!

### Loading data as a tf.data.Dataset

If you want to avoid slowing down training, you can load your data as a `tf.data.Dataset` instead. Although you can write your own `tf.data` pipeline if you want, we have two convenience methods for doing this:

- [`~TFPreTrainedModel.prepare_tf_dataset`]: This is the method we recommend in most cases. Because it is a method on your model, it can inspect the model to automatically figure out which columns are usable as model inputs, and discard the others to make a simpler, more performant dataset.
- [`~datasets.Dataset.to_tf_dataset`]: This method is more low-level, and is useful when you want to exactly control how your dataset is created, by specifying exactly which `columns` and `label_cols` to include.

Before you can use [`~TFPreTrainedModel.prepare_tf_dataset`], you will need to add the tokenizer outputs to your dataset as columns, as shown in the following code sample:

```py
def tokenize_dataset(data):
    # Keys of the returned dictionary will be added to the dataset as columns
    return tokenizer(data["sentence"])


dataset = dataset.map(tokenize_dataset)
```

Remember that Hugging Face datasets are stored on disk by default, so this will not inflate your memory usage!
Once the columns have been added, you can stream batches from the dataset and add padding to each batch, which greatly reduces the number of padding tokens compared to padding the entire dataset.

```py
>>> tf_dataset = model.prepare_tf_dataset(dataset, batch_size=16, shuffle=True, tokenizer=tokenizer)
```

Note that in the code sample above, you need to pass the tokenizer to `prepare_tf_dataset` so it can correctly pad batches as they're loaded. If all the samples in your dataset are the same length and no padding is necessary, you can skip this argument. If you need to do something more complex than just padding samples (e.g. corrupting tokens for masked language modelling), you can use the `collate_fn` argument instead to pass a function that will be called to transform the list of samples into a batch and apply any preprocessing you want. See our [examples](https://github.com/huggingface/transformers/tree/main/examples) or [notebooks](https://huggingface.co/docs/transformers/notebooks) to see this approach in action.

Once you've created a `tf.data.Dataset`, you can compile and fit the model as before:

```py
model.compile(optimizer=Adam(3e-5))

model.fit(tf_dataset)
```

</tf>
</frameworkcontent>

<a id='pytorch_native'></a>

## Train in native PyTorch

<frameworkcontent>
<pt>
<Youtube id="Dh9CL8fyG80"/>

[`Trainer`] takes care of the training loop and allows you to fine-tune a model in a single line of code. For users who prefer to write their own training loop, you can also fine-tune a 🤗 Transformers model in native PyTorch.

At this point, you may need to restart your notebook or execute the following code to free some memory:

```py
del model
del pytorch_model
del trainer
torch.cuda.empty_cache()
```

Next, manually postprocess `tokenized_dataset` to prepare it for training.

1. Remove the `text` column because the model does not accept raw text as an input:

    ```py
    >>> tokenized_datasets = tokenized_datasets.remove_columns(["text"])
    ```

2. Rename the `label` column to `labels` because the model expects the argument to be named `labels`:

    ```py
    >>> tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
    ```
3. Set the format of the dataset to return PyTorch tensors instead of lists:

    ```py
    >>> tokenized_datasets.set_format("torch")
    ```

Then create a smaller subset of the dataset as previously shown to speed up the fine-tuning:

```py
>>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
>>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
```

### DataLoader

Create a `DataLoader` for your training and test datasets so you can iterate over batches of data:

```py
>>> from torch.utils.data import DataLoader

>>> train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8)
>>> eval_dataloader = DataLoader(small_eval_dataset, batch_size=8)
```

Load your model with the number of expected labels:

```py
>>> from transformers import AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)
```

### Optimizer and learning rate scheduler

Create an optimizer and learning rate scheduler to fine-tune the model. Let's use the [`AdamW`](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html) optimizer from PyTorch:

```py
>>> from torch.optim import AdamW

>>> optimizer = AdamW(model.parameters(), lr=5e-5)
```

Create the default learning rate scheduler from [`Trainer`]:

```py
>>> from transformers import get_scheduler

>>> num_epochs = 3
>>> num_training_steps = num_epochs * len(train_dataloader)
>>> lr_scheduler = get_scheduler(
...     name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
... )
```

Lastly, specify `device` to use a GPU if you have access to one. Otherwise, training on a CPU may take several hours instead of a couple of minutes.

```py
>>> import torch

>>> device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
>>> model.to(device)
```

<Tip>

Get free access to a cloud GPU if you don't have one with a hosted notebook like [Colaboratory](https://colab.research.google.com/) or [SageMaker StudioLab](https://studiolab.sagemaker.aws/).

</Tip>

Great, now you are ready to train! 🥳

### Training loop

To keep track of your training progress, use the [tqdm](https://tqdm.github.io/) library to add a progress bar over the number of training steps:

```py
>>> from tqdm.auto import tqdm

>>> progress_bar = tqdm(range(num_training_steps))

>>> model.train()
>>> for epoch in range(num_epochs):
...     for batch in train_dataloader:
...         batch = {k: v.to(device) for k, v in batch.items()}
...         outputs = model(**batch)
...         loss = outputs.loss
...         loss.backward()

...         optimizer.step()
...         lr_scheduler.step()
...         optimizer.zero_grad()
...         progress_bar.update(1)
```

### Evaluate

Just like how you added an evaluation function to [`Trainer`], you need to do the same when you write your own training loop. But instead of calculating and reporting the metric at the end of each epoch, this time you'll accumulate all the batches with [`~evaluate.add_batch`] and calculate the metric at the very end.

```py
>>> import evaluate

>>> metric = evaluate.load("accuracy")
>>> model.eval()
>>> for batch in eval_dataloader:
...     batch = {k: v.to(device) for k, v in batch.items()}
...     with torch.no_grad():
...         outputs = model(**batch)

...     logits = outputs.logits
...     predictions = torch.argmax(logits, dim=-1)
...     metric.add_batch(predictions=predictions, references=batch["labels"])

>>> metric.compute()
```

</pt>
</frameworkcontent>

<a id='additional-resources'></a>

## Additional resources

For more fine-tuning examples, refer to:

- [🤗 Transformers Examples](https://github.com/huggingface/transformers/tree/main/examples) includes scripts to train common NLP tasks in PyTorch and TensorFlow.

- [🤗 Transformers Notebooks](notebooks) contains various notebooks on how to fine-tune a model for specific tasks in PyTorch and TensorFlow.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/index.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# 🤗 Transformers

State-of-the-art Machine Learning for PyTorch, TensorFlow and JAX.

🤗 Transformers provides APIs to easily download and train state-of-the-art pretrained models. Using pretrained models can reduce your compute costs and carbon footprint, and save you the time it takes to train a model from scratch. The models can be used across different modalities such as:

* 📝 Text: text classification, information extraction, question answering, summarization, translation, and text generation in over 100 languages.
* 🖼️ Images: image classification, object detection, and segmentation.
* 🗣️ Audio: speech recognition and audio classification.
* 🐙 Multimodal: table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.

Our library supports seamless integration between three of the most popular deep learning libraries: [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/) and [JAX](https://jax.readthedocs.io/en/latest/). Train your model in three lines of code in one framework, and load it for inference with another.

Each 🤗 Transformers architecture is defined in a standalone Python module so they can easily be customized for research and experiments.

## If you are looking for custom support from the Hugging Face team

<a target="_blank" href="https://huggingface.co/support">
    <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a>

## Contents

The documentation is organized in five parts:

- **GET STARTED** contains a quick tour and installation instructions to get up and running with 🤗 Transformers.
- **TUTORIALS** are a great place to begin if you are new to our library. This section will help you gain the basic skills you need to start using 🤗 Transformers.
- **HOW-TO GUIDES** will show you how to achieve a specific goal, like fine-tuning a pretrained model for language modeling or how to create a custom model head.
- **CONCEPTUAL GUIDES** provides more discussion and explanation of the underlying concepts and ideas behind models, tasks, and the design philosophy of 🤗 Transformers.
- **API** describes each class and function, grouped in:

    - **MAIN CLASSES** for the main classes exposing the important APIs of the library.
    - **MODELS** for the classes and functions related to each model implemented in the library.
    - **INTERNAL HELPERS** for the classes and functions we use internally.

The library currently contains JAX, PyTorch and TensorFlow implementations, pretrained model weights, usage scripts and conversion utilities for the following models.

### Supported models

<!--This list is updated automatically from the README with _make fix-copies_. Do not update manually! -->

1. **[ALBERT](model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
1. **[ALIGN](model_doc/align)** (from Google Research) released with the paper [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig.
1. **[BART](model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
1. **[BARThez](model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
1. **[BARTpho](model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
1. **[BEiT](model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
1. **[BERT](model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
1. **[BERT For Sequence Generation](model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
1. **[BERTweet](model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
1. **[BigBird-Pegasus](model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
1.
**[BigBird-RoBERTa](model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. **[Blenderbot](model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BlenderbotSmall](model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BLOOM](model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/). 1. **[BORT](model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry. 1. **[ByT5](model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 1. **[CamemBERT](model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suรกrez*, Yoann Dupont, Laurent Romary, ร‰ric Villemonte de la Clergerie, Djamรฉ Seddah and Benoรฎt Sagot. 1. **[CANINE](model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 1. **[CLIP](model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 1. **[CodeGen](model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. 1. **[ConvBERT](model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 1. **[ConvNeXT](model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 1. 
1. **[ConvNeXTV2](model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
1. **[CPM](model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
1. **[CTRL](model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
1. **[CvT](model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
1. **[Data2Vec](model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
1. **[DeBERTa](model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
1. **[DeBERTa-v2](model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
1. **[Decision Transformer](model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
1. **[DeiT](model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
1. **[DETR](model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
1. **[DialoGPT](model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
1. **[DistilBERT](model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf.
The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT.
1. **[DiT](model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
1. **[DPR](model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
1. **[DPT](model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
1. **[EfficientNet](model_doc/efficientnet)** (from Google Research) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le.
1. **[ELECTRA](model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
1. **[EncoderDecoder](model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
1. **[FlauBERT](model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
1. **[FLAVA](model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
1. **[FNet](model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
1. **[Funnel Transformer](model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
1. **[GLPN](model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
1. 
**[GPT](model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 1. **[GPT Neo](model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. 1. **[GPT NeoX](model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 1. **[GPT-2](model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**. 1. **[GPT-J](model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki. 1. **[GPTSAN-japanese](model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama). 1. **[GroupViT](model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. 1. **[Hubert](model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 1. **[I-BERT](model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 1. **[ImageGPT](model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 1. **[LayoutLM](model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 1. **[LayoutLMv2](model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 1. **[LayoutLMv3](model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. 1. 
**[LayoutXLM](model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
1. **[LED](model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
1. **[LeViT](model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
1. **[Longformer](model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
1. **[LongT5](model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
1. **[LUKE](model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
1. **[LXMERT](model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
1. **[M-CTC-T](model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
1. **[M2M100](model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
1. **[MarianMT](model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
1. **[Mask2Former](model_doc/mask2former)** (from FAIR and UIUC) released with the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar.
1. **[MaskFormer](model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
1. **[mBART](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
1. **[mBART-50](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
1. **[Megatron-BERT](model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
1. **[Megatron-GPT2](model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
1. **[mLUKE](model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
1. **[MobileBERT](model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
1. **[MobileViT](model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
1. **[MPNet](model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
1. **[MT5](model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
1. **[MVP](model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
1. **[Nezha](model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
1. **[NLLB](model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
1. **[Nyströmformer](model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
1. **[OneFormer](model_doc/oneformer)** (from SHI Labs) released with the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi.
1. **[OPT](model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1. **[Pegasus](model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[Perceiver IO](model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
1. **[PhoBERT](model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
1. **[PLBart](model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
1. **[PoolFormer](model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
1. **[ProphetNet](model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
1. **[QDQBert](model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
1. **[RAG](model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
1. **[REALM](model_doc/realm)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
1. **[Reformer](model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
1. **[RegNet](model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
1. **[RemBERT](model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
1. **[ResNet](model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
1. **[RoBERTa](model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
1. **[RoFormer](model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
1. **[SegFormer](model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
1. **[SEW](model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
1. **[SEW-D](model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
1. **[SpeechToTextTransformer](model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
1. **[SpeechToTextTransformer2](model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
1. **[Splinter](model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
1. **[SqueezeBERT](model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
1. **[Swin Transformer](model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
1. **[Swin Transformer V2](model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
1. **[T5](model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
1. **[T5v1.1](model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
1. **[TAPAS](model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
1. **[TAPEX](model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
1. **[Trajectory Transformer](model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
1. **[Transformer-XL](model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
1. **[TrOCR](model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
1. **[UL2](model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
1. **[UMT5](model_doc/umt5)** (from Google Research) released with the paper [UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining](https://openreview.net/forum?id=kXwdL1cWOAi) by Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant.
1. **[UniSpeech](model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
1. **[UniSpeechSat](model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
1. **[VAN](model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
1. **[VideoMAE](model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
1. **[ViLT](model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
1. **[Vision Transformer (ViT)](model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
1. **[VisualBERT](model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
1. **[ViTMAE](model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
1. **[Wav2Vec2](model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
1. **[Wav2Vec2-Conformer](model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
1. **[Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
1. **[WavLM](model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
1. **[XGLM](model_doc/xglm)** (from Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
1. **[XLM](model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
1. **[XLM-ProphetNet](model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
1. **[XLM-RoBERTa](model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
1. **[XLM-RoBERTa-XL](model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
1. **[XLM-V](model_doc/xlm-v)** (from Meta AI) released with the paper [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472) by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa.
1. **[XLNet](model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
1. **[XLS-R](model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
1. **[XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
1. **[YOLOS](model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
1. **[YOSO](model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.

### Supported frameworks

The table below shows the current support in the library for each of those models: whether they have a Python tokenizer (referred to as "slow"), a "fast" tokenizer backed by the 🤗 Tokenizers library, and whether they are supported in Jax (via Flax), PyTorch, and/or TensorFlow.

<!--This table is updated automatically from the auto modules with _make fix-copies_. Do not update manually!-->

| Model | Tokenizer slow | Tokenizer fast | PyTorch support | TensorFlow support | Flax Support |
|:---------------------------:|:--------------:|:--------------:|:---------------:|:------------------:|:------------:|
| ALBERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| BART | ✅ | ✅ | ✅ | ✅ | ✅ |
| BEiT | ❌ | ❌ | ✅ | ❌ | ✅ |
| BERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| Bert Generation | ✅ | ❌ | ✅ | ❌ | ❌ |
| BigBird | ✅ | ✅ | ✅ | ❌ | ✅ |
| BigBird-Pegasus | ❌ | ❌ | ✅ | ❌ | ❌ |
| Blenderbot | ✅ | ✅ | ✅ | ✅ | ✅ |
| BlenderbotSmall | ✅ | ✅ | ✅ | ✅ | ✅ |
| BLOOM | ❌ | ✅ | ✅ | ❌ | ✅ |
| CamemBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| CANINE | ✅ | ❌ | ✅ | ❌ | ❌ |
| CLIP | ✅ | ✅ | ✅ | ✅ | ✅ |
| CodeGen | ✅ | ✅ | ✅ | ❌ | ❌ |
| ConvBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| ConvNeXT | ❌ | ❌ | ✅ | ✅ | ❌ |
| CTRL | ✅ | ❌ | ✅ | ✅ | ❌ |
| CvT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Data2VecAudio | ❌ | ❌ | ✅ | ❌ | ❌ |
| Data2VecText | ❌ | ❌ | ✅ | ❌ | ❌ |
| Data2VecVision | ❌ | ❌ | ✅ | ✅ | ❌ |
| DeBERTa | ✅ | ✅ | ✅ | ✅ | ❌ |
| DeBERTa-v2 | ✅ | ✅ | ✅ | ✅ | ❌ |
| Decision Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| DeiT | ❌ | ❌ | ✅ | ✅ | ❌ |
| DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
| DistilBERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| DPR | ✅ | ✅ | ✅ | ✅ | ❌ |
| DPT | ❌ | ❌ | ✅ | ❌ | ❌ |
| ELECTRA | ✅ | ✅ | ✅ | ✅ | ✅ |
| Encoder decoder | ❌ | ❌ | ✅ | ✅ | ✅ |
| FairSeq Machine-Translation | ✅ | ❌ | ✅ | ❌ | ❌ |
| FlauBERT | ✅ | ❌ | ✅ | ✅ | ❌ |
| FLAVA | ❌ | ❌ | ✅ | ❌ | ❌ |
| FNet | ✅ | ✅ | ✅ | ❌ | ❌ |
| Funnel Transformer | ✅ | ✅ | ✅ | ✅ | ❌ |
| GLPN | ❌ | ❌ | ✅ | ❌ | ❌ |
| GPT Neo | ❌ | ❌ | ✅ | ❌ | ✅ |
| GPT NeoX | ❌ | ✅ | ✅ | ❌ | ❌ |
| GPT-J | ❌ | ❌ | ✅ | ✅ | ✅ |
| GroupViT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Hubert | ❌ | ❌ | ✅ | ✅ | ❌ |
| I-BERT | ❌ | ❌ | ✅ | ❌ | ❌ |
| ImageGPT | ❌ | ❌ | ✅ | ❌ | ❌ |
| LayoutLM | ✅ | ✅ | ✅ | ✅ | ❌ |
| LayoutLMv2 | ✅ | ✅ | ✅ | ❌ | ❌ |
| LayoutLMv3 | ✅ | ✅ | ✅ | ❌ | ❌ |
| LED | ✅ | ✅ | ✅ | ✅ | ❌ |
| LeViT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Longformer | ✅ | ✅ | ✅ | ✅ | ❌ |
| LongT5 | ❌ | ❌ | ✅ | ❌ | ✅ |
| LUKE | ✅ | ❌ | ✅ | ❌ | ❌ |
| LXMERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| M-CTC-T | ❌ | ❌ | ✅ | ❌ | ❌ |
| M2M100 | ✅ | ❌ | ✅ | ❌ | ❌ |
| Marian | ✅ | ❌ | ✅ | ✅ | ✅ |
| MaskFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| mBART | ✅ | ✅ | ✅ | ✅ | ✅ |
| Megatron-BERT | ❌ | ❌ | ✅ | ❌ | ❌ |
| MobileBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| MobileViT | ❌ | ❌ | ✅ | ❌ | ❌ |
| MPNet | ✅ | ✅ | ✅ | ✅ | ❌ |
| MT5 | ✅ | ✅ | ✅ | ✅ | ✅ |
| MVP | ✅ | ✅ | ✅ | ❌ | ❌ |
| Nezha | ❌ | ❌ | ✅ | ❌ | ❌ |
| Nyströmformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| OpenAI GPT | ✅ | ✅ | ✅ | ✅ | ❌ |
| OpenAI GPT-2 | ✅ | ✅ | ✅ | ✅ | ✅ |
| OPT | ❌ | ❌ | ✅ | ✅ | ✅ |
| OWL-ViT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Pegasus | ✅ | ✅ | ✅ | ✅ | ✅ |
| Perceiver | ✅ | ❌ | ✅ | ❌ | ❌ |
| PLBart | ✅ | ❌ | ✅ | ❌ | ❌ |
| PoolFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| ProphetNet | ✅ | ❌ | ✅ | ❌ | ❌ |
| QDQBert | ❌ | ❌ | ✅ | ❌ | ❌ |
| RAG | ✅ | ❌ | ✅ | ✅ | ❌ |
| REALM | ✅ | ✅ | ✅ | ❌ | ❌ |
| Reformer | ✅ | ✅ | ✅ | ❌ | ❌ |
| RegNet | ❌ | ❌ | ✅ | ✅ | ✅ |
| RemBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| ResNet | ❌ | ❌ | ✅ | ✅ | ✅ |
| RetriBERT | ✅ | ✅ | ✅ | ❌ | ❌ |
| RoBERTa | ✅ | ✅ | ✅ | ✅ | ✅ |
| RoFormer | ✅ | ✅ | ✅ | ✅ | ✅ |
| SegFormer | ❌ | ❌ | ✅ | ✅ | ❌ |
| SEW | ❌ | ❌ | ✅ | ❌ | ❌ |
| SEW-D | ❌ | ❌ | ✅ | ❌ | ❌ |
| Speech Encoder decoder | ❌ | ❌ | ✅ | ❌ | ✅ |
| Speech2Text | ✅ | ❌ | ✅ | ✅ | ❌ |
| Speech2Text2 | ✅ | ❌ | ❌ | ❌ | ❌ |
| Splinter | ✅ | ✅ | ✅ | ❌ | ❌ |
| SqueezeBERT | ✅ | ✅ | ✅ | ❌ | ❌ |
| Swin Transformer | ❌ | ❌ | ✅ | ✅ | ❌ |
| Swin Transformer V2 | ❌ | ❌ | ✅ | ❌ | ❌ |
| T5 | ✅ | ✅ | ✅ | ✅ | ✅ |
| TAPAS | ✅ | ❌ | ✅ | ✅ | ❌ |
| Trajectory Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| Transformer-XL | ✅ | ❌ | ✅ | ✅ | ❌ |
| TrOCR | ❌ | ❌ | ✅ | ❌ | ❌ |
| UniSpeech | ❌ | ❌ | ✅ | ❌ | ❌ |
| UniSpeechSat | ❌ | ❌ | ✅ | ❌ | ❌ |
| VAN | ❌ | ❌ | ✅ | ❌ | ❌ |
| VideoMAE | ❌ | ❌ | ✅ | ❌ | ❌ |
| ViLT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Vision Encoder decoder | ❌ | ❌ | ✅ | ✅ | ✅ |
| VisionTextDualEncoder | ❌ | ❌ | ✅ | ❌ | ✅ |
| VisualBERT | ❌ | ❌ | ✅ | ❌ | ❌ |
| ViT | ❌ | ❌ | ✅ | ✅ | ✅ |
| ViTMAE | ❌ | ❌ | ✅ | ✅ | ❌ |
| Wav2Vec2 | ✅ | ❌ | ✅ | ✅ | ✅ |
| Wav2Vec2-Conformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| WavLM | ❌ | ❌ | ✅ | ❌ | ❌ |
| XGLM | ✅ | ✅ | ✅ | ❌ | ✅ |
| XLM | ✅ | ❌ | ✅ | ✅ | ❌ |
| XLM-ProphetNet | ✅ | ❌ | ✅ | ❌ | ❌ |
| XLM-RoBERTa | ✅ | ✅ | ✅ | ✅ | ✅ |
| XLM-RoBERTa-XL | ❌ | ❌ | ✅ | ❌ | ❌ |
| XLNet | ✅ | ✅ | ✅ | ✅ | ❌ |
| YOLOS | ❌ | ❌ | ✅ | ❌ | ❌ |
| YOSO | ❌ | ❌ | ✅ | ❌ | ❌ |

<!-- End table-->
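As a quick illustration of the "slow" vs. "fast" tokenizer distinction in the table above, the sketch below loads both variants of the same checkpoint and checks which backend is active. This is a minimal sketch, assuming the `bert-base-uncased` checkpoint; any model with a ✅ in both tokenizer columns behaves the same way.

```py
from transformers import AutoTokenizer

# AutoTokenizer returns the Rust-backed "fast" tokenizer by default when one exists.
fast_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(fast_tokenizer.is_fast)  # True

# Passing use_fast=False falls back to the pure-Python ("slow") implementation.
slow_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)
print(slow_tokenizer.is_fast)  # False
```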
hf_public_repos/transformers/docs/source/de/llm_tutorial.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Generation with LLMs

[[open-in-colab]]

LLMs (Large Language Models) are the key component behind text generation. In a nutshell, they consist of large pretrained transformer models trained to predict the next word (or, more precisely, token) given some input text. Since they predict one token at a time, you need to do something more elaborate to generate new sentences other than just calling the model -- you need to do autoregressive generation.

Autoregressive generation is the inference-time procedure of iteratively calling a model with its own generated outputs, given a few initial inputs. In 🤗 Transformers, this is handled by the [`~generation.GenerationMixin.generate`] method, which is available to all models with generative capabilities.

This tutorial will show you how to:

* Generate text with an LLM
* Avoid common pitfalls
* Take the next steps to get the most out of your LLM

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers bitsandbytes>=0.39.0 -q
```

## Generate text

A language model trained for [causal language modeling](tasks/language_modeling) takes a sequence of text tokens as input and returns the probability distribution for the next token.

<!-- [GIF 1 -- FWD PASS] -->
<figure class="image table text-center m-0 w-full">
    <video
        style="max-width: 90%; margin: auto;"
        autoplay loop muted playsinline
        src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_1_1080p.mov"
    ></video>
    <figcaption>"Forward pass of an LLM"</figcaption>
</figure>

A critical aspect of autoregressive generation with LLMs is how to select the next token from this probability distribution. Anything goes in this step as long as you end up with a token for the next iteration. This means it can be as simple as selecting the most likely token from the probability distribution or as complex as applying a dozen transformations before sampling from the resulting distribution.
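To make the token selection step concrete, here is a minimal sketch of a single greedy decoding step. It assumes the small `gpt2` checkpoint purely for illustration; the tutorial below uses a larger model, and in practice [`~generation.GenerationMixin.generate`] handles this loop for you.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("A list of colors: red, blue", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# The distribution over the next token comes from the logits at the last position.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Greedy selection: pick the single most likely token and append it for the next iteration.
next_token_id = int(torch.argmax(next_token_probs))
print(tokenizer.decode(next_token_id))
```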
<!-- [GIF 2 -- TEXT GENERATION] -->
<figure class="image table text-center m-0 w-full">
    <video
        style="max-width: 90%; margin: auto;"
        autoplay loop muted playsinline
        src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_2_1080p.mov"
    ></video>
    <figcaption>"Autoregressive generation iteratively selects the next token from a probability distribution to generate text"</figcaption>
</figure>

The process depicted above is repeated iteratively until some stopping condition is reached. Ideally, the stopping condition is dictated by the model, which should learn when to output an end-of-sequence (EOS) token. If this is not the case, generation stops when some predefined maximum length is reached instead. Properly setting up the token selection step and the stopping condition is essential to make your model behave as you'd expect on your task. That is why we have a [`~generation.GenerationConfig`] file associated with each model, which contains a good default generative parameterization and is loaded alongside your model.

Let's talk code!

<Tip>

If you're interested in basic LLM usage, our high-level [`Pipeline`](pipeline_tutorial) interface is a great starting point. However, LLMs often require advanced features like quantization and fine control of the token selection step, which is best done through [`~generation.GenerationMixin.generate`]. Autoregressive generation with LLMs is also resource-intensive and should be executed on a GPU for adequate throughput.

</Tip>

<!-- TODO: update example to llama 2 (or a newer popular baseline) when it becomes ungated -->

First, you need to load the model.

```py
>>> from transformers import AutoModelForCausalLM

>>> model = AutoModelForCausalLM.from_pretrained(
...     "openlm-research/open_llama_7b", device_map="auto", load_in_4bit=True
... )
```

You'll notice two flags in the `from_pretrained` call:

- `device_map` ensures the model is moved to your GPU(s)
- `load_in_4bit` applies [4-bit dynamic quantization](main_classes/quantization) to massively reduce the resource requirements

There are other ways to initialize a model, but this is a good baseline to begin with an LLM.

Next, you need to preprocess your text input with a [tokenizer](tokenizer_summary).

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b")
>>> model_inputs = tokenizer(["A list of colors: red, blue"], return_tensors="pt").to("cuda")
```

The `model_inputs` variable holds the tokenized text input as well as the attention mask. While [`~generation.GenerationMixin.generate`] does its best effort to infer the attention mask when it is not passed, we recommend passing it whenever possible for optimal results.

Finally, call the [`~generation.GenerationMixin.generate`] method to return the generated tokens, which should be converted to text before printing.

```py
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'A list of colors: red, blue, green, yellow, black, white, and brown'
```

And that's it!
In a few lines of code, you can harness the power of an LLM.

## Common pitfalls

There are many [generation strategies](generation_strategies), and sometimes the default values may not be appropriate for your use case. If your outputs aren't aligned with what you're expecting, we've created a list of the most common pitfalls and how to avoid them.

```py
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b")
>>> tokenizer.pad_token = tokenizer.eos_token  # Llama has no pad token by default
>>> model = AutoModelForCausalLM.from_pretrained(
...     "openlm-research/open_llama_7b", device_map="auto", load_in_4bit=True
... )
```

### Generated output is too short/long

If not specified in the [`~generation.GenerationConfig`] file, `generate` returns up to 20 tokens by default. We highly recommend manually setting `max_new_tokens` in your `generate` call to control the maximum number of new tokens it can return. Keep in mind LLMs (more precisely, [decoder-only models](https://huggingface.co/learn/nlp-course/chapter1/6?fw=pt)) also return the input prompt as part of the output.

```py
>>> model_inputs = tokenizer(["A sequence of numbers: 1, 2"], return_tensors="pt").to("cuda")

>>> # By default, the output will contain up to 20 tokens
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'A sequence of numbers: 1, 2, 3, 4, 5'

>>> # Setting `max_new_tokens` allows you to control the maximum length
>>> generated_ids = model.generate(**model_inputs, max_new_tokens=50)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'A sequence of numbers: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,'
```

### Incorrect generation mode

By default, and unless specified in the [`~generation.GenerationConfig`] file, `generate` selects the most likely token at each iteration (greedy decoding). Depending on your task, this may be undesirable; creative tasks like chatbots or writing an essay benefit from sampling, while input-grounded tasks like audio transcription or translation benefit from greedy decoding. Enable sampling with `do_sample=True`. You can learn more about this topic in this [blog post](https://huggingface.co/blog/how-to-generate).

```py
>>> # Set seed for reproducibility -- you don't need this unless you want full reproducibility
>>> from transformers import set_seed
>>> set_seed(0)

>>> model_inputs = tokenizer(["I am a cat."], return_tensors="pt").to("cuda")

>>> # LLM + greedy decoding = repetitive, boring output
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'I am a cat. I am a cat. I am a cat. I am a cat'

>>> # With sampling, the output becomes more creative!
>>> generated_ids = model.generate(**model_inputs, do_sample=True)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'I am a cat.\nI just need to be. I am always.\nEvery time'
```

### Wrong padding side

LLMs are [decoder-only](https://huggingface.co/learn/nlp-course/chapter1/6?fw=pt) architectures, meaning they continue to iterate on your input prompt.
If your inputs do not have the same length, they need to be padded. Since LLMs are not trained to continue from pad tokens, your input needs to be padded on the left. Also, make sure you don't forget to pass the attention mask to `generate`!

```py
>>> # The tokenizer initialized above has right-padding active by default: the 1st sequence,
>>> # which is shorter, has padding on the right side. Generation fails.
>>> model_inputs = tokenizer(
...     ["1, 2, 3", "A, B, C, D, E"], padding=True, return_tensors="pt"
... ).to("cuda")
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids[0], skip_special_tokens=True)[0]
''

>>> # With left-padding, it works as expected!
>>> tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b", padding_side="left")
>>> tokenizer.pad_token = tokenizer.eos_token  # Llama has no pad token by default
>>> model_inputs = tokenizer(
...     ["1, 2, 3", "A, B, C, D, E"], padding=True, return_tensors="pt"
... ).to("cuda")
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'1, 2, 3, 4, 5, 6,'
```

<!-- TODO: when the prompting guide is ready, mention the importance of setting the right prompt in this section -->

## Further resources

While the autoregressive generation process is relatively straightforward, making the most out of your LLM can be a challenging endeavor because there are many moving parts. For your next steps to help you dive deeper into LLM usage and understanding:

<!-- TODO: complete with new guides -->

### Advanced generate usage

1. [Guide](generation_strategies) on how to control different generation methods, how to set up the generation configuration file, and how to stream the output;
2. API reference on [`~generation.GenerationConfig`], [`~generation.GenerationMixin.generate`], and [generate-related classes](internal/generation_utils).

### LLM leaderboards

1. [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), which focuses on the quality of the open-source models;
2. [Open LLM-Perf Leaderboard](https://huggingface.co/spaces/optimum/llm-perf-leaderboard), which focuses on LLM throughput.

### Latency and throughput

1. [Guide](main_classes/quantization) on dynamic quantization, which shows you how to drastically reduce your memory requirements.

### Related libraries

1. [text-generation-inference](https://github.com/huggingface/text-generation-inference), a production-ready server for LLMs;
2. [`optimum`](https://github.com/huggingface/optimum), an extension of 🤗 Transformers that optimizes for specific hardware devices.
hf_public_repos/transformers/docs/source/de/quicktour.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Quick tour

[[open-in-colab]]

Get up and running with 🤗 Transformers! Use the [`pipeline`] for rapid inference, and quickly load a pretrained model and tokenizer with an [AutoClass](./model_doc/auto) to solve your text, vision or audio task.

<Tip>

All code examples presented in the documentation have a toggle on the top left for PyTorch and TensorFlow. If not, the code is expected to work for both backends without any change.

</Tip>

## Pipeline

[`pipeline`] is the easiest way to use a pretrained model for a given task.

<Youtube id="tiZFewofSLM"/>

The [`pipeline`] supports many common tasks out-of-the-box:

**Text**:
* Sentiment analysis: classify the polarity of a given text.
* Text generation (in English): generate text from a given input.
* Name entity recognition (NER): label each word with the entity it represents (person, date, location, etc.).
* Question answering: extract the answer from the context, given some context and a question.
* Fill-mask: fill in the blank given a text with masked words.
* Summarization: generate a summary of a long sequence of text or document.
* Translation: translate text into another language.
* Feature extraction: create a tensor representation of the text.

**Image**:
* Image classification: classify an image.
* Image segmentation: classify every pixel in an image.
* Object detection: detect objects within an image.

**Audio**:
* Audio classification: assign a label to a given segment of audio.
* Automatic speech recognition (ASR): transcribe audio data into text.

<Tip>

For more details about the [`pipeline`] and associated tasks, refer to the documentation [here](./main_classes/pipelines).

</Tip>

### Pipeline usage

In the following example, you will use the [`pipeline`] for sentiment analysis.

Install the following dependencies if you haven't already:

<frameworkcontent>
<pt>

```bash
pip install torch
```
</pt>
<tf>

```bash
pip install tensorflow
```
</tf>
</frameworkcontent>

Import [`pipeline`] and specify the task you want to solve:

```py
>>> from transformers import pipeline

>>> classifier = pipeline("sentiment-analysis")
```

The pipeline downloads and caches a default [pretrained model](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) and tokenizer for sentiment analysis.
Jetzt kรถnnen Sie den "Klassifikator" auf Ihren Zieltext anwenden: ```py >>> classifier("We are very happy to show you the ๐Ÿค— Transformers library.") [{'label': 'POSITIVE', 'score': 0.9998}] ``` For more than one sentence, pass a list of sentences to the [`pipeline`] which returns a list of dictionaries: ```py >>> results = classifier(["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."]) >>> for result in results: ... print(f"label: {result['label']}, with score: {round(result['score'], 4)}") label: POSITIVE, with score: 0.9998 label: NEGATIVE, with score: 0.5309 ``` Die [`pipeline`] kann auch รผber einen ganzen Datensatz iterieren. Starten wir mit der Installation der [๐Ÿค— Datasets](https://huggingface.co/docs/datasets/) Bibliothek: ```bash pip install datasets ``` Erstellen wir eine [`pipeline`] mit der Aufgabe die wir lรถsen und dem Modell welches wir nutzen mรถchten. ```py >>> import torch >>> from transformers import pipeline >>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h") ``` Als nรคchstes laden wir den Datensatz (siehe ๐Ÿค— Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart) fรผr mehr Details) welches wir nutzen mรถchten. Zum Beispiel laden wir den [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) Datensatz: ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") # doctest: +IGNORE_RESULT ``` Wir mรผssen sicherstellen, dass die Abtastrate des Datensatzes der Abtastrate entspricht, mit der `facebook/wav2vec2-base-960h` trainiert wurde. ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate)) ``` Audiodateien werden automatisch geladen und neu abgetastet, wenn die Spalte "audio" aufgerufen wird. Extrahieren wir die rohen Wellenform-Arrays der ersten 4 Beispiele und รผbergeben wir sie als Liste an die Pipeline: ```py >>> result = speech_recognizer(dataset[:4]["audio"]) >>> print([d["text"] for d in result]) ['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FODING HOW I'D SET UP A JOIN TO HET WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE AP SO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AND I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS", 'HOW DO I THURN A JOIN A COUNT'] ``` Bei einem grรถรŸeren Datensatz mit vielen Eingaben (wie bei Sprache oder Bildverarbeitung) sollten Sie einen Generator anstelle einer Liste รผbergeben, der alle Eingaben in den Speicher lรคdt. Weitere Informationen finden Sie in der [Pipeline-Dokumentation](./main_classes/pipelines). ### Ein anderes Modell und einen anderen Tokenizer in der Pipeline verwenden Die [`pipeline`] kann jedes Modell aus dem [Model Hub] (https://huggingface.co/models) verwenden, wodurch es einfach ist, die [`pipeline`] fรผr andere Anwendungsfรคlle anzupassen. Wenn Sie beispielsweise ein Modell wรผnschen, das franzรถsischen Text verarbeiten kann, verwenden Sie die Tags im Model Hub, um nach einem geeigneten Modell zu filtern. Das oberste gefilterte Ergebnis liefert ein mehrsprachiges [BERT-Modell](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment), das auf die Stimmungsanalyse abgestimmt ist. 
GroรŸartig, verwenden wir dieses Modell! ```py >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" ``` <frameworkcontent> <pt> Use the [`AutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and it's associated tokenizer (more on an `AutoClass` below): ```py >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </pt> <tf> Use the [`TFAutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and it's associated tokenizer (more on an `TFAutoClass` below): ```py >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </tf> </frameworkcontent> Dann kรถnnen Sie das Modell und den Tokenizer in der [`pipeline`] angeben und den `Klassifikator` auf Ihren Zieltext anwenden: ```py >>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) >>> classifier("Nous sommes trรจs heureux de vous prรฉsenter la bibliothรจque ๐Ÿค— Transformers.") [{'label': '5 stars', 'score': 0.7273}] ``` Wenn Sie kein Modell fรผr Ihren Anwendungsfall finden kรถnnen, mรผssen Sie ein vortrainiertes Modell auf Ihren Daten feinabstimmen. Schauen Sie sich unser [Feinabstimmungs-Tutorial](./training) an, um zu erfahren, wie das geht. Und schlieรŸlich, nachdem Sie Ihr trainiertes Modell verfeinert haben, sollten Sie es mit der Community im Model Hub teilen (siehe Tutorial [hier](./model_sharing)), um NLP fรผr alle zu demokratisieren! ๐Ÿค— ## AutoClass <Youtube id="AhChOFRegn4"/> Unter der Haube arbeiten die Klassen [`AutoModelForSequenceClassification`] und [`AutoTokenizer`] zusammen, um die [`pipeline`] zu betreiben. Eine [`AutoClass`](./model_doc/auto) ist eine Abkรผrzung, die automatisch die Architektur eines trainierten Modells aus dessen Namen oder Pfad abruft. Sie mรผssen nur die passende `AutoClass` fรผr Ihre Aufgabe und den zugehรถrigen Tokenizer mit [`AutoTokenizer`] auswรคhlen. Kehren wir zu unserem Beispiel zurรผck und sehen wir uns an, wie Sie die `AutoClass` verwenden kรถnnen, um die Ergebnisse der [`pipeline`] zu replizieren. ### AutoTokenizer Ein Tokenizer ist fรผr die Vorverarbeitung von Text in ein fรผr das Modell verstรคndliches Format zustรคndig. Zunรคchst zerlegt der Tokenisierer den Text in Wรถrter, die *Token* genannt werden. Es gibt mehrere Regeln fรผr den Tokenisierungsprozess, z. B. wie und auf welcher Ebene ein Wort aufgespalten wird (weitere Informationen รผber Tokenisierung [hier](./tokenizer_summary)). Das Wichtigste ist jedoch, dass Sie den Tokenizer mit demselben Modellnamen instanziieren mรผssen, um sicherzustellen, dass Sie dieselben Tokenisierungsregeln verwenden, mit denen ein Modell zuvor trainiert wurde. Laden sie einen Tokenizer mit [`AutoTokenizer`]: ```py >>> from transformers import AutoTokenizer >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` AnschlieรŸend wandelt der Tokenizer die Token in Zahlen um, um einen Tensor als Eingabe fรผr das Modell zu konstruieren. Dieser wird als *Vokabular* des Modells bezeichnet. 
รœbergeben Sie Ihren Text an den Tokenizer: ```py >>> encoding = tokenizer("We are very happy to show you the ๐Ÿค— Transformers library.") >>> print(encoding) {'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` Der Tokenizer gibt ein Wรถrterbuch zurรผck, das Folgendes enthรคlt: * [input_ids](./glossary#input-ids): numerische Reprรคsentationen Ihrer Token. * [atttention_mask](.glossary#attention-mask): gibt an, welche Token beachtet werden sollen. Genau wie die [`pipeline`] akzeptiert der Tokenizer eine Liste von Eingaben. Darรผber hinaus kann der Tokenizer den Text auch auffรผllen und kรผrzen, um einen Stapel mit einheitlicher Lรคnge zurรผckzugeben: <frameworkcontent> <pt> ```py >>> pt_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="pt", ... ) ``` </pt> <tf> ```py >>> tf_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="tf", ... ) ``` </tf> </frameworkcontent> Lesen Sie das Tutorial [preprocessing](./preprocessing) fรผr weitere Details zur Tokenisierung. ### AutoModel <frameworkcontent> <pt> ๐Ÿค— Transformers bietet eine einfache und einheitliche Mรถglichkeit, vortrainierte Instanzen zu laden. Das bedeutet, dass Sie ein [`AutoModel`] laden kรถnnen, wie Sie einen [`AutoTokenizer`] laden wรผrden. Der einzige Unterschied ist die Auswahl des richtigen [`AutoModel`] fรผr die Aufgabe. Da Sie eine Text- oder Sequenzklassifizierung vornehmen, laden Sie [`AutoModelForSequenceClassification`]: ```py >>> from transformers import AutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> In der [Aufgabenzusammenfassung](./task_summary) steht, welche [AutoModel]-Klasse fรผr welche Aufgabe zu verwenden ist. </Tip> Jetzt kรถnnen Sie Ihren vorverarbeiteten Stapel von Eingaben direkt an das Modell รผbergeben. Sie mรผssen nur das Wรถrterbuch entpacken, indem Sie `**` hinzufรผgen: ```py >>> pt_outputs = pt_model(**pt_batch) ``` Das Modell gibt die endgรผltigen Aktivierungen in dem Attribut "logits" aus. Wenden Sie die Softmax-Funktion auf die "logits" an, um die Wahrscheinlichkeiten zu erhalten: ```py >>> from torch import nn >>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1) >>> print(pt_predictions) tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725], [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>) ``` </pt> <tf> ๐Ÿค— Transformers bietet eine einfache und einheitliche Methode zum Laden von vortrainierten Instanzen. Das bedeutet, dass Sie ein [`TFAutoModel`] genauso laden kรถnnen, wie Sie einen [`AutoTokenizer`] laden wรผrden. Der einzige Unterschied ist die Auswahl des richtigen [`TFAutoModel`] fรผr die Aufgabe. 
Since you are doing text (or sequence) classification, load [`TFAutoModelForSequenceClassification`]:

```py
>>> from transformers import TFAutoModelForSequenceClassification

>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
```

<Tip>

See the [task summary](./task_summary) for which [`AutoModel`] class to use for which task.

</Tip>

Now you can pass your preprocessed batch of inputs directly to the model by passing the dictionary tensors as they are:

```py
>>> tf_outputs = tf_model(tf_batch)
```

The model outputs the final activations in the `logits` attribute. Apply the softmax function to the `logits` to retrieve the probabilities:

```py
>>> import tensorflow as tf

>>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1)
>>> tf_predictions  # doctest: +IGNORE_RESULT
```
</tf>
</frameworkcontent>

<Tip>

All 🤗 Transformers models (PyTorch or TensorFlow) output the tensors *before* the final activation function (like softmax), because the final activation function is often fused with the loss.

</Tip>

Models are standard [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) instances, so you can use them in your usual training loop. However, to make things easier, 🤗 Transformers provides a [`Trainer`] class for PyTorch that adds functionality for distributed training, mixed precision, and more. For TensorFlow, you can use the `fit` method from [Keras](https://keras.io/). Refer to the [training tutorial](./training) for more details.

<Tip>

🤗 Transformers model outputs are special dataclasses, so their attributes are autocompleted in an IDE. The model outputs also behave like a tuple or a dictionary (e.g., you can index with an integer, a slice, or a string), in which case attributes that are `None` are ignored.

</Tip>

### Save a model

<frameworkcontent>
<pt>
Once your model is fine-tuned, you can save it with its tokenizer using [`PreTrainedModel.save_pretrained`]:

```py
>>> pt_save_directory = "./pt_save_pretrained"
>>> tokenizer.save_pretrained(pt_save_directory)  # doctest: +IGNORE_RESULT
>>> pt_model.save_pretrained(pt_save_directory)
```

When you are ready to use the model again, reload it with [`PreTrainedModel.from_pretrained`]:

```py
>>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained")
```
</pt>
<tf>
Once your model is fine-tuned, you can save it with its tokenizer using [`TFPreTrainedModel.save_pretrained`]:

```py
>>> tf_save_directory = "./tf_save_pretrained"
>>> tokenizer.save_pretrained(tf_save_directory)  # doctest: +IGNORE_RESULT
>>> tf_model.save_pretrained(tf_save_directory)
```

When you are ready to use the model again, reload it with [`TFPreTrainedModel.from_pretrained`]:

```py
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained")
```
</tf>
</frameworkcontent>

One particularly cool 🤗 Transformers feature is the ability to save a model and reload it as either a PyTorch or a TensorFlow model.
Der Parameter "from_pt" oder "from_tf" kann das Modell von einem Framework in das andere konvertieren: <frameworkcontent> <pt> ```py >>> from transformers import AutoModel >>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory) >>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True) ``` </pt> <tf> ```py >>> from transformers import TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory) >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True) ``` </tf> </frameworkcontent> ## Custom model builds Sie kรถnnen die Konfigurationsklasse des Modells รคndern, um zu bestimmen, wie ein Modell aufgebaut ist. Die Konfiguration legt die Attribute eines Modells fest, z. B. die Anzahl der verborgenen Schichten oder der Aufmerksamkeitskรถpfe. Wenn Sie ein Modell aus einer benutzerdefinierten Konfigurationsklasse initialisieren, beginnen Sie bei Null. Die Modellattribute werden zufรคllig initialisiert, und Sie mรผssen das Modell trainieren, bevor Sie es verwenden kรถnnen, um aussagekrรคftige Ergebnisse zu erhalten. Beginnen Sie mit dem Import von [`AutoConfig`] und laden Sie dann das trainierte Modell, das Sie รคndern mรถchten. Innerhalb von [`AutoConfig.from_pretrained`] kรถnnen Sie das Attribut angeben, das Sie รคndern mรถchten, z. B. die Anzahl der Aufmerksamkeitskรถpfe: ```py >>> from transformers import AutoConfig >>> my_config = AutoConfig.from_pretrained("distilbert-base-uncased", n_heads=12) ``` <frameworkcontent> <pt> Create a model from your custom configuration with [`AutoModel.from_config`]: ```py >>> from transformers import AutoModel >>> my_model = AutoModel.from_config(my_config) ``` </pt> <tf> Create a model from your custom configuration with [`TFAutoModel.from_config`]: ```py >>> from transformers import TFAutoModel >>> my_model = TFAutoModel.from_config(my_config) ``` </tf> </frameworkcontent> Weitere Informationen zur Erstellung von benutzerdefinierten Konfigurationen finden Sie in der Anleitung [Erstellen einer benutzerdefinierten Architektur](./create_a_model). ## Wie geht es weiter? Nachdem Sie nun die ๐Ÿค— Transformers-Kurztour abgeschlossen haben, schauen Sie sich unsere Anleitungen an und erfahren Sie, wie Sie spezifischere Dinge tun kรถnnen, wie das Schreiben eines benutzerdefinierten Modells, die Feinabstimmung eines Modells fรผr eine Aufgabe und wie man ein Modell mit einem Skript trainiert. Wenn Sie mehr รผber die Kernkonzepte von ๐Ÿค— Transformers erfahren mรถchten, nehmen Sie sich eine Tasse Kaffee und werfen Sie einen Blick auf unsere konzeptionellen Leitfรคden!
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/testing.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Testing

Let's take a look at how 🤗 Transformers models are tested and how you can write new tests and improve the existing ones.

There are 2 test suites in the repository:

1. `tests` -- tests for the general API
2. `examples` -- tests mostly for various applications that aren't part of the API

## How transformers are tested

1. Once a PR is submitted, it gets tested with 9 CircleCI jobs. Every new commit to that PR gets retested. These jobs
   are defined in this [config file](https://github.com/huggingface/transformers/tree/main/.circleci/config.yml), so that, if needed, you can reproduce the same
   environment on your machine. These CI jobs don't run `@slow` tests.

2. There are 3 jobs run by [github actions](https://github.com/huggingface/transformers/actions):

   - [torch hub integration](https://github.com/huggingface/transformers/tree/main/.github/workflows/github-torch-hub.yml): checks whether the torch hub
     integration works.
   - [self-hosted (push)](https://github.com/huggingface/transformers/tree/main/.github/workflows/self-push.yml): runs fast tests on GPU, only on commits on
     `main`. It only runs if a commit on `main` has updated the code in one of the following folders: `src`,
     `tests`, `.github` (to prevent it from running on added model cards, notebooks, etc.)
   - [self-hosted runner](https://github.com/huggingface/transformers/tree/main/.github/workflows/self-scheduled.yml): runs normal and slow tests on GPU in
     `tests` and `examples`:

```bash
RUN_SLOW=1 pytest tests/
RUN_SLOW=1 pytest examples/
```

   The results can be observed [here](https://github.com/huggingface/transformers/actions).

## Running tests

### Choosing which tests to run

This document goes into many details of how tests can be run. If after reading everything you still need more details,
you will find them [here](https://docs.pytest.org/en/latest/usage.html).

Here are some of the most useful ways of running tests.

Run all:

```console
pytest
```

or:

```bash
make test
```

Note that the latter is defined as:

```bash
python -m pytest -n auto --dist=loadfile -s -v ./tests/
```

which tells pytest to:

- run as many test processes as there are CPU cores (which could be too many if you don't have a ton of RAM!)
- ensure all tests from the same file are run by the same test process
- not capture output
- run in verbose mode

### Getting the list of all tests

All tests of the test suite:

```bash
pytest --collect-only -q
```

All tests of a given test file:

```bash
pytest tests/test_optimization.py --collect-only -q
```

### Run a specific test module

To run an individual test module:

```bash
pytest tests/utils/test_logging.py
```

### Run specific tests

Since unittest is used inside most of the tests, to run specific subtests you need to know the name of the unittest
class containing those tests. For example, it could be:

```bash
pytest tests/test_optimization.py::OptimizationTest::test_adam_w
```

Here:

- `tests/test_optimization.py` - the file with the tests
- `OptimizationTest` - the name of the class
- `test_adam_w` - the name of the specific test function

If the file contains multiple classes, you can choose to run only the tests of a given class. For example:

```bash
pytest tests/test_optimization.py::OptimizationTest
```

will run all the tests inside that class.

As mentioned earlier, you can see which tests are contained inside the `OptimizationTest` class by running:

```bash
pytest tests/test_optimization.py::OptimizationTest --collect-only -q
```

You can run tests by keyword expressions.

To run only tests whose name contains `adam`:

```bash
pytest -k adam tests/test_optimization.py
```

The logical `and` and `or` can be used to indicate whether all keywords should match or either one. `not` can be used
to negate.

To run all tests except those whose name contains `adam`:

```bash
pytest -k "not adam" tests/test_optimization.py
```

And you can combine the two patterns in one:

```bash
pytest -k "ada and not adam" tests/test_optimization.py
```

For example, to run both `test_adafactor` and `test_adam_w`, you can use:

```bash
pytest -k "test_adafactor or test_adam_w" tests/test_optimization.py
```

Note that we use `or` here, since we want either of the keywords to match in order to include both. If you want to
include only tests that contain both patterns, `and` is to be used:

```bash
pytest -k "test and ada" tests/test_optimization.py
```

### Run `accelerate` tests

Sometimes you need to run `accelerate` tests on your models. For that you can just add `-m accelerate_tests` to your
command; to run these tests on, say, `OPT`:

```bash
RUN_SLOW=1 pytest -m accelerate_tests tests/models/opt/test_modeling_opt.py
```

### Run documentation tests

In order to test whether the documentation examples are correct, you should check that the `doctests` are passing.
As an example, let's use the docstring of [WhisperModel.forward](https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/modeling_whisper.py#L1017-L1035):

```python
r"""
Returns:

Example:
    ```python
    >>> import torch
    >>> from transformers import WhisperModel, WhisperFeatureExtractor
    >>> from datasets import load_dataset

    >>> model = WhisperModel.from_pretrained("openai/whisper-base")
    >>> feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-base")
    >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
    >>> inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt")
    >>> input_features = inputs.input_features
    >>> decoder_input_ids = torch.tensor([[1, 1]]) * model.config.decoder_start_token_id
    >>> last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state
    >>> list(last_hidden_state.shape)
    [1, 2, 512]
    ```"""
```

Just run the following line to automatically test every docstring example in the desired file:

```bash
pytest --doctest-modules <path_to_file_or_dir>
```

If the file has a markdown extension, you should add the `--doctest-glob="*.md"` argument.

### Run only modified tests

You can run the tests related to the unstaged files or the current branch (according to Git) by using
[pytest-picked](https://github.com/anapaulagomes/pytest-picked). This is a great way of quickly testing that your
changes didn't break anything, since it won't run the tests related to files you didn't touch.

```bash
pip install pytest-picked
```

```bash
pytest --picked
```

All tests will be run from files and folders which were modified but not yet committed.

### Automatically rerun failed tests on source modification

[pytest-xdist](https://github.com/pytest-dev/pytest-xdist) provides a very useful feature of detecting all failed
tests and then waiting for you to modify files, continuously re-running those failing tests until they pass while you
fix them. So you don't need to restart pytest after you make the fix. This is repeated until all tests pass, after
which a full run is again performed.

```bash
pip install pytest-xdist
```

To enter the mode: `pytest -f` or `pytest --looponfail`

File changes are detected by looking at the `looponfailroots` root directories and all of their contents
(recursively). If the default for this value does not work for you, you can change it in your project by setting a
configuration option in `setup.cfg`:

```ini
[tool:pytest]
looponfailroots = transformers tests
```

or in `pytest.ini`/`tox.ini` files:

```ini
[pytest]
looponfailroots = transformers tests
```

This would lead to only looking for file changes in the respective directories, specified relative to the ini-file's
directory.

[pytest-watch](https://github.com/joeyespo/pytest-watch) is an alternative implementation of this functionality.

### Skipping a test module

If you want to run all test modules except a few, you can exclude them by giving an explicit list of tests to run.
Fรผr Beispiel: Um alle Tests auรŸer `test_modeling_*.py` auszufรผhren: ```bash pytest *ls -1 tests/*py | grep -v test_modeling* ``` ### Status leeren CI-Builds und wenn Isolation wichtig ist (gegen Geschwindigkeit), sollte der Cache geleert werden: ```bash pytest --cache-clear tests ``` ### Tests parallel ausfรผhren Wie bereits erwรคhnt, fรผhrt `make test` รผber das Plugin `pytest-xdist` Tests parallel aus (Argument `-n X`, z.B. `-n 2` um 2 Jobs parallel laufen zu lassen). Mit der Option `--dist=` von `pytest-xdist` kรถnnen Sie steuern, wie die Tests gruppiert werden. Mit `--dist=loadfile` werden die Tests, die sich in einer Datei befinden, in denselben Prozess. Da die Reihenfolge der ausgefรผhrten Tests unterschiedlich und nicht vorhersehbar ist, kann die Ausfรผhrung der Testsuite mit `pytest-xdist` zu Fehlern fรผhrt (was bedeutet, dass wir einige unentdeckte gekoppelte Tests haben), verwenden Sie [pytest-replay](https://github.com/ESSS/pytest-replay), um die Tests in der gleichen Reihenfolge abzuspielen, was dabei helfen sollte diese fehlgeschlagene Sequenz auf ein Minimum zu reduzieren. ### Testreihenfolge und Wiederholung Es ist gut, die Tests mehrmals zu wiederholen, nacheinander, zufรคllig oder in Gruppen, um mรถgliche Abhรคngigkeiten und zustandsbezogene Fehler zu erkennen (Abriss). Und die einfache, mehrfache Wiederholung ist einfach gut, um einige Probleme zu erkennen, die durch die Zufรคlligkeit von DL aufgedeckt werden. #### Wiederholungstests - [pytest-flakefinder](https://github.com/dropbox/pytest-flakefinder): ```bash pip install pytest-flakefinder ``` Und fรผhren Sie dann jeden Test mehrmals durch (standardmรครŸig 50): ```bash pytest --flake-finder --flake-runs=5 tests/test_failing_test.py ``` <Tip> Dieses Plugin funktioniert nicht mit dem `-n` Flag von `pytest-xdist`. </Tip> <Tip> Es gibt noch ein anderes Plugin `pytest-repeat`, aber es funktioniert nicht mit `unittest`. </Tip> #### Run tests in a random order ```bash pip install pytest-random-order ``` Wichtig: Das Vorhandensein von `pytest-random-order` sorgt fรผr eine automatische Zufallsanordnung der Tests, es sind keine Konfigurationsรคnderungen oder Befehlszeilenoptionen sind nicht erforderlich. Wie bereits erlรคutert, ermรถglicht dies die Erkennung von gekoppelten Tests - bei denen der Zustand eines Tests den Zustand eines anderen beeinflusst. Wenn `pytest-random-order` installiert ist, gibt es den Zufallswert aus, der fรผr diese Sitzung verwendet wurde, z.B: ```bash pytest tests [...] Using --random-order-bucket=module Using --random-order-seed=573663 ``` Wenn eine bestimmte Sequenz fehlschlรคgt, kรถnnen Sie sie reproduzieren, indem Sie genau diesen Seed hinzufรผgen, z.B: ```bash pytest --random-order-seed=573663 [...] Using --random-order-bucket=module Using --random-order-seed=573663 ``` Es wird nur dann die exakte Reihenfolge reproduzieren, wenn Sie genau dieselbe Liste von Tests (oder gar keine Liste) verwenden. Sobald Sie beginnen, die Liste die Liste manuell einzugrenzen, kรถnnen Sie sich nicht mehr auf den Seed verlassen, sondern mรผssen die Tests manuell in der genauen Reihenfolge auflisten auflisten und pytest anweisen, sie nicht zu randomisieren, indem Sie `--random-order-bucket=none` verwenden, z.B.: ```bash pytest --random-order-bucket=none tests/test_a.py tests/test_c.py tests/test_b.py ``` So deaktivieren Sie das Shuffling fรผr alle Tests: ```bash pytest --random-order-bucket=none ``` StandardmรครŸig ist `--random-order-bucket=module` impliziert, wodurch die Dateien auf den Modulebenen gemischt werden. 
It can also shuffle at the `class`, `package`, `global` and `none` levels. For the complete details, please see its
[documentation](https://github.com/jbasko/pytest-random-order).

Another randomization alternative is [`pytest-randomly`](https://github.com/pytest-dev/pytest-randomly). This module
has a very similar functionality/interface, but it doesn't have the bucket modes available in `pytest-random-order`.
It has the same problem of imposing itself once installed.

### Look and feel variations

#### pytest-sugar

[pytest-sugar](https://github.com/Frozenball/pytest-sugar) is a plugin that improves the look-and-feel, adds a
progress bar, and shows failing tests and the assert instantly. It gets activated automatically upon installation.

```bash
pip install pytest-sugar
```

To run tests without it, run:

```bash
pytest -p no:sugar
```

or uninstall it.

#### Report each sub-test name and its progress

For a single or a group of tests via `pytest` (after `pip install pytest-pspec`):

```bash
pytest --pspec tests/test_optimization.py
```

#### Instantly show failed tests

[pytest-instafail](https://github.com/pytest-dev/pytest-instafail) shows failures and errors instantly instead of
waiting until the end of the test session.

```bash
pip install pytest-instafail
```

```bash
pytest --instafail
```

### To GPU or not to GPU

On a GPU-enabled setup, to test in CPU-only mode add `CUDA_VISIBLE_DEVICES=""`:

```bash
CUDA_VISIBLE_DEVICES="" pytest tests/utils/test_logging.py
```

or if you have multiple GPUs, you can specify which one is to be used by `pytest`. For example, to use only the
second GPU if you have GPUs `0` and `1`, you can run:

```bash
CUDA_VISIBLE_DEVICES="1" pytest tests/utils/test_logging.py
```

This is handy when you want to run different tasks on different GPUs.

Some tests must be run on CPU only, others on either CPU or GPU or TPU, yet others on multiple GPUs. The following
skip decorators are used to set the requirements of tests CPU/GPU/TPU-wise:

- `require_torch` - this test will run only under torch
- `require_torch_gpu` - as `require_torch` plus requires at least 1 GPU
- `require_torch_multi_gpu` - as `require_torch` plus requires at least 2 GPUs
- `require_torch_non_multi_gpu` - as `require_torch` plus requires 0 or 1 GPUs
- `require_torch_up_to_2_gpus` - as `require_torch` plus requires 0 or 1 or 2 GPUs
- `require_torch_tpu` - as `require_torch` plus requires at least 1 TPU

Let's depict the GPU requirements in the following table:

| n gpus | decorator                      |
|--------|--------------------------------|
| `>= 0` | `@require_torch`               |
| `>= 1` | `@require_torch_gpu`           |
| `>= 2` | `@require_torch_multi_gpu`     |
| `< 2`  | `@require_torch_non_multi_gpu` |
| `< 3`  | `@require_torch_up_to_2_gpus`  |

For example, here is a test that must be run only when there are 2 or more GPUs available and pytorch is installed:

```python no-style
@require_torch_multi_gpu
def test_example_with_multi_gpu():
```

If a test requires `tensorflow`, use the `require_tf` decorator.
For example:

```python no-style
@require_tf
def test_tf_thing_with_tensorflow():
```

These decorators can be stacked. For example, if a test is slow and requires at least one GPU under pytorch, here is
how to set it up:

```python no-style
@require_torch_gpu
@slow
def test_example_slow_on_gpu():
```

Some decorators like `@parametrized` rewrite test names, therefore `@require_*` skip decorators have to be listed
last for them to work correctly. Here is an example of the correct usage:

```python no-style
@parameterized.expand(...)
@require_torch_multi_gpu
def test_integration_foo():
```

This order problem doesn't exist with `@pytest.mark.parametrize`: you can put it first or last and it will still
work. But it only works with non-unittests.

Inside tests:

- How many GPUs are available:

```python
from transformers.testing_utils import get_gpu_count

n_gpu = get_gpu_count()  # works with torch and tf
```

### Testing with a specific PyTorch backend or device

To run the test suite on a specific torch device, add `TRANSFORMERS_TEST_DEVICE="$device"` where `$device` is the
target backend. For example, to test on CPU only:

```bash
TRANSFORMERS_TEST_DEVICE="cpu" pytest tests/utils/test_logging.py
```

This variable is useful for testing custom or less common PyTorch backends such as `mps`. It can also be used to
achieve the same effect as `CUDA_VISIBLE_DEVICES` by targeting specific GPUs or testing in CPU-only mode.

Certain devices require an additional import after importing `torch` for the first time. This can be specified using
the environment variable `TRANSFORMERS_TEST_BACKEND`:

```bash
TRANSFORMERS_TEST_BACKEND="torch_npu" pytest tests/utils/test_logging.py
```

### Distributed training

`pytest` cannot deal with distributed training directly. If this is attempted, the sub-processes don't do the right
thing and end up thinking they are `pytest` and start running the test suite in loops. It works, however, if one
spawns a normal process that then spawns off multiple workers and manages the IO pipes.

Here are some tests that use it:

- [test_trainer_distributed.py](https://github.com/huggingface/transformers/tree/main/tests/trainer/test_trainer_distributed.py)
- [test_deepspeed.py](https://github.com/huggingface/transformers/tree/main/tests/deepspeed/test_deepspeed.py)

To jump right into the execution point, search for the `execute_subprocess_async` call in those tests.

You will need at least 2 GPUs to see these tests in action:

```bash
CUDA_VISIBLE_DEVICES=0,1 RUN_SLOW=1 pytest -sv tests/test_trainer_distributed.py
```

### Output capture

During test execution, any output sent to `stdout` and `stderr` is captured. If a test or a setup method fails, its
corresponding captured output will usually be shown along with the failure traceback.
To disable output capturing and to get `stdout` and `stderr` normally, use `-s` or `--capture=no`:

```bash
pytest -s tests/utils/test_logging.py
```

To send test results to JUnit format output:

```bash
py.test tests --junitxml=result.xml
```

### Color control

To have no color (e.g., yellow on a white background is not readable):

```bash
pytest --color=no tests/utils/test_logging.py
```

### Sending test report to the online pastebin service

Creating a URL for each test failure:

```bash
pytest --pastebin=failed tests/utils/test_logging.py
```

This will submit test run information to a remote Paste service and provide a URL for each failure. You may select
tests as usual, or add for example `-x` if you only want to send one particular failure.

Creating a URL for a whole test session log:

```bash
pytest --pastebin=all tests/utils/test_logging.py
```

## Writing tests

🤗 Transformers tests are based on `unittest` but run by `pytest`, so most of the time features from both systems
can be used.

You can read [here](https://docs.pytest.org/en/stable/unittest.html) which features are supported, but the important
thing to remember is that most `pytest` fixtures don't work. Neither does parametrization, but we use the module
`parameterized`, which works in a similar way.

### Parametrization

Often there is a need to run the same test multiple times, but with different arguments. It could be done from within
the test, but then there would be no way to run that test for just one set of arguments.

```python
# test_this1.py
import math
import unittest

from parameterized import parameterized


class TestMathUnitTest(unittest.TestCase):
    @parameterized.expand(
        [
            ("negative", -1.5, -2.0),
            ("integer", 1, 1.0),
            ("large fraction", 1.6, 1),
        ]
    )
    def test_floor(self, name, input, expected):
        self.assertEqual(math.floor(input), expected)
```

Now, by default this test will be run 3 times, each time with the last 3 arguments of `test_floor` being assigned the
corresponding arguments from the parameter list.

You could run just the `negative` and `integer` sets of params with:

```bash
pytest -k "negative and integer" tests/test_mytest.py
```

or all but the `negative` sub-tests, with:

```bash
pytest -k "not negative" tests/test_mytest.py
```

Besides using the `-k` filter that was just mentioned, you can find out the exact name of each sub-test and run any
or all of them using their exact names.

```bash
pytest test_this1.py --collect-only -q
```

will list:

```bash
test_this1.py::TestMathUnitTest::test_floor_0_negative
test_this1.py::TestMathUnitTest::test_floor_1_integer
test_this1.py::TestMathUnitTest::test_floor_2_large_fraction
```

So now you can run just 2 specific sub-tests:

```bash
pytest test_this1.py::TestMathUnitTest::test_floor_0_negative test_this1.py::TestMathUnitTest::test_floor_1_integer
```

The module [parameterized](https://pypi.org/project/parameterized/), which is already in the developer dependencies
of `transformers`, works for both `unittest` and `pytest` tests.
If, however, the test is not a `unittest`, you may use `pytest.mark.parametrize` (or you can see it being used in
some existing tests, mostly under `examples`).

Here is the same example, this time using `pytest`'s `parametrize` marker:

```python
# test_this2.py
import math

import pytest


@pytest.mark.parametrize(
    "name, input, expected",
    [
        ("negative", -1.5, -2.0),
        ("integer", 1, 1.0),
        ("large fraction", 1.6, 1),
    ],
)
def test_floor(name, input, expected):
    assert math.floor(input) == expected
```

Same as with `parameterized`, `pytest.mark.parametrize` gives you fine control over which sub-tests are run, if the
`-k` filter doesn't do the job. Except, this parametrization function creates a slightly different set of names for
the sub-tests. Here is what they look like:

```bash
pytest test_this2.py --collect-only -q
```

will list:

```bash
test_this2.py::test_floor[integer-1-1.0]
test_this2.py::test_floor[negative--1.5--2.0]
test_this2.py::test_floor[large fraction-1.6-1]
```

So now you can run just the specific tests:

```bash
pytest test_this2.py::test_floor[negative--1.5--2.0] test_this2.py::test_floor[integer-1-1.0]
```

as in the previous example.

### Files and directories

In tests, we often need to know where things are relative to the current test file, and that's not trivial since the
test could be invoked from more than one directory or could reside in sub-directories at different depths. A helper
class `transformers.test_utils.TestCasePlus` solves this problem by sorting out all the basic paths and providing
easy accessors to them:

- `pathlib` objects (all fully resolved):

  - `test_file_path` - the current test file path, i.e. `__file__`
  - `test_file_dir` - the directory containing the current test file
  - `tests_dir` - the directory of the `tests` test suite
  - `examples_dir` - the directory of the `examples` test suite
  - `repo_root_dir` - the directory of the repository
  - `src_dir` - the directory of `src` (i.e. where the `transformers` sub-dir resides)

- stringified paths - same as above, but these return paths as strings rather than `pathlib` objects:

  - `test_file_path_str`
  - `test_file_dir_str`
  - `tests_dir_str`
  - `examples_dir_str`
  - `repo_root_dir_str`
  - `src_dir_str`

To start using those, all you need is to make sure that the test resides in a subclass of
`transformers.test_utils.TestCasePlus`. For example:

```python
from transformers.testing_utils import TestCasePlus


class PathExampleTest(TestCasePlus):
    def test_something_involving_local_locations(self):
        data_dir = self.tests_dir / "fixtures/tests_samples/wmt_en_ro"
```

If you don't need to manipulate paths via `pathlib`, or you just need a path as a string, you can always invoke
`str()` on the `pathlib` object or use the accessors ending with `_str`. For example:

```python
from transformers.testing_utils import TestCasePlus


class PathExampleTest(TestCasePlus):
    def test_something_involving_stringified_locations(self):
        examples_dir = self.examples_dir_str
```

### Temporary files and directories

Using unique temporary files and directories is essential for parallel test running, so that the tests don't
overwrite each other's data.
AuรŸerdem mรถchten wir, dass die temporรคren Dateien und Verzeichnisse am Ende jedes Tests, der sie erstellt hat, gelรถscht werden. erstellt hat. Daher ist die Verwendung von Paketen wie `tempfile`, die diese Anforderungen erfรผllen, unerlรคsslich. Beim Debuggen von Tests mรผssen Sie jedoch sehen kรถnnen, was in der temporรคren Datei oder dem temporรคren Verzeichnis gespeichert wird und Sie mรถchten Sie mรผssen den genauen Pfad kennen und dรผrfen ihn nicht bei jedem neuen Testdurchlauf zufรคllig รคndern. Fรผr solche Zwecke ist die Hilfsklasse `transformers.test_utils.TestCasePlus` am besten geeignet. Sie ist eine Unterklasse von Unittest.TestCase`, so dass wir in den Testmodulen einfach von ihr erben kรถnnen. Hier ist ein Beispiel fรผr die Verwendung dieser Klasse: ```python from transformers.testing_utils import TestCasePlus class ExamplesTests(TestCasePlus): def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir() ``` Dieser Code erstellt ein eindeutiges temporรคres Verzeichnis und setzt `tmp_dir` auf dessen Speicherort. - Erstellen Sie ein eindeutiges temporรคres Verzeichnis: ```python def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir() ``` tmp_dir" enthรคlt den Pfad zu dem erstellten temporรคren Verzeichnis. Es wird am Ende des Tests automatisch entfernt. Tests entfernt. - Erstellen Sie ein temporรคres Verzeichnis meiner Wahl, stellen Sie sicher, dass es leer ist, bevor der Test beginnt, und leeren Sie es nach dem Test nicht. ```python def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir("./xxx") ``` Dies ist nรผtzlich fรผr die Fehlersuche, wenn Sie ein bestimmtes Verzeichnis รผberwachen und sicherstellen mรถchten, dass die vorherigen Tests keine Daten darin hinterlassen haben. keine Daten dort hinterlassen haben. - Sie kรถnnen das Standardverhalten auรŸer Kraft setzen, indem Sie die Argumente `before` und `after` direkt รผberschreiben, was zu einem der folgenden Verhaltensweisen fรผhrt folgenden Verhaltensweisen: - `before=True`: das temporรคre Verzeichnis wird immer zu Beginn des Tests gelรถscht. - `before=False`: wenn das temporรคre Verzeichnis bereits existiert, bleiben alle vorhandenen Dateien dort erhalten. - `after=True`: das temporรคre Verzeichnis wird immer am Ende des Tests gelรถscht. - `after=False`: das temporรคre Verzeichnis wird am Ende des Tests immer beibehalten. <Tip> Um das ร„quivalent von `rm -r` sicher ausfรผhren zu kรถnnen, sind nur Unterverzeichnisse des Projektarchivs checkout erlaubt, wenn ein explizites `tmp_dir` verwendet wird, so dass nicht versehentlich ein `/tmp` oder ein รคhnlich wichtiger Teil des Dateisystems vernichtet wird. d.h. geben Sie bitte immer Pfade an, die mit `./` beginnen. </Tip> <Tip> Jeder Test kann mehrere temporรคre Verzeichnisse registrieren, die alle automatisch entfernt werden, sofern nicht anders gewรผnscht. anders. </Tip> ### Temporรคre รœberschreibung von sys.path Wenn Sie `sys.path` vorรผbergehend รผberschreiben mรผssen, um z.B. von einem anderen Test zu importieren, kรถnnen Sie den Kontextmanager `ExtendSysPath` verwenden. Beispiel: ```python import os from transformers.testing_utils import ExtendSysPath bindir = os.path.abspath(os.path.dirname(__file__)) with ExtendSysPath(f"{bindir}/.."): from test_trainer import TrainerIntegrationCommon # noqa ``` ### รœberspringen von Tests Dies ist nรผtzlich, wenn ein Fehler gefunden und ein neuer Test geschrieben wird, der Fehler aber noch nicht behoben ist. 
In order to be able to commit it to the main repository, we need to make sure it's skipped during `make test`.

Methods:

- A **skip** means that you expect your test to pass only if some conditions are met, otherwise pytest should skip
  running the test altogether. Common examples are skipping windows-only tests on non-windows platforms, or skipping
  tests that depend on an external resource which is not available at the moment (for example, a database).

- An **xfail** means that you expect a test to fail for some reason. A common example is a test for a feature not yet
  implemented, or a bug not yet fixed. When a test passes despite being expected to fail (marked with
  `pytest.mark.xfail`), it's an xpass and will be reported in the test summary.

One of the important differences between the two is that `skip` doesn't run the test, while `xfail` does. So if the
buggy code causes some bad state that will affect other tests, do not use `xfail`.

#### Implementation

- Here is how to skip a whole test unconditionally:

```python no-style
@unittest.skip("this bug needs to be fixed")
def test_feature_x():
```

or via pytest:

```python no-style
@pytest.mark.skip(reason="this bug needs to be fixed")
```

or the `xfail` way:

```python no-style
@pytest.mark.xfail
def test_feature_x():
```

- Here is how to skip a test based on some internal check inside the test:

```python
def test_feature_x():
    if not has_something():
        pytest.skip("unsupported configuration")
```

or the whole module:

```python
import pytest

if not pytest.config.getoption("--custom-flag"):
    pytest.skip("--custom-flag is missing, skipping tests", allow_module_level=True)
```

or the `xfail` way:

```python
def test_feature_x():
    pytest.xfail("expected to fail until bug XYZ is fixed")
```

- Here is how to skip all tests in a module if some import is missing:

```python
docutils = pytest.importorskip("docutils", minversion="0.3")
```

- Skip a test based on a condition:

```python no-style
@pytest.mark.skipif(sys.version_info < (3,6), reason="requires python3.6 or higher")
def test_feature_x():
```

or:

```python no-style
@unittest.skipIf(torch_device == "cpu", "Can't do half precision")
def test_feature_x():
```

or skip the whole module:

```python no-style
@pytest.mark.skipif(sys.platform == 'win32', reason="does not run on windows")
class TestClass():
    def test_feature_x(self):
```

More details, examples, and ways are [here](https://docs.pytest.org/en/latest/skipping.html).

### Slow tests

The library of tests is ever-growing, and some of the tests take minutes to run, therefore we can't afford waiting an
hour for the test suite to complete on CI.
Therefore, with some exceptions for essential tests, slow tests should be marked as in the example below:

```python no-style
from transformers.testing_utils import slow


@slow
def test_integration_foo():
```

Once a test is marked as `@slow`, to run such tests set the `RUN_SLOW=1` environment variable, e.g.:

```bash
RUN_SLOW=1 pytest tests
```

Some decorators like `@parameterized` rewrite test names, therefore `@slow` and the rest of the skip decorators
`@require_*` have to be listed last for them to work correctly. Here is an example of the correct usage:

```python no-style
@parameterized.expand(...)
@slow
def test_integration_foo():
```

As explained at the beginning of this document, slow tests get run on a scheduled basis, rather than in PR CI checks.
So it's possible that some problems will be missed during a PR submission and get merged. Such problems will get
caught during the next scheduled CI job. But it also means that it's important to run the slow tests on your machine
before submitting the PR.

Here is a rough decision-making mechanism for choosing which tests should be marked as slow:

If the test is focused on one of the library's internal components (e.g., modeling files, tokenization files,
pipelines), then we should run that test in the non-slow test suite. If it's focused on another aspect of the
library, such as the documentation or the examples, then we should run these tests in the slow test suite. And then,
to refine this approach, we should have exceptions:

- All tests that need to download a heavy set of weights or a dataset that is larger than ~50MB (e.g., model or
  tokenizer integration tests, pipeline integration tests) should be set to slow. If you're adding a new model, you
  should create and upload to the hub a tiny version of it (with random weights) for integration tests. This is
  discussed in the following paragraphs.
- All tests that need to do a training run not specifically optimized to be fast should be set to slow.
- We can introduce exceptions if some of these should-be-non-slow tests are excruciatingly slow, and set them to
  `@slow`. Auto-modeling tests, which save and load large files to disk, are a good example of tests that are marked
  as `@slow`.
- If a test completes in under 1 second on CI (including downloads, if any), then it should be a normal test
  regardless.

Collectively, all the non-slow tests need to cover the different internals entirely, while remaining fast. For
example, significant coverage can be achieved by testing with specially created tiny models with random weights. Such
models have a very minimal number of layers (e.g., 2), vocab size (e.g., 1000), etc. Then the `@slow` tests can use
large slow models to do qualitative testing. A sketch of how such a tiny model might be created follows.
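As a rough illustration (the architecture and hyperparameter values below are illustrative assumptions, not the recipe behind any published tiny checkpoint), a tiny random-weight model could be built like this:

```python
from transformers import BertConfig, BertModel

# A deliberately tiny configuration: 2 layers, a small vocabulary and hidden size.
config = BertConfig(
    vocab_size=1000,
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
)

# Randomly initialized weights - useful only for exercising shapes and plumbing, not output quality.
tiny_model = BertModel(config)
tiny_model.save_pretrained("./tiny-random-bert")  # the result could then be uploaded to the Hub
```

Downloading and running such a model in an integration test takes a fraction of the time a full-size checkpoint would.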
To see where such models are used, simply grep for *tiny*:

```bash
grep tiny tests examples
```

Here is an example of a [script](https://github.com/huggingface/transformers/tree/main/scripts/fsmt/fsmt-make-tiny-model.py) that created the tiny model
[stas/tiny-wmt19-en-de](https://huggingface.co/stas/tiny-wmt19-en-de). You can easily adjust it to your specific
model's architecture.

It's easy to measure the run-time incorrectly if, for example, there is an overhead of downloading a huge model; but
if you test locally, the downloaded files would get cached and thus the download time would not be measured. Hence
check the execution speed report in the CI logs instead (the output of `pytest --durations=0 tests`).

That report is also useful to find slow outliers that aren't marked as such, or which need to be rewritten to be
fast. If you notice that the test suite starts getting slow on CI, the top listing of this report will show the
slowest tests.

### Testing the stdout/stderr output

In order to test functions that write to `stdout` and/or `stderr`, the test can access those streams using
`pytest`'s [capsys system](https://docs.pytest.org/en/latest/capture.html). Here is how this is accomplished:

```python
import sys


def print_to_stdout(s):
    print(s)


def print_to_stderr(s):
    sys.stderr.write(s)


def test_result_and_stdout(capsys):
    msg = "Hello"
    print_to_stdout(msg)
    print_to_stderr(msg)
    out, err = capsys.readouterr()  # consume the captured output streams
    # optional: if you want to replay the consumed streams:
    sys.stdout.write(out)
    sys.stderr.write(err)
    # test:
    assert msg in out
    assert msg in err
```

And, of course, most of the time, `stderr` will come as a part of an exception, so try/except has to be used in such
a case:

```python
def raise_exception(msg):
    raise ValueError(msg)


def test_something_exception():
    msg = "Not a good value"
    error = ""
    try:
        raise_exception(msg)
    except Exception as e:
        error = str(e)
        assert msg in error, f"{msg} is in the exception:\n{error}"
```

Another approach to capturing stdout is via `contextlib.redirect_stdout`:

```python
import sys
from contextlib import redirect_stdout
from io import StringIO


def print_to_stdout(s):
    print(s)


def test_result_and_stdout():
    msg = "Hello"
    buffer = StringIO()
    with redirect_stdout(buffer):
        print_to_stdout(msg)
    out = buffer.getvalue()
    # optional: if you want to replay the consumed streams:
    sys.stdout.write(out)
    # test:
    assert msg in out
```

An important potential issue with capturing stdout is that it may contain `\r` characters that, in normal `print`,
reset everything that has been printed so far. There is no problem with `pytest`, but with `pytest -s` these
characters get included in the buffer, so to be able to run the test with and without `-s`, you have to make an
extra cleanup of the captured output, using `re.sub(r'~.*\r', '', buf, 0, re.M)`.
But then we have a helper context manager wrapper that automatically takes care of it all, regardless of whether the
output contains some `\r` characters or not, so it's as simple as:

```python
from transformers.testing_utils import CaptureStdout

with CaptureStdout() as cs:
    function_that_writes_to_stdout()
print(cs.out)
```

Here is a full test example:

```python
from transformers.testing_utils import CaptureStdout

msg = "Secret message\r"
final = "Hello World"
with CaptureStdout() as cs:
    print(msg + final)
assert cs.out == final + "\n", f"captured: {cs.out}, expecting {final}"
```

If you'd like to capture `stderr`, use the `CaptureStderr` class instead:

```python
from transformers.testing_utils import CaptureStderr

with CaptureStderr() as cs:
    function_that_writes_to_stderr()
print(cs.err)
```

If you need to capture both streams at once, use the parent `CaptureStd` class:

```python
from transformers.testing_utils import CaptureStd

with CaptureStd() as cs:
    function_that_writes_to_stdout_and_stderr()
print(cs.err, cs.out)
```

Also, to aid debugging test issues, by default these context managers automatically replay the captured streams on
exit from the context.

### Capturing logger streams

If you need to validate the output of a logger, you can use `CaptureLogger`:

```python
from transformers import logging
from transformers.testing_utils import CaptureLogger

msg = "Testing 1, 2, 3"
logging.set_verbosity_info()
logger = logging.get_logger("transformers.models.bart.tokenization_bart")
with CaptureLogger(logger) as cl:
    logger.info(msg)
assert cl.out, msg + "\n"
```

### Testing with environment variables

If you want to test the impact of environment variables for a specific test, you can use the helper decorator
`transformers.testing_utils.mockenv`:

```python
from transformers.testing_utils import mockenv


class HfArgumentParserTest(unittest.TestCase):
    @mockenv(TRANSFORMERS_VERBOSITY="error")
    def test_env_override(self):
        env_level_str = os.getenv("TRANSFORMERS_VERBOSITY", None)
```

At times an external program needs to be called, which requires setting `PYTHONPATH` in `os.environ` to include
multiple local paths. A helper class `transformers.test_utils.TestCasePlus` comes to the rescue:

```python
from transformers.testing_utils import TestCasePlus


class EnvExampleTest(TestCasePlus):
    def test_external_prog(self):
        env = self.get_env()
        # now call the external program, passing `env` to it
```

Depending on whether the test file was under the `tests` test suite or under `examples`, it will correctly set up
`env[PYTHONPATH]` to include one of these two directories, and also the `src` directory, to ensure the testing is
done against the current repo; and finally with whatever `env[PYTHONPATH]` was already set to before the test was
called, if anything.

This helper method creates a copy of the `os.environ` object, so the original remains intact.

### Getting reproducible results

In some situations you may want to remove randomness from your tests.
### Getting reproducible results

In some situations you may want to remove randomness from your tests. To get identical reproducible results, you will need to fix the seed:

```python
seed = 42

# python RNG
import random

random.seed(seed)

# pytorch RNGs
import torch

torch.manual_seed(seed)
torch.backends.cudnn.deterministic = True
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(seed)

# numpy RNG
import numpy as np

np.random.seed(seed)

# tf RNG
import tensorflow as tf

tf.random.set_seed(seed)
```
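If the code under test depends on `transformers` anyway, the library also ships a small convenience helper that bundles most of these calls. A sketch of its use; note that it seeds the Python, NumPy, PyTorch and (when installed) TensorFlow generators, but does not set `torch.backends.cudnn.deterministic` for you:

```python
from transformers import set_seed

set_seed(42)  # python, numpy, torch (incl. CUDA) and tf RNGs in one call
```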
### Debugging tests

To start a debugger at the point of the warning, do this:

```bash
pytest tests/utils/test_logging.py -W error::UserWarning --pdb
```

## Working with GitHub Actions workflows

To trigger a self-push workflow CI job, you must:

1. Create a new branch on `transformers` origin (not a fork!).
2. The branch name has to start with either `ci_` or `ci-` (`main` triggers it too, but we can't do PRs on `main`). It also gets triggered only for specific paths; you can find the up-to-date definition, in case it changed since this document was written, [here](https://github.com/huggingface/transformers/blob/main/.github/workflows/self-push.yml) under *push:*.
3. Create a PR from this branch.
4. Then you can see the job appear [here](https://github.com/huggingface/transformers/actions/workflows/self-push.yml). It may not run right away if there is a backlog.

## Testing experimental CI features

Testing CI features can be potentially problematic as it can interfere with the normal CI functioning. Therefore, if a new CI feature is to be added, it should be done as follows.

1. Create a new dedicated job that tests what needs to be tested.
2. The new job must always succeed so that it gives us a green ✓ (details below).
3. Let it run for some days to see that a variety of different PR types get to run on it (user fork branches, non-forked branches, branches originating from the github.com UI direct file edit, various forced pushes, etc.; there are so many) while monitoring the experimental job's logs (not the overall job status, which is green on purpose).
4. When it is clear that everything is solid, merge the new changes into the existing jobs.

That way experiments on the CI functionality itself will not interfere with the normal workflow.

Now how can we make the job always succeed while the new CI feature is being developed?

Some CIs, like TravisCI, support ignore-step-failure and will report the overall job as successful, but CircleCI and GitHub Actions do not support that as of this writing. So the following workaround can be used:

1. `set +euo pipefail` at the beginning of the run command to suppress most potential failures in the bash script.
2. The last command must be a success: `echo "done"` or just `true` will do.

Here is an example:

```yaml
- run:
    name: run CI experiment
    command: |
        set +euo pipefail
        echo "setting run-all-despite-any-errors-mode"
        this_command_will_fail
        echo "but bash continues to run"
        # emulate another failure
        false
        # but the last command must be a success
        echo "during experiment do not remove: reporting success to CI, even if there were failures"
```

For simple commands you could also do:

```bash
cmd_that_may_fail || true
```

Of course, once you are satisfied with the results, integrate the experimental step or job with the rest of the normal jobs, removing `set +euo pipefail` or anything else you may have added to ensure that the experimental job does not interfere with the normal CI functioning.

This whole process would have been much easier if we could just set something like `allow-failure` for the experimental step and let it fail without affecting the overall status of PRs. But as mentioned earlier, CircleCI and GitHub Actions do not support it at the moment.

You can vote for this feature and see where it is at these CI-specific threads:

- [Github Actions:](https://github.com/actions/toolkit/issues/399)
- [CircleCI:](https://ideas.circleci.com/ideas/CCI-I-344)
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/fr/in_translation.md
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Translation in progress.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/fr/_toctree.yml
- sections:
  - local: index
    title: 🤗 Transformers
  - local: quicktour
    title: Quick tour
  - local: in_translation
    title: Installation
  title: Get started
- sections:
  - local: in_translation
    title: Pipelines for inference
  - local: in_translation
    title: Load pretrained instances with an AutoClass
  - local: in_translation
    title: Preprocess data
  - local: in_translation
    title: Fine-tune a pretrained model
  - local: in_translation
    title: Distributed training with 🤗 Accelerate
  - local: in_translation
    title: Share a model
  title: Tutorials
- sections:
  - sections:
    - local: in_translation
      title: Create a custom architecture
    - local: in_translation
      title: Share your models
    - local: in_translation
      title: Train with a script
    - local: in_translation
      title: Train on Amazon SageMaker
    - local: in_translation
      title: Convert from TensorFlow checkpoints
    - local: in_translation
      title: Export to ONNX
    - local: in_translation
      title: Export to TorchScript
    - local: in_translation
      title: Troubleshooting
    title: General usage
  - sections:
    - local: in_translation
      title: Use tokenizers from 🤗 Tokenizers
    - local: in_translation
      title: Inference with multilingual models
    - local: in_translation
      title: Text generation strategies
    - sections:
      - isExpanded: false
        local: in_translation
        title: Text classification
      - local: in_translation
        title: Token classification
      - local: in_translation
        title: Question answering
      - local: in_translation
        title: Causal language modeling
      - local: in_translation
        title: Masked language modeling
      - local: in_translation
        title: Translation
      - local: in_translation
        title: Summarization
      - local: in_translation
        title: Multiple choice
      title: Task guides
    title: Natural language processing
  - sections:
    - local: in_translation
      title: Audio classification
    - local: in_translation
      title: Automatic speech recognition
    title: Audio
  - sections:
    - local: in_translation
      title: Image classification
    - local: in_translation
      title: Semantic segmentation
    - local: in_translation
      title: Video classification
    - local: in_translation
      title: Object detection
    title: Computer vision
  - sections:
    - local: in_translation
      title: Performance and scalability
    - sections:
      - local: in_translation
        title: How to contribute to transformers?
      - local: in_translation
        title: How to add a model to 🤗 Transformers?
      - local: in_translation
        title: How to convert a 🤗 Transformers model to TensorFlow?
      - local: in_translation
        title: How to add a pipeline to 🤗 Transformers?
      - local: in_translation
        title: Testing
      - local: in_translation
        title: Checks on a Pull Request
      title: Contribute
    - local: in_translation
      title: 🤗 Transformers Notebooks
    - local: in_translation
      title: Community resources
    - local: in_translation
      title: Benchmarks
    - local: in_translation
      title: Migrating from previous packages
  title: How-to guides
- sections:
  - local: in_translation
    title: Philosophy
  - local: in_translation
    title: Glossary
  - local: in_translation
    title: What 🤗 Transformers can do
  - local: in_translation
    title: What tasks can 🤗 Transformers solve
- local: in_translation title: Rรฉsumรฉ des modรจles - local: in_translation title: Rรฉsumรฉ des tokenizers - local: in_translation title: Remplissage et troncature - local: in_translation title: BERTology - local: in_translation title: Perplexitรฉ des modรจles ร  longueur fixe - local: in_translation title: Pipelines pour infรฉrence avec des serveurs web title: Guides conceptuels - sections: - isExpanded: false sections: - local: in_translation title: Classes principales - local: in_translation title: Modรจles textuels - local: in_translation title: Modรจles visuels - local: in_translation title: Modรจles audio - local: in_translation title: Modรจles multimodal - local: in_translation title: Modรจles d'apprentissage par renforcement - local: in_translation title: Modรจles de sรฉries temporelles - local: in_translation title: Graph models title: Modรจles - sections: - local: in_translation title: Utilitaires internes title: API
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/fr/_config.py
# docstyle-ignore
INSTALL_CONTENT = """
# Transformers installation
! pip install transformers datasets
# To install from source instead of the latest release, comment the command above and uncomment the following one.
# ! pip install git+https://github.com/huggingface/transformers.git
"""

notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}]
black_avoid_patterns = {
    "{processor_class}": "FakeProcessorClass",
    "{model_class}": "FakeModelClass",
    "{object_class}": "FakeObjectClass",
}
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/fr/index.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# 🤗 Transformers

State-of-the-art machine learning for [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/), and [JAX](https://jax.readthedocs.io/en/latest/).

🤗 Transformers provides APIs and tools to easily download and train state-of-the-art pretrained models. Using pretrained models can reduce your compute costs and carbon footprint, and save you the time and resources needed to train a model from scratch. These models support common tasks across different modalities, such as:

📝 **Natural language processing**: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.<br>
🖼️ **Computer vision**: image classification, object detection, and segmentation.<br>
🗣️ **Audio**: automatic speech recognition and audio classification.<br>
🐙 **Multimodal**: question answering over tables or images, optical character recognition, information extraction from scanned documents, and video classification.

🤗 Transformers supports interoperability between PyTorch, TensorFlow, and JAX. This makes it possible to use a different framework at each stage of a model's life: train a model in three lines of code in one framework, and load it for inference in another. Models can also be exported to a format like ONNX or TorchScript for deployment in production environments.

Join the growing community on the [Hub](https://huggingface.co/models), the [forum](https://discuss.huggingface.co/), or [Discord](https://discord.com/invite/JfAtkvEtRb) today!

## If you are looking for custom support from the Hugging Face team

<a target="_blank" href="https://huggingface.co/support">
    <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a>

## Contents

The documentation is organized into five sections:

- **GET STARTED** provides a quick tour of the library and installation instructions to get up and running.
- **TUTORIALS** are a great place to start if you are a beginner. This section will help you gain the basic skills you need to start using the library.
- **HOW-TO GUIDES** for different tasks, for example fine-tuning a pretrained model for text classification, or how to create and share your own model.
- **CONCEPTUAL GUIDES** for more discussion and explanation of the underlying concepts and ideas behind models, tasks, and the design philosophy of 🤗 Transformers.
- **API** describes all classes and functions:
  - **MAIN CLASSES** details the most important classes like configuration, model, tokenizer, and pipeline.
  - **MODELS** details the classes and functions related to each model implemented in the library.
  - **INTERNAL HELPERS** details utility classes and functions used internally.

### Supported models

<!--This list is updated automatically from the README with _make fix-copies_. Do not update manually! -->

1. **[ALBERT](model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 1. **[ALIGN](model_doc/align)** (from Google Research) released with the paper [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. 1. **[AltCLIP](model_doc/altclip)** (from BAAI) released with the paper [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) by Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell. 1. **[Audio Spectrogram Transformer](model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass. 1. **[BART](model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. 1. **[BARThez](model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 1. **[BARTpho](model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. 1. **[BEiT](model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei. 1. **[BERT](model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 1.
**[BERT For Sequence Generation](model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[BERTweet](model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen. 1. **[BigBird-Pegasus](model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. **[BigBird-RoBERTa](model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. **[BioGpt](model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. 1. **[BiT](model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby. 1. **[Blenderbot](model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BlenderbotSmall](model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BLIP](model_doc/blip)** (from Salesforce) released with the paper [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi. 1. **[BLOOM](model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/). 1. **[BORT](model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry. 1. **[BridgeTower](model_doc/bridgetower)** (from Harbin Institute of Technology/Microsoft Research Asia/Intel Labs) released with the paper [BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. 1. 
**[ByT5](model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 1. **[CamemBERT](model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suรกrez*, Yoann Dupont, Laurent Romary, ร‰ric Villemonte de la Clergerie, Djamรฉ Seddah and Benoรฎt Sagot. 1. **[CANINE](model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 1. **[Chinese-CLIP](model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou. 1. **[CLIP](model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 1. **[CLIPSeg](model_doc/clipseg)** (from University of Gรถttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lรผddecke and Alexander Ecker. 1. **[CodeGen](model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. 1. **[Conditional DETR](model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang. 1. **[ConvBERT](model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 1. **[ConvNeXT](model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 1. **[ConvNeXTV2](model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. 1. 
**[CPM](model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 1. **[CTRL](model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. 1. **[CvT](model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang. 1. **[Data2Vec](model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli. 1. **[DeBERTa](model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[DeBERTa-v2](model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[Decision Transformer](model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. 1. **[Deformable DETR](model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai. 1. **[DeiT](model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervรฉ Jรฉgou. 1. **[DETA](model_doc/deta)** (from The University of Texas at Austin) released with the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137) by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krรคhenbรผhl. 1. **[DETR](model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 1. **[DialoGPT](model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 1. 
**[DiNAT](model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi. 1. **[DistilBERT](model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT. 1. **[DiT](model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. 1. **[Donut](model_doc/donut)** (from NAVER), released together with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. 1. **[DPR](model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 1. **[DPT](master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun. 1. **[EfficientFormer](model_doc/efficientformer)** (from Snap Research) released with the paper [EfficientFormer: Vision Transformers at MobileNet Speed](https://arxiv.org/abs/2206.01191) by Yanyu Li, Geng Yuan, Yang Wen, Ju Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren. 1. **[ELECTRA](model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 1. **[EncoderDecoder](model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[ERNIE](model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu. 1. **[ESM](model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus.
**ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2 and ESMFold** were released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives. 1. **[FLAN-T5](model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei 1. **[FlauBERT](model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loรฏc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoรฎt Crabbรฉ, Laurent Besacier, Didier Schwab. 1. **[FLAVA](model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 1. **[FNet](model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 1. **[Funnel Transformer](model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 1. **[GIT](model_doc/git)** (from Microsoft Research) released with the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang. 1. **[GLPN](model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. 1. **[GPT](model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 1. **[GPT Neo](model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. 1. 
**[GPT NeoX](model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 1. **[GPT NeoX Japanese](model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori. 1. **[GPT-2](model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**. 1. **[GPT-J](model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki. 1. **[GPT-Sw3](model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey ร–hman, Fredrik Carlsson, Magnus Sahlgren. 1. **[Graphormer](model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu. 1. **[GroupViT](model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. 1. **[Hubert](model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 1. **[I-BERT](model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 1. **[ImageGPT](model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 1. **[Jukebox](model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever. 1. **[LayoutLM](model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 1. 
**[LayoutLMv2](model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 1. **[LayoutLMv3](model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. 1. **[LayoutXLM](model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 1. **[LED](model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. **[LeViT](model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervรฉ Jรฉgou, Matthijs Douze. 1. **[LiLT](model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding. 1. **[Longformer](model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. **[LongT5](model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. 1. **[LUKE](model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 1. **[LXMERT](model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. 1. **[M-CTC-T](model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. 1. **[M2M100](model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 1. **[MarianMT](model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jรถrg Tiedemann. 
The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 1. **[MarkupLM](model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei. 1. **[Mask2Former](model_doc/mask2former)** (from FAIR and UIUC) released with the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. 1. **[MaskFormer](model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. 1. **[mBART](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 1. **[mBART-50](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 1. **[Megatron-BERT](model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 1. **[Megatron-GPT2](model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 1. **[mLUKE](model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 1. **[MobileBERT](model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 1. **[MobileNetV1](model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. 1. **[MobileNetV2](model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. 1. **[MobileViT](model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari. 1. 
**[MPNet](model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 1. **[MT5](model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 1. **[MVP](model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. 1. **[NAT](model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. 1. **[Nezha](model_doc/nezha)** (from Huawei Noahโ€™s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu. 1. **[NLLB](model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team. 1. **[Nystrรถmformer](model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nystrรถmformer: A Nystrรถm-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 1. **[OneFormer](model_doc/oneformer)** (from SHI Labs) released with the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi. 1. **[OPT](master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. 1. **[OWL-ViT](model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. 1. **[Pegasus](model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 1. **[PEGASUS-X](model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu. 1. 
**[Perceiver IO](model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hรฉnaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, Joรฃo Carreira. 1. **[PhoBERT](model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen. 1. **[PLBart](model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 1. **[PoolFormer](model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng. 1. **[ProphetNet](model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 1. **[QDQBert](model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. 1. **[RAG](model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kรผttler, Mike Lewis, Wen-tau Yih, Tim Rocktรคschel, Sebastian Riedel, Douwe Kiela. 1. **[REALM](model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. 1. **[Reformer](model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, ลukasz Kaiser, Anselm Levskaya. 1. **[RegNet](model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollรกr. 1. **[RemBERT](model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Fรฉvry, Henry Tsai, M. Johnson, Sebastian Ruder. 1. **[ResNet](model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 1. 
**[RoBERTa](model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 1. **[RoBERTa-PreLayerNorm](model_doc/roberta-prelayernorm)** (from Facebook) released with the paper [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli. 1. **[RoCBert](model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou. 1. **[RoFormer](model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. 1. **[SegFormer](model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 1. **[SEW](model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. **[SEW-D](model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. **[SpeechT5](model_doc/speecht5)** (from Microsoft Research) released with the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei. 1. **[SpeechToTextTransformer](model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 1. **[SpeechToTextTransformer2](model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 1. **[Splinter](model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 1. **[SqueezeBERT](model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. 1. 
**[Swin Transformer](model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. 1. **[Swin Transformer V2](model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo. 1. **[Swin2SR](model_doc/swin2sr)** (from University of Wรผrzburg) released with the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte. 1. **[SwitchTransformers](model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer. 1. **[T5](model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 1. **[T5v1.1](model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 1. **[Table Transformer](model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham. 1. **[TAPAS](model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweล‚ Krzysztof Nowak, Thomas Mรผller, Francesco Piccinno and Julian Martin Eisenschlos. 1. **[TAPEX](model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. 1. **[Time Series Transformer](model_doc/time_series_transformer)** (from HuggingFace). 1. **[TimeSformer](model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani. 1. **[Trajectory Transformer](model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine 1. **[Transformer-XL](model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. 
Le, Ruslan Salakhutdinov.
1. **[TrOCR](model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
1. **[UL2](model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler.
1. **[UniSpeech](model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
1. **[UniSpeechSat](model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
1. **[UPerNet](model_doc/upernet)** (from Peking University) released with the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun.
1. **[VAN](model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
1. **[VideoMAE](model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
1. **[ViLT](model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
1. **[Vision Transformer (ViT)](model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
1. **[VisualBERT](model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
1. **[ViT Hybrid](model_doc/vit_hybrid)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
1. **[ViTMAE](model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
1. **[ViTMSN](model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas.
1. **[Wav2Vec2](model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
1. **[Wav2Vec2-Conformer](model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
1. **[Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
1. **[WavLM](model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
1. **[Whisper](model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.
1. **[X-CLIP](model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling.
1. **[XGLM](model_doc/xglm)** (from Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
1. **[XLM](model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
1. **[XLM-ProphetNet](model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
1. **[XLM-RoBERTa](model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
1. **[XLM-RoBERTa-XL](model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
1. **[XLNet](model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
1. **[XLS-R](model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
1. **[XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
1. **[YOLOS](model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
1. **[YOSO](model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.

### Supported frameworks

The table below shows the current support in the library for each of these models: whether they have a Python ("slow") tokenizer, a "fast" tokenizer backed by the 🤗 Tokenizers library, and whether they are supported in Jax (via Flax), PyTorch, and/or TensorFlow.

<!--This table is updated automatically from the auto modules with _make fix-copies_.
Do not update manually!-->

|             Model             | Slow tokenizer | Fast tokenizer | PyTorch support | TensorFlow support | Flax support |
|:-----------------------------:|:--------------:|:--------------:|:---------------:|:------------------:|:------------:|
|            ALBERT             |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|            AltCLIP            |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
| Audio Spectrogram Transformer |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             BART              |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|             BEiT              |       ❌        |       ❌        |        ✅        |         ❌          |      ✅       |
|             BERT              |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|        Bert Generation        |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|            BigBird            |       ✅        |       ✅        |        ✅        |         ❌          |      ✅       |
|        BigBird-Pegasus        |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            BioGpt             |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|              BiT              |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|          Blenderbot           |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|        BlenderbotSmall        |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|             BLIP              |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             BLOOM             |       ❌        |       ✅        |        ✅        |         ❌          |      ❌       |
|          BridgeTower          |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|           CamemBERT           |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|            CANINE             |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|         Chinese-CLIP          |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             CLIP              |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|            CLIPSeg            |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            CodeGen            |       ✅        |       ✅        |        ✅        |         ❌          |      ❌       |
|       Conditional DETR        |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|           ConvBERT            |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|           ConvNeXT            |       ❌        |       ❌        |        ✅        |         ✅          |      ❌       |
|             CTRL              |       ✅        |       ❌        |        ✅        |         ✅          |      ❌       |
|              CvT              |       ❌        |       ❌        |        ✅        |         ✅          |      ❌       |
|         Data2VecAudio         |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|         Data2VecText          |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|        Data2VecVision         |       ❌        |       ❌        |        ✅        |         ✅          |      ❌       |
|            DeBERTa            |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|          DeBERTa-v2           |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|     Decision Transformer      |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|        Deformable DETR        |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             DeiT              |       ❌        |       ❌        |        ✅        |         ✅          |      ❌       |
|             DETA              |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             DETR              |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             DiNAT             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|          DistilBERT           |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|           DonutSwin           |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|              DPR              |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|              DPT              |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|        EfficientFormer        |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            ELECTRA            |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|        Encoder decoder        |       ❌        |       ❌        |        ✅        |         ✅          |      ✅       |
|             ERNIE             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|              ESM              |       ✅        |       ❌        |        ✅        |         ✅          |      ❌       |
|  FairSeq Machine-Translation  |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|           FlauBERT            |       ✅        |       ❌        |        ✅        |         ✅          |      ❌       |
|             FLAVA             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             FNet              |       ✅        |       ✅        |        ✅        |         ❌          |      ❌       |
|      Funnel Transformer       |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|              GIT              |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             GLPN              |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            GPT Neo            |       ❌        |       ❌        |        ✅        |         ❌          |      ✅       |
|           GPT NeoX            |       ❌        |       ✅        |        ✅        |         ❌          |      ❌       |
|       GPT NeoX Japanese       |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|             GPT-J             |       ❌        |       ❌        |        ✅        |         ✅          |      ✅       |
|            GPT-Sw3            |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|          Graphormer           |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|           GroupViT            |       ❌        |       ❌        |        ✅        |         ✅          |      ❌       |
|            Hubert             |       ❌        |       ❌        |        ✅        |         ✅          |      ❌       |
|            I-BERT             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|           ImageGPT            |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            Jukebox            |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|           LayoutLM            |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|          LayoutLMv2           |       ✅        |       ✅        |        ✅        |         ❌          |      ❌       |
|          LayoutLMv3           |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|              LED              |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|             LeViT             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             LiLT              |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|          Longformer           |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|            LongT5             |       ❌        |       ❌        |        ✅        |         ❌          |      ✅       |
|             LUKE              |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|            LXMERT             |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|            M-CTC-T            |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            M2M100             |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|            Marian             |       ✅        |       ❌        |        ✅        |         ✅          |      ✅       |
|           MarkupLM            |       ✅        |       ✅        |        ✅        |         ❌          |      ❌       |
|          Mask2Former          |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|          MaskFormer           |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|        MaskFormerSwin         |       ❌        |       ❌        |        ❌        |         ❌          |      ❌       |
|             mBART             |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|         Megatron-BERT         |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|          MobileBERT           |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|          MobileNetV1          |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|          MobileNetV2          |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|           MobileViT           |       ❌        |       ❌        |        ✅        |         ✅          |      ❌       |
|             MPNet             |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|              MT5              |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|              MVP              |       ✅        |       ✅        |        ✅        |         ❌          |      ❌       |
|              NAT              |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             Nezha             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|         Nyströmformer         |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|           OneFormer           |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|          OpenAI GPT           |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|         OpenAI GPT-2          |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|              OPT              |       ❌        |       ❌        |        ✅        |         ✅          |      ✅       |
|            OWL-ViT            |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            Pegasus            |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|           PEGASUS-X           |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|           Perceiver           |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|            PLBart             |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|          PoolFormer           |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|          ProphetNet           |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|            QDQBert            |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|              RAG              |       ✅        |       ❌        |        ✅        |         ✅          |      ❌       |
|             REALM             |       ✅        |       ✅        |        ✅        |         ❌          |      ❌       |
|           Reformer            |       ✅        |       ✅        |        ✅        |         ❌          |      ❌       |
|            RegNet             |       ❌        |       ❌        |        ✅        |         ✅          |      ✅       |
|            RemBERT            |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|            ResNet             |       ❌        |       ❌        |        ✅        |         ✅          |      ❌       |
|           RetriBERT           |       ✅        |       ✅        |        ✅        |         ❌          |      ❌       |
|            RoBERTa            |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|     RoBERTa-PreLayerNorm      |       ❌        |       ❌        |        ✅        |         ✅          |      ✅       |
|            RoCBert            |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|           RoFormer            |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|           SegFormer           |       ❌        |       ❌        |        ✅        |         ✅          |      ❌       |
|              SEW              |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             SEW-D             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|    Speech Encoder decoder     |       ❌        |       ❌        |        ✅        |         ❌          |      ✅       |
|          Speech2Text          |       ✅        |       ❌        |        ✅        |         ✅          |      ❌       |
|         Speech2Text2          |       ✅        |       ❌        |        ❌        |         ❌          |      ❌       |
|           SpeechT5            |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|           Splinter            |       ✅        |       ✅        |        ✅        |         ❌          |      ❌       |
|          SqueezeBERT          |       ✅        |       ✅        |        ✅        |         ❌          |      ❌       |
|       Swin Transformer        |       ❌        |       ❌        |        ✅        |         ✅          |      ❌       |
|      Swin Transformer V2      |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            Swin2SR            |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|      SwitchTransformers       |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|              T5               |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|       Table Transformer       |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             TAPAS             |       ✅        |       ❌        |        ✅        |         ✅          |      ❌       |
|    Time Series Transformer    |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|          TimeSformer          |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|    Trajectory Transformer     |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|        Transformer-XL         |       ✅        |       ❌        |        ✅        |         ✅          |      ❌       |
|             TrOCR             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|           UniSpeech           |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|         UniSpeechSat          |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            UPerNet            |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|              VAN              |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|           VideoMAE            |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             ViLT              |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|    Vision Encoder decoder     |       ❌        |       ❌        |        ✅        |         ✅          |      ✅       |
|     VisionTextDualEncoder     |       ❌        |       ❌        |        ✅        |         ❌          |      ✅       |
|          VisualBERT           |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|              ViT              |       ❌        |       ❌        |        ✅        |         ✅          |      ✅       |
|          ViT Hybrid           |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            ViTMAE             |       ❌        |       ❌        |        ✅        |         ✅          |      ❌       |
|            ViTMSN             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|           Wav2Vec2            |       ✅        |       ❌        |        ✅        |         ✅          |      ✅       |
|      Wav2Vec2-Conformer       |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             WavLM             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            Whisper            |       ✅        |       ❌        |        ✅        |         ✅          |      ❌       |
|            X-CLIP             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             XGLM              |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|              XLM              |       ✅        |       ❌        |        ✅        |         ✅          |      ❌       |
|        XLM-ProphetNet         |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|          XLM-RoBERTa          |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|        XLM-RoBERTa-XL         |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             XLNet             |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|             YOLOS             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             YOSO              |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |

<!-- End table-->
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/fr/quicktour.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Quick tour

[[open-in-colab]]

Get up and running with 🤗 Transformers! Whether you're a developer or an everyday user, this quick tour will help you get started and show you how to use the [`pipeline`] for inference, load a pretrained model and preprocessor with an [AutoClass](./model_doc/auto), and quickly train a model with PyTorch or TensorFlow. If you're a beginner, we recommend checking out our tutorials or the [course](https://huggingface.co/course/chapter1/1) next for more in-depth explanations of the concepts introduced here.

Before you begin, make sure you have all the necessary libraries installed:

```bash
!pip install transformers datasets
```

You'll also need to install your preferred deep learning library:

<frameworkcontent>
<pt>

```bash
pip install torch
```
</pt>
<tf>

```bash
pip install tensorflow
```
</tf>
</frameworkcontent>

## Pipeline

<Youtube id="tiZFewofSLM"/>

The [`pipeline`] is the easiest way to use a pretrained model for inference. You can use the [`pipeline`] out of the box for many tasks across different modalities. Take a look at the table below for the supported tasks:

| **Task**                     | **Description**                                                                                                    | **Modality**    | **Pipeline identifier**                       |
|------------------------------|--------------------------------------------------------------------------------------------------------------------|-----------------|-----------------------------------------------|
| Text classification          | Assigns a label to a given sequence of text                                                                         | Text            | pipeline(task="sentiment-analysis")           |
| Text generation              | Generates text from a given prompt                                                                                  | Text            | pipeline(task="text-generation")              |
| Named entity recognition     | Assigns a label to each token in a sequence (people, organizations, locations, etc.)                                | Text            | pipeline(task="ner")                          |
| Question answering           | Extracts an answer from the text given some context and a question                                                  | Text            | pipeline(task="question-answering")           |
| Masked token prediction      | Correctly predicts the masked token in a sequence                                                                   | Text            | pipeline(task="fill-mask")                    |
| Summarization                | Generates a summary of a given sequence of text or document                                                         | Text            | pipeline(task="summarization")                |
| Translation                  | Translates text from one language into another                                                                      | Text            | pipeline(task="translation")                  |
| Image classification         | Assigns a label to an image                                                                                         | Image           | pipeline(task="image-classification")         |
| Image segmentation           | Assigns a label to each pixel of an image (supports semantic, panoptic, and instance segmentation)                  | Image           | pipeline(task="image-segmentation")           |
| Object detection             | Predicts the bounding boxes and classes of objects in an image                                                      | Image           | pipeline(task="object-detection")             |
| Audio classification         | Assigns a label to an audio file                                                                                    | Audio           | pipeline(task="audio-classification")         |
| Automatic speech recognition | Transcribes the speech in an audio file into text                                                                   | Audio           | pipeline(task="automatic-speech-recognition") |
| Visual question answering    | Given an image and a question, correctly answers a question about the image                                         | Multimodal      | pipeline(task="vqa")                          |

Start by creating an instance of [`pipeline`] and specifying the task you want to use it for. You can use the [`pipeline`] for any of the tasks mentioned in the table above. For a complete list of supported tasks, check out the [pipeline API reference](./main_classes/pipelines). In this guide, we'll use the [`pipeline`] for sentiment analysis as an example:

```py
>>> from transformers import pipeline

>>> classifier = pipeline("sentiment-analysis")
```

The [`pipeline`] downloads and caches a default [pretrained model](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) and tokenizer for sentiment analysis. You can now use the `classifier` on any text you like:

```py
>>> classifier("We are very happy to show you the 🤗 Transformers library.")
[{'label': 'POSITIVE', 'score': 0.9998}]
```

If you want to classify more than one text, pass a list of texts to the [`pipeline`] to return a list of dictionaries:

```py
>>> results = classifier(["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."])
>>> for result in results:
...     print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
label: POSITIVE, with score: 0.9998
label: NEGATIVE, with score: 0.5309
```

The [`pipeline`] can also iterate over an entire dataset for any task you like. Let's take automatic speech recognition as an example:

```py
>>> import torch
>>> from transformers import pipeline

>>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
```

Load an audio dataset (see the 🤗 Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart#audio) for more details) you'd like to iterate over.
For this example, let's load the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset:

```py
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")  # doctest: +IGNORE_RESULT
```

You need to make sure the sampling rate of the dataset matches the sampling rate [`facebook/wav2vec2-base-960h`](https://huggingface.co/facebook/wav2vec2-base-960h) was trained on:

```py
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate))
```

The audio files are automatically loaded and resampled when calling the `"audio"` column. Extract the raw waveform arrays from the first 4 samples and pass them as a list to the pipeline:

```py
>>> result = speech_recognizer(dataset[:4]["audio"])
>>> print([d["text"] for d in result])
['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FODING HOW I'D SET UP A JOIN TO HET WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE AP SO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AND I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS', 'HOW DO I THURN A JOIN A COUNT']
```

For larger datasets where the inputs are big (as in speech or vision), you'll want to pass a generator instead of a list, to avoid loading all the inputs into memory. For more information, take a look at the [pipeline API reference](./main_classes/pipelines).
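For example, a minimal sketch of that generator pattern (the `audio_generator` helper below is hypothetical, not part of the library):

```py
>>> def audio_generator():
...     for sample in dataset:
...         yield sample["audio"]["array"]  # one waveform at a time, nothing materialized in memory

>>> for prediction in speech_recognizer(audio_generator()):
...     print(prediction["text"])  # doctest: +SKIP
```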
Le premier rรฉsultat renvoie un [modรจle BERT](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) multilingue finetunรฉ pour l'analyse des sentiments que vous pouvez utiliser pour le texte franรงais : ```py >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" ``` <frameworkcontent> <pt> Utilisez [`AutoModelForSequenceClassification`] et [`AutoTokenizer`] pour charger le modรจle prรฉ-entraรฎnรฉ et le tokenizer adaptรฉ (plus de dรฉtails sur une `AutoClass` dans la section suivante) : ```py >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </pt> <tf> Utilisez [`TFAutoModelForSequenceClassification`] et [`AutoTokenizer`] pour charger le modรจle prรฉ-entraรฎnรฉ et le tokenizer adaptรฉ (plus de dรฉtails sur une `TFAutoClass` dans la section suivante) : ```py >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </tf> </frameworkcontent> Spรฉcifiez le modรจle et le tokenizer dans le [`pipeline`], et utilisez le `classifier` sur le texte en franรงais : ```py >>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) >>> classifier("Nous sommes trรจs heureux de vous prรฉsenter la bibliothรจque ๐Ÿค— Transformers.") [{'label': '5 stars', 'score': 0.7273}] ``` Si vous ne parvenez pas ร  trouver un modรจle adaptรฉ ร  votre cas d'utilisation, vous devrez finetuner un modรจle prรฉ-entraรฎnรฉ sur vos donnรฉes. Jetez un coup d'ล“il ร  notre [tutoriel sur le finetuning](./training) pour apprendre comment faire. Enfin, aprรจs avoir finetunรฉ votre modรจle prรฉ-entraรฎnรฉ, pensez ร  [partager](./model_sharing) le modรจle avec la communautรฉ sur le Hub afin de dรฉmocratiser l'apprentissage automatique pour tous ! ๐Ÿค— ## AutoClass <Youtube id="AhChOFRegn4"/> Les classes [`AutoModelForSequenceClassification`] et [`AutoTokenizer`] fonctionnent ensemble pour crรฉer un [`pipeline`] comme celui que vous avez utilisรฉ ci-dessus. Une [AutoClass](./model_doc/auto) est un raccourci qui rรฉcupรจre automatiquement l'architecture d'un modรจle prรฉ-entraรฎnรฉ ร  partir de son nom ou de son emplacement. Il vous suffit de sรฉlectionner l'`AutoClass` appropriรฉe ร  votre tรขche et la classe de prรฉtraitement qui lui est associรฉe. Reprenons l'exemple de la section prรฉcรฉdente et voyons comment vous pouvez utiliser l'`AutoClass` pour reproduire les rรฉsultats du [`pipeline`]. ### AutoTokenizer Un tokenizer est chargรฉ de prรฉtraiter le texte pour en faire un tableau de chiffres qui servira d'entrรฉe ร  un modรจle. De nombreuses rรจgles rรฉgissent le processus de tokenisation, notamment la maniรจre de diviser un mot et le niveau auquel les mots doivent รชtre divisรฉs (pour en savoir plus sur la tokenisation, consultez le [rรฉsumรฉ](./tokenizer_summary)). La chose la plus importante ร  retenir est que vous devez instancier un tokenizer avec le mรชme nom de modรจle pour vous assurer que vous utilisez les mรชmes rรจgles de tokenisation que celles avec lesquelles un modรจle a รฉtรฉ prรฉ-entraรฎnรฉ. 
Load a tokenizer with [`AutoTokenizer`]:

```py
>>> from transformers import AutoTokenizer

>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Pass your text to the tokenizer:

```py
>>> encoding = tokenizer("We are very happy to show you the 🤗 Transformers library.")
>>> print(encoding)
{'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102],
 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```

The tokenizer returns a dictionary containing:

* [input_ids](./glossary#input-ids): the numerical representations of your tokens.
* [attention_mask](./glossary#attention-mask): indicates which tokens should be attended to (padding tokens, in particular, are masked out).

A tokenizer can also accept a list of texts, and pad and truncate them to return a batch of uniform length:

<frameworkcontent>
<pt>

```py
>>> pt_batch = tokenizer(
...     ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."],
...     padding=True,
...     truncation=True,
...     max_length=512,
...     return_tensors="pt",
... )
```
</pt>
<tf>

```py
>>> tf_batch = tokenizer(
...     ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."],
...     padding=True,
...     truncation=True,
...     max_length=512,
...     return_tensors="tf",
... )
```
</tf>
</frameworkcontent>

<Tip>

Check out the [preprocessing](./preprocessing) tutorial for more details about tokenization, and how to use an [`AutoImageProcessor`], an [`AutoFeatureExtractor`] and an [`AutoProcessor`] to preprocess image, audio, and multimodal inputs.

</Tip>

### AutoModel

<frameworkcontent>
<pt>
🤗 Transformers provides a simple and unified way to load pretrained instances. This means you can load an [`AutoModel`] like you would load an [`AutoTokenizer`]. The only difference is selecting the correct [`AutoModel`] for the task. For text (or sequence) classification, you should load [`AutoModelForSequenceClassification`]:

```py
>>> from transformers import AutoModelForSequenceClassification

>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name)
```

<Tip>

See the [task summary](./task_summary) to check whether a task is supported by an [`AutoModel`] class.

</Tip>

Now pass your preprocessed batch of inputs directly to the model. You just have to unpack the dictionary by adding `**`:

```py
>>> pt_outputs = pt_model(**pt_batch)
```

The model outputs the final activations in the `logits` attribute. Apply the softmax function to the `logits` to retrieve the probabilities:

```py
>>> from torch import nn

>>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)
>>> print(pt_predictions)
tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725],
        [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>)
```
</pt>
<tf>
🤗 Transformers provides a simple and unified way to load pretrained instances. This means you can load a [`TFAutoModel`] like you would load an [`AutoTokenizer`]. The only difference is selecting the correct [`TFAutoModel`] for the task.
For text (or sequence) classification, you should load [`TFAutoModelForSequenceClassification`]:

```py
>>> from transformers import TFAutoModelForSequenceClassification

>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
```

<Tip>

See the [task summary](./task_summary) to check whether a task is supported by an [`AutoModel`] class.

</Tip>

Now pass your preprocessed batch of inputs directly to the model. You can pass the tensors as-is:

```py
>>> tf_outputs = tf_model(tf_batch)
```

The model outputs the final activations in the `logits` attribute. Apply the softmax function to the `logits` to retrieve the probabilities:

```py
>>> import tensorflow as tf

>>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1)
>>> tf_predictions  # doctest: +IGNORE_RESULT
```
</tf>
</frameworkcontent>

<Tip>

All 🤗 Transformers models (PyTorch or TensorFlow) output the tensors *before* the final activation function (like softmax), because the final activation function is often fused with the loss computation. Model outputs are special dataclasses, so their attributes are autocompleted in an IDE. Model outputs also behave like a tuple or a dictionary (you can index them with an integer, a slice, or a string), in which case attributes that are None are ignored.

</Tip>
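For instance, a minimal sketch of that tuple/dict behavior, reusing `pt_outputs` from the PyTorch example above (all three accesses are assumed to return the same logits tensor):

```py
>>> pt_outputs.logits     # attribute access
>>> pt_outputs["logits"]  # dictionary-style access
>>> pt_outputs[0]         # tuple-style access
```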
Le paramรจtre `from_pt` ou `from_tf` permet de convertir le modรจle d'un framework ร  l'autre : <frameworkcontent> <pt> ```py >>> from transformers import AutoModel >>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory) >>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True) ``` </pt> <tf> ```py >>> from transformers import TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory) >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True) ``` </tf> </frameworkcontent> ## Constructions de modรจles personnalisรฉs Vous pouvez modifier la configuration du modรจle pour changer la faรงon dont un modรจle est construit. La configuration spรฉcifie les attributs d'un modรจle, tels que le nombre de couches ou de tรชtes d'attention. Vous partez de zรฉro lorsque vous initialisez un modรจle ร  partir d'une configuration personnalisรฉe. Les attributs du modรจle sont initialisรฉs de maniรจre alรฉatoire et vous devrez entraรฎner le modรจle avant de pouvoir l'utiliser pour obtenir des rรฉsultats significatifs. Commencez par importer [`AutoConfig`], puis chargez le modรจle prรฉ-entraรฎnรฉ que vous voulez modifier. Dans [`AutoConfig.from_pretrained`], vous pouvez spรฉcifier l'attribut que vous souhaitez modifier, tel que le nombre de tรชtes d'attention : ```py >>> from transformers import AutoConfig >>> my_config = AutoConfig.from_pretrained("distilbert-base-uncased", n_heads=12) ``` <frameworkcontent> <pt> Crรฉez un modรจle personnalisรฉ ร  partir de votre configuration avec [`AutoModel.from_config`] : ```py >>> from transformers import AutoModel >>> my_model = AutoModel.from_config(my_config) ``` </pt> <tf> Crรฉez un modรจle personnalisรฉ ร  partir de votre configuration avec [`TFAutoModel.from_config`] : ```py >>> from transformers import TFAutoModel >>> my_model = TFAutoModel.from_config(my_config) ``` </tf> </frameworkcontent> Consultez le guide [Crรฉer une architecture personnalisรฉe](./create_a_model) pour plus d'informations sur la crรฉation de configurations personnalisรฉes. ## Trainer - une boucle d'entraรฎnement optimisรฉe par PyTorch Tous les modรจles sont des [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) standard, vous pouvez donc les utiliser dans n'importe quelle boucle d'entraรฎnement typique. Bien que vous puissiez รฉcrire votre propre boucle d'entraรฎnement, ๐Ÿค— Transformers fournit une classe [`Trainer`] pour PyTorch, qui contient la boucle d'entraรฎnement de base et ajoute des fonctionnalitรฉs supplรฉmentaires comme l'entraรฎnement distribuรฉ, la prรฉcision mixte, et plus encore. En fonction de votre tรขche, vous passerez gรฉnรฉralement les paramรจtres suivants ร  [`Trainer`] : 1. Un [`PreTrainedModel`] ou un [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module): ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` 2. [`TrainingArguments`] contient les hyperparamรจtres du modรจle que vous pouvez changer comme le taux d'apprentissage, la taille de l'รฉchantillon, et le nombre d'รฉpoques pour s'entraรฎner. Les valeurs par dรฉfaut sont utilisรฉes si vous ne spรฉcifiez pas d'hyperparamรจtres d'apprentissage : ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments( ... output_dir="path/to/save/folder/", ... learning_rate=2e-5, ... per_device_train_batch_size=8, ... 
per_device_eval_batch_size=8, ... num_train_epochs=2, ... ) ``` 3. Une classe de prรฉtraitement comme un tokenizer, un processeur d'images ou un extracteur de caractรฉristiques : ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") ``` 4. Chargez un jeu de donnรฉes : ```py >>> from datasets import load_dataset >>> dataset = load_dataset("rotten_tomatoes") # doctest: +IGNORE_RESULT ``` 5. Crรฉez une fonction qui transforme le texte du jeu de donnรฉes en token : ```py >>> def tokenize_dataset(dataset): ... return tokenizer(dataset["text"]) ``` Puis appliquez-la ร  l'intรฉgralitรฉ du jeu de donnรฉes avec [`~datasets.Dataset.map`]: ```py >>> dataset = dataset.map(tokenize_dataset, batched=True) ``` 6. Un [`DataCollatorWithPadding`] pour crรฉer un รฉchantillon d'exemples ร  partir de votre jeu de donnรฉes : ```py >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer) ``` Maintenant, rassemblez tous ces รฉlรฉments dans un [`Trainer`] : ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=dataset["train"], ... eval_dataset=dataset["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... ) # doctest: +SKIP ``` Une fois que vous รชtes prรชt, appelez la fonction [`~Trainer.train`] pour commencer l'entraรฎnement : ```py >>> trainer.train() # doctest: +SKIP ``` <Tip> Pour les tรขches - comme la traduction ou la gรฉnรฉration de rรฉsumรฉ - qui utilisent un modรจle sรฉquence ร  sรฉquence, utilisez plutรดt les classes [`Seq2SeqTrainer`] et [`Seq2SeqTrainingArguments`]. </Tip> Vous pouvez personnaliser le comportement de la boucle d'apprentissage en redรฉfinissant les mรฉthodes ร  l'intรฉrieur de [`Trainer`]. Cela vous permet de personnaliser des caractรฉristiques telles que la fonction de perte, l'optimiseur et le planificateur. Consultez la documentation de [`Trainer`] pour savoir quelles mรฉthodes peuvent รชtre redรฉfinies. L'autre moyen de personnaliser la boucle d'apprentissage est d'utiliser les [Callbacks](./main_classes/callbacks). Vous pouvez utiliser les callbacks pour intรฉgrer d'autres bibliothรจques et inspecter la boucle d'apprentissage afin de suivre la progression ou d'arrรชter l'apprentissage plus tรดt. Les callbacks ne modifient rien dans la boucle d'apprentissage elle-mรชme. Pour personnaliser quelque chose comme la fonction de perte, vous devez redรฉfinir le [`Trainer`] ร  la place. ## Entraรฎnement avec TensorFlow Tous les modรจles sont des modรจles standard [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) afin qu'ils puissent รชtre entraรฎnรฉs avec TensorFlow avec l'API [Keras](https://keras.io/). ๐Ÿค— Transformers fournit la fonction [`~TFPreTrainedModel.prepare_tf_dataset`] pour charger facilement votre jeu de donnรฉes comme un `tf.data.Dataset` afin que vous puissiez commencer l'entraรฎnement immรฉdiatement avec les fonctions [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) et [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) de Keras. 1. Vous commencez avec un modรจle [`TFPreTrainedModel`] ou [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) : ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` 2. 
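As a minimal sketch of that subclassing pattern (the class-weight values below are made up for illustration and assume the two-label model above):

```py
>>> import torch
>>> from transformers import Trainer

>>> class WeightedLossTrainer(Trainer):
...     def compute_loss(self, model, inputs, return_outputs=False):
...         labels = inputs.pop("labels")
...         outputs = model(**inputs)
...         # Weighted cross-entropy: hypothetical weights for an imbalanced two-label dataset
...         loss_fct = torch.nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0], device=outputs.logits.device))
...         loss = loss_fct(outputs.logits, labels)
...         return (loss, outputs) if return_outputs else loss
```

You would then instantiate `WeightedLossTrainer` with the same arguments as the [`Trainer`] above.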
Une classe de prรฉtraitement comme un tokenizer, un processeur d'images ou un extracteur de caractรฉristiques : ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") ``` 3. Crรฉez une fonction qui transforme le texte du jeu de donnรฉes en token : ```py >>> def tokenize_dataset(dataset): ... return tokenizer(dataset["text"]) # doctest: +SKIP ``` 4. Appliquez le tokenizer ร  l'ensemble du jeu de donnรฉes avec [`~datasets.Dataset.map`] et passez ensuite le jeu de donnรฉes et le tokenizer ร  [`~TFPreTrainedModel.prepare_tf_dataset`]. Vous pouvez รฉgalement modifier la taille de l'รฉchantillon et mรฉlanger le jeu de donnรฉes ici si vous le souhaitez : ```py >>> dataset = dataset.map(tokenize_dataset) # doctest: +SKIP >>> tf_dataset = model.prepare_tf_dataset( ... dataset, batch_size=16, shuffle=True, tokenizer=tokenizer ... ) # doctest: +SKIP ``` 5. Une fois que vous รชtes prรชt, appelez les fonctions `compile` et `fit` pour commencer l'entraรฎnement : ```py >>> from tensorflow.keras.optimizers import Adam >>> model.compile(optimizer=Adam(3e-5)) >>> model.fit(dataset) # doctest: +SKIP ``` ## Et aprรจs ? Maintenant que vous avez terminรฉ la visite rapide de ๐Ÿค— Transformers, consultez nos guides et apprenez ร  faire des choses plus spรฉcifiques comme crรฉer un modรจle personnalisรฉ, finetuner un modรจle pour une tรขche, et comment entraรฎner un modรจle avec un script. Si vous souhaitez en savoir plus sur les concepts fondamentaux de ๐Ÿค— Transformers, jetez un ล“il ร  nos guides conceptuels !
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/multilingual.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Multilingual models for inference

[[open-in-colab]]

There are several multilingual models in 🤗 Transformers, and their inference usage differs from that of monolingual models. Not *all* multilingual model usage is different, though. Some models, like [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased), can be used just like a monolingual model. This guide will show you how to use multilingual models whose usage differs at inference.

## XLM

XLM has ten different checkpoints, only one of which is monolingual. The nine remaining checkpoints can be split into two categories: the checkpoints that use language embeddings and those that don't.

### XLM with language embeddings

The following XLM models use language embeddings to specify the language used at inference:

- `xlm-mlm-ende-1024` (Masked language modeling, English-German)
- `xlm-mlm-enfr-1024` (Masked language modeling, English-French)
- `xlm-mlm-enro-1024` (Masked language modeling, English-Romanian)
- `xlm-mlm-xnli15-1024` (Masked language modeling, XNLI languages)
- `xlm-mlm-tlm-xnli15-1024` (Masked language modeling + translation, XNLI languages)
- `xlm-clm-enfr-1024` (Causal language modeling, English-French)
- `xlm-clm-ende-1024` (Causal language modeling, English-German)

Language embeddings are represented as a tensor of the same shape as the `input_ids` passed to the model. The values in these tensors depend on the language used and are identified by the tokenizer's `lang2id` and `id2lang` attributes.

In this example, load the `xlm-clm-enfr-1024` checkpoint (Causal language modeling, English-French):

```py
>>> import torch
>>> from transformers import XLMTokenizer, XLMWithLMHeadModel

>>> tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024")
>>> model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024")
```

The `lang2id` attribute of the tokenizer displays this model's languages and their ids:

```py
>>> print(tokenizer.lang2id)
{'en': 0, 'fr': 1}
```

Next, create an example input:

```py
>>> input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])  # batch size of 1
```

Set the language id to `"en"` and use it to define the language embedding. The language embedding is a tensor filled with `0`, since that is the language id for English. This tensor should be the same size as `input_ids`.
```py
>>> language_id = tokenizer.lang2id["en"]  # 0
>>> langs = torch.tensor([language_id] * input_ids.shape[1])  # torch.tensor([0, 0, 0, ..., 0])

>>> # We reshape it to be of size (batch_size, sequence_length)
>>> langs = langs.view(1, -1)  # is now of shape [1, sequence_length] (we have a batch size of 1)
```

Now you can pass the `input_ids` and language embedding to the model:

```py
>>> outputs = model(input_ids, langs=langs)
```

The [run_generation.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation/run_generation.py) script can generate text with language embeddings using the `xlm-clm` checkpoints.

### XLM without language embeddings

The following XLM models do not require language embeddings during inference:

- `xlm-mlm-17-1280` (Masked language modeling, 17 languages)
- `xlm-mlm-100-1280` (Masked language modeling, 100 languages)

These models are used for generic sentence representations, unlike the previous XLM checkpoints.

## BERT

The following BERT models can be used for multilingual tasks:

- `bert-base-multilingual-uncased` (Masked language modeling + Next sentence prediction, 102 languages)
- `bert-base-multilingual-cased` (Masked language modeling + Next sentence prediction, 104 languages)

These models do not require language embeddings during inference. They identify the language from the context and infer accordingly.
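For instance, a minimal sketch of running multilingual BERT without any language id (the French prompt here is just an illustration):

```py
>>> from transformers import pipeline

>>> unmasker = pipeline("fill-mask", model="bert-base-multilingual-cased")
>>> unmasker("Paris est la [MASK] de la France.")  # the model infers the language from context  # doctest: +SKIP
```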
## XLM-RoBERTa

The following XLM-RoBERTa models can be used for multilingual tasks:

- `xlm-roberta-base` (Masked language modeling, 100 languages)
- `xlm-roberta-large` (Masked language modeling, 100 languages)

XLM-RoBERTa was trained on 2.5TB of newly created and cleaned CommonCrawl data in 100 languages. It provides strong gains over previously released multilingual models like mBERT or XLM on downstream tasks like classification, sequence labeling, and question answering.

## M2M100

The following M2M100 models can be used for multilingual translation:

- `facebook/m2m100_418M` (Translation)
- `facebook/m2m100_1.2B` (Translation)

In this example, load the `facebook/m2m100_418M` checkpoint to translate from Chinese to English. You can set the source language in the tokenizer:

```py
>>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

>>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger."
>>> chinese_text = "不要插手巫師的事務, 因為他們是微妙的, 很快就會發怒."

>>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="zh")
>>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
```

Tokenize the text:

```py
>>> encoded_zh = tokenizer(chinese_text, return_tensors="pt")
```

M2M100 forces the target language id as the first generated token in order to translate to the target language. Set `forced_bos_token_id` to `en` in the `generate` method to translate to English:

```py
>>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
'Do not interfere with the matters of the witches, because they are delicate and will soon be angry.'
```

## MBart

The following MBart models can be used for multilingual translation:

- `facebook/mbart-large-50-one-to-many-mmt` (One-to-many multilingual machine translation, 50 languages)
- `facebook/mbart-large-50-many-to-many-mmt` (Many-to-many multilingual machine translation, 50 languages)
- `facebook/mbart-large-50-many-to-one-mmt` (Many-to-one multilingual machine translation, 50 languages)
- `facebook/mbart-large-50` (Multilingual translation, 50 languages)
- `facebook/mbart-large-cc25`

In this example, load the `facebook/mbart-large-50-many-to-many-mmt` checkpoint to translate from Finnish to English. You can set the source language in the tokenizer:

```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger."
>>> fi_text = "Älä sekaannu velhojen asioihin, sillä ne ovat hienovaraisia ja nopeasti vihaisia."

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="fi_FI")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
```

Tokenize the Finnish text:

```py
>>> encoded_fi = tokenizer(fi_text, return_tensors="pt")
```

MBart forces the target language id as the first generated token in order to translate to the target language. Set `forced_bos_token_id` to `en` in the `generate` method to translate to English:

```py
>>> generated_tokens = model.generate(**encoded_fi, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"Don't interfere with the wizard's affairs, because they are subtle, will soon get angry."
```

If you're using the `facebook/mbart-large-50-many-to-one-mmt` checkpoint, you don't need to force the target language id as the first generated token; otherwise the usage is the same.
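A minimal sketch of that many-to-one case, reusing `fi_text` from above (a hedged illustration, not a verified output):

```py
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-one-mmt", src_lang="fi_FI")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")

>>> encoded_fi = tokenizer(fi_text, return_tensors="pt")
>>> generated_tokens = model.generate(**encoded_fi)  # no forced_bos_token_id needed
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)  # doctest: +SKIP
```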
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/preprocessing.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Preprocess

[[open-in-colab]]

Before you can use your data in a model, it needs to be processed into a format the model accepts. A model does not understand raw text, images, or audio. These inputs need to be converted into numbers and assembled into tensors. In this tutorial, you will:

* Preprocess textual data with a tokenizer.
* Preprocess image or audio data with a feature extractor.
* Preprocess data for multimodal tasks with a processor.

## NLP

<Youtube id="Yffk5aydLzg"/>

The main tool for processing textual data is a [tokenizer](main_classes/tokenizer). A tokenizer starts by splitting text into *tokens* according to a set of rules. The tokens are converted into numbers, which are used to build the input tensors for the model. Any additional inputs required by the model are also added by the tokenizer.

<Tip>

If you plan on using a pretrained model, it's important to use the associated pretrained tokenizer. This ensures the text is split the same way as in the pretraining corpus, and uses the same tokens-to-index mapping (usually referred to as the *vocab*) as during pretraining.

</Tip>

Get started right away by loading a pretrained tokenizer with the [`AutoTokenizer`] class. This downloads the *vocab* used when the model was pretrained.

### Tokenize

Load a pretrained tokenizer with [`AutoTokenizer.from_pretrained`]:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
```

Then pass your sentence to the tokenizer:

```py
>>> encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.")
>>> print(encoded_input)
{'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102],
 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```

The tokenizer returns a dictionary with three important items:

* [input_ids](glossary#input-ids) are the indices corresponding to each token in the sentence.
* [attention_mask](glossary#attention-mask) indicates whether a token should be attended to or not.
* [token_type_ids](glossary#token-type-ids) identifies which sequence a token belongs to when there is more than one sequence.

You can decode the `input_ids` to return the original input:

```py
>>> tokenizer.decode(encoded_input["input_ids"])
'[CLS] Do not meddle in the affairs of wizards, for they are subtle and quick to anger. [SEP]'
```

As you can see, the tokenizer added two special tokens - `CLS` and `SEP` (classifier and separator) - to the sentence. Not all models need special tokens, but if they do, the tokenizer automatically adds them for you.

If there are several sentences you want to process, pass them as a list to the tokenizer:

```py
>>> batch_sentences = [
...     "But what about second breakfast?",
...     "Don't think he knows about second breakfast, Pip.",
...     "What about elevensies?",
... ]
>>> encoded_inputs = tokenizer(batch_sentences)
>>> print(encoded_inputs)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102],
               [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
               [101, 1327, 1164, 5450, 23434, 136, 102]],
 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0]],
 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1]]}
```

### Pad

This is an important topic. When you process a batch of sentences, they aren't always the same length. This is a problem because tensors, the model inputs, need to have a uniform shape. Padding is a strategy for ensuring tensors are rectangular by adding a special *padding token* to shorter sentences.

Set the `padding` parameter to `True` to pad the shorter sentences in the batch to match the longest one:

```py
>>> batch_sentences = [
...     "But what about second breakfast?",
...     "Don't think he knows about second breakfast, Pip.",
...     "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True)
>>> print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
               [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
               [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
```

Notice the tokenizer padded the shorter sequences with `0`s!

### Truncation

On the other end of the spectrum, sometimes a sequence may be too long for a model to handle. In this case, you'll need to truncate the sequence to a shorter length.

Set the `truncation` parameter to `True` to truncate a sequence to the maximum length accepted by the model:
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True)
>>> print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
               [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
               [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
```

### Costruire i tensori

Infine, vuoi che il tokenizer restituisca i tensori veri e propri da dare in input al modello.

Imposta il parametro `return_tensors` su `pt` per PyTorch, o `tf` per TensorFlow:

```py
>>> batch_sentences = [
...     "But what about second breakfast?",
...     "Don't think he knows about second breakfast, Pip.",
...     "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
>>> print(encoded_input)
{'input_ids': tensor([[ 101,  153, 7719, 21490, 1122, 1114, 9582, 1623,  102],
                      [ 101, 5226, 1122, 9649, 1199, 2610, 1236,  102,    0]]),
 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0],
                           [0, 0, 0, 0, 0, 0, 0, 0, 0]]),
 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1],
                           [1, 1, 1, 1, 1, 1, 1, 1, 0]])}
===PT-TF-SPLIT===
>>> batch_sentences = [
...     "But what about second breakfast?",
...     "Don't think he knows about second breakfast, Pip.",
...     "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf")
>>> print(encoded_input)
{'input_ids': <tf.Tensor: shape=(2, 9), dtype=int32, numpy=
array([[ 101,  153, 7719, 21490, 1122, 1114, 9582, 1623,  102],
       [ 101, 5226, 1122, 9649, 1199, 2610, 1236,  102,    0]], dtype=int32)>,
 'token_type_ids': <tf.Tensor: shape=(2, 9), dtype=int32, numpy=
array([[0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>,
 'attention_mask': <tf.Tensor: shape=(2, 9), dtype=int32, numpy=
array([[1, 1, 1, 1, 1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1, 1, 1, 1, 0]], dtype=int32)>}
```

## Audio

Gli input audio sono processati in modo differente rispetto al testo, ma l'obiettivo rimane lo stesso: creare sequenze numeriche che il modello puรฒ capire. Un [estrattore di caratteristiche](main_classes/feature_extractor) รจ progettato con lo scopo preciso di estrarre caratteristiche da immagini o dati audio grezzi e convertirle in tensori. Prima di iniziare, installa ๐Ÿค— Datasets per caricare un dataset audio con cui sperimentare:

```bash
pip install datasets
```

Carica il dataset [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) (vedi il ๐Ÿค— [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) per avere maggiori dettagli su come caricare un dataset):

```py
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```

Accedi al primo elemento della colonna `audio` per dare uno sguardo all'input. Richiamando la colonna `audio`, il file audio viene caricato e ricampionato automaticamente:

```py
>>> dataset[0]["audio"]
{'array': array([ 0.        ,  0.00024414, -0.00024414, ..., -0.00024414,
         0.        ,  0.        ], dtype=float32),
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
 'sampling_rate': 8000}
```

Questo restituisce tre oggetti:

* `array` รจ il segnale vocale caricato - e potenzialmente ricampionato - come vettore 1D.
* `path` รจ il percorso del file audio.
* `sampling_rate` si riferisce al numero di campioni del segnale vocale misurati al secondo.

### Ricampionamento

Per questo tutorial, puoi usare il modello [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base). Come puoi vedere dalla model card, il modello Wav2Vec2 รจ preaddestrato su audio campionato a 16kHz. รˆ importante che la frequenza di campionamento dei tuoi dati audio combaci con la frequenza di campionamento del dataset usato per preaddestrare il modello. Se la frequenza di campionamento dei tuoi dati non รจ la stessa, dovrai ricampionare i tuoi dati audio.

Per esempio, il dataset [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ha una frequenza di campionamento di 8000 Hz (8kHz). Per utilizzare il modello Wav2Vec2 su questo dataset, alzala a 16kHz:

```py
>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
>>> dataset[0]["audio"]
{'array': array([ 0.        ,  0.00024414, -0.00024414, ..., -0.00024414,
         0.        ,  0.        ], dtype=float32),
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
 'sampling_rate': 8000}
```

1. Usa il metodo [`cast_column`](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.cast_column) di ๐Ÿค— Datasets per alzare la frequenza di campionamento a 16kHz:

```py
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
```

2. Carica il file audio:

```py
>>> dataset[0]["audio"]
{'array': array([ 2.3443763e-05,  2.1729663e-04,  2.2145823e-04, ...,
         3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32),
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
 'sampling_rate': 16000}
```

Come puoi notare, la `sampling_rate` adesso รจ 16kHz!

### Feature extractor

Il prossimo passo รจ caricare un estrattore di caratteristiche per normalizzare e applicare il padding sull'input. Quando applichiamo il padding sui dati testuali, uno `0` รจ aggiunto alle sequenze piรน brevi. La stessa idea si applica ai dati audio: l'estrattore di caratteristiche per gli audio aggiungerร  uno `0` - interpretato come silenzio - agli `array`.

Carica l'estrattore delle caratteristiche con [`AutoFeatureExtractor.from_pretrained`]:

```py
>>> from transformers import AutoFeatureExtractor

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
```

Inserisci l'`array` audio nell'estrattore delle caratteristiche. Raccomandiamo sempre di passare il parametro `sampling_rate` all'estrattore delle caratteristiche, per poter individuare piรน facilmente eventuali errori silenziosi:

```py
>>> audio_input = [dataset[0]["audio"]["array"]]
>>> feature_extractor(audio_input, sampling_rate=16000)
{'input_values': [array([ 3.8106556e-04,  2.7506407e-03,  2.8015103e-03, ...,
         5.6335266e-04,  4.6588284e-06, -1.7142107e-04], dtype=float32)]}
```

### Pad e truncate

Come per il tokenizer, puoi applicare le operazioni di padding o truncation per gestire lotti con sequenze di lunghezza variabile.
Dai uno sguardo alla lunghezza delle sequenze di questi due campioni audio:

```py
>>> dataset[0]["audio"]["array"].shape
(173398,)

>>> dataset[1]["audio"]["array"].shape
(106496,)
```

Come puoi vedere, il primo campione ha una sequenza piรน lunga del secondo. Crea una funzione che preprocesserร  il dataset. Specifica una lunghezza massima del campione, e l'estrattore di caratteristiche si occuperร  di riempire o troncare la sequenza in modo che corrisponda:

```py
>>> def preprocess_function(examples):
...     audio_arrays = [x["array"] for x in examples["audio"]]
...     inputs = feature_extractor(
...         audio_arrays,
...         sampling_rate=16000,
...         padding=True,
...         max_length=100000,
...         truncation=True,
...     )
...     return inputs
```

Applica la funzione ai primi esempi nel dataset:

```py
>>> processed_dataset = preprocess_function(dataset[:5])
```

Adesso guarda la lunghezza dei campioni elaborati:

```py
>>> processed_dataset["input_values"][0].shape
(100000,)

>>> processed_dataset["input_values"][1].shape
(100000,)
```

La lunghezza dei campioni adesso coincide con la lunghezza massima impostata nella funzione.

## Vision

Un estrattore di caratteristiche si puรฒ usare anche per processare immagini per compiti di visione. Ancora una volta, l'obiettivo รจ convertire l'immagine grezza in un lotto di tensori come input.

Carica il dataset [food101](https://huggingface.co/datasets/food101) per questa esercitazione. Usa il parametro `split` di ๐Ÿค— Datasets per caricare solo un piccolo campione dal dataset di addestramento, poichรฉ il dataset รจ molto grande:

```py
>>> from datasets import load_dataset

>>> dataset = load_dataset("food101", split="train[:100]")
```

Secondo passo, dai uno sguardo alle immagini usando la caratteristica [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=image#datasets.Image) di ๐Ÿค— Datasets:

```py
>>> dataset[0]["image"]
```

![vision-preprocess-tutorial.png](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vision-preprocess-tutorial.png)

### Feature extractor

Carica l'estrattore di caratteristiche con [`AutoFeatureExtractor.from_pretrained`]:

```py
>>> from transformers import AutoFeatureExtractor

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224")
```

### Data augmentation

Per le attivitร  di visione, รจ usuale aggiungere alcuni tipi di data augmentation alle immagini come parte del preprocessing. Puoi aggiungere le augmentation con qualsiasi libreria preferisci, ma in questa esercitazione userai il modulo [`transforms`](https://pytorch.org/vision/stable/transforms.html) di torchvision.

1. Normalizza l'immagine e usa [`Compose`](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html) per concatenare alcune trasformazioni - [`RandomResizedCrop`](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html) e [`ColorJitter`](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html) - insieme:

```py
>>> from torchvision.transforms import Compose, Normalize, RandomResizedCrop, ColorJitter, ToTensor

>>> normalize = Normalize(mean=feature_extractor.image_mean, std=feature_extractor.image_std)
>>> _transforms = Compose(
...     [RandomResizedCrop(feature_extractor.size), ColorJitter(brightness=0.5, hue=0.5), ToTensor(), normalize]
... )
```

2. Il modello accetta [`pixel_values`](model_doc/visionencoderdecoder#transformers.VisionEncoderDecoderModel.forward.pixel_values) come input.
Questo valore รจ generato dall'estrattore di caratteristiche. Crea una funzione che genera i `pixel_values` a partire dalle trasformazioni:

```py
>>> def transforms(examples):
...     examples["pixel_values"] = [_transforms(image.convert("RGB")) for image in examples["image"]]
...     return examples
```

3. Poi utilizza [`set_transform`](https://huggingface.co/docs/datasets/process#format-transform) di ๐Ÿค— Datasets per applicare al volo la trasformazione:

```py
>>> dataset.set_transform(transforms)
```

4. Adesso, quando accedi all'immagine, puoi notare che l'estrattore di caratteristiche ha aggiunto `pixel_values` allo schema di input:

```py
>>> dataset[0]["image"]
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=384x512 at 0x7F1A7B0630D0>,
 'label': 6,
 'pixel_values': tensor([[[ 0.0353,  0.0745,  0.1216,  ..., -0.9922, -0.9922, -0.9922],
          [-0.0196,  0.0667,  0.1294,  ..., -0.9765, -0.9843, -0.9922],
          [ 0.0196,  0.0824,  0.1137,  ..., -0.9765, -0.9686, -0.8667],
          ...,
          [ 0.0275,  0.0745,  0.0510,  ..., -0.1137, -0.1216, -0.0824],
          [ 0.0667,  0.0824,  0.0667,  ..., -0.0588, -0.0745, -0.0980],
          [ 0.0353,  0.0353,  0.0431,  ..., -0.0039, -0.0039, -0.0588]],

         [[ 0.2078,  0.2471,  0.2863,  ..., -0.9451, -0.9373, -0.9451],
          [ 0.1608,  0.2471,  0.3098,  ..., -0.9373, -0.9451, -0.9373],
          [ 0.2078,  0.2706,  0.3020,  ..., -0.9608, -0.9373, -0.8275],
          ...,
          [-0.0353,  0.0118, -0.0039,  ..., -0.2392, -0.2471, -0.2078],
          [ 0.0196,  0.0353,  0.0196,  ..., -0.1843, -0.2000, -0.2235],
          [-0.0118, -0.0039, -0.0039,  ..., -0.0980, -0.0980, -0.1529]],

         [[ 0.3961,  0.4431,  0.4980,  ..., -0.9216, -0.9137, -0.9216],
          [ 0.3569,  0.4510,  0.5216,  ..., -0.9059, -0.9137, -0.9137],
          [ 0.4118,  0.4745,  0.5216,  ..., -0.9137, -0.8902, -0.7804],
          ...,
          [-0.2314, -0.1922, -0.2078,  ..., -0.4196, -0.4275, -0.3882],
          [-0.1843, -0.1686, -0.2000,  ..., -0.3647, -0.3804, -0.4039],
          [-0.1922, -0.1922, -0.1922,  ..., -0.2941, -0.2863, -0.3412]]])}
```

Di seguito puoi vedere come appare l'immagine dopo la fase di preprocessing. Come ci si aspetterebbe dalle trasformazioni applicate, l'immagine รจ stata ritagliata in modo casuale e le sue proprietร  di colore sono diverse.

```py
>>> import numpy as np
>>> import matplotlib.pyplot as plt

>>> img = dataset[0]["pixel_values"]
>>> plt.imshow(img.permute(1, 2, 0))
```

![preprocessed_image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/preprocessed_image.png)

## Multimodal

Per le attivitร  multimodali userai una combinazione di tutto quello che hai imparato poco fa, applicando le tue competenze al riconoscimento vocale automatico (Automatic Speech Recognition - ASR). Questo significa che avrai bisogno di:

* Un estrattore delle caratteristiche per processare i dati audio.
* Un tokenizer per processare i testi.
Ritorna sul dataset [LJ Speech](https://huggingface.co/datasets/lj_speech):

```py
>>> from datasets import load_dataset, Audio

>>> lj_speech = load_dataset("lj_speech", split="train")
```

Visto che sei interessato solo alle colonne `audio` e `text`, elimina tutte le altre:

```py
>>> lj_speech = lj_speech.map(remove_columns=["file", "id", "normalized_text"])
```

Adesso guarda le colonne `audio` e `text`:

```py
>>> lj_speech[0]["audio"]
{'array': array([-7.3242188e-04, -7.6293945e-04, -6.4086914e-04, ...,
         7.3242188e-04,  2.1362305e-04,  6.1035156e-05], dtype=float32),
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/917ece08c95cf0c4115e45294e3cd0dee724a1165b7fc11798369308a465bd26/LJSpeech-1.1/wavs/LJ001-0001.wav',
 'sampling_rate': 22050}

>>> lj_speech[0]["text"]
'Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition'
```

Come ricorderai dalla sezione precedente sull'elaborazione dei dati audio, dovresti sempre [ricampionare](preprocessing#audio) la frequenza di campionamento dei tuoi dati audio per farla coincidere con quella del dataset usato per preaddestrare il modello:

```py
>>> lj_speech = lj_speech.cast_column("audio", Audio(sampling_rate=16_000))
```

### Processor

Un processor combina un estrattore di caratteristiche e un tokenizer. Carica un processor con [`AutoProcessor.from_pretrained`]:

```py
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
```

1. Crea una funzione che processi i dati audio in `input_values` e che tokenizzi il testo in `labels`. Questi sono i tuoi input per il modello:

```py
>>> def prepare_dataset(example):
...     audio = example["audio"]
...     example.update(processor(audio=audio["array"], text=example["text"], sampling_rate=16000))
...     return example
```

2. Applica la funzione `prepare_dataset` ad un campione:

```py
>>> prepare_dataset(lj_speech[0])
```

Nota che il processor ha aggiunto `input_values` e `labels`. Anche la frequenza di campionamento รจ stata correttamente ridotta a 16kHz.

Fantastico, ora dovresti essere in grado di preelaborare i dati per qualsiasi modalitร  e persino di combinare modalitร  diverse! Nella prossima esercitazione, imparerai a mettere a punto un modello sui dati appena pre-elaborati.
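Se vuoi spingerti oltre, puoi applicare la stessa preparazione all'intero dataset con il metodo `map` di ๐Ÿค— Datasets. Quello che segue รจ solo uno sketch minimo: assume che `lj_speech` e `prepare_dataset` siano definiti come negli esempi precedenti, e le colonne rimosse sono quelle viste sopra.

```py
# Sketch minimo: applica prepare_dataset a tutto il dataset con Dataset.map.
# Si assume che lj_speech e prepare_dataset siano giร  definiti come sopra.
lj_speech = lj_speech.map(prepare_dataset, remove_columns=["audio", "text"])

# Ogni esempio ora contiene gli input numerici per il modello
print(lj_speech[0].keys())  # dovrebbe includere "input_values" e "labels"
```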
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/model_sharing.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Condividi un modello Gli ultimi due tutorial ti hanno mostrato come puoi fare fine-tuning di un modello con PyTorch, Keras e ๐Ÿค— Accelerate per configurazioni distribuite. Il prossimo passo รจ quello di condividere il tuo modello con la community! In Hugging Face, crediamo nella condivisione della conoscenza e delle risorse in modo da democratizzare l'intelligenza artificiale per chiunque. Ti incoraggiamo a considerare di condividere il tuo modello con la community per aiutare altre persone a risparmiare tempo e risorse. In questo tutorial, imparerai due metodi per la condivisione di un modello trained o fine-tuned nel [Model Hub](https://huggingface.co/models): - Condividi in modo programmatico i tuoi file nell'Hub. - Trascina i tuoi file nell'Hub mediante interfaccia grafica. <iframe width="560" height="315" src="https://www.youtube.com/embed/XvSGPZFEjDY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> <Tip> Per condividere un modello con la community, hai bisogno di un account su [huggingface.co](https://huggingface.co/join). Puoi anche unirti ad un'organizzazione esistente o crearne una nuova. </Tip> ## Caratteristiche dei repository Ogni repository nel Model Hub si comporta come un tipico repository di GitHub. I nostri repository offrono il versionamento, la cronologia dei commit, e la possibilitร  di visualizzare le differenze. Il versionamento all'interno del Model Hub รจ basato su git e [git-lfs](https://git-lfs.github.com/). In altre parole, puoi trattare un modello come un unico repository, consentendo un maggiore controllo degli accessi e maggiore scalabilitร . Il controllo delle versioni consente *revisions*, un metodo per appuntare una versione specifica di un modello con un hash di commit, un tag o un branch. Come risultato, puoi caricare una specifica versione di un modello con il parametro `revision`: ```py >>> model = AutoModel.from_pretrained( ... "julien-c/EsperBERTo-small", revision="v2.0.1" # nome di un tag, di un branch, o commit hash ... ) ``` Anche i file possono essere modificati facilmente in un repository ed รจ possibile visualizzare la cronologia dei commit e le differenze: ![vis_diff](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vis_diff.png) ## Configurazione Prima di condividere un modello nell'Hub, hai bisogno delle tue credenziali di Hugging Face. Se hai accesso ad un terminale, esegui il seguente comando nell'ambiente virtuale in cui รจ installata la libreria ๐Ÿค— Transformers. 
Questo memorizzerร  il tuo token di accesso nella cartella cache di Hugging Face (di default `~/.cache/`): ```bash huggingface-cli login ``` Se stai usando un notebook come Jupyter o Colaboratory, assicurati di avere la libreria [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) installata. Questa libreria ti permette di interagire in maniera programmatica con l'Hub. ```bash pip install huggingface_hub ``` Utilizza `notebook_login` per accedere all'Hub, e segui il link [qui](https://huggingface.co/settings/token) per generare un token con cui effettuare il login: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## Converti un modello per tutti i framework Per assicurarti che il tuo modello possa essere utilizzato da persone che lavorano con un framework differente, ti raccomandiamo di convertire e caricare il tuo modello sia con i checkpoint di PyTorch che con quelli di TensorFlow. Anche se รจ possibile caricare il modello da un framework diverso, se si salta questo passaggio, il caricamento sarร  piรน lento perchรฉ ๐Ÿค— Transformers ha bisogno di convertire i checkpoint al momento. Convertire un checkpoint per un altro framework รจ semplice. Assicurati di avere PyTorch e TensorFlow installati (vedi [qui](installation) per le istruzioni d'installazione), e poi trova il modello specifico per il tuo compito nell'altro framework. <frameworkcontent> <pt> Specifica `from_tf=True` per convertire un checkpoint da TensorFlow a PyTorch: ```py >>> pt_model = DistilBertForSequenceClassification.from_pretrained( ... "path/verso/il-nome-magnifico-che-hai-scelto", from_tf=True ... ) >>> pt_model.save_pretrained("path/verso/il-nome-magnifico-che-hai-scelto") ``` </pt> <tf> Specifica `from_pt=True` per convertire un checkpoint da PyTorch a TensorFlow: ```py >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained( ... "path/verso/il-nome-magnifico-che-hai-scelto", from_pt=True ... ) ``` Poi puoi salvare il tuo nuovo modello in TensorFlow con il suo nuovo checkpoint: ```py >>> tf_model.save_pretrained("path/verso/il-nome-magnifico-che-hai-scelto") ``` </tf> <jax> Se un modello รจ disponibile in Flax, puoi anche convertire un checkpoint da PyTorch a Flax: ```py >>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained( ... "path/verso/il-nome-magnifico-che-hai-scelto", from_pt=True ... ) ``` </jax> </frameworkcontent> ## Condividi un modello durante il training <frameworkcontent> <pt> <Youtube id="Z1-XMy-GNLQ"/> Condividere un modello nell'Hub รจ tanto semplice quanto aggiungere un parametro extra o un callback. Ricorda dal [tutorial sul fine-tuning](training), la classe [`TrainingArguments`] รจ dove specifichi gli iperparametri e le opzioni addizionali per l'allenamento. Una di queste opzioni di training include l'abilitร  di condividere direttamente un modello nell'Hub. Imposta `push_to_hub=True` in [`TrainingArguments`]: ```py >>> training_args = TrainingArguments(output_dir="il-mio-bellissimo-modello", push_to_hub=True) ``` Passa gli argomenti per il training come di consueto al [`Trainer`]: ```py >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... ) ``` Dopo aver effettuato il fine-tuning del tuo modello, chiama [`~transformers.Trainer.push_to_hub`] sul [`Trainer`] per condividere il modello allenato nell'Hub. 
๐Ÿค— Transformers aggiungerร  in modo automatico persino gli iperparametri, i risultati del training e le versioni del framework alla scheda del tuo modello (model card, in inglese)! ```py >>> trainer.push_to_hub() ``` </pt> <tf> Condividi un modello nell'Hub con [`PushToHubCallback`]. Nella funzione [`PushToHubCallback`], aggiungi: - Una directory di output per il tuo modello. - Un tokenizer. - L'`hub_model_id`, che รจ il tuo username sull'Hub e il nome del modello. ```py >>> from transformers import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="./il_path_dove_salvare_il_tuo_modello", ... tokenizer=tokenizer, ... hub_model_id="il-tuo-username/il-mio-bellissimo-modello", ... ) ``` Aggiungi il callback a [`fit`](https://keras.io/api/models/model_training_apis/), e ๐Ÿค— Transformers caricherร  il modello allenato nell'Hub: ```py >>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback) ``` </tf> </frameworkcontent> ## Utilizzare la funzione `push_to_hub` Puoi anche chiamare `push_to_hub` direttamente sul tuo modello per caricarlo nell'Hub. Specifica il nome del tuo modello in `push_to_hub`: ```py >>> pt_model.push_to_hub("il-mio-bellissimo-modello") ``` Questo crea un repository sotto il proprio username con il nome del modello `il-mio-bellissimo-modello`. Ora chiunque puรฒ caricare il tuo modello con la funzione `from_pretrained`: ```py >>> from transformers import AutoModel >>> model = AutoModel.from_pretrained("il-tuo-username/il-mio-bellissimo-modello") ``` Se fai parte di un'organizzazione e vuoi invece condividere un modello sotto il nome dell'organizzazione, aggiungi il parametro `organization`: ```py >>> pt_model.push_to_hub("il-mio-bellissimo-modello", organization="la-mia-fantastica-org") ``` La funzione `push_to_hub` puรฒ essere anche utilizzata per aggiungere altri file al repository del modello. Per esempio, aggiungi un tokenizer ad un repository di un modello: ```py >>> tokenizer.push_to_hub("il-mio-bellissimo-modello") ``` O magari potresti voler aggiungere la versione di TensorFlow del tuo modello PyTorch a cui hai fatto fine-tuning: ```py >>> tf_model.push_to_hub("il-mio-bellissimo-modello") ``` Ora quando navighi nel tuo profilo Hugging Face, dovresti vedere il tuo repository del modello appena creato. Premendo sulla scheda **Files** vengono visualizzati tutti i file caricati nel repository. Per maggiori dettagli su come creare e caricare file ad un repository, fai riferimento alla documentazione [qui](https://huggingface.co/docs/hub/how-to-upstream). ## Carica un modello utilizzando l'interfaccia web Chi preferisce un approccio senza codice puรฒ caricare un modello tramite l'interfaccia web dell'hub. Visita [huggingface.co/new](https://huggingface.co/new) per creare un nuovo repository: ![new_model_repo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/new_model_repo.png) Da qui, aggiungi alcune informazioni sul tuo modello: - Seleziona il/la **owner** del repository. Puoi essere te o qualunque organizzazione di cui fai parte. - Scegli un nome per il tuo modello, il quale sarร  anche il nome del repository. - Scegli se il tuo modello รจ pubblico o privato. - Specifica la licenza utilizzata per il tuo modello. Ora premi sulla scheda **Files** e premi sul pulsante **Add file** per caricare un nuovo file al tuo repository. Trascina poi un file per caricarlo e aggiungere un messaggio di commit. 
![upload_file](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/upload_file.png) ## Aggiungi una scheda del modello Per assicurarti che chiunque possa comprendere le abilitร , limitazioni, i potenziali bias e le considerazioni etiche del tuo modello, per favore aggiungi una scheda del modello (model card, in inglese) al tuo repository. La scheda del modello รจ definita nel file `README.md`. Puoi aggiungere una scheda del modello: * Creando manualmente e caricando un file `README.md`. * Premendo sul pulsante **Edit model card** nel repository del tuo modello. Dai un'occhiata alla [scheda del modello](https://huggingface.co/distilbert-base-uncased) di DistilBert per avere un buon esempio del tipo di informazioni che una scheda di un modello deve includere. Per maggiori dettagli legati ad altre opzioni che puoi controllare nel file `README.md`, come l'impatto ambientale o widget di esempio, fai riferimento alla documentazione [qui](https://huggingface.co/docs/hub/models-cards).
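Se preferisci un approccio programmatico anche per la scheda del modello, uno sketch minimo con la funzione `upload_file` della libreria `huggingface_hub` potrebbe assomigliare a quello che segue; il nome del repository e il file locale sono puramente illustrativi:

```py
# Sketch minimo: carica una scheda del modello (README.md) nel repository.
# Si assume di aver giร  effettuato il login con `huggingface-cli login`.
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="README.md",  # file locale di esempio
    path_in_repo="README.md",
    repo_id="il-tuo-username/il-mio-bellissimo-modello",  # repository di esempio
    commit_message="Aggiungi la scheda del modello",
)
```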
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/create_a_model.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Crea un'architettura personalizzata

Una [`AutoClass`](model_doc/auto) deduce automaticamente l'architettura del modello e scarica la configurazione e i pesi pre-allenati. Generalmente consigliamo di usare una `AutoClass` per produrre codice indipendente dal checkpoint. Ma gli utenti che desiderano un controllo maggiore su parametri specifici del modello possono creare un modello ๐Ÿค— Transformers personalizzato a partire da poche classi base. Questo potrebbe essere particolarmente utile per chiunque sia interessato a studiare, allenare o sperimentare con un modello ๐Ÿค— Transformers. In questa guida approfondirai la creazione di un modello personalizzato senza `AutoClass`. Impara come:

- Caricare e personalizzare una configurazione del modello.
- Creare un'architettura del modello.
- Creare un tokenizer lento e uno veloce per il testo.
- Creare un estrattore di caratteristiche per attivitร  riguardanti audio o immagini.
- Creare un processore per attivitร  multimodali.

## Configurazione

Una [configurazione](main_classes/configuration) si riferisce agli attributi specifici di un modello. Ogni configurazione del modello ha attributi diversi; per esempio, tutti i modelli NLP hanno in comune gli attributi `hidden_size`, `num_attention_heads`, `num_hidden_layers` e `vocab_size`. Questi attributi specificano, ad esempio, il numero di attention heads o di strati nascosti con cui costruire un modello.

Dai un'occhiata piรน da vicino a [DistilBERT](model_doc/distilbert) accedendo a [`DistilBertConfig`] per ispezionare i suoi attributi:

```py
>>> from transformers import DistilBertConfig

>>> config = DistilBertConfig()
>>> print(config)
DistilBertConfig {
  "activation": "gelu",
  "attention_dropout": 0.1,
  "dim": 768,
  "dropout": 0.1,
  "hidden_dim": 3072,
  "initializer_range": 0.02,
  "max_position_embeddings": 512,
  "model_type": "distilbert",
  "n_heads": 12,
  "n_layers": 6,
  "pad_token_id": 0,
  "qa_dropout": 0.1,
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "transformers_version": "4.16.2",
  "vocab_size": 30522
}
```

[`DistilBertConfig`] mostra tutti gli attributi predefiniti usati per costruire un [`DistilBertModel`] base. Tutti gli attributi sono personalizzabili, creando spazio per la sperimentazione. Per esempio, puoi configurare un modello predefinito per:

- Provare una funzione di attivazione diversa con il parametro `activation`.
- Utilizzare un tasso di dropout piรน elevato per le probabilitร  di attention con il parametro `attention_dropout`.
```py
>>> my_config = DistilBertConfig(activation="relu", attention_dropout=0.4)
>>> print(my_config)
DistilBertConfig {
  "activation": "relu",
  "attention_dropout": 0.4,
  "dim": 768,
  "dropout": 0.1,
  "hidden_dim": 3072,
  "initializer_range": 0.02,
  "max_position_embeddings": 512,
  "model_type": "distilbert",
  "n_heads": 12,
  "n_layers": 6,
  "pad_token_id": 0,
  "qa_dropout": 0.1,
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "transformers_version": "4.16.2",
  "vocab_size": 30522
}
```

Nella funzione [`~PretrainedConfig.from_pretrained`] possono essere modificati gli attributi del modello pre-allenato:

```py
>>> my_config = DistilBertConfig.from_pretrained("distilbert-base-uncased", activation="relu", attention_dropout=0.4)
```

Quando la configurazione del modello ti soddisfa, puoi salvarla con [`~PretrainedConfig.save_pretrained`]. Il file della tua configurazione viene memorizzato come file JSON nella directory di salvataggio specificata:

```py
>>> my_config.save_pretrained(save_directory="./your_model_save_path")
```

Per riutilizzare il file di configurazione, caricalo con [`~PretrainedConfig.from_pretrained`]:

```py
>>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json")
```

<Tip>

Puoi anche salvare il file di configurazione come dizionario, oppure come la differenza tra gli attributi della tua configurazione personalizzata e gli attributi della configurazione predefinita! Guarda la documentazione [configuration](main_classes/configuration) per piรน dettagli.

</Tip>

## Modello

Il prossimo passo รจ creare un [modello](main_classes/models). Il modello - chiamato anche, in modo meno preciso, architettura - definisce che cosa fa ogni strato e quali operazioni avvengono. Attributi come `num_hidden_layers`, provenienti dalla configurazione, sono usati per definire l'architettura. Ogni modello condivide la classe base [`PreTrainedModel`] e alcuni metodi comuni, come il ridimensionamento degli input embeddings e la soppressione delle self-attention heads. Inoltre, tutti i modelli sono sottoclassi di [`torch.nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html), [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) o [`flax.linen.Module`](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html). Ciรฒ significa che i modelli sono compatibili con l'utilizzo del rispettivo framework.

<frameworkcontent>
<pt>
Carica gli attributi della tua configurazione personalizzata nel modello:

```py
>>> from transformers import DistilBertModel

>>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json")
>>> model = DistilBertModel(my_config)
```

Questo crea un modello con valori casuali invece che con pesi pre-allenati. Non sarai in grado di usare questo modello per niente di utile finchรฉ non lo alleni. L'allenamento รจ un processo costoso e che richiede tempo. Generalmente รจ meglio usare un modello pre-allenato per ottenere risultati migliori piรน velocemente, utilizzando solo una frazione delle risorse necessarie per l'allenamento.

Crea un modello pre-allenato con [`~PreTrainedModel.from_pretrained`]:

```py
>>> model = DistilBertModel.from_pretrained("distilbert-base-uncased")
```

Quando carichi pesi pre-allenati, la configurazione predefinita del modello viene caricata automaticamente se il modello รจ fornito da ๐Ÿค— Transformers.
Tuttavia, puoi ancora sostituire gli attributi - alcuni o tutti - della configurazione predefinita del modello con i tuoi, se lo desideri:

```py
>>> model = DistilBertModel.from_pretrained("distilbert-base-uncased", config=my_config)
```

</pt>
<tf>
Carica gli attributi di configurazione personalizzati nel modello:

```py
>>> from transformers import TFDistilBertModel

>>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json")
>>> tf_model = TFDistilBertModel(my_config)
```

Questo crea un modello con valori casuali invece che con pesi pre-allenati. Non sarai in grado di usare questo modello per niente di utile finchรฉ non lo alleni. L'allenamento รจ un processo costoso e che richiede tempo. Generalmente รจ meglio usare un modello pre-allenato per ottenere risultati migliori piรน velocemente, utilizzando solo una frazione delle risorse necessarie per l'allenamento.

Crea un modello pre-allenato con [`~TFPreTrainedModel.from_pretrained`]:

```py
>>> tf_model = TFDistilBertModel.from_pretrained("distilbert-base-uncased")
```

Quando carichi pesi pre-allenati, la configurazione predefinita del modello viene caricata automaticamente se il modello รจ fornito da ๐Ÿค— Transformers. Tuttavia, puoi ancora sostituire gli attributi - alcuni o tutti - della configurazione predefinita del modello con i tuoi, se lo desideri:

```py
>>> tf_model = TFDistilBertModel.from_pretrained("distilbert-base-uncased", config=my_config)
```

</tf>
</frameworkcontent>

### Model head

A questo punto, hai un modello DistilBERT base i cui output sono gli *hidden states* (in italiano, stati nascosti). Gli stati nascosti sono passati come input a un model head per produrre l'output finale. ๐Ÿค— Transformers fornisce un model head diverso per ogni attivitร , fintanto che il modello supporta quell'attivitร  (ad esempio, non puoi usare DistilBERT per un'attivitร  sequence-to-sequence come la traduzione).

<frameworkcontent>
<pt>
Per esempio, [`DistilBertForSequenceClassification`] รจ un modello DistilBERT base con una sequence classification head. La sequence classification head รจ uno strato lineare sopra gli output raggruppati.

```py
>>> from transformers import DistilBertForSequenceClassification

>>> model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
```

Riutilizza facilmente questo checkpoint per un'altra attivitร  passando ad un model head differente. Per un'attivitร  di risposta alle domande, utilizzerai il model head [`DistilBertForQuestionAnswering`]. La question answering head รจ simile alla sequence classification head, tranne per il fatto che รจ uno strato lineare sopra l'output degli stati nascosti (hidden states in inglese).

```py
>>> from transformers import DistilBertForQuestionAnswering

>>> model = DistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased")
```

</pt>
<tf>
Per esempio, [`TFDistilBertForSequenceClassification`] รจ un modello DistilBERT base con una sequence classification head. La sequence classification head รจ uno strato lineare sopra gli output raggruppati.

```py
>>> from transformers import TFDistilBertForSequenceClassification

>>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
```

Riutilizza facilmente questo checkpoint per un'altra attivitร  passando ad un model head diverso. Per un'attivitร  di risposta alle domande, utilizzerai il model head [`TFDistilBertForQuestionAnswering`].
La question answering head รจ simile alla sequence classification head, tranne per il fatto che รจ uno strato lineare sopra l'output degli stati nascosti (hidden states in inglese).

```py
>>> from transformers import TFDistilBertForQuestionAnswering

>>> tf_model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased")
```

</tf>
</frameworkcontent>

## Tokenizer

L'ultima classe base di cui hai bisogno prima di utilizzare un modello per i dati testuali รจ un [tokenizer](main_classes/tokenizer), per convertire il testo grezzo in tensori. Ci sono due tipi di tokenizer che puoi usare con ๐Ÿค— Transformers:

- [`PreTrainedTokenizer`]: un'implementazione Python di un tokenizer.
- [`PreTrainedTokenizerFast`]: un tokenizer dalla nostra libreria [๐Ÿค— Tokenizers](https://huggingface.co/docs/tokenizers/python/latest/) basata su Rust. Questo tipo di tokenizer รจ significativamente piรน veloce, specialmente durante la batch tokenization, grazie alla sua implementazione in Rust. Il tokenizer veloce offre anche metodi aggiuntivi come l'*offset mapping*, che associa i token alle loro parole o ai loro caratteri originali.

Entrambi i tokenizer supportano metodi comuni come la codifica e la decodifica, l'aggiunta di nuovi token e la gestione di token speciali.

<Tip warning={true}>

Non tutti i modelli supportano un tokenizer veloce. Dai un'occhiata a questa [tabella](index#supported-frameworks) per verificare se un modello ha il supporto per il tokenizer veloce.

</Tip>

Se hai addestrato il tuo tokenizer, puoi crearne uno dal tuo file di *vocabolario*:

```py
>>> from transformers import DistilBertTokenizer

>>> my_tokenizer = DistilBertTokenizer(vocab_file="my_vocab_file.txt", do_lower_case=False, padding_side="left")
```

รˆ importante ricordare che il vocabolario di un tokenizer personalizzato sarร  diverso dal vocabolario generato dal tokenizer di un modello preallenato. รˆ necessario utilizzare il vocabolario di un modello preallenato se si utilizza un modello preallenato, altrimenti gli input non avranno senso. Crea un tokenizer con il vocabolario di un modello preallenato con la classe [`DistilBertTokenizer`]:

```py
>>> from transformers import DistilBertTokenizer

>>> slow_tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
```

Crea un tokenizer veloce con la classe [`DistilBertTokenizerFast`]:

```py
>>> from transformers import DistilBertTokenizerFast

>>> fast_tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
```

<Tip>

Per impostazione predefinita, [`AutoTokenizer`] proverร  a caricare un tokenizer veloce. Puoi disabilitare questo comportamento impostando `use_fast=False` in `from_pretrained`.

</Tip>

## Estrattore di caratteristiche

Un estrattore di caratteristiche (feature extractor in inglese) elabora input audio o immagini. Eredita dalla classe base [`~feature_extraction_utils.FeatureExtractionMixin`] e puรฒ anche ereditare dalla classe [`ImageFeatureExtractionMixin`] per l'elaborazione delle caratteristiche delle immagini, o dalla classe [`SequenceFeatureExtractor`] per l'elaborazione degli input audio.

A seconda che tu stia lavorando a un'attivitร  audio o visiva, crea un estrattore di caratteristiche associato al modello che stai utilizzando.
Ad esempio, crea un [`ViTFeatureExtractor`] predefinito se stai usando [ViT](model_doc/vit) per la classificazione delle immagini:

```py
>>> from transformers import ViTFeatureExtractor

>>> vit_extractor = ViTFeatureExtractor()
>>> print(vit_extractor)
ViTFeatureExtractor {
  "do_normalize": true,
  "do_resize": true,
  "feature_extractor_type": "ViTFeatureExtractor",
  "image_mean": [
    0.5,
    0.5,
    0.5
  ],
  "image_std": [
    0.5,
    0.5,
    0.5
  ],
  "resample": 2,
  "size": 224
}
```

<Tip>

Se non stai cercando alcuna personalizzazione, usa semplicemente il metodo `from_pretrained` per caricare i parametri predefiniti dell'estrattore di caratteristiche di un modello.

</Tip>

Modifica uno qualsiasi dei parametri di [`ViTFeatureExtractor`] per creare il tuo estrattore di caratteristiche personalizzato:

```py
>>> from transformers import ViTFeatureExtractor

>>> my_vit_extractor = ViTFeatureExtractor(resample="PIL.Image.BOX", do_normalize=False, image_mean=[0.3, 0.3, 0.3])
>>> print(my_vit_extractor)
ViTFeatureExtractor {
  "do_normalize": false,
  "do_resize": true,
  "feature_extractor_type": "ViTFeatureExtractor",
  "image_mean": [
    0.3,
    0.3,
    0.3
  ],
  "image_std": [
    0.5,
    0.5,
    0.5
  ],
  "resample": "PIL.Image.BOX",
  "size": 224
}
```

Per gli input audio, puoi creare un [`Wav2Vec2FeatureExtractor`] e personalizzare i parametri in modo simile:

```py
>>> from transformers import Wav2Vec2FeatureExtractor

>>> w2v2_extractor = Wav2Vec2FeatureExtractor()
>>> print(w2v2_extractor)
Wav2Vec2FeatureExtractor {
  "do_normalize": true,
  "feature_extractor_type": "Wav2Vec2FeatureExtractor",
  "feature_size": 1,
  "padding_side": "right",
  "padding_value": 0.0,
  "return_attention_mask": false,
  "sampling_rate": 16000
}
```

## Processore

Per i modelli che supportano attivitร  multimodali, ๐Ÿค— Transformers offre una classe processore che racchiude comodamente un estrattore di caratteristiche e un tokenizer in un unico oggetto. Ad esempio, utilizziamo [`Wav2Vec2Processor`] per un'attivitร  di riconoscimento vocale automatico (ASR). L'ASR trascrive l'audio in testo, quindi avrai bisogno di un estrattore di caratteristiche e di un tokenizer.

Crea un estrattore di caratteristiche per gestire gli input audio:

```py
>>> from transformers import Wav2Vec2FeatureExtractor

>>> feature_extractor = Wav2Vec2FeatureExtractor(padding_value=1.0, do_normalize=True)
```

Crea un tokenizer per gestire gli input di testo:

```py
>>> from transformers import Wav2Vec2CTCTokenizer

>>> tokenizer = Wav2Vec2CTCTokenizer(vocab_file="my_vocab_file.txt")
```

Combina l'estrattore di caratteristiche e il tokenizer in [`Wav2Vec2Processor`]:

```py
>>> from transformers import Wav2Vec2Processor

>>> processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
```

Con due classi base - configurazione e modello - e una classe di preelaborazione aggiuntiva (tokenizer, estrattore di caratteristiche o processore), puoi creare qualsiasi modello supportato da ๐Ÿค— Transformers. Ognuna di queste classi base รจ configurabile, consentendoti di utilizzare gli attributi specifici che desideri. Puoi facilmente impostare un modello per l'addestramento o modificare un modello preallenato esistente per il fine-tuning.
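Per fissare le idee, ecco uno sketch minimo che mette insieme i pezzi appena visti: una configurazione personalizzata, il modello costruito a partire da quella configurazione e un tokenizer pre-allenato. I valori scelti per la configurazione sono puramente esemplificativi.

```py
# Sketch minimo: configurazione personalizzata + modello + tokenizer insieme.
from transformers import DistilBertConfig, DistilBertModel, DistilBertTokenizerFast

# Configurazione con meno strati rispetto al default (valori di esempio)
my_config = DistilBertConfig(n_layers=4, activation="relu")

# Modello inizializzato casualmente a partire dalla configurazione
model = DistilBertModel(my_config)

# Tokenizer pre-allenato: il vocabolario deve combaciare con il checkpoint scelto
tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("Ciao mondo!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, lunghezza_sequenza, dim)
```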
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/installation.md
<!--- Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Installazione

Installa ๐Ÿค— Transformers per qualsiasi libreria di deep learning con cui stai lavorando, imposta la tua cache e, opzionalmente, configura ๐Ÿค— Transformers per l'esecuzione offline.

๐Ÿค— Transformers รจ testato su Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+ e Flax. Segui le istruzioni di installazione seguenti per la libreria di deep learning che stai utilizzando:

* [PyTorch](https://pytorch.org/get-started/locally/) istruzioni di installazione.
* [TensorFlow 2.0](https://www.tensorflow.org/install/pip) istruzioni di installazione.
* [Flax](https://flax.readthedocs.io/en/latest/) istruzioni di installazione.

## Installazione con pip

Puoi installare ๐Ÿค— Transformers in un [ambiente virtuale](https://docs.python.org/3/library/venv.html). Se non hai familiaritร  con gli ambienti virtuali in Python, dai un'occhiata a questa [guida](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). Un ambiente virtuale rende piรน semplice la gestione di progetti differenti, evitando problemi di compatibilitร  tra dipendenze.

Inizia creando un ambiente virtuale nella directory del tuo progetto:

```bash
python -m venv .env
```

Attiva l'ambiente virtuale:

```bash
source .env/bin/activate
```

Ora puoi procedere con l'installazione di ๐Ÿค— Transformers eseguendo il comando seguente:

```bash
pip install transformers
```

Per il solo supporto della CPU, puoi installare ๐Ÿค— Transformers e una libreria di deep learning in un'unica riga. Ad esempio, installiamo ๐Ÿค— Transformers e PyTorch con:

```bash
pip install transformers[torch]
```

๐Ÿค— Transformers e TensorFlow 2.0:

```bash
pip install transformers[tf-cpu]
```

๐Ÿค— Transformers e Flax:

```bash
pip install transformers[flax]
```

Infine, verifica se ๐Ÿค— Transformers รจ stato installato in modo appropriato eseguendo il seguente comando. Questo scaricherร  un modello pre-allenato:

```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
```

Dopodichรฉ stampa l'etichetta e il punteggio:

```bash
[{'label': 'POSITIVE', 'score': 0.9998704791069031}]
```

## Installazione dalla fonte

Installa ๐Ÿค— Transformers dalla fonte con il seguente comando:

```bash
pip install git+https://github.com/huggingface/transformers
```

Questo comando installa la versione `main` piรน attuale invece dell'ultima versione stabile. Questo รจ utile per stare al passo con gli ultimi sviluppi, ad esempio se un bug รจ stato sistemato da quando รจ uscita l'ultima versione ufficiale ma non รจ stata ancora rilasciata una nuova versione. Tuttavia, questo significa che la versione `main` puรฒ non essere sempre stabile. Ci sforziamo di mantenere la versione `main` operativa, e la maggior parte dei problemi viene risolta in poche ore o in un giorno.
Se riscontri un problema, per favore apri una [Issue](https://github.com/huggingface/transformers/issues) cosรฌ possiamo sistemarlo ancora piรน velocemente! Controlla se ๐Ÿค— Transformers รจ stata installata in modo appropriato con il seguente comando: ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))" ``` ## Installazione modificabile Hai bisogno di un'installazione modificabile se vuoi: * Usare la versione `main` del codice dalla fonte. * Contribuire a ๐Ÿค— Transformers e hai bisogno di testare i cambiamenti nel codice. Clona il repository e installa ๐Ÿค— Transformers con i seguenti comandi: ```bash git clone https://github.com/huggingface/transformers.git cd transformers pip install -e . ``` Questi comandi collegheranno la cartella in cui รจ stato clonato il repository e i path delle librerie Python. Python guarderร  ora all'interno della cartella clonata, oltre ai normali path delle librerie. Per esempio, se i tuoi pacchetti Python sono installati tipicamente in `~/anaconda3/envs/main/lib/python3.7/site-packages/`, Python cercherร  anche nella cartella clonata: `~/transformers/`. <Tip warning={true}> Devi tenere la cartella `transformers` se vuoi continuare ad utilizzare la libreria. </Tip> Ora puoi facilmente aggiornare il tuo clone all'ultima versione di ๐Ÿค— Transformers con il seguente comando: ```bash cd ~/transformers/ git pull ``` Il tuo ambiente Python troverร  la versione `main` di ๐Ÿค— Transformers alla prossima esecuzione. ## Installazione con conda Installazione dal canale conda `huggingface`: ```bash conda install -c huggingface transformers ``` ## Impostazione della cache I modelli pre-allenati sono scaricati e memorizzati localmente nella cache in: `~/.cache/huggingface/transformers/`. Questa รจ la directory di default data dalla variabile d'ambiente della shell `TRANSFORMERS_CACHE`. Su Windows, la directory di default รจ data da `C:\Users\username\.cache\huggingface\transformers`. Puoi cambiare le variabili d'ambiente della shell indicate in seguito, in ordine di prioritร , per specificare una directory differente per la cache: 1. Variabile d'ambiente della shell (default): `TRANSFORMERS_CACHE`. 2. Variabile d'ambiente della shell: `HF_HOME` + `transformers/`. 3. Variabile d'ambiente della shell: `XDG_CACHE_HOME` + `/huggingface/transformers`. <Tip> ๐Ÿค— Transformers utilizzerร  le variabili d'ambiente della shell `PYTORCH_TRANSFORMERS_CACHE` o `PYTORCH_PRETRAINED_BERT_CACHE` se si proviene da un'iterazione precedente di questa libreria e sono state impostate queste variabili d'ambiente, a meno che non si specifichi la variabile d'ambiente della shell `TRANSFORMERS_CACHE`. </Tip> ## Modalitร  Offline ๐Ÿค— Transformers puรฒ essere eseguita in un ambiente firewalled o offline utilizzando solo file locali. Imposta la variabile d'ambiente `TRANSFORMERS_OFFLINE=1` per abilitare questo comportamento. <Tip> Aggiungi [๐Ÿค— Datasets](https://huggingface.co/docs/datasets/) al tuo flusso di lavoro offline di training impostando la variabile d'ambiente `HF_DATASETS_OFFLINE=1`. </Tip> Ad esempio, in genere si esegue un programma su una rete normale, protetta da firewall per le istanze esterne, con il seguente comando: ```bash python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ... 
```

Esegui lo stesso programma in un'istanza offline con:

```bash
HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \
python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```

Lo script viene ora eseguito senza bloccarsi nรฉ attendere il timeout, perchรฉ sa di dover cercare solo file locali.

### Ottenere modelli e tokenizer per l'uso offline

Un'altra opzione per utilizzare ๐Ÿค— Transformers offline รจ scaricare i file in anticipo, e poi puntare al loro path locale quando hai la necessitร  di utilizzarli offline. Ci sono tre modi per farlo:

* Scarica un file tramite l'interfaccia utente sul [Model Hub](https://huggingface.co/models) premendo sull'icona โ†“.

    ![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/download-icon.png)

* Utilizza il flusso [`PreTrainedModel.from_pretrained`] e [`PreTrainedModel.save_pretrained`]:

    1. Scarica i tuoi file in anticipo con [`PreTrainedModel.from_pretrained`]:

    ```py
    >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
    >>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")
    ```

    2. Salva i tuoi file in una directory specificata con [`PreTrainedModel.save_pretrained`]:

    ```py
    >>> tokenizer.save_pretrained("./il/tuo/path/bigscience_t0")
    >>> model.save_pretrained("./il/tuo/path/bigscience_t0")
    ```

    3. Ora, quando sei offline, carica i tuoi file con [`PreTrainedModel.from_pretrained`] dalla directory specificata:

    ```py
    >>> tokenizer = AutoTokenizer.from_pretrained("./il/tuo/path/bigscience_t0")
    >>> model = AutoModelForSeq2SeqLM.from_pretrained("./il/tuo/path/bigscience_t0")
    ```

* Scarica in maniera programmatica i file con la libreria [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub):

    1. Installa la libreria `huggingface_hub` nel tuo ambiente virtuale:

    ```bash
    python -m pip install huggingface_hub
    ```

    2. Utilizza la funzione [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub) per scaricare un file in un path specifico. Per esempio, il seguente comando scarica il file `config.json` dal modello [T0](https://huggingface.co/bigscience/T0_3B) nel path che desideri:

    ```py
    >>> from huggingface_hub import hf_hub_download

    >>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./il/tuo/path/bigscience_t0")
    ```

Una volta che il tuo file รจ scaricato e salvato in cache localmente, specifica il suo path locale per caricarlo e utilizzarlo:

```py
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("./il/tuo/path/bigscience_t0/config.json")
```

<Tip>

Fai riferimento alla sezione [How to download files from the Hub](https://huggingface.co/docs/hub/how-to-downstream) per avere maggiori dettagli su come scaricare modelli presenti sull'Hub.

</Tip>
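A titolo di esempio, le variabili d'ambiente per la modalitร  offline possono anche essere impostate direttamente da Python, prima di importare ๐Ÿค— Transformers. Questo sketch minimo riutilizza il path locale degli esempi precedenti:

```py
# Sketch minimo: modalitร  offline impostata da Python invece che dalla shell.
import os

# Le variabili vanno impostate prima di importare transformers
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_DATASETS_OFFLINE"] = "1"

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Path locale salvato in precedenza con save_pretrained
tokenizer = AutoTokenizer.from_pretrained("./il/tuo/path/bigscience_t0")
model = AutoModelForSeq2SeqLM.from_pretrained("./il/tuo/path/bigscience_t0")
```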
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/big_models.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Istanziare un big model

Quando vuoi utilizzare un modello preaddestrato (pretrained) molto grande, una sfida รจ minimizzare l'uso della RAM. Il workflow classico in PyTorch รจ:

1. Crea il tuo modello con pesi casuali (random weights).
2. Carica i tuoi pesi preaddestrati.
3. Inserisci i pesi preaddestrati nel tuo modello casuale.

I passi 1 e 2 richiedono entrambi una versione completa del modello in memoria; in molti casi non รจ un problema, ma se il modello inizia a pesare diversi GigaByte, queste due copie possono saturare la RAM. Ancora peggio, se stai usando `torch.distributed` per eseguire l'addestramento (training) distribuito, ogni processo caricherร  il modello preaddestrato e memorizzerร  queste due copie nella RAM.

<Tip>

Nota che il modello creato casualmente รจ inizializzato con tensori "vuoti", che occupano spazio in memoria senza riempirlo (quindi i valori casuali sono quelli che si trovavano in quella porzione di memoria in quel momento). L'inizializzazione casuale che segue la distribuzione appropriata per il tipo di modello/parametri istanziato (come, ad esempio, la distribuzione normale) viene eseguita solo dopo il passaggio 3, sui pesi non inizializzati, per essere piรน rapida possibile!

</Tip>

In questa guida, esploreremo le soluzioni che Transformers offre per affrontare questo problema. Tieni conto che questa รจ un'area in attivo sviluppo, quindi le API spiegate qui possono cambiare velocemente in futuro.

## Checkpoint condivisi

Dalla versione 4.18.0, i checkpoint dei modelli che occupano piรน di 10GB di spazio vengono automaticamente frammentati in piรน parti. Quindi, invece di avere un unico checkpoint quando si utilizza `model.save_pretrained(save_dir)`, si avranno diversi checkpoint parziali (ognuno di dimensione < 10GB) e un indice che mappa i nomi dei parametri ai file in cui sono memorizzati.

Puoi controllare la dimensione massima dopo la frammentazione con il parametro `max_shard_size`. Nel prossimo esempio, useremo un modello di dimensioni normali con frammenti di piccole dimensioni: prendiamo un modello BERT classico.

```py
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-cased")
```

Se lo salvi usando [`~PreTrainedModel.save_pretrained`], avrai una nuova cartella con due file: la configurazione del modello e i suoi pesi:

```py
>>> import os
>>> import tempfile

>>> with tempfile.TemporaryDirectory() as tmp_dir:
...     model.save_pretrained(tmp_dir)
...     print(sorted(os.listdir(tmp_dir)))
['config.json', 'pytorch_model.bin']
```

Adesso usiamo una dimensione massima di frammentazione di 200MB:

```py
>>> with tempfile.TemporaryDirectory() as tmp_dir:
...     model.save_pretrained(tmp_dir, max_shard_size="200MB")
print(sorted(os.listdir(tmp_dir))) ['config.json', 'pytorch_model-00001-of-00003.bin', 'pytorch_model-00002-of-00003.bin', 'pytorch_model-00003-of-00003.bin', 'pytorch_model.bin.index.json'] ``` In aggiunta alla configurazione del modello, vediamo tre differenti file dei pesi, e un file `index.json` che รจ il nostro indice. Un checkpoint puรฒ essere ricaricato totalmente usando il metodo [`~PreTrainedModel.from_pretrained`]: ```py >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... new_model = AutoModel.from_pretrained(tmp_dir) ``` Il vantaggio principale di applicare questo metodo per modelli grandi รจ che durante il passo 2 del workflow illustrato in precedenza, ogni frammento del checkpoint viene caricato dopo il precedente, limitando l'utilizzo della RAM alla dimensione del modello piรน la dimensione del frammento piรน grande. Dietro le quinte, il file indice รจ utilizzato per determinare quali chiavi sono nel checkpoint, e dove i corrispondenti pesi sono memorizzati. Possiamo caricare l'indice come un qualsiasi json e ottenere un dizionario: ```py >>> import json >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... with open(os.path.join(tmp_dir, "pytorch_model.bin.index.json"), "r") as f: ... index = json.load(f) >>> print(index.keys()) dict_keys(['metadata', 'weight_map']) ``` I metadati consistono solo nella dimensione totale del modello per ora. Abbiamo in programma di aggiungere altre informazioni in futuro: ```py >>> index["metadata"] {'total_size': 433245184} ``` La mappa dei pesi รจ la parte principale di questo indice, che mappa ogni nome dei parametri (si trova solitamente nei modelli PyTorch come `state_dict`) al file in cui รจ memorizzato: ```py >>> index["weight_map"] {'embeddings.LayerNorm.bias': 'pytorch_model-00001-of-00003.bin', 'embeddings.LayerNorm.weight': 'pytorch_model-00001-of-00003.bin', ... ``` Se vuoi caricare direttamente un checkpoint frammentato in un modello senza usare [`~PreTrainedModel.from_pretrained`] (come si farebbe con `model.load_state_dict()` per un checkpoint completo) devi usare [`~modeling_utils.load_sharded_checkpoint`]: ```py >>> from transformers.modeling_utils import load_sharded_checkpoint >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... load_sharded_checkpoint(model, tmp_dir) ``` ## Caricamento low memory Frammentare i checkpoint l'utilizzo di memoria al passo 2 del workflow citato in precedenza, ma per utilizzare questo modello in un ambiente con poca memoria, consigliamo di utilizzare i nostri strumenti basati sulla libreria Accelerate. Per ulteriori informazioni, leggere la seguente guida: [Large model loading using Accelerate](./main_classes/model#large-model-loading)
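If you want a quick taste of what such low-memory loading looks like in practice, here is a minimal sketch using the `low_cpu_mem_usage` flag of [`~PreTrainedModel.from_pretrained`], which relies on Accelerate under the hood. Treat the exact behavior described in the comment as an assumption and refer to the guide linked above for details:

```py
from transformers import AutoModel

# Sketch: with low_cpu_mem_usage=True the model is first created without
# materializing random weights, then the (possibly sharded) checkpoint is
# loaded piece by piece, keeping peak RAM close to one copy of the model.
model = AutoModel.from_pretrained("bert-base-cased", low_cpu_mem_usage=True)
```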
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/pipeline_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not
be rendered properly in your Markdown viewer.

-->

# Pipelines for inference

The [`pipeline`] makes it simple to use any model from the [Model Hub](https://huggingface.co/models) for inference on a variety of tasks such as text generation, image segmentation and audio classification. Even if you don't have experience with a specific modality or don't understand the code powering the models, you can still use them with the [`pipeline`]!

This tutorial will teach you to:

* Use a [`pipeline`] for inference.
* Use a specific tokenizer or model.
* Use a [`pipeline`] for audio and vision tasks.

<Tip>

Take a look at the [`pipeline`] documentation for a complete list of supported tasks.

</Tip>

## Pipeline usage

While each task has an associated [`pipeline`], it is simpler to use the general [`pipeline`] abstraction, which contains all the task-specific pipelines. The [`pipeline`] automatically loads a default model and a tokenizer capable of inference for your task.

1. Start by creating a [`pipeline`] and specify the inference task:

```py
>>> from transformers import pipeline

>>> generator = pipeline(task="text-generation")
```

2. Pass your input text to the [`pipeline`]:

```py
>>> generator(
...     "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone"
... )  # doctest: +SKIP
[{'generated_text': 'Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone, Seven for the Iron-priests at the door to the east, and thirteen for the Lord Kings at the end of the mountain'}]
```

If you have more than one input, pass them as a list:

```py
>>> generator(
...     [
...         "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone",
...         "Nine for Mortal Men, doomed to die, One for the Dark Lord on his dark throne",
...     ]
... )  # doctest: +SKIP
```

Any additional parameters for your task can also be included in the [`pipeline`]. The `text-generation` task has a [`~generation.GenerationMixin.generate`] method with several parameters for controlling the output. For example, if you want to generate more than one output, set the `num_return_sequences` parameter:

```py
>>> generator(
...     "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone",
...     num_return_sequences=2,
... )  # doctest: +SKIP
```

### Choose a model and tokenizer

The [`pipeline`] accepts any model from the [Model Hub](https://huggingface.co/models). There are tags on the Model Hub that allow you to filter models by task.

Once you've picked an appropriate model, load it with the corresponding `AutoModelFor` class and [`AutoTokenizer`]. For example, load the [`AutoModelForCausalLM`] class for a causal language modeling task:

```py
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
>>> model = AutoModelForCausalLM.from_pretrained("distilgpt2")
```

Create a [`pipeline`] for your task, specifying the model and tokenizer you've loaded:

```py
>>> from transformers import pipeline

>>> generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer)
```

Pass your input text to the [`pipeline`] to generate some text:

```py
>>> generator(
...     "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone"
... )  # doctest: +SKIP
[{'generated_text': 'Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone, Seven for the Dragon-lords (for them to rule in a world ruled by their rulers, and all who live within the realm'}]
```

## Audio pipeline

The flexibility of the [`pipeline`] means it can also be extended to audio tasks. For example, let's classify the emotion in this audio clip:

```py
>>> from datasets import load_dataset
>>> import torch

>>> torch.manual_seed(42)  # doctest: +IGNORE_RESULT
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> audio_file = ds[0]["audio"]["path"]
```

Find an [audio classification](https://huggingface.co/models?pipeline_tag=audio-classification) model on the Model Hub for emotion recognition and load it in the [`pipeline`]:

```py
>>> from transformers import pipeline

>>> audio_classifier = pipeline(
...     task="audio-classification", model="ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"
... )
```

Pass the audio file to the [`pipeline`]:

```py
>>> preds = audio_classifier(audio_file)
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.1315, 'label': 'calm'}, {'score': 0.1307, 'label': 'neutral'}, {'score': 0.1274, 'label': 'sad'}, {'score': 0.1261, 'label': 'fearful'}, {'score': 0.1242, 'label': 'happy'}]
```

## Vision pipeline

Finally, using a [`pipeline`] for vision tasks is practically identical. Specify your task and pass your image to the classifier. The image can be a link or a local path on your machine. For example, what species of cat is shown below?

![pipeline-cat-chonk](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg)

```py
>>> from transformers import pipeline

>>> vision_classifier = pipeline(task="image-classification")
>>> preds = vision_classifier(
...     images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.4335, 'label': 'lynx, catamount'}, {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}, {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}, {'score': 0.0239, 'label': 'Egyptian cat'}, {'score': 0.0229, 'label': 'tiger cat'}]
```
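One knob the examples above leave implicit is where the pipeline runs. As a small hedged addition (the `device` argument is part of the [`pipeline`] API, but the right index depends on your machine), here is a sketch of placing the same classifier on a GPU:

```py
from transformers import pipeline

# device=-1 (the default) runs on CPU; device=0 selects the first CUDA GPU.
# Adjust the index to match your hardware, or drop the argument to stay on CPU.
vision_classifier = pipeline(task="image-classification", device=0)
```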
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/perf_train_cpu_many.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not
be rendered properly in your Markdown viewer.

-->

# Efficient training on multiple CPUs

When training on a single CPU is too slow, we can use multiple CPUs. This guide focuses on PyTorch-based DDP, which enables distributed CPU training efficiently.

## Intel® oneCCL Bindings for PyTorch

[Intel® oneCCL](https://github.com/oneapi-src/oneCCL) (collective communications library) is a library for efficient distributed deep learning training, implementing collectives such as allreduce, allgather and alltoall. For more information on oneCCL, please refer to the [oneCCL documentation](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html) and the [oneCCL specification](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html).

The module `oneccl_bindings_for_pytorch` (`torch_ccl` before version 1.12) implements the PyTorch C10D ProcessGroup API and can be dynamically loaded as an external ProcessGroup; it currently only works on the Linux platform.

Check more detailed information for [oneccl_bind_pt](https://github.com/intel/torch-ccl).

### Intel® oneCCL Bindings for PyTorch installation:

Wheel files are available for the following Python versions:

| Extension Version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 |
| :---------------: | :--------: | :--------: | :--------: | :--------: | :---------: |
| 1.13.0            |            | √          | √          | √          | √           |
| 1.12.100          |            | √          | √          | √          | √           |
| 1.12.0            |            | √          | √          | √          | √           |
| 1.11.0            |            | √          | √          | √          | √           |
| 1.10.0            | √          | √          | √          | √          |             |

```bash
pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu
```

where `{pytorch_version}` should be your PyTorch version, for instance 1.13.0.

Check more approaches for [oneccl_bind_pt installation](https://github.com/intel/torch-ccl). The versions of oneCCL and PyTorch must match.

<Tip warning={true}>

The oneccl_bindings_for_pytorch 1.12.0 prebuilt wheel does not work with PyTorch 1.12.1 (it is for PyTorch 1.12.0).
PyTorch 1.12.1 should work with oneccl_bindings_for_pytorch 1.12.100.

</Tip>

## Intel® MPI library

Use this standards-based MPI implementation to deliver flexible, efficient and scalable cluster messaging on Intel® architecture. This component is part of the Intel® oneAPI HPC Toolkit.

oneccl_bindings_for_pytorch is installed along with the MPI tool set. You need to source the environment before using it.

For Intel® oneCCL >= 1.12.0:

```bash
oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
source $oneccl_bindings_for_pytorch_path/env/setvars.sh
```

For Intel® oneCCL with version < 1.12.0:

```bash
torch_ccl_path=$(python -c "import torch; import torch_ccl; import os;  print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))")
source $torch_ccl_path/env/setvars.sh
```

#### IPEX installation:

IPEX provides performance optimizations for CPU training with both Float32 and BFloat16; you can refer to the [single CPU section](./perf_train_cpu).

The following "Usage in Trainer" takes mpirun in the Intel® MPI library as an example.

## Usage in Trainer

To enable multi-CPU distributed training in the Trainer with the ccl backend, users should add **`--ddp_backend ccl`** to the command arguments.

Let's see an example based on the [question-answering example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering).

The following command enables training with two processes on one Xeon node, with one process running per socket. The variables OMP_NUM_THREADS/CCL_WORKER_COUNT can be tuned for optimal performance.

```shell script
export CCL_WORKER_COUNT=1
export MASTER_ADDR=127.0.0.1
mpirun -n 2 -genv OMP_NUM_THREADS=23 \
 python3 run_qa.py \
 --model_name_or_path bert-large-uncased \
 --dataset_name squad \
 --do_train \
 --do_eval \
 --per_device_train_batch_size 12 \
 --learning_rate 3e-5 \
 --num_train_epochs 2 \
 --max_seq_length 384 \
 --doc_stride 128 \
 --output_dir /tmp/debug_squad/ \
 --no_cuda \
 --ddp_backend ccl \
 --use_ipex
```

The following command enables training with a total of four processes on two Xeons (node0 and node1, taking node0 as the main process), ppn (processes per node) is set to 2, with one process running per socket. The variables OMP_NUM_THREADS/CCL_WORKER_COUNT can be tuned for optimal performance.

On node0, you need to create a configuration file which contains the IP addresses of each node (for example hostfile) and pass that configuration file path as an argument.

```shell script
cat hostfile
xxx.xxx.xxx.xxx #node0 ip
xxx.xxx.xxx.xxx #node1 ip
```

Now, run the following command on node0 and **4DDP** will be enabled on node0 and node1 with BF16 auto mixed precision:

```shell script
export CCL_WORKER_COUNT=1
export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip
mpirun -f hostfile -n 4 -ppn 2 \
 -genv OMP_NUM_THREADS=23 \
 python3 run_qa.py \
 --model_name_or_path bert-large-uncased \
 --dataset_name squad \
 --do_train \
 --do_eval \
 --per_device_train_batch_size 12 \
 --learning_rate 3e-5 \
 --num_train_epochs 2 \
 --max_seq_length 384 \
 --doc_stride 128 \
 --output_dir /tmp/debug_squad/ \
 --no_cuda \
 --ddp_backend ccl \
 --use_ipex \
 --bf16
```
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/_toctree.yml
- sections:
  - local: index
    title: 🤗 Transformers
  - local: quicktour
    title: Quick tour
  - local: installation
    title: Installation
  title: Get started
- sections:
  - local: pipeline_tutorial
    title: Pipelines for inference
  - local: autoclass_tutorial
    title: Load pretrained instances with an AutoClass
  - local: preprocessing
    title: Preprocess
  - local: training
    title: Fine-tune a pretrained model
  - local: accelerate
    title: Distributed training with 🤗 Accelerate
  - local: model_sharing
    title: Share a model
  title: Tutorials
- sections:
  - local: create_a_model
    title: Create a custom architecture
  - local: custom_models
    title: Share custom models
  - local: run_scripts
    title: Train with a script
  - local: multilingual
    title: Multilingual models for inference
  - local: converting_tensorflow_models
    title: Convert TensorFlow models
  - local: serialization
    title: Export Transformers models
  - local: perf_train_cpu
    title: Efficient training on CPU
  - local: perf_train_cpu_many
    title: Efficient training on multiple CPUs
  - local: perf_train_tpu
    title: Training on TPUs
  - local: perf_train_special
    title: Training on specialized hardware
  - local: perf_infer_cpu
    title: Efficient inference on CPU
  - local: perf_infer_gpu_one
    title: Inference on one GPU
  - local: perf_infer_gpu_many
    title: Efficient inference on multiple GPUs
  - local: perf_infer_special
    title: Inference on specialized hardware
  - local: big_models
    title: Instantiating a big model
  - local: migration
    title: Migrating from previous packages
  - local: debugging
    title: Debugging
  title: Practical guides
- sections:
  - local: add_new_pipeline
    title: How to add a pipeline to 🤗 Transformers?
  - local: add_new_model
    title: How to add a model to 🤗 Transformers?
  - local: perf_hardware
    title: Optimized hardware for training
  - local: community
    title: Community resources
  - local: pr_checks
    title: Checks on a Pull Request
  title: How-to guides
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/perf_train_cpu.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not
be rendered properly in your Markdown viewer.

-->

# Efficient training on CPU

This guide focuses on how to train large models efficiently on CPU.

## Mixed precision with IPEX

IPEX is optimized for CPUs with AVX-512 or above, and it also works functionally on CPUs with only AVX2. It is therefore expected to bring a performance benefit on Intel CPUs with AVX-512 or above, while CPUs with only AVX2 (e.g., AMD CPUs or older Intel CPUs) might see better performance under IPEX, but this is not guaranteed. IPEX provides performance optimizations for CPU training with both Float32 and BFloat16. The use of BFloat16 is the main topic of the following sections.

The low-precision data type BFloat16 has been natively supported on 3rd Generation Xeon® Scalable Processors (aka Cooper Lake) with the AVX512 instruction set, and will be supported on the next generation of Intel® Xeon® Scalable Processors with the Intel® Advanced Matrix Extensions (Intel® AMX) instruction set, bringing further performance improvements. Auto Mixed Precision for the CPU backend has been enabled since PyTorch 1.10. At the same time, support for Auto Mixed Precision with BFloat16 on CPU and BFloat16 operator optimization has been massively enabled in Intel® Extension for PyTorch, and partially upstreamed to the PyTorch master branch. Users can get better performance and user experience with IPEX Auto Mixed Precision.

Check more detailed information on [Auto Mixed Precision](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/amp.html).

### IPEX installation:

IPEX releases follow PyTorch releases; it can be installed via pip:

| PyTorch Version | IPEX version |
| :-------------: | :----------: |
| 1.13            | 1.13.0+cpu   |
| 1.12            | 1.12.300+cpu |
| 1.11            | 1.11.200+cpu |
| 1.10            | 1.10.100+cpu |

```bash
pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
```

Check more approaches for [IPEX installation](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html).

### Usage in Trainer

To enable auto mixed precision with IPEX in the Trainer, users should add `use_ipex`, `bf16` and `no_cuda` to the training command arguments.

Take an example of a use case on [Transformers question-answering](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering):

- Training with IPEX using BF16 auto mixed precision on CPU:

<pre> python run_qa.py \
--model_name_or_path bert-base-uncased \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/ \
<b>--use_ipex \</b>
<b>--bf16 --no_cuda</b></pre>

### Practical examples

Blog: [Accelerating PyTorch Transformers with Intel Sapphire Rapids](https://huggingface.co/blog/intel-sapphire-rapids)
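Outside of the Trainer, the same BF16 optimizations can be applied to a plain PyTorch training loop via `ipex.optimize`. The following is a minimal sketch under the assumption that the documented `ipex.optimize(model, optimizer=..., dtype=...)` signature matches your installed IPEX version; the tiny linear model is just a stand-in so the example is self-contained:

```py
import torch
import intel_extension_for_pytorch as ipex

# Stand-in model and optimizer; in practice this would be your Transformers model.
model = torch.nn.Linear(16, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
criterion = torch.nn.CrossEntropyLoss()

model.train()
# ipex.optimize applies CPU operator fusions; with dtype=torch.bfloat16 it
# prepares model and optimizer for BF16 auto mixed precision.
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

inputs, labels = torch.randn(4, 16), torch.randint(0, 2, (4,))
with torch.cpu.amp.autocast(dtype=torch.bfloat16):
    loss = criterion(model(inputs), labels)
loss.backward()
optimizer.step()
```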
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/migration.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not
be rendered properly in your Markdown viewer.

-->

# Migrating from previous packages

## Migrating from transformers `v3.x` to `v4.x`

A couple of changes were introduced when switching from version 3 to version 4. Here is a summary of the expected changes:

#### 1. AutoTokenizers and pipelines now use fast (rust) tokenizers by default.

The python and rust tokenizers have roughly the same API, but the rust tokenizers have a more complete feature set. This introduces two breaking changes:

- The handling of overflowing tokens between the python and rust tokenizers is different.
- The rust tokenizers do not accept integers in the encoding methods.

##### How to obtain the same behavior as v3.x in v4.x

- The pipelines now contain additional features out of the box. See the [token-classification pipeline with the `grouped_entities` flag](main_classes/pipelines#transformers.TokenClassificationPipeline).
- The auto-tokenizers now return rust tokenizers. In order to obtain the python tokenizers instead, the user should use the `use_fast` flag by setting it to `False`:

In version `v3.x`:
```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
```
to obtain the same in version `v4.x`:
```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=False)
```

#### 2. SentencePiece is removed from the required dependencies

The requirement on the SentencePiece dependency has been removed from `setup.py`. This was done in order to have a channel on anaconda cloud without relying on `conda-forge`. This means that the tokenizers that depend on the SentencePiece library will not be available with a standard `transformers` installation.

This includes the **slow** versions of:
- `XLNetTokenizer`
- `AlbertTokenizer`
- `CamembertTokenizer`
- `MBartTokenizer`
- `PegasusTokenizer`
- `T5Tokenizer`
- `ReformerTokenizer`
- `XLMRobertaTokenizer`

##### How to obtain the same behavior as v3.x in v4.x

To obtain the same behavior as version `v3.x`, you should also install `sentencepiece`:

In version `v3.x`:
```bash
pip install transformers
```
to obtain the same in version `v4.x`:
```bash
pip install transformers[sentencepiece]
```
or
```bash
pip install transformers sentencepiece
```

#### 3. The architecture of the repo has been updated so that each model resides in its own folder

With the addition of new models, the number of files in the folder `src/transformers` keeps growing and becomes harder to navigate and understand. We made the choice to put each model and the files accompanying it in their own sub-folders.
This is a breaking change, as importing intermediary layers directly through a model's module now has to be done via a different path.

##### How to obtain the same behavior as v3.x in v4.x

To obtain the same behavior as version `v3.x`, you should update the path used to access the layers.

In version `v3.x`:
```python
from transformers.modeling_bert import BertLayer
```
to obtain the same in version `v4.x`:
```python
from transformers.models.bert.modeling_bert import BertLayer
```

#### 4. Switching the `return_dict` argument to `True` by default

The [`return_dict` argument](main_classes/output) enables the return of dict-like python objects containing the model outputs, instead of the standard tuples. This object is self-documented, as keys can be used to retrieve values, while it also behaves like a tuple, so users can retrieve objects by indexing or slicing.

This is a breaking change, as the tuple cannot be unpacked: `value0, value1 = outputs` will not work.

##### How to obtain the same behavior as v3.x in v4.x

To obtain the same behavior as version `v3.x`, specify the `return_dict` argument as `False`, either in the model configuration or during the forward pass.

In version `v3.x`:
```python
model = BertModel.from_pretrained("bert-base-cased")
outputs = model(**inputs)
```
to obtain the same in version `v4.x`:
```python
model = BertModel.from_pretrained("bert-base-cased")
outputs = model(**inputs, return_dict=False)
```
or
```python
model = BertModel.from_pretrained("bert-base-cased", return_dict=False)
outputs = model(**inputs)
```
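To make the new default concrete, here is a small sketch of the dict-like outputs; `last_hidden_state` is the key exposed by `BertModel`, and other models expose different keys:

```python
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertModel.from_pretrained("bert-base-cased")

inputs = tokenizer("Hello world", return_tensors="pt")
outputs = model(**inputs)

# The same tensor can be retrieved by attribute, by key, or by index:
hidden_states = outputs.last_hidden_state
hidden_states = outputs["last_hidden_state"]
hidden_states = outputs[0]
```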
#### 5. Removed some deprecated attributes

Attributes that were deprecated for at least a month have been removed. The full list of deprecated attributes can be found in [#8604](https://github.com/huggingface/transformers/pull/8604).

Here is a list of these attributes/methods/arguments and what their replacements should be:

In several models, the labels become consistent with the other models:
- `masked_lm_labels` becomes `labels` in `AlbertForMaskedLM` and `AlbertForPreTraining`.
- `masked_lm_labels` becomes `labels` in `BertForMaskedLM` and `BertForPreTraining`.
- `masked_lm_labels` becomes `labels` in `DistilBertForMaskedLM`.
- `masked_lm_labels` becomes `labels` in `ElectraForMaskedLM`.
- `masked_lm_labels` becomes `labels` in `LongformerForMaskedLM`.
- `masked_lm_labels` becomes `labels` in `MobileBertForMaskedLM`.
- `masked_lm_labels` becomes `labels` in `RobertaForMaskedLM`.
- `lm_labels` becomes `labels` in `BartForConditionalGeneration`.
- `lm_labels` becomes `labels` in `GPT2DoubleHeadsModel`.
- `lm_labels` becomes `labels` in `OpenAIGPTDoubleHeadsModel`.
- `lm_labels` becomes `labels` in `T5ForConditionalGeneration`.

In several models, the caching mechanism becomes consistent with the other models:
- `decoder_cached_states` becomes `past_key_values` in all BART-like, FSMT and T5 models.
- `decoder_past_key_values` becomes `past_key_values` in all BART-like, FSMT and T5 models.
- `past` becomes `past_key_values` in all the CTRL models.
- `past` becomes `past_key_values` in all the GPT-2 models.

Regarding the tokenizer classes:
- The tokenizer attribute `max_len` becomes `model_max_length`.
- The tokenizer attribute `return_lengths` becomes `return_length`.
- The tokenizer encoding argument `is_pretokenized` becomes `is_split_into_words`.

Regarding the `Trainer` class:
- The `tb_writer` argument of `Trainer` has been removed in favor of the callback `TensorBoardCallback(tb_writer=...)`.
- The `prediction_loss_only` argument of `Trainer` has been removed in favor of the class argument `args.prediction_loss_only`.
- The `data_collator` attribute of `Trainer` will be a callable.
- The `_log` method of `Trainer` is deprecated in favor of `log`.
- The `_training_step` method of `Trainer` is deprecated in favor of `training_step`.
- The `_prediction_loop` method of `Trainer` is deprecated in favor of `prediction_loop`.
- The `is_local_master` method of `Trainer` is deprecated in favor of `is_local_process_zero`.
- The `is_world_master` method of `Trainer` is deprecated in favor of `is_world_process_zero`.

Regarding the `TFTrainer` class:
- The `prediction_loss_only` argument of `TFTrainer` has been removed in favor of the class argument `args.prediction_loss_only`.
- The `_log` method of `TFTrainer` is deprecated in favor of `log`.
- The `_prediction_loop` method of `TFTrainer` is deprecated in favor of `prediction_loop`.
- The `_setup_wandb` method of `TFTrainer` is deprecated in favor of `setup_wandb`.
- The `_run_model` method of `TFTrainer` is deprecated in favor of `run_model`.

Regarding the `TrainingArguments` class:
- The `evaluate_during_training` argument of `TrainingArguments` is deprecated in favor of `evaluation_strategy`.

Regarding the Transfo-XL model:
- The `tie_weight` configuration attribute of Transfo-XL becomes `tie_words_embeddings`.
- The `reset_length` modeling method of Transfo-XL becomes `reset_memory_length`.

Regarding pipelines:
- The `topk` argument of `FillMaskPipeline` becomes `top_k`.

## Migrating from pytorch-transformers to 🤗 Transformers

Here is a quick summary of what you should take care of when migrating from `pytorch-transformers` to 🤗 Transformers.

### The positional order of some models' keyword inputs (`attention_mask`, `token_type_ids`...) changed

To be able to use Torchscript (see #1010, #1204 and #1195), the specific order of some models' **keyword inputs** (`attention_mask`, `token_type_ids`...) has been changed.

If you used to call the models with keyword arguments, e.g. `model(inputs_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)`, this should not cause any change.

If you used to call the models with positional inputs for these arguments, e.g. `model(inputs_ids, attention_mask, token_type_ids)`, you may have to double-check the exact order of the input arguments.

## Migrating from pytorch-pretrained-bert

Here is a quick summary of what you should take care of when migrating from `pytorch-pretrained-bert` to 🤗 Transformers.

### Models always output `tuples`

The main breaking change when migrating from `pytorch-pretrained-bert` to 🤗 Transformers is that the models' forward method always outputs a `tuple` with various elements depending on the model and the configuration parameters.

The exact content of the tuples for each model is detailed in the models' docstrings and the [documentation](https://huggingface.co/transformers/).

In pretty much every case, you will be fine by taking the first element of the output as the output you previously used in `pytorch-pretrained-bert`.
Here is a `pytorch-pretrained-bert` to 🤗 Transformers conversion example for a `BertForSequenceClassification` classification model:

```python
# Let's load our model
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

# If you used to have this line in pytorch-pretrained-bert:
loss = model(input_ids, labels=labels)

# Now just use this line in 🤗 Transformers to extract the loss from the output tuple:
outputs = model(input_ids, labels=labels)
loss = outputs[0]

# In 🤗 Transformers you can also have access to the logits:
loss, logits = outputs[:2]

# And even the attention weights if you configure the model to output them (and other outputs too, see the docstrings and documentation)
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", output_attentions=True)
outputs = model(input_ids, labels=labels)
loss, logits, attentions = outputs
```

### Serialization

Breaking change in the `from_pretrained()` method:

1. Models are now set in evaluation mode by default when instantiated with the `from_pretrained()` method. To train them, don't forget to set them back in training mode (`model.train()`) to activate the dropout modules.

2. The additional `*inputs` and `**kwargs` arguments supplied to the `from_pretrained()` method used to be directly passed to the underlying model class's `__init__()` method. They are now used to update the model configuration attribute first, which can break derived model classes built based on the previous `BertForSequenceClassification` examples. More precisely, the positional arguments `*inputs` provided to `from_pretrained()` are directly forwarded to the model `__init__()` method, while the keyword arguments `**kwargs` (i) which match configuration class attributes are used to update said attributes, and (ii) which don't match any configuration class attribute are forwarded to the model `__init__()` method.

Also, while not a breaking change, the serialization methods have been standardized and you should probably switch to the new `save_pretrained(save_directory)` method if you were using any other serialization method before.

Here is an example:

```python
### Let's load a model and tokenizer
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

### Do some stuff to our model and tokenizer
# Ex: add new tokens to the vocabulary and embeddings of our model
tokenizer.add_tokens(["[SPECIAL_TOKEN_1]", "[SPECIAL_TOKEN_2]"])
model.resize_token_embeddings(len(tokenizer))
# Train our model
train(model)

### Now let's save our model and tokenizer to a directory
model.save_pretrained("./my_saved_model_directory/")
tokenizer.save_pretrained("./my_saved_model_directory/")

### Reload the model and the tokenizer
model = BertForSequenceClassification.from_pretrained("./my_saved_model_directory/")
tokenizer = BertTokenizer.from_pretrained("./my_saved_model_directory/")
```

### Optimizers: BertAdam and OpenAIAdam are now AdamW, schedules are standard PyTorch schedules

The two optimizers previously included, `BertAdam` and `OpenAIAdam`, have been replaced by a single `AdamW` optimizer, which has a few differences:

- it only implements the weight decay correction,
- schedules are now external (see below),
- gradient clipping is now also external (see below).

The new `AdamW` optimizer matches the PyTorch `Adam` optimizer API and lets you use standard PyTorch or apex methods for the schedule and clipping.

Schedules are now standard [PyTorch learning rate schedulers](https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate) and are no longer part of the optimizer.

Here is a conversion example of linear warmup and decay with `BertAdam` and with `AdamW`:

```python
# Parameters:
lr = 1e-3
max_grad_norm = 1.0
num_training_steps = 1000
num_warmup_steps = 100
warmup_proportion = float(num_warmup_steps) / float(num_training_steps)  # 0.1

### Previously, the BertAdam optimizer was instantiated like this:
optimizer = BertAdam(
    model.parameters(),
    lr=lr,
    schedule="warmup_linear",
    warmup=warmup_proportion,
    num_training_steps=num_training_steps,
)
### and used like this:
for batch in train_data:
    loss = model(batch)
    loss.backward()
    optimizer.step()

### In 🤗 Transformers, optimizer and schedules are split and used like this:
optimizer = AdamW(
    model.parameters(), lr=lr, correct_bias=False
)  # To reproduce BertAdam specific behavior set correct_bias=False
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps
)  # PyTorch scheduler
### and used like this:
for batch in train_data:
    loss = model(batch)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(
        model.parameters(), max_grad_norm
    )  # Gradient clipping is not in AdamW anymore (so you can use amp without issue)
    optimizer.step()
    scheduler.step()
```
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/add_new_model.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not
be rendered properly in your Markdown viewer.

-->

# How to add a model to 🤗 Transformers?

Adding a new model is often difficult and requires deep knowledge of the 🤗 Transformers library, and often of the model's original repository as well. At Hugging Face, we are trying to empower the community more and more to add models independently. Therefore, for some new models that the community wants to add to 🤗 Transformers, we have created specific *calls-for-model-addition* that explain, step by step, how to add the requested model. With these *calls-for-model-addition*, we want to teach motivated and experienced contributors from the community how to implement a model in 🤗 Transformers.

If this is something you may be interested in, feel free to check out the current "calls-for-model-addition" [here](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model/open_model_proposals/README.md) and contact us. If the model is selected, you will work together with a member of the Hugging Face team to integrate the model into 🤗 Transformers. By doing so, you will gain a complete theoretical and practical understanding of the proposed model, and you will be the author of an important open-source contribution to 🤗 Transformers. During the implementation, you will have the opportunity to:

- get insights into open-source best practices
- understand the design principles of one of the most popular NLP libraries
- learn how to efficiently test complex NLP models
- learn how to integrate Python utilities like `black`, `ruff` and `make fix-copies` into a library, to always ensure clean and readable code

We are also happy if you want to add a model that cannot be found in the "calls-for-model-addition" folder. The following sections explain in detail how to add a new model. It can also be very useful to check out already-added models [here](https://github.com/huggingface/transformers/pulls?q=is%3Apr+label%3A%22PR+for+Model+Addition%22+is%3Aclosed), to see whether they resemble the model you would like to add.

To get started, let's take a general overview of the Transformers library.

## General overview of 🤗 Transformers

First, let's look at 🤗 Transformers in general. 🤗 Transformers is a very opinionated library, so it is possible that you disagree with some of the library's philosophies or design choices. From our experience, however, we found that the fundamental design choices of the library are crucial to using 🤗 Transformers efficiently at scale while keeping maintenance costs at an acceptable level.
A good first starting point to better understand the library is to read the [documentation of our philosophy](filosofia). From there, there are a few choices about how we work that we try to apply to all models:

- Composition is generally favored over over-abstraction
- Duplicating code is not always bad, especially if it significantly improves the readability and accessibility of a model
- All the files created for the new model should be as self-contained as possible. This means that when someone reads the code of a specific model, they should only have to look at the corresponding `modeling_....py` file, without following multiple dependencies.

Most importantly, we consider the library not only a means to deliver a product, *e.g.* the ability to use BERT for inference, but also as the very product that we want to keep improving. So, when you add a model, you are not only the person who will use the model, but you also represent everybody who will read, try to understand and possibly tweak your model.

With these principles in mind, let's dive into the general design of the library.

### Overview of models

To successfully add a model, it is important to understand the interaction between your model and its configuration, [`PreTrainedModel`], and [`PretrainedConfig`]. For exemplary purposes, we will call the model to be added to 🤗 Transformers `BrandNewBert`. Let's take a look:

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_overview.png"/>

As you can see, we do make use of inheritance in 🤗 Transformers, but we keep the level of abstraction to an absolute minimum. There are never more than two levels of abstraction for any model in the library. `BrandNewBertModel` inherits from `BrandNewBertPreTrainedModel`, which in turn inherits from [`PreTrainedModel`] - simple, right? As a general rule, we want to make sure that a new model only depends on [`PreTrainedModel`]. The important functionalities that are automatically provided to every new model are [`~PreTrainedModel.from_pretrained`] and [`~PreTrainedModel.save_pretrained`], which are used for serialization and deserialization. All the other important functionalities, such as `BrandNewBertModel.forward`, should be completely defined in the new `modeling_brand_new_bert.py` script. Next, we want to make sure that a model with a specific head layer, such as `BrandNewBertForMaskedLM`, does not inherit from `BrandNewBertModel`, but rather uses `BrandNewBertModel` as a component that can be called in its forward pass, in order to keep the level of abstraction low. Every new model requires a configuration class, called `BrandNewBertConfig`. This configuration is always stored as an attribute in [`PreTrainedModel`], and can thus be accessed via the `config` attribute for all classes inheriting from `BrandNewBertPreTrainedModel`:

```python
model = BrandNewBertModel.from_pretrained("brandy/brand_new_bert")
model.config  # the model has access to its config
```

Similar to the model, the configuration inherits basic serialization and deserialization functionalities from [`PretrainedConfig`]. Note that the configuration and the model are always serialized into two different formats - the model into a *pytorch_model.bin* file and the configuration into a *config.json* file. Calling [`~PreTrainedModel.save_pretrained`] will automatically call [`~PretrainedConfig.save_pretrained`], so that both model and configuration are saved.
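As a minimal sketch of this serialization contract (using `bert-base-cased` with [`AutoModel`] purely as a stand-in, since `BrandNewBert` does not exist yet):

```python
import tempfile

from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-cased")

with tempfile.TemporaryDirectory() as tmp_dir:
    # Writes both pytorch_model.bin (weights) and config.json (configuration).
    model.save_pretrained(tmp_dir)
    # Reloads the configuration and the weights in one call.
    reloaded = AutoModel.from_pretrained(tmp_dir)

print(reloaded.config.model_type)
```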
### Code style

When coding your new model, keep in mind that Transformers is an opinionated library, so there are a few quirks of our own regarding how code should be written :-)

1. The forward pass of your model should be fully written in the modeling file, while being fully independent of other models in the library. If you want to reuse a block from another model, copy the code and paste it with a `# Copied from` comment on top (see [here](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/roberta/modeling_roberta.py#L160) for a good example).
2. The code should be fully understandable, even by people who do not speak English natively. This means you should pick descriptive variable names and avoid abbreviations. For example, `activation` is much better than `act`. One-letter variable names are strongly discouraged unless they are indices in a for loop.
3. Generally, explicit and longer code is preferred over short, magical code.
4. Avoid subclassing `nn.Sequential` in PyTorch; subclass `nn.Module` and write the forward pass yourself, so that anyone can debug your code by adding prints or breakpoints.
5. Your function signatures should be type-annotated. For the rest, it is better to prefer well-named variables over type annotations, to improve the comprehension and readability of the code.
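To make rules 4 and 5 concrete, here is a small illustration of our own (not code from the library) of the preferred pattern: an explicit, type-annotated `nn.Module` with a readable forward pass instead of an `nn.Sequential`:

```python
import torch
from torch import nn


class BrandNewBertFeedForward(nn.Module):
    """Illustrative feed-forward block in the explicit style described above."""

    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        self.dense_in = nn.Linear(hidden_size, intermediate_size)
        self.activation = nn.GELU()
        self.dense_out = nn.Linear(intermediate_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Each step is explicit, so anyone can add prints or breakpoints here.
        hidden_states = self.dense_in(hidden_states)
        hidden_states = self.activation(hidden_states)
        hidden_states = self.dense_out(hidden_states)
        return hidden_states
```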
### Overview of tokenizers

This section will be added soon :-(

## Step-by-step recipe to add a model to 🤗 Transformers

There are different ways to add a model to Hugging Face. Here is a list of community blog posts on how to add a model:

1. [Porting GPT2](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) by [Thomas](https://huggingface.co/thomwolf)
2. [Porting WMT19 MT](https://huggingface.co/blog/porting-fsmt) by [Stas](https://huggingface.co/stas)

From experience, we can tell you that the most important things to keep in mind when adding a model are:

- Don't reinvent the wheel! Most parts of the code you will add for a new 🤗 Transformers model already exist somewhere in 🤗 Transformers. Take some time to find similar code in existing models and tokenizers that you can copy from. Remember that [grep](https://www.gnu.org/software/grep/) and [rg](https://github.com/BurntSushi/ripgrep) are your friends. Also, it might very well happen that your model's tokenizer is based on one model's implementation, and your model's modeling code on another one. *For example*, FSMT's modeling code is based on BART, while FSMT's tokenizer code is based on XLM.
- It's more of an engineering challenge than a scientific one. Spend more time creating an efficient debugging environment than trying to understand all the theoretical aspects of the model's paper.
- Ask for help when you're stuck! Models are the core component of 🤗 Transformers, so we at Hugging Face are more than happy to help you at every step of adding your model. Don't hesitate to ask if you notice you are not making progress.

In the following, we give a general recipe to help you port a model into 🤗 Transformers. The following list is a summary of everything that has to be done to add a model, and can be used as a To-Do list:

- 1. ☐ (Optional) Understood the model's theoretical aspects
- 2. ☐ Prepared the 🤗 Transformers dev environment
- 3. ☐ Set up a debugging environment of the original repository
- 4. ☐ Created a script that successfully runs the forward pass using the original repository and checkpoint
- 5. ☐ Successfully added the model skeleton to 🤗 Transformers
- 6. ☐ Successfully converted the original checkpoint to a 🤗 Transformers checkpoint
- 7. ☐ Successfully ran the forward pass in 🤗 Transformers, giving an output identical to the original checkpoint
- 8. ☐ Finished the model tests in 🤗 Transformers
- 9. ☐ Successfully added the tokenizer in 🤗 Transformers
- 10. ☐ Ran and passed the end-to-end integration tests
- 11. ☐ Finished the docs
- 12. ☐ Uploaded the model weights to the Hub
- 13. ☐ Submitted the pull request
- 14. ☐ (Optional) Added a demo notebook

To begin, we usually recommend starting with the theory of `BrandNewBert`, so that you get a good general understanding of it. However, if you prefer to learn the theoretical aspects of the model *on the job*, it is fine to dive directly into `BrandNewBert`'s code base. This option might suit you better if your engineering skills are stronger than your theoretical skills, if you have trouble understanding the `BrandNewBert` paper, or if you simply enjoy programming much more than reading scientific papers.

### 1. (Optional) Theoretical aspects of BrandNewBert

Take some time to read through the *BrandNewBert* paper. Some sections of the paper may well be very complex, but don't worry! The goal is not to gain an immense theoretical understanding, but to extract the information needed to successfully re-implement the model in 🤗 Transformers. So, don't obsess over the theoretical aspects; instead, focus on the practical ones, namely:

- What type of model is *brand_new_bert*? A BERT-like encoder-only model? A GPT2-like decoder-only model? A BART-like encoder-decoder model? Take a look at the [model_summary](model_summary) if you are not familiar with the differences between these.
- What are the applications of *brand_new_bert*? Text classification? Text generation? Seq2seq tasks of some kind?
- What are the novel additions to the model that make it different from BERT/GPT-2/BART?
- Which of the existing [🤗 Transformers models](https://huggingface.co/transformers/#contents) are most similar to *brand_new_bert*?
- What type of tokenizer is used? A sentencepiece tokenizer? A word piece tokenizer? Is it the same tokenizer used by BERT or BART?

Once you feel you have gotten a good overview of the model's architecture, feel free to write to the Hugging Face team with any questions you may have. This might include questions about the model's architecture, its attention layer, etc. We will be very happy to help you :)

### 2. Prepare your environment

1. Fork the [repository](https://github.com/huggingface/transformers) by clicking on the 'Fork' button on the repository's page. This creates a copy of the code under your GitHub account.
2. Clone your `transformers` fork to your local disk, and add the base repository as a remote:

```bash
git clone https://github.com/[your Github handle]/transformers.git
cd transformers
git remote add upstream https://github.com/huggingface/transformers.git
```

3. Set up a development environment, for instance by running the following command:

```bash
python -m venv .env
source .env/bin/activate
pip install -e ".[dev]"
```

then return to the main directory:

```bash
cd ..
```

4. We recommend adding the PyTorch version of *brand_new_bert* to Transformers. To install PyTorch, just follow the instructions at https://pytorch.org/get-started/locally/.

**Note:** You don't need to have CUDA installed. Making the new model work on a CPU is sufficient.

5. To port *brand_new_bert*, you will also need access to its original repository:

```bash
git clone https://github.com/org_that_created_brand_new_bert_org/brand_new_bert.git
cd brand_new_bert
pip install -e .
```

Now you have set up a development environment to port *brand_new_bert* to 🤗 Transformers.

### 3.-4. Run a pretrained checkpoint using the original repository

To begin, you will work on the original *brand_new_bert* repository. As is often the case, the original implementation is very "researchy". This means that documentation might be lacking, some things might be missing, and the code can be difficult to understand. However, this is exactly your motivation for reimplementing *brand_new_bert*. At Hugging Face, one of our main goals is to *make people stand on the shoulders of giants*, which translates here into taking a working model and rewriting it to make it as **accessible, user-friendly, and readable** as possible. This is the number-one motivation for re-implementing models in 🤗 Transformers - trying to make complex new NLP technology accessible to **everybody**.

Successfully running the official pretrained model in the original repository is often the **most difficult** step. From our experience, it is very important to spend some time getting familiar with the original code base. As a test, try to understand the following:

- Where are the pretrained weights stored?
- How can the pretrained weights be loaded into the corresponding model?
- How can the tokenizer be run independently of the model?
- Trace one forward pass, so that you know which classes and functions are required for a simple forward pass. Usually, you only have to reimplement those functions.
- Locate the important components of the model: Where is the model class? Are there model sub-classes, *e.g.* EncoderModel, DecoderModel? Where is the self-attention layer? Are there multiple different attention layers, *e.g.* *self-attention*, *cross-attention*...?
- How can you debug the model in the original environment of the repo? Do you have to add *print* statements, can you use an interactive debugger like *ipdb*, or does an efficient IDE for debugging, like PyCharm, work well?

It is very important that, before you start porting the model, you can debug the original code base **efficiently**! Also, remember that the whole library is open-source, so do not hesitate to open an issue or a pull request in the original repository.
The maintainers of that repository are most likely very happy about someone looking into and playing with their code!

At this point, it is up to you to decide which debugging environment you prefer to use. We advise against costly GPU setups: working on a CPU is a great starting point to dig into the original repository and to start writing the 🤗 Transformers code. Only at the very end, when the model has already been successfully ported to 🤗 Transformers, should you verify that it also works as expected on GPU.

In general, there are two possible debugging environments for running the original model:

- [Jupyter notebooks](https://jupyter.org/) / [google colab](https://colab.research.google.com/notebooks/intro.ipynb)
- Local Python scripts

The advantage of Jupyter notebooks is that they allow cell-by-cell execution, which can be helpful to better decompose logical components and to have faster debugging cycles, since intermediate results can be stored. Also, notebooks are often easier to share with other contributors, which can be very helpful if you want to ask the Hugging Face team for help. If you are familiar with Jupyter notebooks, we recommend working with them.

Of course, if you are not used to working with notebooks, this can be a disadvantage: you might lose time setting up and adapting to the new environment, and you might not be able to use your usual debugging tools, like `ipdb`.

For each code base, a good first step is always to load a **small** pretrained checkpoint and try to reproduce a single forward pass using a dummy vector of integer input IDs. Such a script could look like this, in pseudocode:

```python
model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/")
input_ids = [0, 4, 5, 2, 3, 7, 9]  # vector of input ids
original_output = model.predict(input_ids)
```

As for the debugging strategy, you can choose between:

- Decomposing the original model into many small components and testing each of them individually
- Decomposing the original model into the original *tokenizer* and the original *model*, running a forward pass on those, and using intermediate print statements or breakpoints for verification

Again, it is up to you to choose which strategy works best for you. Often, one strategy is more advantageous than the other, but it all depends on the original code base. If the original code base allows you to decompose the model into smaller sub-components, *for example* if it can easily be run in eager mode, it is usually worth the effort to do so.
Remember that there are some important advantages in taking the more demanding road from the start:

- at a later stage, when comparing the original model to the Hugging Face implementation, you can verify automatically, component by component, that there is a 1:1 match
- you get to decompose one big problem into smaller steps, which makes it much easier to structure your work
- separating the model into logical components gives you a better overview of the model's design and therefore a better understanding of the model itself
- component-by-component tests help you, at the later stages, avoid going back and forth in the implementation, so you can keep changing your code without regressions

[Lysandre's integration checks for ELECTRA](https://gist.github.com/LysandreJik/db4c948f6b4483960de5cbac598ad4ed) are a great example of how this can be done.

However, if the original code base is very complex, or only allows intermediate components to be run in a compiled mode, it might be too time-consuming or even impossible to decompose the model into smaller testable sub-components. A good example is [T5's MeshTensorFlow](https://github.com/tensorflow/mesh/tree/master/mesh_tensorflow) library, which is very complex and does not offer a simple way of decomposing the model into sub-components. For such libraries, you will have to rely on print statements.

No matter which strategy you choose, the recommended procedure is the same: start debugging from the first layer and work your way to the last. It is advisable to retrieve the outputs of the following layers, either via print statements or sub-components, in this order:

1. Retrieve the input IDs passed to the model
2. Retrieve the word embeddings
3. Retrieve the input of the first Transformer layer
4. Retrieve the output of the first Transformer layer
5. Retrieve the output of the following `n - 1` Transformer layers
6. Retrieve the output of the whole BrandNewBert model

The input IDs should be an array of integers, *e.g.* `input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]`

The outputs of the following layers usually consist of multi-dimensional float arrays, like this:

```
[[
 [-0.1465, -0.6501,  0.1993,  ...,  0.1451,  0.3430,  0.6024],
 [-0.4417, -0.5920,  0.3450,  ..., -0.3062,  0.6182,  0.7132],
 [-0.5009, -0.7122,  0.4548,  ..., -0.3662,  0.6091,  0.7648],
 ...,
 [-0.5613, -0.6332,  0.4324,  ..., -0.3792,  0.7372,  0.9288],
 [-0.5416, -0.6345,  0.4180,  ..., -0.3564,  0.6992,  0.9191],
 [-0.5334, -0.6403,  0.4271,  ..., -0.3339,  0.6533,  0.8694]]],
```

We expect every model added to 🤗 Transformers to pass a couple of integration tests, meaning that the original model and the 🤗 Transformers implementation must give the same outputs up to a precision of 0.001! Since it is normal that the exact same model, written in different libraries, gives slightly different outputs, we accept an absolute tolerance of 1e-3 (0.001). The two models must give nearly identical outputs, so you will certainly compare the intermediate outputs of the 🤗 Transformers version many times against the intermediate outputs of the original implementation of *brand_new_bert*. Here is some advice to make your debugging environment as efficient as possible:

- Find the best strategy to debug intermediate results. Is the original repository written in PyTorch?
If so, you will probably have to spend some time writing a longer script that decomposes the original model into smaller sub-components so that the intermediate values can be retrieved. Is the original repository written in Tensorflow 1? Then you may have to rely on TensorFlow's print operations, such as [tf.print](https://www.tensorflow.org/api_docs/python/tf/print), to output the intermediate values. Is the original repository written in Jax? Then make sure the model is **not jitted** when running the forward pass, *e.g.* check out [this link](https://github.com/google/jax/issues/196).
- Use the smallest pretrained checkpoint you can find. The smaller the checkpoint, the faster your debugging cycle. It is not efficient if the pretrained model is so big that the forward pass takes more than 10 seconds. If only very large checkpoints are available and nothing smaller can be found, it is good practice to create a dummy model with randomly initialized weights in the new environment and save those weights, so that you can compare them against the 🤗 Transformers version of your model.
- Make sure you use the easiest way of calling a forward pass in the original repository. Ideally, find the function in the original repository that calls **only** a single forward pass, *e.g.* a function often named `predict`, `evaluate`, `forward` or `__call__`. Do not debug a function that calls `forward` multiple times, *e.g.* to generate text, such as `autoregressive_sample` or `generate`.
- Try to separate the tokenization from the model's forward pass. If the original repository only shows examples where you have to input a string, find out at which point in the forward call the string is converted into input ids, and start your debugging from that point. This gives you a great starting point for writing a small script of your own that passes ids to the model, instead of input strings.
- Make sure the model in your debugging setup is **not** in training mode, which often causes the model to produce random outputs due to the multiple dropout layers. Make sure the forward pass in your debugging environment is **deterministic**, so that the dropout layers are not used. Alternatively, use *transformers.utils.set_seed* if the old and new implementations are in the same framework.

The following section gives you more specific details and tips on how you can do this for *brand_new_bert*.

### 5.-14. Port BrandNewBert to 🤗 Transformers

Now you can finally start adding new code to 🤗 Transformers. Go into the clone of your 🤗 Transformers fork:

```bash
cd transformers
```

In the special case that you are adding a model whose architecture exactly matches that of an existing model, you only have to add a conversion script as described [here](#write-a-conversion-script). In that case, you can simply reuse the whole model architecture of the already existing model.

Otherwise, let's start generating a new model. You have two options here:

- `transformers-cli add-new-model-like` to add a new model like an existing one
- `transformers-cli add-new-model` to add a new model from our template (this will look like BERT or Bart, depending on the model you select)

In both cases, you will be prompted with a questionnaire to fill in basic information about your model.
The second command requires installing `cookiecutter` - you can find more information [here](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model).

**Open a Pull Request on the main huggingface/transformers repo**

Before starting to adapt the automatically generated code, open a "Work in progress (WIP)" pull request, *e.g.* "[WIP] Add *brand_new_bert*", so that the Hugging Face team can work side-by-side with you on integrating the model into 🤗 Transformers.

These are the general steps to follow:

1. Create a branch with a descriptive name from your main branch

```bash
git checkout -b add_brand_new_bert
```

2. Commit the automatically generated code:

```bash
git add .
git commit
```

3. Fetch and rebase on the current main

```bash
git fetch upstream
git rebase upstream/main
```

4. Push the changes to your account:

```bash
git push -u origin a-descriptive-name-for-my-changes
```

5. Once you are satisfied with the changes, go to the webpage of your fork on GitHub and click on "Pull request". Make sure to add some members of the Hugging Face team as reviewers, in the panel on the right of the PR page, so that the Hugging Face team also gets notified about future changes.

6. Change the PR into a draft by clicking on "Convert to draft" on the right of the PR page.

From then on, remember to commit every bit of progress, so that it shows up in the PR. Additionally, remember to keep your work up to date with the current main:

```bash
git fetch upstream
git merge upstream/main
```

In general, all questions you have regarding the model or your implementation should be asked in your PR and discussed/solved there. This way, the Hugging Face team will always be notified when you commit new code or when you have a question. It is very helpful to point the Hugging Face team to the code you are referring to in your question, so that they can easily understand the problem or question. To do so, go to the "Files changed" tab, where you can see all of your changes, go to the line you want to ask a question about, and click on the "+" symbol to add a comment. Whenever a question or problem has been solved, you can click on the "Resolve" button. In the same way, the Hugging Face team will open comments when reviewing your code. Please ask as many questions as possible on your PR page. If a question is very general and not very useful for the public, feel free to ask the Hugging Face team directly on Slack or via email.

**5. Adapt the generated code for brand_new_bert**

At first, we will focus only on the model itself and not care about the tokenizer. All the relevant code should be found in the generated files `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` and `src/transformers/models/brand_new_bert/configuration_brand_new_bert.py`.

Now you can finally start coding :). The generated code in `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` will have the same architecture as BERT if it is an encoder-only model, or BART if it is an encoder-decoder model. At this point, remind yourself of what you learned at the beginning about the theoretical aspects of the model: *How is the model I am implementing different from BERT or BART?*
Implementing those changes often means changing the *self-attention* layer, the order of the normalization layers, and so on... Again, it is very useful to look at similar architectures of already existing models in Transformers to get a better idea of how your model should be implemented.

**Note** that at this point, the code does not have to be fully correct or clean yet. Rather, it is advised to start with an unclean, copy-pasted first draft of the original code in `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` until you feel all the necessary code is there. In our experience, it is much more efficient to quickly add a first draft of the required code and then improve/correct it iteratively. The only thing that has to work at this point is the following instantiation:

```python
from transformers import BrandNewBertModel, BrandNewBertConfig

model = BrandNewBertModel(BrandNewBertConfig())
```

This command will create a model with the default parameters defined in `BrandNewBertConfig()` and random weights, thus making sure that the `init()` methods of all components work.

**6. Write a conversion script**

The next step is to write a conversion script that converts the checkpoint you used to debug *brand_new_bert* in the original repository into a checkpoint for the new 🤗 Transformers implementation of *brand_new_bert*. It is not advised to write the conversion script from scratch; rather, look through already existing conversion scripts in 🤗 Transformers to find one that was used for a model similar to yours. Usually, it is enough to copy an already existing conversion script and slightly adapt it to your use case. Do not hesitate to ask the Hugging Face team about this.

- If you are converting a model from TensorFlow to PyTorch, a good starting point is [this conversion script for BERT](https://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/models/bert/modeling_bert.py#L91)
- If you are converting a model from PyTorch to PyTorch, [BART's conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py) might be useful

In the following, we will quickly explain how PyTorch models store layer weights and how layer names are defined. In PyTorch, the name of a layer is defined by the name of the class attribute you give the layer. Let's define a dummy PyTorch model called `SimpleModel`:

```python
from torch import nn


class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(10, 10)
        self.intermediate = nn.Linear(10, 10)
        self.layer_norm = nn.LayerNorm(10)
```

Now we can create an instance of this model definition, which initializes the `dense`, `intermediate` and `layer_norm` weights randomly. We can print the model to see its architecture:

```python
model = SimpleModel()

print(model)
```

This will print the following:

```
SimpleModel(
  (dense): Linear(in_features=10, out_features=10, bias=True)
  (intermediate): Linear(in_features=10, out_features=10, bias=True)
  (layer_norm): LayerNorm((10,), eps=1e-05, elementwise_affine=True)
)
```

We can see that the layer names are defined by the names of the class attributes in PyTorch.
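Continuing with the same `SimpleModel` defined above, you can also list every parameter name and shape in one go; this is essentially the view you will be matching checkpoint keys against in a conversion script:

```python
for name, param in SimpleModel().named_parameters():
    print(name, tuple(param.shape))
```

This prints:

```
dense.weight (10, 10)
dense.bias (10,)
intermediate.weight (10, 10)
intermediate.bias (10,)
layer_norm.weight (10,)
layer_norm.bias (10,)
```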
You can print out the weight values of a specific layer:

```python
print(model.dense.weight.data)
```

to see that they were randomly initialized, for example:

```
tensor([[-0.0818,  0.2207, -0.0749, -0.0030,  0.0045, -0.1569, -0.1598,  0.0212,
         -0.2077,  0.2157],
        [ 0.1044,  0.0201,  0.0990,  0.2482,  0.3116,  0.2509,  0.2866, -0.2190,
          0.2166, -0.0212],
        [-0.2000,  0.1107, -0.1999, -0.3119,  0.1559,  0.0993,  0.1776, -0.1950,
         -0.1023, -0.0447],
        [-0.0888, -0.1092,  0.2281,  0.0336,  0.1817, -0.0115,  0.2096,  0.1415,
         -0.1876, -0.2467],
        [ 0.2208, -0.2352, -0.1426, -0.2636, -0.2889, -0.2061, -0.2849, -0.0465,
          0.2577,  0.0402],
        [ 0.1502,  0.2465,  0.2566,  0.0693,  0.2352, -0.0530,  0.1859, -0.0604,
          0.2132,  0.1680],
        [ 0.1733, -0.2407, -0.1721,  0.1484,  0.0358, -0.0633, -0.0721, -0.0090,
          0.2707, -0.2509],
        [-0.1173,  0.1561,  0.2945,  0.0595, -0.1996,  0.2988, -0.0802,  0.0407,
          0.1829, -0.1568],
        [-0.1164, -0.2228, -0.0403,  0.0428,  0.1339,  0.0047,  0.1967,  0.2923,
          0.0333, -0.0536],
        [-0.1492, -0.1616,  0.1057,  0.1950, -0.2807, -0.2710, -0.1586,  0.0739,
          0.2220,  0.2358]]).
```

In the conversion script, you should fill those randomly initialized weights with the exact weights of the corresponding layer in the checkpoint. *E.g.*:

```python
# retrieve matching layer weights, e.g. by
# recursive algorithm
layer_name = "dense"
pretrained_weight = array_of_dense_layer

model_pointer = getattr(model, "dense")

model_pointer.weight.data = torch.from_numpy(pretrained_weight)
```

While doing so, you must verify that each randomly initialized weight of your PyTorch model and its corresponding pretrained checkpoint weight exactly match in both **shape and name**. To do so, it is **necessary** to add an `assert` for the shape and name:

```python
assert (
    model_pointer.weight.shape == pretrained_weight.shape
), f"Pointer shape of random weight {model_pointer.shape} and array shape of checkpoint weight {pretrained_weight.shape} mismatched"
```

Besides that, you should also print out the names of both weights to make sure they match:

```python
logger.info(f"Initialize PyTorch weight {layer_name} from {pretrained_weight.name}")
```

If either the shape or the name does not match, you probably assigned the wrong checkpoint weight to a layer of the 🤗 Transformers implementation. An incorrect shape is most likely due to an incorrect setting of the parameters in `BrandNewBertConfig()`. However, it can also be that the PyTorch implementation of a layer requires the weight matrix to be transposed first.

Finally, you should also check that **all** expected weights are initialized, and print out all checkpoint weights that were not used for initialization, to make sure the model is correctly converted. It is completely normal for conversion attempts to fail, be it because of a wrong `BrandNewBertConfig()`, a wrong architecture in 🤗 Transformers, or a bug in `init()`. This step should be iterated on until the weight values match. Once the checkpoint has been correctly loaded into 🤗 Transformers, you can save the model to a folder of your choice `/path/to/converted/checkpoint/folder`, which should then contain both a `pytorch_model.bin` file and a `config.json` file:

```python
model.save_pretrained("/path/to/converted/checkpoint/folder")
```
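To make the "every weight is accounted for" check above concrete, here is a minimal sketch of the bookkeeping, reusing the `SimpleModel` toy from earlier; the "original checkpoint" is faked with a second randomly initialized model, whereas in a real conversion script it would be loaded from the original repository:

```python
import torch

# Fake "original checkpoint": a flat mapping from names to NumPy arrays.
original_state = {name: p.detach().numpy() for name, p in SimpleModel().named_parameters()}
unused_keys = set(original_state)

model = SimpleModel()
for name, param in model.named_parameters():
    if name not in original_state:
        raise ValueError(f"No checkpoint weight found for {name}")
    pretrained_weight = original_state[name]
    assert param.shape == pretrained_weight.shape, f"Shape mismatch for {name}"
    param.data = torch.from_numpy(pretrained_weight)
    unused_keys.discard(name)

# Any leftover key was never used for initialization - a red flag that the
# conversion is incomplete.
print("Unused checkpoint weights:", sorted(unused_keys))
```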
**7. Implement the forward pass**

Once the pretrained weights are correctly loaded into 🤗 Transformers, you have to make sure the forward pass is correctly implemented. In [Run a pretrained checkpoint using the original repository](#3-4-run-a-pretrained-checkpoint-using-the-original-repository), you already created and ran a script that tests a forward pass of the model using the original repository. Now you should do the same with an analogous script using the 🤗 Transformers implementation instead of the original one. It should look roughly like this:

```python
model = BrandNewBertModel.from_pretrained("/path/to/converted/checkpoint/folder")
input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]
output = model(input_ids).last_hidden_state
```

It is likely that the 🤗 Transformers output is not exactly equal to the original output, especially the first time. Don't be discouraged - it's normal! First, make sure the forward pass does not throw any errors. It often happens that wrong dimensions or wrong data types are used, *e.g.* `torch.long` instead of `torch.float32`. Do not hesitate to ask the Hugging Face team for help!

The final part is making sure the 🤗 Transformers implementation works correctly by testing that the outputs are equivalent to a precision of `1e-3`. First, check that `outputs.shape` is the same for the 🤗 Transformers implementation and the original one. Next, check that the output values are identical. This is definitely the hardest part; here are some common mistakes when the outputs are not identical:

- Some layers were not added, *e.g.* an *activation* layer was not added, or a residual connection was forgotten
- The word embedding matrix was not tied
- The wrong positional embeddings are used because the original implementation uses an offset
- Dropout is applied during the forward pass. To fix this, make sure *model.training = False* and that no dropout layer is falsely activated during the forward pass, *e.g.* pass *self.training* to [PyTorch's functional dropout](https://pytorch.org/docs/stable/nn.functional.html?highlight=dropout#torch.nn.functional.dropout)

The best way to fix the problem is usually to look at the forward pass of the original implementation and the 🤗 Transformers implementation side-by-side and check for any differences. Ideally, you debug/print the intermediate outputs of both implementations at the same positions in the network, which should help you see exactly where the two frameworks diverge. First, make sure that the `input_ids` are identical in both scripts. From there, work your way up to the very last layer; at some point, you will notice a difference between the two implementations. Once the same output is reached, verify the outputs with `torch.allclose(original_output, output, atol=1e-3)`. If everything checks out: congratulations! The remaining parts should be a walk in the park 😊.

**8. Add all necessary model tests**

At this point, you have successfully added a new model. However, it is very possible that the model does not yet fully comply with the required design. To make sure the implementation is fully compatible with 🤗 Transformers, a number of tests should be added. The Cookiecutter should have automatically added a test file for your model, usually under `tests/test_modeling_brand_new_bert.py`.
Run this test file to verify that all common tests pass:

```bash
pytest tests/test_modeling_brand_new_bert.py
```

Having fixed all common tests, it is now crucial to ensure that your work is well tested, so that:

- a) The community can easily understand your work by looking at the specific tests of *brand_new_bert*
- b) Future changes to your model will not break any important features of the model

First, add integration tests. These are essential because they do the same thing as the debugging scripts you used earlier. A template for those tests is already added by the Cookiecutter, under the name `BrandNewBertModelIntegrationTests`; you only have to fill it in. Once these tests pass, run:

```bash
RUN_SLOW=1 pytest -sv tests/test_modeling_brand_new_bert.py::BrandNewBertModelIntegrationTests
```

<Tip>

In case you are using Windows, replace `RUN_SLOW=1` with `SET RUN_SLOW=1`

</Tip>

Second, all features that are special to *brand_new_bert* should additionally be tested in separate tests under `BrandNewBertModelTester`/`BrandNewBertModelTest`. This part is often forgotten, but remember that these tests are useful in two ways:

- They help users understand your code better by drawing attention to these new features
- Future developers and contributors can quickly test changes to the model by running these special tests

**9. Implement the tokenizer**

Next, you will need a tokenizer for *brand_new_bert*. Usually, the tokenizer is equivalent or very similar to an already existing tokenizer in 🤗 Transformers. It is important that you find the original tokenizer file and manage to load it into 🤗 Transformers.

To make sure the tokenizer works correctly, create a script in the original repository that takes a string as input and returns the `input_ids`. It could look roughly like this:

```python
input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words."
model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/")
input_ids = model.tokenize(input_str)
```

It may take a while, but look through the original repository to find the correct tokenizer function. Sometimes you even have to adapt the tokenizer in the original repository so that it outputs the `input_ids`. At that point, an analogous script in 🤗 Transformers is needed:

```python
from transformers import BrandNewBertTokenizer

input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words."

tokenizer = BrandNewBertTokenizer.from_pretrained("/path/to/tokenizer/folder/")

input_ids = tokenizer(input_str).input_ids
```

When both scripts return the same `input_ids`, a tokenizer test file should also be added. The tokenizer test file of *brand_new_bert* should contain a couple of hard-coded integration tests.

**10. Run end-to-end integration tests**

Now that you have the tokenizer, you should add end-to-end integration tests for the whole workflow in `tests/test_modeling_brand_new_bert.py` in 🤗 Transformers. These tests should show, on a meaningful text-to-text sample, that the 🤗 Transformers implementation works as expected. A meaningful sample can be, *e.g.*, a source-to-target translation pair, an article-to-summary pair, or a question-to-answer pair, and so on. If none of the ported checkpoints has been fine-tuned on such a downstream task, it is enough to rely on the model tests.
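As a rough illustration, such an end-to-end test could look like the sketch below; the checkpoint name, the task (summarization is assumed here), and the expected string are placeholders you would fill in once your port produces verified outputs:

```python
import unittest

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer


class BrandNewBertEndToEndTests(unittest.TestCase):
    @unittest.skip("placeholder checkpoint - fill in once the port is verified")
    def test_summarization(self):
        # Hypothetical checkpoint name for the converted model.
        tokenizer = AutoTokenizer.from_pretrained("author/brand_new_bert")
        model = AutoModelForSeq2SeqLM.from_pretrained("author/brand_new_bert")
        model.eval()

        inputs = tokenizer("A long article to summarize ...", return_tensors="pt")
        with torch.no_grad():
            generated = model.generate(**inputs, max_length=20)

        # Hard-code the expected summary after verifying it by hand.
        expected = "a short reference summary"
        self.assertEqual(tokenizer.decode(generated[0], skip_special_tokens=True), expected)
```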
As a final step, make absolutely sure the model is fully functional; we also advise running the integration tests on GPU. It can happen that you forgot to add a `.to(self.device)` somewhere, which would make such a test fail. If you have no access to a GPU, the Hugging Face team can run those tests for you.

**11. Add a docstring**

You are almost done! The last thing to add is a nice docstring and a doc page. The Cookiecutter should have added a template file called `docs/source/model_doc/brand_new_bert.rst` that you should fill in. Users of your model will usually first look at this page before using it, so the documentation has to be clear and concise. It is also very useful for the community to add some *Tips* showing how the model should be used. Do not hesitate to ask the Hugging Face team about the docstrings.

Next, make sure the docstring added to `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` is correct and includes all the necessary inputs and outputs. We have a detailed guide about writing documentation and docstring formats.

**Refactor your code**

Great, now you have added all the necessary code for *brand_new_bert*. At this point, correct possible code style issues by running:

```bash
make style
```

and verify that the code passes the quality checks:

```bash
make quality
```

Sometimes information missing from a docstring or a wrong name will make the checks above fail. Once again: feel free to ask the Hugging Face team, we will be glad to help you.

Lastly, it is always a good idea to refactor your code once it is working. You are done with coding now, congratulations! 🎉 You are awesome! 😎

**12. Upload the model to the model hub**

In this last part, you should convert and upload the model, with all of its checkpoints, to the model hub and add a model card for each uploaded checkpoint. Read our [Model sharing and uploading Page](model_sharing) to get familiar with the hub. Here you will usually work hand-in-hand with the Hugging Face team to decide on a fitting name for each checkpoint and to get the required access rights to upload the model under the organization of the *brand_new_bert* author. The `push_to_hub` method, present in all `transformers` models, is a quick and painless way to push your checkpoint to the hub:

```python
brand_new_bert.push_to_hub(
    repo_path_or_name="brand_new_bert",
    # Uncomment the following line to push to an organization
    # organization="<ORGANIZATION>",
    commit_message="Add model",
    use_temp_dir=True,
)
```

It is worth spending some time creating a fitting model card for each checkpoint. The model cards should highlight the specific characteristics of the checkpoint, *e.g.* on which dataset the checkpoint was pretrained or fine-tuned, and for which downstream tasks the model should be used. It is also good practice to include some code showing how to use the model correctly.

**13. (Optional) Add a notebook**

It is very helpful to add a notebook that shows in detail how *brand_new_bert* can be used for inference and/or fine-tuning on a downstream task. This is not mandatory for your PR, but very useful for the community.

**14. Submit your PR**

The very last step! That is, merging your PR into main.
Usually, the Hugging Face team will have already helped you at this point, but it is worth taking some time to give your finished PR a clean description and comments in the code.

### Share your work!!

Now it's time to get some credit from the community for your work! Completing a model addition is a major contribution to Transformers and the whole NLP community. Your code and the ported pretrained models will certainly be used by hundreds, possibly thousands, of developers and researchers. Be proud of your work and share your achievement with the community :)

**You have created another model that is super easy to use for everyone in the community! 🤯**
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/run_scripts.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Train with a script

Along with the 🤗 Transformers [notebooks](./notebooks/README), there are also example scripts demonstrating how to train a model for a task with [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch), [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow), or [JAX/Flax](https://github.com/huggingface/transformers/tree/main/examples/flax).

You will also find scripts we used in our [research projects](https://github.com/huggingface/transformers/tree/main/examples/research_projects) and [legacy examples](https://github.com/huggingface/transformers/tree/main/examples/legacy), which are mostly community-contributed. These scripts are not actively maintained and require a specific version of 🤗 Transformers that will most likely be incompatible with the latest version of the library.

The example scripts are not expected to work out-of-the-box on every problem; you may need to adapt the script to your use case. To help you with this, most of the scripts fully expose how the data is preprocessed, allowing you to edit them as necessary for your use case.

For any feature you would like to implement in an example script, please discuss it on the [forum](https://discuss.huggingface.co/) or in an [issue](https://github.com/huggingface/transformers/issues) before submitting a Pull Request. While we welcome bug fixes, it is unlikely we will merge a PR that adds functionality at the cost of readability.

This guide will show you how to run an example summarization training script in [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) and [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization). All examples are expected to work with both frameworks unless otherwise specified.

## Setup

To successfully run the latest version of the example scripts, you have to **install 🤗 Transformers from source** in a new virtual environment:

```bash
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
```

For older versions of the example scripts, click on the toggle below:

<details>
  <summary>Examples for older versions of 🤗 Transformers</summary>
	<ul>
		<li><a href="https://github.com/huggingface/transformers/tree/v4.5.1/examples">v4.5.1</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v4.4.2/examples">v4.4.2</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v4.3.3/examples">v4.3.3</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v4.2.2/examples">v4.2.2</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v4.1.1/examples">v4.1.1</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v4.0.1/examples">v4.0.1</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v3.5.1/examples">v3.5.1</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v3.4.0/examples">v3.4.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v3.3.1/examples">v3.3.1</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v3.2.0/examples">v3.2.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v3.1.0/examples">v3.1.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v3.0.2/examples">v3.0.2</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.11.0/examples">v2.11.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.10.0/examples">v2.10.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.9.1/examples">v2.9.1</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.8.0/examples">v2.8.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.7.0/examples">v2.7.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.6.0/examples">v2.6.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.5.1/examples">v2.5.1</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.4.0/examples">v2.4.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.3.0/examples">v2.3.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.2.0/examples">v2.2.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.1.0/examples">v2.1.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v2.0.0/examples">v2.0.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v1.2.0/examples">v1.2.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v1.1.0/examples">v1.1.0</a></li>
		<li><a href="https://github.com/huggingface/transformers/tree/v1.0.0/examples">v1.0.0</a></li>
	</ul>
</details>

Then switch your current clone of 🤗 Transformers to a specific version, *e.g.* v3.5.1:

```bash
git checkout tags/v3.5.1
```

After you have set up the correct library version, navigate to the example folder of your choice and install the example-specific requirements:

```bash
pip install -r requirements.txt
```

## Run a script

<frameworkcontent>
<pt>
The example script downloads and preprocesses a dataset from the 🤗 [Datasets](https://huggingface.co/docs/datasets/) library. Then the script fine-tunes a model on the dataset with the [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer), using an architecture that supports summarization.
The following example shows how to fine-tune [T5-small](https://huggingface.co/t5-small) on the [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) dataset. The T5 model requires an additional `source_prefix` argument due to how it was trained. This prompt lets T5 know this is a summarization task.

```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
</pt>
<tf>
The example script downloads and preprocesses a dataset from the 🤗 [Datasets](https://huggingface.co/docs/datasets/) library. Then the script fine-tunes a model on the dataset using Keras, with an architecture that supports summarization. The following example shows how to fine-tune [T5-small](https://huggingface.co/t5-small) on the [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) dataset. The T5 model requires an additional `source_prefix` argument due to how it was trained. This prompt lets T5 know this is a summarization task.

```bash
python examples/tensorflow/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 16 \
    --num_train_epochs 3 \
    --do_train \
    --do_eval
```
</tf>
</frameworkcontent>

## Distributed training and mixed precision

The [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) supports distributed training and mixed precision, which means you can also use them in a script. To enable both features:

- Add the `fp16` argument to enable mixed precision.
- Set the number of GPUs to use with the `nproc_per_node` argument.

```bash
torchrun \
    --nproc_per_node 8 pytorch/summarization/run_summarization.py \
    --fp16 \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

TensorFlow scripts use a [`MirroredStrategy`](https://www.tensorflow.org/guide/distributed_training#mirroredstrategy) for distributed training, and you do not need to add any additional arguments to the training script. The TensorFlow script will use multiple GPUs by default if they are available.

## Run a script on a TPU

<frameworkcontent>
<pt>
Tensor Processing Units (TPUs) are specifically designed to accelerate performance. PyTorch supports TPUs with the [XLA](https://www.tensorflow.org/xla) deep learning compiler (see [here](https://github.com/pytorch/xla/blob/master/README.md) for more details). To use a TPU, launch the `xla_spawn.py` script and use the `num_cores` argument to set the number of TPU cores you want to use.
```bash
python xla_spawn.py --num_cores 8 \
    summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
</pt>
<tf>
Tensor Processing Units (TPUs) are specifically designed to accelerate performance. TensorFlow scripts use a [`TPUStrategy`](https://www.tensorflow.org/guide/distributed_training#tpustrategy) to train on TPUs. To use a TPU, pass the name of the TPU resource to the `tpu` argument.

```bash
python run_summarization.py \
    --tpu name_of_tpu_resource \
    --model_name_or_path t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 16 \
    --num_train_epochs 3 \
    --do_train \
    --do_eval
```
</tf>
</frameworkcontent>

## Run a script with 🤗 Accelerate

🤗 [Accelerate](https://huggingface.co/docs/accelerate) is a PyTorch-only library that offers a unified method for training a model on several types of setups (CPU-only, multiple GPUs, TPUs) while maintaining complete visibility into the PyTorch training loop. Make sure you have 🤗 Accelerate installed if you do not already have it:

> Note: since Accelerate is rapidly developing, the git version must be installed to run the scripts:

```bash
pip install git+https://github.com/huggingface/accelerate
```

Instead of the `run_summarization.py` script, you need to use the `run_summarization_no_trainer.py` script. 🤗 Accelerate-supported scripts have a `task_no_trainer.py` file in their folder. To begin, run the following command to create and save a configuration file:

```bash
accelerate config
```

Test your setup to make sure it is configured correctly:

```bash
accelerate test
```

Now you are ready to launch the training:

```bash
accelerate launch run_summarization_no_trainer.py \
    --model_name_or_path t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir ~/tmp/tst-summarization
```

## Use a custom dataset

The summarization script supports custom datasets as long as they are CSV or JSON Lines files. When you use your own dataset, you need to specify several additional arguments:

- `train_file` and `validation_file` specify the paths to your training and validation files.
- `text_column` is the input text column to summarize.
- `summary_column` is the target text column to output.
A summarization script using a custom dataset would look like this:

```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --train_file path_to_csv_or_jsonlines_file \
    --validation_file path_to_csv_or_jsonlines_file \
    --text_column text_column_name \
    --summary_column summary_column_name \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --overwrite_output_dir \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --predict_with_generate
```

## Test a script

It is often a good idea to run your script on a smaller number of dataset examples to make sure everything works as expected, before running it on the whole dataset, which could take hours to complete. Use the following arguments to truncate the dataset to a maximum number of samples:

- `max_train_samples`
- `max_eval_samples`
- `max_predict_samples`

```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --max_train_samples 50 \
    --max_eval_samples 50 \
    --max_predict_samples 50 \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

Not all example scripts support the `max_predict_samples` argument. If you are not sure whether your script supports it, add the `-h` argument to check:

```bash
examples/pytorch/summarization/run_summarization.py -h
```

## Resume training from a checkpoint

Another helpful option is to resume training from a previous checkpoint. This ensures you can pick up where you left off without starting over if your training gets interrupted. There are two methods to resume training from a checkpoint.

The first method uses the `output_dir previous_output_dir` argument to resume training from the latest checkpoint stored in `output_dir`. In this case, you should remove `overwrite_output_dir`:

```bash
python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --output_dir previous_output_dir \
    --predict_with_generate
```

The second method uses the `resume_from_checkpoint path_to_specific_checkpoint` argument to resume training from a specific checkpoint folder:

```bash
python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --resume_from_checkpoint path_to_specific_checkpoint \
    --predict_with_generate
```

## Share your model

All scripts can upload your final model to the [Model Hub](https://huggingface.co/models). Make sure you are logged into Hugging Face before you begin:

```bash
huggingface-cli login
```

Then add the `push_to_hub` argument to the script.
Questo argomento consentirร  di creare un repository con il tuo username Hugging Face e la cartella specificata in `output_dir`. Per dare uno specifico nome al repository, usa l'argomento `push_to_hub_model_id`. Il repository verrร  automaticamente elencata sotto al tuo namespace. Il seguente esempio mostra come caricare un modello specificando il nome del repository: ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --push_to_hub \ --push_to_hub_model_id finetuned-t5-cnn_dailymail \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ```
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/perf_infer_special.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Inference on Specialized Hardware

This document will be completed soon with documentation for inference on specialized hardware. In the meantime, you can check out [the guide for inference on CPUs](perf_infer_cpu).
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/_config.py
# docstyle-ignore
INSTALL_CONTENT = """
# Transformers installation
! pip install transformers datasets
# To install from source instead of the last release, comment the command above and uncomment the following one.
# ! pip install git+https://github.com/huggingface/transformers.git
"""

notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}]
black_avoid_patterns = {
    "{processor_class}": "FakeProcessorClass",
    "{model_class}": "FakeModelClass",
    "{object_class}": "FakeObjectClass",
}
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/serialization.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Export 🤗 Transformers models

If you need to deploy 🤗 Transformers models in production environments, we recommend exporting them to a serialized format that can be loaded and executed on specialized runtimes and hardware. In this guide, we will show you how to export 🤗 Transformers models in two widely used formats: ONNX and TorchScript.

Once exported, a model can be optimized for inference via techniques such as quantization and pruning. If you are interested in optimizing your models to run with maximum efficiency, check out the [🤗 Optimum library](https://github.com/huggingface/optimum).

## ONNX

The [ONNX (Open Neural Network eXchange)](http://onnx.ai) project is an open standard that defines a common set of operators and a common file format to represent deep learning models in a wide variety of frameworks, including PyTorch and TensorFlow. When a model is exported to the ONNX format, these operators are used to construct a computational graph (often called an _intermediate representation_) which represents the flow of data through the neural network.

By exposing a graph with standardized operators and data types, ONNX makes it easy to switch between frameworks. For example, a model trained in PyTorch can be exported to ONNX format and then imported in TensorFlow (and vice versa).

🤗 Transformers provides a `transformers.onnx` package that enables you to convert model checkpoints to an ONNX graph by leveraging configuration objects. These configuration objects come ready-made for a number of model architectures, and are designed to be easily extendable to other architectures.

Ready-made configurations include the following architectures:

<!--This table is automatically generated by `make fix-copies`, do not fill manually!-->

- ALBERT
- BART
- BEiT
- BERT
- BigBird
- BigBird-Pegasus
- Blenderbot
- BlenderbotSmall
- CamemBERT
- ConvBERT
- Data2VecText
- Data2VecVision
- DeiT
- DistilBERT
- ELECTRA
- FlauBERT
- GPT Neo
- GPT-J
- I-BERT
- LayoutLM
- M2M100
- Marian
- mBART
- MobileBERT
- OpenAI GPT-2
- Perceiver
- PLBart
- RoBERTa
- RoFormer
- SqueezeBERT
- T5
- ViT
- XLM
- XLM-RoBERTa
- XLM-RoBERTa-XL

In the next two sections, we will show you how to:

* Export a supported model using the `transformers.onnx` package.
* Export a custom model for an unsupported architecture.
### Exporting a model to ONNX

To export a 🤗 Transformers model to ONNX, you will first need to install some extra dependencies:

```bash
pip install transformers[onnx]
```

The `transformers.onnx` package can then be used as a Python module:

```bash
python -m transformers.onnx --help

usage: Hugging Face Transformers ONNX exporter [-h] -m MODEL [--feature {causal-lm, ...}] [--opset OPSET] [--atol ATOL] output

positional arguments:
  output                Path indicating where to store generated ONNX model.

optional arguments:
  -h, --help            show this help message and exit
  -m MODEL, --model MODEL
                        Model ID on huggingface.co or path on disk to load model from.
  --feature {causal-lm, ...}
                        The type of features to export the model with.
  --opset OPSET         ONNX opset version to export the model with.
  --atol ATOL           Absolute difference tolerance when validating the model.
```

Exporting a checkpoint using a ready-made configuration can be done as follows:

```bash
python -m transformers.onnx --model=distilbert-base-uncased onnx/
```

which should show the following logs:

```bash
Validating ONNX model...
        -[✓] ONNX model output names match reference model ({'last_hidden_state'})
        - Validating ONNX Model output "last_hidden_state":
                -[✓] (2, 8, 768) matches (2, 8, 768)
                -[✓] all values close (atol: 1e-05)
All good, model saved at: onnx/model.onnx
```

This exports an ONNX graph of the checkpoint defined by the `--model` argument. In this example it is `distilbert-base-uncased`, but it can be any checkpoint on the Hugging Face Hub or one that is stored locally.

The resulting `model.onnx` file can then be run on one of the [many accelerators](https://onnx.ai/supported-tools.html#deployModel) that support the ONNX standard. For example, we can load and run the model with [ONNX Runtime](https://onnxruntime.ai/) as follows:

```python
>>> from transformers import AutoTokenizer
>>> from onnxruntime import InferenceSession

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> session = InferenceSession("onnx/model.onnx")
>>> # ONNX Runtime expects NumPy arrays as input
>>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
>>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```

The required output names (i.e. `["last_hidden_state"]`) can be obtained by taking a look at the ONNX configuration of each model. For example, for DistilBERT we have:

```python
>>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig

>>> config = DistilBertConfig()
>>> onnx_config = DistilBertOnnxConfig(config)
>>> print(list(onnx_config.outputs.keys()))
["last_hidden_state"]
```

The process is identical for TensorFlow checkpoints on the Hub. For example, we can export a pure TensorFlow checkpoint from the [Keras organization](https://huggingface.co/keras-io) as follows:

```bash
python -m transformers.onnx --model=keras-io/transformers-qa onnx/
```

To export a model that is stored locally, you need to have the model's weights and tokenizer files stored in a directory.
For example, we can load and save a checkpoint as follows:

<frameworkcontent>
<pt>
```python
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> # Load tokenizer and PyTorch weights form the Hub
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> pt_model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
>>> # Save to disk
>>> tokenizer.save_pretrained("local-pt-checkpoint")
>>> pt_model.save_pretrained("local-pt-checkpoint")
```

Once the checkpoint is saved, we can export it to ONNX by pointing the `--model` argument of the `transformers.onnx` package to the desired directory:

```bash
python -m transformers.onnx --model=local-pt-checkpoint onnx/
```
</pt>
<tf>
```python
>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

>>> # Load tokenizer and TensorFlow weights from the Hub
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
>>> # Save to disk
>>> tokenizer.save_pretrained("local-tf-checkpoint")
>>> tf_model.save_pretrained("local-tf-checkpoint")
```

Once the checkpoint is saved, we can export it to ONNX by pointing the `--model` argument of the `transformers.onnx` package to the desired directory:

```bash
python -m transformers.onnx --model=local-tf-checkpoint onnx/
```
</tf>
</frameworkcontent>

### Selecting features for different model topologies

Each ready-made configuration comes with a set of _features_ that enable you to export models for different types of topologies or tasks. As shown in the table below, each feature is associated with a different auto class:

| Feature                              | Auto Class                           |
| ------------------------------------ | ------------------------------------ |
| `causal-lm`, `causal-lm-with-past`   | `AutoModelForCausalLM`               |
| `default`, `default-with-past`       | `AutoModel`                          |
| `masked-lm`                          | `AutoModelForMaskedLM`               |
| `question-answering`                 | `AutoModelForQuestionAnswering`      |
| `seq2seq-lm`, `seq2seq-lm-with-past` | `AutoModelForSeq2SeqLM`              |
| `sequence-classification`            | `AutoModelForSequenceClassification` |
| `token-classification`               | `AutoModelForTokenClassification`    |

For each configuration, you can find the list of supported features via the `FeaturesManager`. For example, for DistilBERT we have:

```python
>>> from transformers.onnx.features import FeaturesManager

>>> distilbert_features = list(FeaturesManager.get_supported_features_for_model_type("distilbert").keys())
>>> print(distilbert_features)
["default", "masked-lm", "causal-lm", "sequence-classification", "token-classification", "question-answering"]
```

You can then pass one of these features to the `--feature` argument of the `transformers.onnx` package. For example, to export a text-classification model we can pick a fine-tuned model from the Hub and run:

```bash
python -m transformers.onnx --model=distilbert-base-uncased-finetuned-sst-2-english \
                            --feature=sequence-classification onnx/
```

which will display the following logs:
-[โœ“] ONNX model output names match reference model ({'logits'}) - Validating ONNX Model output "logits": -[โœ“] (2, 2) matches (2, 2) -[โœ“] all values close (atol: 1e-05) All good, model saved at: onnx/model.onnx ``` Puoi notare che in questo caso, i nomi di output del modello ottimizzato sono `logits` invece di `last_hidden_state` che abbiamo visto con il checkpoint `distilbert-base-uncased` precedente. Questo รจ previsto dal modello ottimizato visto che ha una testa di e. <Tip> Le caratteristiche che hanno un suffisso `wtih-past` (ad es. `causal-lm-with-past`) corrispondono a topologie di modello con stati nascosti precalcolati (chiave e valori nei blocchi di attenzione) che possono essere utilizzati per la decodifica autoregressiva veloce. </Tip> ### Esportazione di un modello per un'architettura non supportata Se desideri esportare un modello la cui architettura non รจ nativamente supportata dalla libreria, ci sono tre passaggi principali da seguire: 1. Implementare una configurazione ONNX personalizzata. 2. Esportare il modello in ONNX. 3. Convalidare gli output di PyTorch e dei modelli esportati. In questa sezione, vedremo come DistilBERT รจ stato implementato per mostrare cosa รจ coinvolto in ogni passaggio. #### Implementazione di una configurazione ONNX personalizzata Iniziamo con l'oggetto di configurazione ONNX. Forniamo tre classi astratte da cui ereditare, a seconda del tipo di archittettura del modello che desideri esportare: * I modelli basati su encoder ereditano da [`~onnx.config.OnnxConfig`] * I modelli basati su decoder ereditano da [`~onnx.config.OnnxConfigWithPast`] * I modelli encoder-decoder ereditano da[`~onnx.config.OnnxSeq2SeqConfigWithPast`] <Tip> Un buon modo per implementare una configurazione ONNX personalizzata รจ guardare l'implementazione esistente nel file `configuration_<model_name>.py` di un'architettura simile. </Tip> Poichรฉ DistilBERT รจ un modello basato su encoder, la sua configurazione eredita da `OnnxConfig`: ```python >>> from typing import Mapping, OrderedDict >>> from transformers.onnx import OnnxConfig >>> class DistilBertOnnxConfig(OnnxConfig): ... @property ... def inputs(self) -> Mapping[str, Mapping[int, str]]: ... return OrderedDict( ... [ ... ("input_ids", {0: "batch", 1: "sequence"}), ... ("attention_mask", {0: "batch", 1: "sequence"}), ... ] ... ) ``` Ogni oggetto di configurazione deve implementare la proprietร  `inputs` e restituire una mappatura, dove ogni chiave corrisponde a un input previsto e ogni valore indica l'asse di quell'input. Per DistilBERT, possiamo vedere che sono richiesti due input: `input_ids` e `attention_mask`. Questi inputs hanno la stessa forma di `(batch_size, sequence_length)` per questo motivo vediamo gli stessi assi usati nella configurazione. <Tip> Puoi notare che la proprietร  `inputs` per `DistilBertOnnxConfig` restituisce un `OrdinatoDict`. Ciรฒ garantisce che gli input corrispondano alla loro posizione relativa all'interno del metodo `PreTrainedModel.forward()` durante il tracciamento del grafico. Raccomandiamo di usare un `OrderedDict` per le proprietร  `inputs` e `outputs` quando si implementano configurazioni ONNX personalizzate. </Tip> Dopo aver implementato una configurazione ONNX, รจ possibile istanziarla fornendo alla configurazione del modello base come segue: ```python >>> from transformers import AutoConfig >>> config = AutoConfig.from_pretrained("distilbert-base-uncased") >>> onnx_config = DistilBertOnnxConfig(config) ``` L'oggetto risultante ha diverse proprietร  utili. 
Ad esempio รจ possibile visualizzare il Set operatore ONNX che verrร  utilizzato durante l'esportazione: ```python >>> print(onnx_config.default_onnx_opset) 11 ``` รˆ inoltre possibile visualizzare gli output associati al modello come segue: ```python >>> print(onnx_config.outputs) OrderedDict([("last_hidden_state", {0: "batch", 1: "sequence"})]) ``` Puoi notare che la proprietร  degli output segue la stessa struttura degli input; esso restituisce un `OrderedDict` di output con nome e le loro forme. La struttura di output รจ legato alla scelta della funzione con cui viene inizializzata la configurazione. Per impostazione predefinita, la configurazione ONNX viene inizializzata con la funzione 'predefinita' che corrisponde all'esportazione di un modello caricato con la classe `AutoModel`. Se tu desideri esportare una topologia di modello diversa, รจ sufficiente fornire una funzionalitร  diversa a l'argomento `task` quando inizializzi la configurazione ONNX. Ad esempio, se volevamo esportare DistilBERT con una testa di classificazione per sequenze, potremmo usare: ```python >>> from transformers import AutoConfig >>> config = AutoConfig.from_pretrained("distilbert-base-uncased") >>> onnx_config_for_seq_clf = DistilBertOnnxConfig(config, task="sequence-classification") >>> print(onnx_config_for_seq_clf.outputs) OrderedDict([('logits', {0: 'batch'})]) ``` <Tip> Tutte le proprietร  e i metodi di base associati a [`~onnx.config.OnnxConfig`] e le altre classi di configurazione possono essere sovrascritte se necessario. Guarda [`BartOnnxConfig`] per un esempio avanzato. </Tip> #### Esportazione del modello Una volta implementata la configurazione ONNX, il passaggio successivo consiste nell'esportare il modello. Qui possiamo usare la funzione `export()` fornita dal pacchetto `transformers.onnx`. Questa funzione prevede la configurazione ONNX, insieme con il modello base e il tokenizer e il percorso per salvare il file esportato: ```python >>> from pathlib import Path >>> from transformers.onnx import export >>> from transformers import AutoTokenizer, AutoModel >>> onnx_path = Path("model.onnx") >>> model_ckpt = "distilbert-base-uncased" >>> base_model = AutoModel.from_pretrained(model_ckpt) >>> tokenizer = AutoTokenizer.from_pretrained(model_ckpt) >>> onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path) ``` Gli `onnx_inputs` e `onnx_outputs` restituiti dalla funzione `export()` sono liste di chiavi definite nelle proprietร  di `input` e `output` della configurazione. Una volta esportato il modello, puoi verificare che il modello sia ben formato come segue: ```python >>> import onnx >>> onnx_model = onnx.load("model.onnx") >>> onnx.checker.check_model(onnx_model) ``` <Tip> Se il tuo modello รจ piรน largo di 2 GB, vedrai che molti file aggiuntivi sono creati durante l'esportazione. Questo รจ _previsto_ perchรฉ ONNX utilizza [Protocol Buffer](https://developers.google.com/protocol-buffers/) per memorizzare il modello e questi hanno un limite di dimensione 2 GB. Vedi la [Documentazione ONNX](https://github.com/onnx/onnx/blob/master/docs/ExternalData.md) per istruzioni su come caricare modelli con dati esterni. </Tip> #### Convalida degli output del modello Il passaggio finale consiste nel convalidare gli output dal modello di base e quello esportato corrispondere entro una soglia di tolleranza assoluta. 
Here we can use the `validate_model_outputs()` function provided by the `transformers.onnx` package as follows:

```python
>>> from transformers.onnx import validate_model_outputs

>>> validate_model_outputs(
...     onnx_config, tokenizer, base_model, onnx_path, onnx_outputs, onnx_config.atol_for_validation
... )
```

This function uses the `OnnxConfig.generate_dummy_inputs()` method to generate inputs for the base and exported model, and the absolute tolerance can be defined in the configuration. We generally find numerical agreement in the 1e-6 to 1e-4 range, although anything smaller than 1e-3 is likely to be fine.

### Contributing a new configuration to 🤗 Transformers

We are looking to expand the set of ready-made configurations and welcome contributions from the community! If you would like to contribute your addition to the library, you will need to:

* Implement the ONNX configuration in the corresponding `configuration_<model_name>.py` file
* Include the model architecture and corresponding features in [`~onnx.features.FeaturesManager`]
* Add your model architecture to the tests in `test_onnx_v2.py`

Check out how the configuration for [IBERT](https://github.com/huggingface/transformers/pull/14868/files) was contributed to get an idea of what is involved.

## TorchScript

<Tip>

This is the start of our experiments with TorchScript, and we are still exploring its capabilities with variable-input-size models. It is a focus of interest for us, and we will deepen our analysis in upcoming releases, with more code examples, a more flexible implementation, and benchmarks comparing Python-based code with compiled TorchScript.

</Tip>

According to the PyTorch documentation: "TorchScript is a way to create serializable and optimizable models from PyTorch code". PyTorch's two modules, [JIT and TRACE](https://pytorch.org/docs/stable/jit.html), allow developers to export their model to be reused in other programs, such as efficiency-oriented C++ programs.

We provide an interface that allows you to export 🤗 Transformers models to TorchScript so they can be reused in a different environment than a PyTorch-based Python program. Here we explain how to export and use our models with TorchScript.

Exporting a model requires two things:

- A forward pass with dummy inputs.
- Model instantiation with the `torchscript` flag.

These requirements imply several things developers should be careful about, detailed below.

### TorchScript flag and tied weights

This flag is necessary because most of the language models in this repository have weights tied between their `Embedding` layer and their `Decoding` layer. TorchScript does not allow exporting models with tied weights, so it is necessary to untie and clone the weights beforehand.

This implies that models instantiated with the `torchscript` flag have their `Embedding` layer and `Decoding` layer separated, which means they should not be trained afterwards. Training would de-synchronize the two layers, leading to unexpected results.

This is not the case for models that do not have a language model head, since those do not have tied weights. These models can be safely exported without the `torchscript` flag.
### Dummy inputs and standard lengths

The dummy inputs are used for a model forward pass. While the input values are propagated through the layers, PyTorch keeps track of the different operations executed on each tensor. These recorded operations are then used to create the "trace" of the model.

The trace is created relative to the inputs' dimensions. It is therefore constrained by the dimensions of the dummy input, and will not work for any other sequence length or batch size. When trying with a different size, an error such as:

`The expanded size of the tensor (3) must match the existing size (7) at non-singleton dimension 2`

will be raised. It is therefore recommended to trace the model with a dummy input size at least as large as the largest input that will be fed to the model during inference. Padding can be performed to fill in the missing values. The model will be traced with a large input size, however, the dimensions of the various matrices will be large as well, resulting in more computation.

It is recommended to be careful about the total number of operations done on each input and to follow performance closely when exporting variable-sequence-length models.

### Using TorchScript in Python

Below is an example showing how to save and load models, as well as how to use the trace for inference.

#### Saving a model

This snippet shows how to use TorchScript to export a `BertModel`. Here the `BertModel` is instantiated according to a `BertConfig` class and then saved to disk under the filename `traced_bert.pt`

```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch

enc = BertTokenizer.from_pretrained("bert-base-uncased")

# Tokenizing input text
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = enc.tokenize(text)

# Masking one of the input tokens
masked_index = 8
tokenized_text[masked_index] = "[MASK]"
indexed_tokens = enc.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]

# Creating a dummy input
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
dummy_input = [tokens_tensor, segments_tensors]

# Initializing the model with the torchscript flag
# Flag set to True even though it is not necessary as this model does not have an LM Head.
config = BertConfig(
    vocab_size_or_config_json_file=32000,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
    torchscript=True,
)

# Instantiating the model
model = BertModel(config)

# The model needs to be in evaluation mode
model.eval()

# If you are instantiating the model with *from_pretrained* you can also easily set the TorchScript flag
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)

# Creating the trace
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
torch.jit.save(traced_model, "traced_bert.pt")
```

#### Loading a model

This snippet shows how to load the `BertModel` that was previously saved to disk under the name `traced_bert.pt`. We are reusing the previously initialized `dummy_input`.
```python
loaded_model = torch.jit.load("traced_bert.pt")
loaded_model.eval()

all_encoder_layers, pooled_output = loaded_model(*dummy_input)
```

#### Using a traced model for inference

Using the traced model for inference is as simple as using its `__call__` dunder method:

```python
traced_model(tokens_tensor, segments_tensors)
```

### Deploying HuggingFace TorchScript models on AWS using the Neuron SDK

AWS introduced the [Amazon EC2 Inf1](https://aws.amazon.com/ec2/instance-types/inf1/) instance family for low-cost, high-performance machine learning inference in the cloud. The Inf1 instances are powered by the AWS Inferentia chip, a custom-built hardware accelerator specializing in deep learning inference workloads. [AWS Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/#) is the SDK for Inferentia that supports tracing and optimizing transformers models for deployment on Inf1. The Neuron SDK provides:

1. An easy-to-use API with one line of code change to trace and optimize a TorchScript model for inference in the cloud.
2. Out-of-the-box performance optimizations for [improved cost-performance](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/benchmark/)
3. Support for HuggingFace transformers models built with either [PyTorch](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.html) or [TensorFlow](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/tensorflow/huggingface_bert/huggingface_bert.html).

#### Implications

Transformers models based on the [BERT (Bidirectional Encoder Representations from Transformers)](https://huggingface.co/docs/transformers/main/model_doc/bert) architecture, or its variants such as [distilBERT](https://huggingface.co/docs/transformers/main/model_doc/distilbert) and [roBERTa](https://huggingface.co/docs/transformers/main/model_doc/roberta), will run best on Inf1 for non-generative tasks such as extractive question answering, sequence classification, and token classification. Alternatively, text generation tasks can be adapted to run on Inf1 according to this [AWS Neuron MarianMT tutorial](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/transformers-marianmt.html). More information about models that can be converted out of the box on Inferentia can be found in the [Model Architecture Fit section of the Neuron documentation](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/models/models-inferentia.html#models-inferentia).

#### Dependencies

Using AWS Neuron to convert models requires the following dependencies and environment:

* A [Neuron SDK environment](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-frameworks/pytorch-neuron/index.html#installation-guide), which comes pre-configured on the [AWS Deep Learning AMI](https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-inferentia-launching.html).

#### Converting a model for AWS Neuron

Using the same script as in [Using TorchScript in Python](https://huggingface.co/docs/transformers/main/en/serialization#using-torchscript-in-python) to trace a `BertModel`, import the `torch.neuron` framework extension to access the components of the Neuron SDK through a Python API.
```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch
import torch.neuron
```

Then modify only the tracing line of code.

From:

```python
torch.jit.trace(model, [tokens_tensor, segments_tensors])
```

To:

```python
torch.neuron.trace(model, [tokens_tensor, segments_tensors])
```

This change enables the Neuron SDK to trace the model and optimize it to run on Inf1 instances.

To learn more about AWS Neuron SDK features, tools, example tutorials, and the latest updates, please see the [AWS Neuron SDK documentation](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/index.html).
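For completeness, here is a rough sketch of tracing and saving a Neuron-optimized model for later deployment. It assumes an Inf1 instance with the Neuron SDK installed and reuses `model`, `tokens_tensor` and `segments_tensors` from the `BertModel` example above; treat it as an illustration rather than a verified recipe:

```python
import torch
import torch.neuron

# Trace with the Neuron compiler (a one-line change from torch.jit.trace)
model_neuron = torch.neuron.trace(model, [tokens_tensor, segments_tensors])

# The result behaves like a TorchScript module, so it can be saved and reloaded as usual
model_neuron.save("bert_neuron.pt")
loaded = torch.jit.load("bert_neuron.pt")
```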
hf_public_repos/transformers/docs/source/it/converting_tensorflow_models.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Converting TensorFlow checkpoints

A command-line interface is available to convert original Bert/GPT/GPT-2/Transformer-XL/XLNet/XLM checkpoints into models that can be loaded using the library's `from_pretrained` methods.

<Tip>

As of version 2.3.0, the conversion script is part of the transformers CLI (**transformers-cli**), available in any installation of transformers >= 2.3.0. The following documentation reflects the **transformers-cli convert** command format.

</Tip>

## BERT

You can convert any BERT TensorFlow checkpoint (in particular [the pre-trained models released by Google](https://github.com/google-research/bert#pre-trained-models)) into a PyTorch save file using the [convert_bert_original_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/transformers/tree/main/src/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py) script.

This CLI takes as input a TensorFlow checkpoint (three files starting with `bert_model.ckpt`) and the associated configuration file (`bert_config.json`), creates a PyTorch model for this configuration, loads the weights from the TensorFlow checkpoint into the PyTorch model, and saves the resulting model in a standard PyTorch save file that can be imported using `from_pretrained()` (see the example in the [quicktour](quicktour), [run_glue.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification/run_glue.py)).

You only need to run this conversion script **once** to get a PyTorch model. You can then disregard the TensorFlow checkpoint (the three files starting with `bert_model.ckpt`), but be sure to keep the configuration file (`bert_config.json`) and the vocabulary file (`vocab.txt`), as these are also needed for the PyTorch model.

To run this specific conversion script you will need to have both TensorFlow and PyTorch installed (`pip install tensorflow`). The rest of the repository only requires PyTorch.

Here is an example of the conversion process for a pre-trained `BERT-Base Uncased` model:

```bash
export BERT_BASE_DIR=/path/to/bert/uncased_L-12_H-768_A-12

transformers-cli convert --model_type bert \
  --tf_checkpoint $BERT_BASE_DIR/bert_model.ckpt \
  --config $BERT_BASE_DIR/bert_config.json \
  --pytorch_dump_output $BERT_BASE_DIR/pytorch_model.bin
```

You can download Google's pre-trained models for the conversion [here](https://github.com/google-research/bert#pre-trained-models).
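Once the conversion has run, a quick sketch of loading the result might look like the following. Note the assumption here: `from_pretrained()` looks for a `config.json`, so this presumes you have copied or renamed `bert_config.json` accordingly inside the checkpoint directory:

```python
from transformers import BertModel

# Assumes the $BERT_BASE_DIR from the example above, with bert_config.json
# copied/renamed to config.json alongside pytorch_model.bin and vocab.txt
model = BertModel.from_pretrained("/path/to/bert/uncased_L-12_H-768_A-12")
```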
## ALBERT

For the ALBERT model, convert TensorFlow checkpoints to PyTorch using the [convert_albert_original_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/transformers/tree/main/src/transformers/models/albert/convert_albert_original_tf_checkpoint_to_pytorch.py) script.

The CLI takes as input a TensorFlow checkpoint (three files starting with `model.ckpt-best`) and the accompanying configuration file (`albert_config.json`), then creates and saves a PyTorch model. To run this conversion you will need to have TensorFlow and PyTorch installed.

Here is an example of the conversion process for a pre-trained `ALBERT Base` model:

```bash
export ALBERT_BASE_DIR=/path/to/albert/albert_base

transformers-cli convert --model_type albert \
  --tf_checkpoint $ALBERT_BASE_DIR/model.ckpt-best \
  --config $ALBERT_BASE_DIR/albert_config.json \
  --pytorch_dump_output $ALBERT_BASE_DIR/pytorch_model.bin
```

You can download Google's pre-trained models for the conversion [here](https://github.com/google-research/albert#pre-trained-models).

## OpenAI GPT

Here is an example of the conversion process for a pre-trained OpenAI GPT model, assuming that your NumPy checkpoint is saved in the same format as the OpenAI pre-trained models (see [here](https://github.com/openai/finetune-transformer-lm)):

```bash
export OPENAI_GPT_CHECKPOINT_FOLDER_PATH=/path/to/openai/pretrained/numpy/weights

transformers-cli convert --model_type gpt \
  --tf_checkpoint $OPENAI_GPT_CHECKPOINT_FOLDER_PATH \
  --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \
  [--config OPENAI_GPT_CONFIG] \
  [--finetuning_task_name OPENAI_GPT_FINETUNED_TASK]
```

## OpenAI GPT-2

Here is an example of the conversion process for a pre-trained OpenAI GPT-2 model (see [here](https://github.com/openai/gpt-2)):

```bash
export OPENAI_GPT2_CHECKPOINT_PATH=/path/to/gpt2/pretrained/weights

transformers-cli convert --model_type gpt2 \
  --tf_checkpoint $OPENAI_GPT2_CHECKPOINT_PATH \
  --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \
  [--config OPENAI_GPT2_CONFIG] \
  [--finetuning_task_name OPENAI_GPT2_FINETUNED_TASK]
```

## XLNet

Here is an example of the conversion process for a pre-trained XLNet model:

```bash
export XLNET_CHECKPOINT_PATH=/path/to/xlnet/checkpoint
export XLNET_CONFIG_PATH=/path/to/xlnet/config

transformers-cli convert --model_type xlnet \
  --tf_checkpoint $XLNET_CHECKPOINT_PATH \
  --config $XLNET_CONFIG_PATH \
  --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \
  [--finetuning_task_name XLNET_FINETUNED_TASK]
```

## XLM

Here is an example of the conversion process for a pre-trained XLM model:

```bash
export XLM_CHECKPOINT_PATH=/path/to/xlm/checkpoint

transformers-cli convert --model_type xlm \
  --tf_checkpoint $XLM_CHECKPOINT_PATH \
  --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \
  [--config XLM_CONFIG] \
  [--finetuning_task_name XLM_FINETUNED_TASK]
```

## T5

Here is an example of the conversion process for a pre-trained T5 model:

```bash
export T5=/path/to/t5/uncased_L-12_H-768_A-12

transformers-cli convert --model_type t5 \
  --tf_checkpoint $T5/t5_model.ckpt \
  --config $T5/t5_config.json \
  --pytorch_dump_output $T5/pytorch_model.bin
```
hf_public_repos/transformers/docs/source/it/autoclass_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Load pretrained instances with an AutoClass

With so many different Transformer architectures, it can be challenging to create one for your checkpoint. As part of 🤗 Transformers' core philosophy of making the library easy, simple and flexible to use, an `AutoClass` automatically infers and loads the correct architecture from a given checkpoint. The `from_pretrained` method lets you quickly load a pretrained model for any architecture, so you don't have to spend time and resources training a model from scratch. Producing this kind of checkpoint-agnostic code means that if your code works for one checkpoint, it will work for another checkpoint, as long as it was trained for a similar task, even if the architecture is different.

<Tip>

Remember, architecture refers to the skeleton of the model and checkpoint refers to the weights for a given architecture. For example, [BERT](https://huggingface.co/bert-base-uncased) is an architecture, while `bert-base-uncased` is a checkpoint. Model is a general term that can mean either architecture or checkpoint.

</Tip>

In this tutorial, you will learn how to:

* Load a pretrained tokenizer.
* Load a pretrained feature extractor.
* Load a pretrained processor.
* Load a pretrained model.

## AutoTokenizer

Nearly every NLP task begins with a tokenizer. A tokenizer converts your input into a format that can be processed by the model.

Load a tokenizer with [`AutoTokenizer.from_pretrained`]:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
```

Then tokenize your input as shown below:

```py
>>> sequenza = "In un buco nel terreno viveva uno Hobbit."
>>> print(tokenizer(sequenza))
{'input_ids': [0, 360, 51, 373, 587, 1718, 54644, 22597, 330, 3269, 2291, 22155, 18, 5, 2],
 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```

## AutoFeatureExtractor

For audio and vision tasks, a feature extractor processes the audio signal or image into the correct input format.

Load a feature extractor with [`AutoFeatureExtractor.from_pretrained`]:

```py
>>> from transformers import AutoFeatureExtractor

>>> feature_extractor = AutoFeatureExtractor.from_pretrained(
...     "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"
... )
```

## AutoProcessor

Multimodal tasks require a processor that combines the two types of preprocessing tools. For example, the [LayoutLMV2](model_doc/layoutlmv2) model requires a feature extractor to handle images and a tokenizer to handle text; a processor combines both of them.
Load a processor with [`AutoProcessor.from_pretrained`]:

```py
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased")
```

## AutoModel

<frameworkcontent>
<pt>
Finally, the `AutoModelFor` classes let you load a pretrained model for a given task (see [here](model_doc/auto) for a complete list of available tasks). For example, load a model for sequence classification with [`AutoModelForSequenceClassification.from_pretrained`]:

```py
>>> from transformers import AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```

Easily reuse the same checkpoint to load an architecture for a different task:

```py
>>> from transformers import AutoModelForTokenClassification

>>> model = AutoModelForTokenClassification.from_pretrained("distilbert-base-uncased")
```

Generally, we recommend using the `AutoTokenizer` class and the `AutoModelFor` class to load pretrained instances of models. This will ensure you load the correct architecture every time. In the next [tutorial](preprocessing), you will learn how to use the tokenizer, feature extractor and processor to preprocess a dataset for fine-tuning.
</pt>
<tf>
Finally, the `TFAutoModelFor` classes let you load a pretrained model for a given task (see [here](model_doc/auto) for a complete list of available tasks). For example, load a model for sequence classification with [`TFAutoModelForSequenceClassification.from_pretrained`]:

```py
>>> from transformers import TFAutoModelForSequenceClassification

>>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```

Easily reuse the same checkpoint to load an architecture for a different task:

```py
>>> from transformers import TFAutoModelForTokenClassification

>>> model = TFAutoModelForTokenClassification.from_pretrained("distilbert-base-uncased")
```

Generally, we recommend using the `AutoTokenizer` class and the `TFAutoModelFor` class to load pretrained instances of models. This will ensure you load the correct architecture every time. In the next [tutorial](preprocessing), you will learn how to use the tokenizer, feature extractor and processor to preprocess a dataset for fine-tuning.
</tf>
</frameworkcontent>
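To tie the pieces together, here is a small sketch (using the same example checkpoint as above) that loads a tokenizer and a model with the auto classes and runs a forward pass; note that the classification head of `distilbert-base-uncased` is randomly initialized here, so only the output shape is meaningful:

```py
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

>>> inputs = tokenizer("In un buco nel terreno viveva uno Hobbit.", return_tensors="pt")
>>> outputs = model(**inputs)
>>> outputs.logits.shape
torch.Size([1, 2])
```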
hf_public_repos/transformers/docs/source/it/perf_infer_cpu.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Efficient Inference on CPU

This guide focuses on inferencing large models efficiently on CPU.

## `BetterTransformer` for faster inference

We have recently integrated `BetterTransformer` for faster inference on CPU for text, image and audio models. Check the documentation about this integration [here](https://huggingface.co/docs/optimum/bettertransformer/overview) for more details.

## PyTorch JIT-mode (TorchScript)

TorchScript is a way to create serializable and optimizable models from PyTorch code. Any TorchScript program can be saved from a Python process and loaded in a process where there is no Python dependency. Compared with the default eager mode, jit mode in PyTorch normally yields better performance for model inference through optimization methodologies like operator fusion.

For a gentle introduction to TorchScript, see the [PyTorch TorchScript tutorial](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html#tracing-modules).

### IPEX Graph Optimization with JIT-mode

Intel® Extension for PyTorch provides further optimizations in jit mode for Transformers series models. We highly recommend users take advantage of Intel® Extension for PyTorch with jit mode. Some frequently used operator patterns from Transformers models are already supported in Intel® Extension for PyTorch with jit mode fusions. Those fusion patterns, such as Multi-head-attention fusion, Concat Linear, Linear + Add, Linear + Gelu, Add + LayerNorm fusion, etc., are enabled and perform well. The benefit of the fusions is delivered to users in a transparent fashion. According to the analysis, ~70% of the most popular NLP tasks in question-answering, text-classification, and token-classification can get performance benefits from these fusion patterns for both Float32 precision and BFloat16 mixed precision.

See more information in the [IPEX Graph Optimization documentation](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/graph_optimization.html).

#### Installing IPEX

IPEX releases follow PyTorch releases; check the different approaches in the [IPEX installation guide](https://intel.github.io/intel-extension-for-pytorch/).

### Using JIT-mode

To enable jit mode in the Trainer for evaluation or prediction, you need to add `jit_mode_eval` to the Trainer arguments.

<Tip warning={true}>

For PyTorch >= 1.14.0, JIT-mode can benefit any model for prediction and evaluation, since dict input is supported in jit.trace.

For PyTorch < 1.14.0, JIT-mode can benefit models whose forward parameter order matches the tuple input order in jit.trace, such as question-answering models.
In the case where the forward parameter order does not match the tuple input order in jit.trace, like text-classification models, jit.trace will fail, and we capture this with an exception to make it fall back. Logging is used to notify users.

</Tip>

You can find an example use case in [Transformers question-answering](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering)

- Inference using jit mode on CPU:

<pre>python run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
--no_cuda \
<b>--jit_mode_eval </b></pre>

- Inference with IPEX using jit mode on CPU:

<pre>python run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
--no_cuda \
<b>--use_ipex \</b>
<b>--jit_mode_eval</b></pre>
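Outside of the `Trainer`, a minimal sketch of the same idea, applying IPEX optimizations and then tracing with TorchScript, might look like the following. The exact calls (`ipex.optimize`, positional tracing of `input_ids`/`attention_mask`) are assumptions based on the IPEX and PyTorch documentation, not a verified recipe:

```python
import torch
import intel_extension_for_pytorch as ipex  # pip install intel_extension_for_pytorch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model = AutoModelForQuestionAnswering.from_pretrained("csarron/bert-base-uncased-squad-v1")
model.eval()
model = ipex.optimize(model)  # apply IPEX operator-level optimizations

tokenizer = AutoTokenizer.from_pretrained("csarron/bert-base-uncased-squad-v1")
inputs = tokenizer("Where is the Eiffel Tower?", "The Eiffel Tower is in Paris.", return_tensors="pt")

with torch.no_grad():
    # Trace the model so the IPEX graph-level fusions can kick in
    traced = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]), strict=False)
    traced = torch.jit.freeze(traced)
```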
hf_public_repos/transformers/docs/source/it/accelerate.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Distributed training with 🤗 Accelerate

Parallelism has emerged as a strategy for training ever-larger models on limited hardware and for accelerating training speed by several orders of magnitude. At Hugging Face, we created the [🤗 Accelerate](https://huggingface.co/docs/accelerate) library to help you easily train a 🤗 Transformers model on any kind of distributed setup, whether it is multiple GPUs on one machine or multiple GPUs across several machines. In this tutorial, you will learn how to customize your native PyTorch training loop to enable training in a distributed environment.

## Setup

Get started by installing 🤗 Accelerate:

```bash
pip install accelerate
```

Then import and create an [`Accelerator`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator) object. The `Accelerator` will automatically detect your distributed setup and initialize all the necessary components for training. You don't need to explicitly place your model on a device.

```py
>>> from accelerate import Accelerator

>>> accelerator = Accelerator()
```

## Prepare to accelerate

The next step is to pass all the relevant training objects to the [`prepare`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.prepare) method. This includes your training and evaluation DataLoaders, a model and an optimizer:

```py
>>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
...     train_dataloader, eval_dataloader, model, optimizer
... )
```

## Backward

Finally, replace the typical `loss.backward()` in your training loop with 🤗 Accelerate's [`backward`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.backward) method:

```py
>>> for epoch in range(num_epochs):
...     for batch in train_dataloader:
...         outputs = model(**batch)
...         loss = outputs.loss
...         accelerator.backward(loss)

...         optimizer.step()
...         lr_scheduler.step()
...         optimizer.zero_grad()
...         progress_bar.update(1)
```

As you can see in the following code, you only need to add four additional lines of code to your training loop to enable distributed training!
```diff
+ from accelerate import Accelerator
  from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler

+ accelerator = Accelerator()

  model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
  optimizer = AdamW(model.parameters(), lr=3e-5)

- device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
- model.to(device)

+ train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
+     train_dataloader, eval_dataloader, model, optimizer
+ )

  num_epochs = 3
  num_training_steps = num_epochs * len(train_dataloader)
  lr_scheduler = get_scheduler(
      "linear",
      optimizer=optimizer,
      num_warmup_steps=0,
      num_training_steps=num_training_steps
  )

  progress_bar = tqdm(range(num_training_steps))

  model.train()
  for epoch in range(num_epochs):
      for batch in train_dataloader:
-         batch = {k: v.to(device) for k, v in batch.items()}
          outputs = model(**batch)
          loss = outputs.loss
-         loss.backward()
+         accelerator.backward(loss)

          optimizer.step()
          lr_scheduler.step()
          optimizer.zero_grad()
          progress_bar.update(1)
```

## Train

Once you have added the relevant lines of code, launch your training in a script or a notebook like Colaboratory.

### Train with a script

If you are running your training from a script, run the following command to create and save a configuration file:

```bash
accelerate config
```

Then launch your training with:

```bash
accelerate launch train.py
```

### Train with a notebook

🤗 Accelerate can also be used in a notebook if you are planning to use Colaboratory's TPUs. Wrap all the code responsible for training in a function, and pass it to `notebook_launcher`:

```py
>>> from accelerate import notebook_launcher

>>> notebook_launcher(training_function)
```

For more information about 🤗 Accelerate and its rich features, refer to the [documentation](https://huggingface.co/docs/accelerate).
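If you need to control the number of processes explicitly (for example the 8 TPU cores available on Colaboratory), `notebook_launcher` accepts a `num_processes` argument; a small sketch, with the core count shown as an example value:

```py
>>> from accelerate import notebook_launcher

>>> # `training_function` is the function wrapping your training loop, as above;
>>> # num_processes should match the number of available devices (e.g. 8 TPU cores)
>>> notebook_launcher(training_function, num_processes=8)
```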
hf_public_repos/transformers/docs/source/it/perf_train_special.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Training on Specialized Hardware

<Tip>

Note: Most of the strategies introduced in the [single GPU section](perf_train_gpu_one) (such as mixed precision training or gradient accumulation) and in the [multi-GPU section](perf_train_gpu_many) are generic and apply to training models in general, so make sure to have a look at them before diving into this section.

</Tip>

This document will soon be completed with information on how to train on specialized hardware.
hf_public_repos/transformers/docs/source/it/pr_checks.md
<!---
Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Checks on a Pull Request

When you open a pull request on 🤗 Transformers, a fair number of checks are run to make sure the patch you are adding does not break anything existing. These checks are of four types:
- regular tests
- documentation build
- code and documentation style
- general repository consistency

In this document, we will try to explain what those various checks are and the reasons behind them, as well as how to debug them locally if one of them fails on your PR.

Note that they all require a dev install:

```bash
pip install transformers[dev]
```

or an editable install:

```bash
pip install -e .[dev]
```

inside the Transformers repo.

## Tests

All the jobs that begin with `ci/circleci: run_tests_` run parts of the Transformers test suite. Each of those jobs focuses on a part of the library in a certain environment: for instance `ci/circleci: run_tests_pipelines_tf` runs the pipelines tests in an environment where only TensorFlow is installed.

Note that to avoid running tests when there is no real change in the modules they are testing, only part of the test suite is run each time: a utility is run to determine the differences in the library between before and after the PR (what GitHub shows you in the "Files changes" tab) and pick the tests impacted by that diff. This utility can be run locally with:

```bash
python utils/tests_fetcher.py
```

from the root of the Transformers repo. It will:

1. Check for each file in the diff whether the changes are in the code or only in comments or docstrings. Only the files with real code changes are kept.
2. Build an internal map that gives, for each file of the library source code, all the files it recursively impacts. Module A is said to impact module B if module B imports module A. For the recursive impact, we need a chain of modules going from module A to module B in which each module imports the previous one.
3. Apply this map to the files gathered in step 1, which gives the list of model files impacted by the PR.
4. Map each of those files to their corresponding test file(s) and get the list of tests to run.

When executing the script locally, you should get the results of steps 1, 3 and 4 printed and thus know which tests are run.
Lo script creerร  anche un file chiamato `test_list.txt` che contiene l'elenco dei test da eseguire e che puoi eseguire localmente con il seguente comando: ```bash python -m pytest -n 8 --dist=loadfile -rA -s $(cat test_list.txt) ``` Nel caso in cui qualcosa sia sfuggito, l'intera suite di test viene eseguita quotidianamente. ## Build della documentazione Il job `ci/circleci: build_doc` esegue una build della documentazione per assicurarsi che tutto sia a posto una volta che la PR รจ stata unita. Se questo passaggio fallisce, puoi controllare localmente entrando nella cartella `docs` del repo Transformers e digitare ```bash make html ``` Sphinx non รจ noto per i suoi messaggi di errore chiari, quindi potrebbe essere necessario che provi alcune cose per trovare davvero la fonte dell'errore. ## Stile del codice e della documentazione La formattazione del codice viene applicata a tutti i file sorgenti, agli esempi e ai test usando `black` e `isort`. Abbiamo anche uno strumento personalizzato che si occupa della formattazione delle docstring e dei file `rst` (`utils/style_doc.py`), cosรฌ come dell'ordine dei lazy imports eseguiti nei file `__init__.py` dei Transformers (`utils/custom_init_isort.py`). Tutto questo puรฒ essere lanciato eseguendo ```bash make style ``` I controlli della CI sono applicati all'interno del controllo `ci/circleci: check_code_quality`. Esegue anche `flake8`, che dร  un'occhiata di base al codice e si lamenta se trova una variabile non definita o non utilizzata. Per eseguire questo controllo localmente, usare ```bash make quality ``` Questa operazione puรฒ richiedere molto tempo, quindi per eseguire la stessa operazione solo sui file modificati nel branch corrente, eseguire ```bash make fixup ``` Quest'ultimo comando eseguirร  anche tutti i controlli aggiuntivi per la consistenza del repository. Diamogli un'occhiata. ## Coerenza del repository All'interno sono raggruppati tutti i test per assicurarsi che la tua PR lasci il repository in un buono stato ed รจ eseguito dal controllo `ci/circleci: check_repository_consistency`. 
You can run this check locally by executing the following:

```bash
make repo-consistency
```

This checks that:

- All objects added to the init are documented (performed by `utils/check_repo.py`)
- All `__init__.py` files have the same content in their two sections (performed by `utils/check_inits.py`)
- All code identified as a copy from another module is consistent with the original (performed by `utils/check_copies.py`)
- The translations of the READMEs and the index of the doc have the same model list as the main README (performed by `utils/check_copies.py`)
- The auto-generated tables in the documentation are up to date (performed by `utils/check_table.py`)
- The library has all objects available even if not all optional dependencies are installed (performed by `utils/check_dummies.py`)

Should this check fail, the first two items require manual fixing, while the last four can be fixed automatically for you by running the command

```bash
make fix-copies
```

Additional checks concern PRs that add new models, mainly that:

- All models added are in an Auto-mapping (performed by `utils/check_repo.py`)
<!-- TODO Sylvain, add a check that makes sure the common tests are implemented.-->
- All models are tested correctly (performed by `utils/check_repo.py`)

<!-- TODO Sylvain, add the following
- All models are added to the main README, inside the main doc
- All checkpoints used actually exist on the Hub
-->
hf_public_repos/transformers/docs/source/it/add_new_pipeline.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# How to create a custom pipeline?

In this guide, we will see how to create a custom pipeline and share it on the [Hub](hf.co/models) or add it to the Transformers library.

First and foremost, you need to decide the raw inputs the pipeline will be able to take. They can be strings, raw bytes, dictionaries, or whatever seems to be the most likely desired input. Try to keep these inputs as pure Python as possible, as that makes compatibility easier (even through other languages via JSON). These will be the `inputs` of the pipeline (`preprocess`).

Then define the `outputs`. Same policy as for the `inputs`. The simpler, the better. These will be the outputs of the `postprocess` method.

Start by inheriting the base class `Pipeline` with the 4 methods needed to implement: `preprocess`, `_forward`, `postprocess` and `_sanitize_parameters`.

```python
from torch import Tensor
from transformers import Pipeline


class MyPipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        preprocess_kwargs = {}
        if "maybe_arg" in kwargs:
            preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"]
        return preprocess_kwargs, {}, {}

    def preprocess(self, inputs, maybe_arg=2):
        model_input = Tensor(inputs["input_ids"])
        return {"model_input": model_input}

    def _forward(self, model_inputs):
        # model_inputs == {"model_input": model_input}
        outputs = self.model(**model_inputs)
        # Maybe {"logits": Tensor(...)}
        return outputs

    def postprocess(self, model_outputs):
        best_class = model_outputs["logits"].softmax(-1)
        return best_class
```

The structure of this breakdown is meant to support relatively seamless CPU/GPU usage, while supporting running pre-/post-processing on the CPU on different threads.

`preprocess` will take the originally defined inputs and turn them into something feedable to the model. It might contain more information and is usually a `Dict`.

`_forward` is the implementation detail and is not meant to be called directly. `forward` is the preferred method to call, as it makes sure everything works correctly because it contains safeguards. If anything is linked to an actual model, it belongs in the `_forward` method; anything else goes in preprocess/postprocess.

`postprocess` takes the output of `_forward` and turns it into the final output that was decided earlier.

`_sanitize_parameters` exists to allow users to pass parameters whenever they wish, both at initialization time `pipeline(...., maybe_arg=4)` and at call time `pipe = pipeline(...); output = pipe(...., maybe_arg=4)`.

`_sanitize_parameters` returns the 3 dicts of kwargs that will be passed directly to `preprocess`, `_forward` and `postprocess`. Don't fill anything if the caller didn't call with any extra parameter.
That allows keeping the default arguments in the function definition, which is always more "natural".

A classic example would be the `top_k` argument in the post-processing of classification tasks.

```python
>>> pipe = pipeline("my-new-task")
>>> pipe("This is a test")
[{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}, {"label": "3-star", "score": 0.05}
{"label": "4-star", "score": 0.025}, {"label": "5-star", "score": 0.025}]

>>> pipe("This is a test", top_k=2)
[{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}]
```

In order to achieve that, we'll update our `postprocess` method with a default parameter of `5` and edit `_sanitize_parameters` to allow this new parameter.

```python
def postprocess(self, model_outputs, top_k=5):
    best_class = model_outputs["logits"].softmax(-1)
    # Add logic to handle top_k
    return best_class


def _sanitize_parameters(self, **kwargs):
    preprocess_kwargs = {}
    if "maybe_arg" in kwargs:
        preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"]

    postprocess_kwargs = {}
    if "top_k" in kwargs:
        postprocess_kwargs["top_k"] = kwargs["top_k"]
    return preprocess_kwargs, {}, postprocess_kwargs
```

Try to keep the inputs/outputs very simple and ideally JSON-serializable, as that makes pipeline usage very easy without requiring users to understand new kinds of objects. It is also relatively common to support many different types of arguments for ease of use (for example audio files can be filenames, URLs or raw bytes).

## Adding it to the list of supported tasks

To register your `new-task` to the list of supported tasks, you have to add it to the `PIPELINE_REGISTRY`:

```python
from transformers import AutoModelForSequenceClassification
from transformers.pipelines import PIPELINE_REGISTRY

PIPELINE_REGISTRY.register_pipeline(
    "new-task",
    pipeline_class=MyPipeline,
    pt_model=AutoModelForSequenceClassification,
)
```

You can specify a default model if you want, in which case it should come with a specific revision (which can be a branch name or a commit hash; here we took `"abcdef"`) as well as the type:

```python
PIPELINE_REGISTRY.register_pipeline(
    "new-task",
    pipeline_class=MyPipeline,
    pt_model=AutoModelForSequenceClassification,
    default={"pt": ("user/awesome_model", "abcdef")},
    type="text",  # current support type: text, audio, image, multimodal
)
```

## Share your pipeline on the Hub

To share your custom pipeline on the Hub, you just have to save the code of your `Pipeline` subclass in a python file.
For instance, let's say we want to use a custom pipeline for sentence-pair classification like this:

```py
import numpy as np

from transformers import Pipeline


def softmax(outputs):
    maxes = np.max(outputs, axis=-1, keepdims=True)
    shifted_exp = np.exp(outputs - maxes)
    return shifted_exp / shifted_exp.sum(axis=-1, keepdims=True)


class PairClassificationPipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        preprocess_kwargs = {}
        if "second_text" in kwargs:
            preprocess_kwargs["second_text"] = kwargs["second_text"]
        return preprocess_kwargs, {}, {}

    def preprocess(self, text, second_text=None):
        return self.tokenizer(text, text_pair=second_text, return_tensors=self.framework)

    def _forward(self, model_inputs):
        return self.model(**model_inputs)

    def postprocess(self, model_outputs):
        logits = model_outputs.logits[0].numpy()
        probabilities = softmax(logits)

        best_class = np.argmax(probabilities)
        label = self.model.config.id2label[best_class]
        score = probabilities[best_class].item()
        logits = logits.tolist()
        return {"label": label, "score": score, "logits": logits}
```

The implementation is framework-agnostic and will work for both PyTorch and TensorFlow models. If we have saved this in a file named `pair_classification.py`, we can then import it and register it like this:

```py
from pair_classification import PairClassificationPipeline
from transformers.pipelines import PIPELINE_REGISTRY
from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification

PIPELINE_REGISTRY.register_pipeline(
    "pair-classification",
    pipeline_class=PairClassificationPipeline,
    pt_model=AutoModelForSequenceClassification,
    tf_model=TFAutoModelForSequenceClassification,
)
```

Once this is done, we can use it with a pretrained model. For instance, `sgugger/finetuned-bert-mrpc` has been fine-tuned on the MRPC dataset, which classifies pairs of sentences as paraphrases or not.

```py
from transformers import pipeline

classifier = pipeline("pair-classification", model="sgugger/finetuned-bert-mrpc")
```

Then we can share it on the Hub by using the `save_pretrained` method in a `Repository`:

```py
from huggingface_hub import Repository

repo = Repository("test-dynamic-pipeline", clone_from="{your_username}/test-dynamic-pipeline")
classifier.save_pretrained("test-dynamic-pipeline")
repo.push_to_hub()
```

This will copy the file where you defined `PairClassificationPipeline` inside the folder `"test-dynamic-pipeline"`, along with saving the model and tokenizer of the pipeline, before pushing everything into the repository `{your_username}/test-dynamic-pipeline`. After that, anyone can use it as long as they provide the option `trust_remote_code=True`:

```py
from transformers import pipeline

classifier = pipeline(model="{your_username}/test-dynamic-pipeline", trust_remote_code=True)
```

## Add the pipeline to Transformers

If you want to contribute your pipeline to Transformers, you will need to add a new module in the `pipelines` submodule with the code of your pipeline, then add it to the list of tasks defined in `pipelines/__init__.py`.

Then you will need to add tests. Create a new file `tests/test_pipelines_MY_PIPELINE.py` with examples and other tests.

The `run_pipeline_test` function will be very generic and run on small random models on every possible architecture, as defined by `model_mapping` and `tf_model_mapping`.
Questo รจ molto importante per testare la compatibilitร  futura, nel senso che se qualcuno aggiunge un nuovo modello di `XXXForQuestionAnswering` allora il test della pipeline tenterร  di essere eseguito su di esso. Poichรฉ i modelli sono casuali, รจ รจ impossibile controllare i valori effettivi, per questo esiste un aiuto `ANY` che tenterร  solamente di far corrispondere l'output della pipeline TYPE. Hai anche *bisogno* di implementare 2 (idealmente 4) test. - `test_small_model_pt` : Definire 1 piccolo modello per questa pipeline (non importa se i risultati non hanno senso) e testare i risultati della pipeline. I risultati dovrebbero essere gli stessi di `test_small_model_tf`. - `test_small_model_tf` : Definire 1 piccolo modello per questa pipeline (non importa se i risultati non hanno senso) e testare i risultati della pipeline. I risultati dovrebbero essere gli stessi di `test_small_model_pt`. - `test_large_model_pt` (`optional`): Testare la pipeline su una pipeline reale in cui i risultati dovrebbero avere senso. Questi test sono lenti e dovrebbero essere contrassegnati come tali. In questo caso l'obiettivo รจ mostrare la pipeline e assicurarsi che non ci siano derive nelle versioni future - `test_large_model_tf` (`optional`): Testare la pipeline su una pipeline reale in cui i risultati dovrebbero avere senso. Questi test sono lenti e dovrebbero essere contrassegnati come tali. In questo caso l'obiettivo รจ mostrare la pipeline e assicurarsi che non ci siano derive nelle versioni future
hf_public_repos/transformers/docs/source/it/perf_infer_gpu_many.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Efficient Inference on Multiple GPUs

This document contains information on how to perform inference efficiently on multiple GPUs.

<Tip>

Note: A multi-GPU setup can use most of the strategies described in the [single GPU section](./perf_infer_gpu_one). However, you should be aware of a few simple techniques that can be used for better results.

</Tip>

## `BetterTransformer` for faster inference

We have recently integrated `BetterTransformer` for faster multi-GPU inference with text, image and audio models. Check the documentation about this integration [here](https://huggingface.co/docs/optimum/bettertransformer/overview) for more details.
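As a quick sketch of what that integration looks like in practice (per the Optimum documentation linked above; the checkpoint name is just an example), converting a model is a one-liner:

```python
# Requires the optimum library: pip install optimum
from optimum.bettertransformer import BetterTransformer
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
# Swap the attention/encoder layers for the fast PyTorch-native implementations
model = BetterTransformer.transform(model)
```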
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/community.md
<!--โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Comunitร  Questa pagina raggruppa le risorse sviluppate dalla comunitร  riguardo ๐Ÿค— Transformers. ## Risorse della comunitร : | Risorsa | Descrizione | Autore | |:----------|:-------------|------:| | [Glossario delle Flashcards di Transformers](https://www.darigovresearch.com/huggingface-transformers-glossary-flashcards) | Un insieme di flashcards basate sul [glossario della documentazione di Transformers](glossary), creato in un formato tale da permettere un facile apprendimento e revisione usando [Anki](https://apps.ankiweb.net/), un'applicazione open-source e multi-piattaforma, specificatamente progettata per ricordare informazioni nel lungo termine. Guarda questo [video introduttivo su come usare le flashcards](https://www.youtube.com/watch?v=Dji_h7PILrw). | [Darigov Research](https://www.darigovresearch.com/) | ## Notebook della comunitร : | Notebook | Descrizione | Autore | | |:----------|:-------------|:-------------|------:| | [Fine-tuning di un Transformer pre-addestrato, al fine di generare testi di canzoni](https://github.com/AlekseyKorshuk/huggingartists) | Come generare testi di canzoni nello stile del vostro artista preferito attraverso il fine-tuning di un modello GPT-2. | [Aleksey Korshuk](https://github.com/AlekseyKorshuk) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb) | | [Addestramento di T5 in Tensorflow 2 ](https://github.com/snapthat/TF-T5-text-to-text) | Come addestrare T5 per qualsiasi attivitร  usando Tensorflow 2. Questo notebook mostra come risolvere l'attivitร  di "Question Answering" usando Tensorflow 2 e SQUAD. | [Muhammad Harris](https://github.com/HarrisDePerceptron) |[![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-Datasets%20Training.ipynb) | | [Addestramento di T5 con TPU](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) | Come addestrare T5 su SQUAD con Transformers e NLP. | [Suraj Patil](https://github.com/patil-suraj) |[![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=QLGiFCDqvuil) | | [Fine-tuning di T5 per la classificazione e scelta multipla](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | Come effettuare il fine-tuning di T5 per le attivitร  di classificazione a scelta multipla - usando un formato testo-a-testo - con PyTorch Lightning. | [Suraj Patil](https://github.com/patil-suraj) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | | [Fine-tuning di DialoGPT su nuovi dataset e lingue](https://github.com/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | Come effettuare il fine-tuning di un modello DialoGPT su un nuovo dataset per chatbots conversazionali open-dialog. 
| [Nathan Cooper](https://github.com/ncoop57) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | | [Modellamento di una lunga sequenza con Reformer](https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | Come addestrare su sequenze di lunghezza fino a 500 mila token con Reformer. | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | | [Fine-tuning di BART per riassumere testi](https://github.com/ohmeow/ohmeow_website/blob/master/_notebooks/2020-05-23-text-generation-with-blurr.ipynb) | Come effettuare il fine-tuning di BART per riassumere testi con fastai usando blurr. | [Wayde Gilliam](https://ohmeow.com/) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/_notebooks/2020-05-23-text-generation-with-blurr.ipynb) | | [Fine-tuning di un Transformer pre-addestrato su tweet](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | Come generare tweet nello stile del tuo account Twitter preferito attraverso il fine-tuning di un modello GPT-2. | [Boris Dayma](https://github.com/borisdayma) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | | [Ottimizzazione di modelli ๐Ÿค— Hugging Face con Weights & Biases](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | Un tutorial completo che mostra l'integrazione di W&B con Hugging Face. | [Boris Dayma](https://github.com/borisdayma) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | | [Longformer pre-addestrato](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | Come costruire una versione "long" degli esistenti modelli pre-addestrati. | [Iz Beltagy](https://beltagy.net) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | | [Fine-tuning di Longformer per QA](https://github.com/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | Come effettuare il fine-tuning di un modello longformer per un task di QA.| [Suraj Patil](https://github.com/patil-suraj) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | | [Valutazione di modelli con ๐Ÿค—NLP](https://github.com/patrickvonplaten/notebooks/blob/master/How_to_evaluate_Longformer_on_TriviaQA_using_NLP.ipynb) | Come valutare longformer su TriviaQA con `NLP`. 
| [Patrick von Platen](https://github.com/patrickvonplaten) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1m7eTGlPmLRgoPkkA7rkhQdZ9ydpmsdLE?usp=sharing) | | [Fine-tuning di T5 per Sentiment Span Extraction](https://github.com/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | Come effettuare il fine-tuning di T5 per la sentiment span extraction - usando un formato testo-a-testo - con PyTorch Lightning. | [Lorenzo Ampil](https://github.com/enzoampil) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | | [Fine-tuning di DistilBert per la classificazione multi-classe](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb) | Come effettuare il fine-tuning di DistilBert per la classificazione multi-classe con PyTorch. | [Abhishek Kumar Mishra](https://github.com/abhimishra91) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb)| |[Fine-tuning di BERT per la classificazione multi-etichetta](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)|Come effettuare il fine-tuning di BERT per la classificazione multi-etichetta con PyTorch. |[Abhishek Kumar Mishra](https://github.com/abhimishra91) |[![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)| |[Accelerazione del fine-tuning con il Dynamic Padding / Bucketing](https://github.com/ELS-RD/transformers-notebook/blob/master/Divide_Hugging_Face_Transformers_training_time_by_2_or_more.ipynb)| Come velocizzare il fine-tuning di un fattore 2X usando il dynamic padding / bucketing. |[Michael Benesty](https://github.com/pommedeterresautee) |[![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1CBfRU1zbfu7-ijiOqAAQUA-RJaxfcJoO?usp=sharing)| |[Pre-addestramento di Reformer per Masked Language Modeling](https://github.com/patrickvonplaten/notebooks/blob/master/Reformer_For_Masked_LM.ipynb)| Come addestrare un modello Reformer usando livelli di self-attention bi-direzionali.| [Patrick von Platen](https://github.com/patrickvonplaten) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1tzzh0i8PgDQGV3SMFUGxM7_gGae3K-uW?usp=sharing)| |[Espansione e fine-tuning di Sci-BERT](https://github.com/lordtt13/word-embeddings/blob/master/COVID-19%20Research%20Data/COVID-SciBERT.ipynb)| Come incrementare il vocabolario di un modello SciBERT - pre-addestrato da AllenAI sul dataset CORD - e crearne una pipeline. 
| [Tanmay Thakur](https://github.com/lordtt13) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1rqAR40goxbAfez1xvF3hBJphSCsvXmh8)| |[Fine-tuning di BlenderBotSmall per riassumere testi usando Trainer API](https://github.com/lordtt13/transformers-experiments/blob/master/Custom%20Tasks/fine-tune-blenderbot_small-for-summarization.ipynb)| Come effettuare il fine-tuning di BlenderBotSmall per riassumere testi su un dataset personalizzato, usando Trainer API. | [Tanmay Thakur](https://github.com/lordtt13) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/19Wmupuls7mykSGyRN_Qo6lPQhgp56ymq?usp=sharing)| |[Fine-tuning di Electra e interpretazione con Integrated Gradients](https://github.com/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb) | Come effettuare il fine-tuning di Electra per l'analisi dei sentimenti e intepretare le predizioni con Captum Integrated Gradients. | [Eliza Szczechla](https://elsanns.github.io) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb)| |[Fine-tuning di un modello GPT-2 non inglese con la classe Trainer](https://github.com/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb) | Come effettuare il fine-tuning di un modello GPT-2 non inglese con la classe Trainer. | [Philipp Schmid](https://www.philschmid.de) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb)| |[Fine-tuning di un modello DistilBERT per la classficazione multi-etichetta](https://github.com/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb) | Come effettuare il fine-tuning di un modello DistilBERT per l'attivitร  di classificazione multi-etichetta. | [Dhaval Taunk](https://github.com/DhavalTaunk08) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb)| |[Fine-tuning di ALBERT per la classifcazione di coppie di frasi](https://github.com/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb) | Come effettuare il fine-tuning di un modello ALBERT - o un altro modello BERT-based - per l'attivitร  di classificazione di coppie di frasi. | [Nadir El Manouzi](https://github.com/NadirEM) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb)| |[Fine-tuning di Roberta per l'analisi di sentimenti](https://github.com/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb) | Come effettuare il fine-tuning di un modello Roberta per l'analisi di sentimenti. 
| [Dhaval Taunk](https://github.com/DhavalTaunk08) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb)| |[Valutazione di modelli che generano domande](https://github.com/flexudy-pipe/qugeev) | Quanto sono accurante le risposte alle domande generate dal tuo modello transformer seq2seq? | [Pascal Zoleko](https://github.com/zolekode) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1bpsSqCQU-iw_5nNoRm_crPq6FRuJthq_?usp=sharing)| |[Classificazione di testo con DistilBERT e Tensorflow](https://github.com/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb) | Come effettuare il fine-tuning di DistilBERT per la classificazione di testo in TensorFlow. | [Peter Bayerle](https://github.com/peterbayerle) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb)| |[Utilizzo di BERT per riassumere testi con un modello Encoder-Decoder su CNN/Dailymail](https://github.com/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb) | Come avviare "a caldo" un *EncoderDecoderModel* attraverso l'utilizzo di un checkpoint *bert-base-uncased* per riassumere testi su CNN/Dailymail. | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb)| |[Utilizzo di RoBERTa per riassumere testi con un modello Encoder-Decoder su BBC XSum](https://github.com/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb) | Come avviare "a caldo" un *EncoderDecoderModel* (condiviso) attraverso l'utilizzo di un checkpoint *roberta-base* per riassumere testi su BBC/XSum. | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb)| |[Fine-tuning di TAPAS su Sequential Question Answering (SQA)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) | Come effettuare il fine-tuning di un modello *TapasForQuestionAnswering* attraverso l'utilizzo di un checkpoint *tapas-base* sul dataset Sequential Question Answering (SQA). | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb)| |[Valutazione di TAPAS su Table Fact Checking (TabFact)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb) | Come valutare un modello *TapasForSequenceClassification* - fine-tuned con un checkpoint *tapas-base-finetuned-tabfact* - usando una combinazione delle librerie ๐Ÿค— datasets e ๐Ÿค— transformers. 
| [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb)| |[Fine-tuning di mBART per la traduzione](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb) | Come effettuare il fine-tuning di mBART usando Seq2SeqTrainer per la traduzione da hindi a inglese.| [Vasudev Gupta](https://github.com/vasudevgupta7) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb)| |[Fine-tuning di LayoutLM su FUNSD (un dataset per la comprensione della forma)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb) | Come effettuare il fine-tuning di un modello *LayoutLMForTokenClassification* sul dataset FUNSD per l'estrazione di informazioni da documenti scannerizzati.| [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb)| |[Fine-tuning di DistilGPT2 e generazione di testo](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb) | Come effettuare il fine-tuning di DistilGPT2 e generare testo. | [Aakash Tripathi](https://github.com/tripathiaakash) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb)| |[Fine-tuning di LED fino a 8 mila token](https://github.com/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb) | Come effettuare il fine-tuning di LED su PubMed per riassumere "lunghi" testi. | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb)| |[Valutazione di LED su Arxiv](https://github.com/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb) | Come valutare efficacemente LED sull'attivitร  di riassumere "lunghi" testi. | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb)| |[Fine-tuning di LayoutLM su RVL-CDIP, un dataset per la classificazione di documenti (immagini)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb) | Come effettuare il fine-tuning di un modello *LayoutLMForSequenceClassification* sul dataset RVL-CDIP per la classificazione di documenti scannerizzati. 
| [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb)| |[Decodifica Wav2Vec2 CTC con variazioni di GPT2](https://github.com/voidful/huggingface_notebook/blob/main/xlsr_gpt.ipynb) | Come decodificare sequenze CTC, variate da modelli di linguaggio. | [Eric Lam](https://github.com/voidful) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1e_z5jQHYbO2YKEaUgzb1ww1WwiAyydAj?usp=sharing) |[Fine-tuning di BART per riassumere testi in due lingue con la classe Trainer](https://github.com/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb) | Come effettuare il fine-tuning di BART per riassumere testi in due lingue usando la classe Trainer. | [Eliza Szczechla](https://github.com/elsanns) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb)| |[Valutazione di Big Bird su Trivia QA](https://github.com/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb) | Come valutare BigBird su question answering di "lunghi" documenti attraverso Trivia QA. | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb)| | [Creazione di sottotitoli per video usando Wav2Vec2](https://github.com/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | Come creare sottotitoli per qualsiasi video di YouTube trascrivendo l'audio con Wav2Vec. | [Niklas Muennighoff](https://github.com/Muennighoff) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | | [Fine-tuning di Vision Transformer su CIFAR-10 usando PyTorch Lightning](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | Come effettuare il fine-tuning di Vision Transformer (ViT) su CIFAR-10 usando HuggingFace Transformers, Datasets e PyTorch Lightning.| [Niels Rogge](https://github.com/nielsrogge) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | | [Fine-tuning di Vision Transformer su CIFAR-10 usando ๐Ÿค— Trainer](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | Come effettuare il fine-tuning di Vision Transformer (ViT) su CIFAR-10 usando HuggingFace Transformers, Datasets e ๐Ÿค— Trainer. 
| [Niels Rogge](https://github.com/nielsrogge) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | | [Valutazione di LUKE su Open Entity, un dataset di entity typing](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | Come valutare un modello *LukeForEntityClassification* sul dataset Open Entity. | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | | [Valutazione di LUKE su TACRED, un dataset per l'estrazione di relazioni](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | Come valutare un modello *LukeForEntityPairClassification* sul dataset TACRED. | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | | [Valutazione di LUKE su CoNLL-2003, un importante benchmark NER](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | Come valutare un modello *LukeForEntitySpanClassification* sul dataset CoNLL-2003. | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | | [Valutazione di BigBird-Pegasus su dataset PubMed](https://github.com/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | Come valutare un modello *BigBirdPegasusForConditionalGeneration* su dataset PubMed. | [Vasudev Gupta](https://github.com/vasudevgupta7) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | | [Classificazione di emozioni dal discorso con Wav2Vec2](https://github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | Come utilizzare un modello pre-addestrato Wav2Vec2 per la classificazione di emozioni sul dataset MEGA. | [Mehrdad Farahani](https://github.com/m3hrdadfi) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | | [Rilevamento oggetti in un'immagine con DETR](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | Come usare un modello addestrato *DetrForObjectDetection* per rilevare oggetti in un'immagine e visualizzare l'attention. 
| [Niels Rogge](https://github.com/NielsRogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | | [Fine-tuning di DETR su un dataset personalizzato per rilevare oggetti](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | Come effettuare fine-tuning di un modello *DetrForObjectDetection* su un dataset personalizzato per rilevare oggetti. | [Niels Rogge](https://github.com/NielsRogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | | [Fine-tuning di T5 per Named Entity Recognition](https://github.com/ToluClassics/Notebooks/blob/main/T5_Ner_Finetuning.ipynb) | Come effettuare fine-tunining di *T5* per un'attivitร  di Named Entity Recognition. | [Ogundepo Odunayo](https://github.com/ToluClassics) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1obr78FY_cBmWY5ODViCmzdY6O1KB65Vc?usp=sharing) |
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/perf_infer_gpu_one.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Efficient inference on a single GPU

This document will soon be completed with information on how to run inference on a single GPU. In the meantime you can check out [the guide for training on a single GPU](perf_train_gpu_one) and [the guide for inference on CPUs](perf_infer_cpu).

## `BetterTransformer` for faster inference

We have recently integrated `BetterTransformer` for faster inference on GPU for text, image and audio models. Check the documentation about this integration [here](https://huggingface.co/docs/optimum/bettertransformer/overview) for more details.

## `bitsandbytes` integration for Int8 mixed-precision matrix decomposition

<Tip>

Note that this feature can also be used in multi GPU setups.

</Tip>

From the paper [`LLM.int8() : 8-bit Matrix Multiplication for Transformers at Scale`](https://arxiv.org/abs/2208.07339), we support Hugging Face integration for all models on the Hub with a few lines of code. The method reduces `nn.Linear` size by 2 for `float16` and `bfloat16` weights and by 4 for `float32` weights, with close to no impact on quality, by operating on the outliers in half-precision.

![HFxbitsandbytes.png](https://cdn-uploads.huggingface.co/production/uploads/1659861207959-62441d1d9fdefb55a0b7d12c.png)

The Int8 mixed-precision matrix decomposition method works by separating a matrix multiplication into two streams: (1) a systematic feature outlier stream matrix-multiplied in fp16, (2) a regular stream of int8 matrix multiplication (99.9%). With this method, int8 inference with no predictive degradation is possible for very large models. For more details about the method, check out the [paper](https://arxiv.org/abs/2208.07339) or our [blogpost about the integration](https://huggingface.co/blog/hf-bitsandbytes-integration).

![MixedInt8.gif](https://cdn-uploads.huggingface.co/production/uploads/1660567469965-62441d1d9fdefb55a0b7d12c.gif)

Note that a GPU is required to run mixed-8bit models, as the kernels have been compiled for GPUs only. Before using this feature, make sure you have enough GPU memory to store a quarter of the model (or half, if the model weights are in half precision).

Below are some notes to help you use this module, or follow the demos on [Google colab](#colab-demos).

### Requirements

- If you have `bitsandbytes<0.37.0`, make sure you run on NVIDIA GPUs that support 8-bit tensor cores (Turing, Ampere or newer architectures - e.g. T4, RTX20s RTX30s, A40-A100). For `bitsandbytes>=0.37.0`, all GPUs should be supported.
- Install the correct version of `bitsandbytes` by running: `pip install bitsandbytes>=0.31.5`
- Install `accelerate`: `pip install accelerate>=0.12.0`

### Running mixed-Int8 models - single GPU setup

After installing the required libraries, the way to load your mixed 8-bit model is as follows:

```py
from transformers import AutoModelForCausalLM

model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
```

For text generation, we recommend:

* using the model's `generate()` method instead of the `pipeline()` function. Although inference is possible with the `pipeline()` function, it is not optimized for mixed-8bit models, and will be slower than using the `generate()` method. Moreover, some sampling strategies, like nucleus sampling, are not supported by the `pipeline()` function for mixed-8bit models.
* placing all inputs on the same device as the model.

Here is a simple example:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-2b5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)

prompt = "Hello, my llama is cute"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
generated_ids = model_8bit.generate(**inputs)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```

### Running mixed-8bit models - multi GPU setup

The way to load your mixed 8-bit model on multiple GPUs is as follows (same command as the single GPU setup):

```py
model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
```

You can control the GPU RAM you want to allocate on each GPU using `accelerate`. Use the `max_memory` argument as follows:

```py
max_memory_mapping = {0: "1GB", 1: "2GB"}
model_name = "bigscience/bloom-3b"
model_8bit = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", load_in_8bit=True, max_memory=max_memory_mapping
)
```

In this example, the first GPU will use 1GB of memory and the second one 2GB.

### Colab demos

With this method you can infer models that could not be inferred on Google Colab before. Check out the demo for running T5-11b (42GB in fp32)! Using 8-bit quantization on Google Colab:

[![Open In Colab: T5-11b demo](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1YORPWx4okIHXnjW7MSAidXN29mPVNT7F?usp=sharing)

Or this demo for BLOOM-3B:

[![Open In Colab: BLOOM-3b demo](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4?usp=sharing)
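Finally, to get a feel for the savings on your own hardware, you can compare the memory footprint of the quantized model with a half-precision load of the same checkpoint. A quick sketch using `get_memory_footprint`, which is available on `PreTrainedModel` in recent versions of ๐Ÿค— Transformers (loading both copies at once assumes you have enough memory for the comparison):

```py
from transformers import AutoModelForCausalLM

model_name = "bigscience/bloom-2b5"

model_fp16 = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto")
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)

# both values are in bytes; the 8-bit model should be roughly half
# the size of the fp16 one (and a quarter of an fp32 load)
print(model_fp16.get_memory_footprint())
print(model_8bit.get_memory_footprint())
```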
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/training.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Fine-tune a pretrained model

[[open-in-colab]]

There are significant benefits to using a pretrained model. It reduces computation costs, your carbon footprint, and allows you to use state-of-the-art models without having to train one from scratch. ๐Ÿค— Transformers provides access to thousands of pretrained models for a wide range of tasks. When you use a pretrained model, you train it on a dataset specific to your task. This is known as fine-tuning, an incredibly powerful training technique. In this tutorial, you will fine-tune a pretrained model with a deep learning framework of your choice:

* Fine-tune a pretrained model with ๐Ÿค— Transformers [`Trainer`].
* Fine-tune a pretrained model in TensorFlow with Keras.
* Fine-tune a pretrained model in native PyTorch.

<a id='data-processing'></a>

## Prepare a dataset

<Youtube id="_BZearw7f0w"/>

Before you can fine-tune a pretrained model, download a dataset and prepare it for training. The previous tutorial showed you how to process data for training, and now you get an opportunity to put those skills to the test!

Begin by loading the [Yelp Reviews](https://huggingface.co/datasets/yelp_review_full) dataset:

```py
>>> from datasets import load_dataset

>>> dataset = load_dataset("yelp_review_full")
>>> dataset["train"][100]
{'label': 0,
 'text': 'My expectations for McDonalds are t rarely high. But for one to still fail so spectacularly...that takes something special!\\nThe cashier took my friends\'s order, then promptly ignored me. I had to force myself in front of a cashier who opened his register to wait on the person BEHIND me. I waited over five minutes for a gigantic order that included precisely one kid\'s meal. After watching two people who ordered after me be handed their food, I asked where mine was. The manager started yelling at the cashiers for \\"serving off their orders\\" when they didn\'t have their food. But neither cashier was anywhere near those controls, and the manager was the one serving food to customers and clearing the boards.\\nThe manager was rude when giving me my order. She didn\'t make sure that I had everything ON MY RECEIPT, and never even had the decency to apologize that I felt I was getting poor service.\\nI\'ve eaten at various McDonalds restaurants for over 30 years. I\'ve worked at more than one location. I expect bad days, bad moods, and the occasional mistake. But I have yet to have a decent experience at this store. It will remain a place I avoid unless someone in my party needs to avoid illness from low blood sugar. Perhaps I should go back to the racially biased service of Steak n Shake instead!'}
```

As you now know, you need a tokenizer to process the text, including a padding and truncation strategy to handle variable sequence lengths. To process the dataset in one step, use the ๐Ÿค— Datasets [`map`](https://huggingface.co/docs/datasets/process#map) method to apply a preprocessing function over the entire dataset:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")


>>> def tokenize_function(examples):
...     return tokenizer(examples["text"], padding="max_length", truncation=True)


>>> tokenized_datasets = dataset.map(tokenize_function, batched=True)
```

If you like, you can create a smaller subset of the full dataset to fine-tune on, which reduces the time it takes:

```py
>>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
>>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
```

<a id='trainer'></a>

## Train

<frameworkcontent>
<pt>
<Youtube id="nvBXf7s7vTI"/>

๐Ÿค— Transformers provides a [`Trainer`] class optimized for training ๐Ÿค— Transformers models, making it easier to start training without manually writing your own training loop. The [`Trainer`] API supports a wide range of training options and features such as logging, gradient accumulation, and mixed precision.

Start by loading your model and specifying the number of expected labels. From the Yelp Review [dataset card](https://huggingface.co/datasets/yelp_review_full#data-fields), you know there are five labels:

```py
>>> from transformers import AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)
```

<Tip>

You may see a warning that some of the pretrained weights are not being used and some weights are being randomly initialized. Don't worry, this is completely normal! The pretrained head of the BERT model is discarded and replaced with a randomly initialized classification head. You will fine-tune this new model head on your sequence classification task, transferring the knowledge of the pretrained model to it.

</Tip>

### Training hyperparameters

Next, create a [`TrainingArguments`] class which contains all the hyperparameters you can tune, as well as flags for activating different training options. For this tutorial you can start with the default training [hyperparameters](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments), but feel free to experiment with these to find your optimal settings.

Specify where to save the checkpoints from your training:

```py
>>> from transformers import TrainingArguments

>>> training_args = TrainingArguments(output_dir="test_trainer")
```

### Metrics

[`Trainer`] does not automatically evaluate model performance during training. You will need to pass [`Trainer`] a function to compute and report metrics. The ๐Ÿค— Datasets library provides a simple [`accuracy`](https://huggingface.co/metrics/accuracy) function you can load with the `load_metric` function (see this [tutorial](https://huggingface.co/docs/datasets/metrics) for more information):

```py
>>> import numpy as np
>>> from datasets import load_metric

>>> metric = load_metric("accuracy")
```

Call `compute` on `metric` to calculate the accuracy of your predictions. Before passing your predictions to `compute`, you need to convert the logits to predictions (remember that all ๐Ÿค— Transformers models return logits):

```py
>>> def compute_metrics(eval_pred):
...     logits, labels = eval_pred
...     predictions = np.argmax(logits, axis=-1)
...     return metric.compute(predictions=predictions, references=labels)
```

If you would like to monitor your evaluation metrics during fine-tuning, specify the `evaluation_strategy` parameter in your training arguments to report the evaluation metric at the end of each training epoch:

```py
>>> from transformers import TrainingArguments, Trainer

>>> training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch")
```

### Trainer

Create a [`Trainer`] object with your model, training arguments, training and test datasets, and evaluation function:

```py
>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=small_train_dataset,
...     eval_dataset=small_eval_dataset,
...     compute_metrics=compute_metrics,
... )
```

Then fine-tune your model by calling [`~transformers.Trainer.train`]:

```py
>>> trainer.train()
```
</pt>
<tf>
<a id='keras'></a>

<Youtube id="rnTGBy2ax1c"/>

๐Ÿค— Transformers models also support training in TensorFlow with the Keras API.

### Convert datasets to the TensorFlow format

The [`DefaultDataCollator`] assembles tensors into batches for the model to train on. Make sure you specify `return_tensors` to return TensorFlow tensors:

```py
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator(return_tensors="tf")
```

<Tip>

[`Trainer`] uses [`DataCollatorWithPadding`] by default so you don't need to explicitly specify a data collator.

</Tip>

Next, convert the tokenized datasets to TensorFlow datasets with the [`to_tf_dataset`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.to_tf_dataset) method. Specify your inputs in `columns` and your labels in `label_cols`:

```py
>>> tf_train_dataset = small_train_dataset.to_tf_dataset(
...     columns=["attention_mask", "input_ids", "token_type_ids"],
...     label_cols=["labels"],
...     shuffle=True,
...     collate_fn=data_collator,
...     batch_size=8,
... )

>>> tf_validation_dataset = small_eval_dataset.to_tf_dataset(
...     columns=["attention_mask", "input_ids", "token_type_ids"],
...     label_cols=["labels"],
...     shuffle=False,
...     collate_fn=data_collator,
...     batch_size=8,
... )
```

### Compile and fit

Load a TensorFlow model with the expected number of labels:

```py
>>> import tensorflow as tf
>>> from transformers import TFAutoModelForSequenceClassification

>>> model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)
```

Then compile and fine-tune your model with [`fit`](https://keras.io/api/models/model_training_apis/) as you would with any other Keras model:

```py
>>> model.compile(
...     optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
...     loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
...     metrics=tf.metrics.SparseCategoricalAccuracy(),
... )

>>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3)
```
</tf>
</frameworkcontent>

<a id='pytorch_native'></a>

## Train in native PyTorch

<frameworkcontent>
<pt>
<Youtube id="Dh9CL8fyG80"/>

[`Trainer`] takes care of the training loop and allows you to fine-tune a model in a single line of code. For users who prefer to write their own training loop, you can also fine-tune a ๐Ÿค— Transformers model in native PyTorch.

At this point, you may need to restart your notebook or execute the following code to free some memory:

```py
del model
del pytorch_model
del trainer
torch.cuda.empty_cache()
```

Next, manually postprocess `tokenized_dataset` to prepare it for training.

1. Remove the `text` column because the model does not accept raw text as an input:

    ```py
    >>> tokenized_datasets = tokenized_datasets.remove_columns(["text"])
    ```

2. Rename the `label` column to `labels` because the model expects the argument to be named `labels`:

    ```py
    >>> tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
    ```

3. Set the format of the dataset to return PyTorch tensors instead of lists:

    ```py
    >>> tokenized_datasets.set_format("torch")
    ```

Then create a smaller subset of the dataset, as shown previously, to speed up the fine-tuning:

```py
>>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
>>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
```

### DataLoader

Create a `DataLoader` for your training and test datasets so you can iterate over batches of data:

```py
>>> from torch.utils.data import DataLoader

>>> train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8)
>>> eval_dataloader = DataLoader(small_eval_dataset, batch_size=8)
```

Load your model with the number of expected labels:

```py
>>> from transformers import AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)
```

### Optimizer and learning rate scheduler

Create an optimizer and learning rate scheduler to fine-tune the model. Use the [`AdamW`](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html) optimizer from PyTorch:

```py
>>> from torch.optim import AdamW

>>> optimizer = AdamW(model.parameters(), lr=5e-5)
```

Create the default learning rate scheduler from [`Trainer`]:

```py
>>> from transformers import get_scheduler

>>> num_epochs = 3
>>> num_training_steps = num_epochs * len(train_dataloader)
>>> lr_scheduler = get_scheduler(
...     name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
... )
```

Lastly, specify `device` to use a GPU if you have access to one. Otherwise, training on a CPU may take several hours instead of a couple of minutes.

```py
>>> import torch

>>> device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
>>> model.to(device)
```

<Tip>

Get free access to a cloud GPU if you don't have one with a hosted notebook like [Colaboratory](https://colab.research.google.com/) or [SageMaker StudioLab](https://studiolab.sagemaker.aws/).

</Tip>

Great, now you are ready to train! ๐Ÿฅณ

### Training loop

To keep track of your training progress, use the [tqdm](https://tqdm.github.io/) library to add a progress bar over the number of training steps:

```py
>>> from tqdm.auto import tqdm

>>> progress_bar = tqdm(range(num_training_steps))

>>> model.train()
>>> for epoch in range(num_epochs):
...     for batch in train_dataloader:
...         batch = {k: v.to(device) for k, v in batch.items()}
...         outputs = model(**batch)
...         loss = outputs.loss
...         loss.backward()

...         optimizer.step()
...         lr_scheduler.step()
...         optimizer.zero_grad()
...         progress_bar.update(1)
```

### Metrics

Just like you added an evaluation function to [`Trainer`], you need to do the same when you write your own training loop. But instead of calculating and reporting the metric at the end of each epoch, this time you will accumulate all the batches with [`add_batch`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=add_batch#datasets.Metric.add_batch) and calculate the metric at the very end.

```py
>>> metric = load_metric("accuracy")
>>> model.eval()
>>> for batch in eval_dataloader:
...     batch = {k: v.to(device) for k, v in batch.items()}
...     with torch.no_grad():
...         outputs = model(**batch)

...     logits = outputs.logits
...     predictions = torch.argmax(logits, dim=-1)
...     metric.add_batch(predictions=predictions, references=batch["labels"])

>>> metric.compute()
```
</pt>
</frameworkcontent>

<a id='additional-resources'></a>

## Additional resources

For more fine-tuning examples, refer to:

- [๐Ÿค— Transformers Examples](https://github.com/huggingface/transformers/tree/main/examples) includes scripts to train common NLP tasks in PyTorch and TensorFlow.
- [๐Ÿค— Transformers Notebooks](notebooks) contains various notebooks on how to fine-tune a model for specific tasks in PyTorch and TensorFlow.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/index.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— Transformers Machine Learning allo stato dell'arte per PyTorch, TensorFlow e JAX. ๐Ÿค— Transformers fornisce delle API per scaricare in modo semplice e allenare modelli pre-allenati allo stato dell'arte. L'utilizzo di modelli pre-allenati puรฒ ridurre i tuoi costi computazionali, l'impatto ambientale, e farti risparmiare il tempo che utilizzeresti per allenare un modello da zero. I modelli possono essere utilizzati in diverse modalitร  come ad esempio: * ๐Ÿ“ Testo: classificazione del testo, estrazione delle informazioni, rispondere a domande, riassumere, traduzione e generazione del testo in piรน di 100 lingue. * ๐Ÿ–ผ๏ธ Immagini: classificazione di immagini, rilevazione di oggetti e segmentazione. * ๐Ÿ—ฃ๏ธ Audio: riconoscimento vocale e classificazione dell'audio. * ๐Ÿ™ Multimodale: rispondere a domande inerenti dati tabulari, riconoscimento ottico dei caratteri, estrazione di informazioni a partire da documenti scannerizzati, classificazione di video e risposta visuale a domande. La nostra libreria supporta un'integrazione perfetta tra tre delle librerie per il deep learning piรน popolari: [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/) e [JAX](https://jax.readthedocs.io/en/latest/). Allena il tuo modello in tre righe di codice in un framework, e caricalo per l'inferenza in un altro. Ogni architettura di ๐Ÿค— Transformers รจ definita in un modulo Python indipendente cosรฌ da poter essere personalizzata in modo semplice per la ricerca e gli esperimenti. ## Se stai cercando supporto personalizzato dal team di Hugging Face <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a> ## Contenuti La documentazione รจ organizzata in cinque parti: - **INIZIARE** contiene un tour rapido e le istruzioni di installazione per cominciare ad utilizzare ๐Ÿค— Transformers. - **TUTORIALS** รจ un buon posto da cui iniziare se per te la nostra libreria รจ nuova. Questa sezione ti aiuterร  ad acquisire le competenze basilari di cui hai bisogno per iniziare ad utilizzare ๐Ÿค— Transformers. - **GUIDE PRATICHE** ti mostrerร  come raggiungere obiettivi specifici come fare fine-tuning di un modello pre-allenato per la modellizzazione del linguaggio o come creare una testa per un modello personalizzato. - **GUIDE CONCETTUALI** fornisce discussioni e spiegazioni dei concetti sottostanti alle idee dietro ai modelli, compiti, e la filosofia di progettazione di ๐Ÿค— Transformers. 
- **API** descrive ogni classe e funzione, raggruppate in: - **CLASSI PRINCIPALI** per le classi principali che espongono le API importanti della libreria. - **MODELLI** per le classi e le funzioni relative ad ogni modello implementato all'interno della libreria. - **HELPERS INTERNI** per le classi e le funzioni che utilizziamo internamente. La libreria attualmente contiene implementazioni in JAX, PyTorch e TensorFlow, pesi di modelli pre-allenati, script di utilizzo e strumenti di conversione per i seguenti modelli. ### Modelli supportati <!--This list is updated automatically from the README with _make fix-copies_. Do not update manually! --> 1. **[ALBERT](model_doc/albert)** (da Google Research e l'Istituto Tecnologico di Chicago) rilasciato con il paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), da Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 1. **[ALIGN](model_doc/align)** (from Google Research) rilasciato con il paper [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) da Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. 1. **[BART](model_doc/bart)** (da Facebook) rilasciato con il paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) da Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov e Luke Zettlemoyer. 1. **[BARThez](model_doc/barthez)** (da politecnico di ร‰cole) rilasciato con il paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) da Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 1. **[BARTpho](model_doc/bartpho)** (da VinAI Research) rilasciato con il paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) da Nguyen Luong Tran, Duong Minh Le e Dat Quoc Nguyen. 1. **[BEiT](model_doc/beit)** (da Microsoft) rilasciato con il paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) da Hangbo Bao, Li Dong, Furu Wei. 1. **[BERT](model_doc/bert)** (da Google) rilasciato con il paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) da Jacob Devlin, Ming-Wei Chang, Kenton Lee e Kristina Toutanova. 1. **[BERTweet](model_doc/bertweet)** (da VinAI Research) rilasciato con il paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) da Dat Quoc Nguyen, Thanh Vu e Anh Tuan Nguyen. 1. **[BERT For Sequence Generation](model_doc/bert-generation)** (da Google) rilasciato con il paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) da Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[BigBird-RoBERTa](model_doc/big_bird)** (da Google Research) rilasciato con il paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) da Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. 
**[BigBird-Pegasus](model_doc/bigbird_pegasus)** (v Google Research) rilasciato con il paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) da Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. **[Blenderbot](model_doc/blenderbot)** (da Facebook) rilasciato con il paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) da Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BlenderbotSmall](model_doc/blenderbot-small)** (da Facebook) rilasciato con il paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) da Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BORT](model_doc/bort)** (da Alexa) rilasciato con il paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) da Adrian de Wynter e Daniel J. Perry. 1. **[ByT5](model_doc/byt5)** (da Google Research) rilasciato con il paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) da Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 1. **[CamemBERT](model_doc/camembert)** (da Inria/Facebook/Sorbonne) rilasciato con il paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) da Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suรกrez*, Yoann Dupont, Laurent Romary, ร‰ric Villemonte de la Clergerie, Djamรฉ Seddah e Benoรฎt Sagot. 1. **[CANINE](model_doc/canine)** (da Google Research) rilasciato con il paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) da Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 1. **[ConvNeXT](model_doc/convnext)** (da Facebook AI) rilasciato con il paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) da Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 1. **[ConvNeXTV2](model_doc/convnextv2)** (da Facebook AI) rilasciato con il paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) da Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. 1. **[CLIP](model_doc/clip)** (da OpenAI) rilasciato con il paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) da Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 1. **[ConvBERT](model_doc/convbert)** (da YituTech) rilasciato con il paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) da Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 1. 
1. **[CPM](model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
1. **[CTRL](model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
1. **[CvT](model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
1. **[Data2Vec](model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
1. **[DeBERTa](model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
1. **[DeBERTa-v2](model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
1. **[Decision Transformer](model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
1. **[DiT](model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
1. **[DeiT](model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
1. **[DETR](model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
1. **[DialoGPT](model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
1. **[DistilBERT](model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf.
The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT.
1. **[DPR](model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
1. **[DPT](model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
1. **[EfficientNet](model_doc/efficientnet)** (from Google Research) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le.
1. **[EncoderDecoder](model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
1. **[ELECTRA](model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
1. **[FlauBERT](model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
1. **[FLAVA](model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
1. **[FNet](model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
1. **[Funnel Transformer](model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
1. **[GLPN](model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
1. **[GPT](model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
1. **[GPT-2](model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
1. **[GPT-J](model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
1. **[GPT Neo](model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
1. **[GPT NeoX](model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach.
1. **[Hubert](model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
1. **[I-BERT](model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
1. **[ImageGPT](model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
1. **[LayoutLM](model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
1. **[LayoutLMv2](model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
1. **[LayoutLMv3](model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
1. **[LayoutXLM](model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
1. **[LED](model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
1. **[Longformer](model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
1. **[LUKE](model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
1. **[mLUKE](model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
1. **[LXMERT](model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
1. **[M2M100](model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
1. **[MarianMT](model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) was developed by the Microsoft Translator Team.
1. **[Mask2Former](model_doc/mask2former)** (from FAIR and UIUC) released with the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar.
1. **[MaskFormer](model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
1. **[MBart](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
1. **[MBart-50](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
1. **[Megatron-BERT](model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
1. **[Megatron-GPT2](model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
1. **[MPNet](model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
1. **[MT5](model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
1. **[Nyströmformer](model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
1. **[OneFormer](model_doc/oneformer)** (from SHI Labs) released with the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi.
1. **[OPT](model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[Pegasus](model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[Perceiver IO](model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
1. **[PhoBERT](model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
1. **[PLBart](model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
1. **[PoolFormer](model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
1. **[ProphetNet](model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
1. **[QDQBert](model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
1. **[REALM](model_doc/realm)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
1. **[Reformer](model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
1. **[RemBERT](model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
1. **[RegNet](model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
1. **[ResNet](model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
1. **[RoBERTa](model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
1. **[RoFormer](model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
1. **[SegFormer](model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
1. **[SEW](model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
1. **[SEW-D](model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
1. **[SpeechToTextTransformer](model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
1. **[SpeechToTextTransformer2](model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
1. **[Splinter](model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
1. **[SqueezeBert](model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
1. **[Swin Transformer](model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
1. **[T5](model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
1. **[T5v1.1](model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
1. **[TAPAS](model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
1. **[TAPEX](model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
1. **[Trajectory Transformer](model_doc/trajectory_transformer)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine.
1. **[Transformer-XL](model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
1. **[TrOCR](model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
1. **[UniSpeech](model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
1. **[UniSpeechSat](model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
1. **[VAN](model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
1. **[ViLT](model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
1. **[Vision Transformer (ViT)](model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
1. **[ViTMAE](model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
1. **[VisualBERT](model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
1. **[WavLM](model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
1. **[Wav2Vec2](model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
1. **[Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
1. **[XGLM](model_doc/xglm)** (from Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
1. **[XLM](model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
1. **[XLM-ProphetNet](model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
1. **[XLM-RoBERTa](model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
1. **[XLM-RoBERTa-XL](model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
1. **[XLNet](model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
1. **[XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
1. **[XLS-R](model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
1. **[YOLOS](model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
1. **[YOSO](model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.

### Supported frameworks

The table below represents the current support in the library for each of those models, whether they have a Python tokenizer (called "slow"), a "fast" tokenizer backed by the 🤗 Tokenizers library, and whether they have support in Jax (via Flax), PyTorch, and/or TensorFlow.

<!--This table is updated automatically from the auto modules with _make fix-copies_.
Do not update manually!-->

|            Model            | Tokenizer slow | Tokenizer fast | PyTorch support | TensorFlow support | Flax Support |
|:---------------------------:|:--------------:|:--------------:|:---------------:|:------------------:|:------------:|
|           ALBERT            |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|            BART             |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|            BEiT             |       ❌        |       ❌        |        ✅        |         ❌          |      ✅       |
|            BERT             |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|       Bert Generation       |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|           BigBird           |       ✅        |       ✅        |        ✅        |         ❌          |      ✅       |
|       BigBirdPegasus        |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|         Blenderbot          |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|       BlenderbotSmall       |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|          CamemBERT          |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|           Canine            |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|            CLIP             |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|          ConvBERT           |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|          ConvNext           |       ❌        |       ❌        |        ✅        |         ✅          |      ❌       |
|            CTRL             |       ✅        |       ❌        |        ✅        |         ✅          |      ❌       |
|             CvT             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|        Data2VecAudio        |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|        Data2VecText         |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|       Data2VecVision        |       ❌        |       ❌        |        ✅        |         ✅          |      ❌       |
|           DeBERTa           |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|         DeBERTa-v2          |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|    Decision Transformer     |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            DeiT             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            DETR             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|         DistilBERT          |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|             DPR             |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|             DPT             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|           ELECTRA           |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|       Encoder decoder       |       ❌        |       ❌        |        ✅        |         ✅          |      ✅       |
| FairSeq Machine-Translation |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|          FlauBERT           |       ✅        |       ❌        |        ✅        |         ✅          |      ❌       |
|            Flava            |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            FNet             |       ✅        |       ✅        |        ✅        |         ❌          |      ❌       |
|     Funnel Transformer      |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|            GLPN             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|           GPT Neo           |       ❌        |       ❌        |        ✅        |         ❌          |      ✅       |
|          GPT NeoX           |       ❌        |       ✅        |        ✅        |         ❌          |      ❌       |
|            GPT-J            |       ❌        |       ❌        |        ✅        |         ✅          |      ✅       |
|           Hubert            |       ❌        |       ❌        |        ✅        |         ✅          |      ❌       |
|           I-BERT            |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|          ImageGPT           |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|          LayoutLM           |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|         LayoutLMv2          |       ✅        |       ✅        |        ✅        |         ❌          |      ❌       |
|         LayoutLMv3          |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|             LED             |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|         Longformer          |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|            LUKE             |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|           LXMERT            |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|           M2M100            |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|           Marian            |       ✅        |       ❌        |        ✅        |         ✅          |      ✅       |
|         MaskFormer          |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            mBART            |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|        MegatronBert         |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|         MobileBERT          |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|            MPNet            |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|             mT5             |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|        Nystromformer        |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|         OpenAI GPT          |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|        OpenAI GPT-2         |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|             OPT             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|           Pegasus           |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|          Perceiver          |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|           PLBart            |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|         PoolFormer          |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|         ProphetNet          |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|           QDQBert           |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             RAG             |       ✅        |       ❌        |        ✅        |         ✅          |      ❌       |
|            Realm            |       ✅        |       ✅        |        ✅        |         ❌          |      ❌       |
|          Reformer           |       ✅        |       ✅        |        ✅        |         ❌          |      ❌       |
|           RegNet            |       ❌        |       ❌        |        ✅        |         ✅          |      ✅       |
|           RemBERT           |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|           ResNet            |       ❌        |       ❌        |        ✅        |         ✅          |      ✅       |
|          RetriBERT          |       ✅        |       ✅        |        ✅        |         ❌          |      ❌       |
|           RoBERTa           |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|          RoFormer           |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|          SegFormer          |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             SEW             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            SEW-D            |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|   Speech Encoder decoder    |       ❌        |       ❌        |        ✅        |         ❌          |      ✅       |
|         Speech2Text         |       ✅        |       ❌        |        ✅        |         ✅          |      ❌       |
|        Speech2Text2         |       ✅        |       ❌        |        ❌        |         ❌          |      ❌       |
|          Splinter           |       ✅        |       ✅        |        ✅        |         ❌          |      ❌       |
|         SqueezeBERT         |       ✅        |       ✅        |        ✅        |         ❌          |      ❌       |
|            Swin             |       ❌        |       ❌        |        ✅        |         ✅          |      ❌       |
|             T5              |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|            TAPAS            |       ✅        |       ❌        |        ✅        |         ✅          |      ❌       |
|   Trajectory Transformer    |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|       Transformer-XL        |       ✅        |       ❌        |        ✅        |         ✅          |      ❌       |
|            TrOCR            |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|          UniSpeech          |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|        UniSpeechSat         |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             VAN             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            ViLT             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|   Vision Encoder decoder    |       ❌        |       ❌        |        ✅        |         ✅          |      ✅       |
|    VisionTextDualEncoder    |       ❌        |       ❌        |        ✅        |         ❌          |      ✅       |
|         VisualBert          |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|             ViT             |       ❌        |       ❌        |        ✅        |         ✅          |      ✅       |
|           ViTMAE            |       ❌        |       ❌        |        ✅        |         ✅          |      ❌       |
|          Wav2Vec2           |       ✅        |       ❌        |        ✅        |         ✅          |      ✅       |
|     Wav2Vec2-Conformer      |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            WavLM            |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            XGLM             |       ✅        |       ✅        |        ✅        |         ❌          |      ✅       |
|             XLM             |       ✅        |       ❌        |        ✅        |         ✅          |      ❌       |
|         XLM-RoBERTa         |       ✅        |       ✅        |        ✅        |         ✅          |      ✅       |
|       XLM-RoBERTa-XL        |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|        XLMProphetNet        |       ✅        |       ❌        |        ✅        |         ❌          |      ❌       |
|            XLNet            |       ✅        |       ✅        |        ✅        |         ✅          |      ❌       |
|            YOLOS            |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |
|            YOSO             |       ❌        |       ❌        |        ✅        |         ❌          |      ❌       |

<!-- End table-->
hf_public_repos/transformers/docs/source/it/custom_models.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Sharing custom models

The 🤗 Transformers library is designed to be easily extensible. Every model's code is fully contained in a subfolder of the repository, with no abstraction, so you can easily copy a model file and tweak it to your needs.

If you are writing a brand new model, it might be easier to start from scratch. In this tutorial, we will show you how to write a custom model and its configuration so it can be used inside Transformers, and how to share it with the community (together with the code it relies on) so that anyone can use it, even if it's not present in the 🤗 Transformers library.

We will illustrate all of this on a ResNet model, by wrapping the ResNet class of the [timm library](https://github.com/rwightman/pytorch-image-models) into a [`PreTrainedModel`].

## Writing a custom configuration

Before diving into the model, let's first write its configuration. The configuration of a model is an object that contains all the information needed to build the model. As we will see in the next section, the model can only be initialized from a `config`, so we need that object to be as complete as possible.

In our example, we will take a couple of arguments of the ResNet class that we might want to tweak. Different configurations will then give us the different possible types of ResNet. We then store those arguments, after checking their validity.
```python
from transformers import PretrainedConfig
from typing import List


class ResnetConfig(PretrainedConfig):
    model_type = "resnet"

    def __init__(
        self,
        block_type="bottleneck",
        layers: List[int] = [3, 4, 6, 3],
        num_classes: int = 1000,
        input_channels: int = 3,
        cardinality: int = 1,
        base_width: int = 64,
        stem_width: int = 64,
        stem_type: str = "",
        avg_down: bool = False,
        **kwargs,
    ):
        if block_type not in ["basic", "bottleneck"]:
            raise ValueError(f"`block_type` must be 'basic' or 'bottleneck', got {block_type}.")
        if stem_type not in ["", "deep", "deep-tiered"]:
            raise ValueError(f"`stem_type` must be '', 'deep' or 'deep-tiered', got {stem_type}.")

        self.block_type = block_type
        self.layers = layers
        self.num_classes = num_classes
        self.input_channels = input_channels
        self.cardinality = cardinality
        self.base_width = base_width
        self.stem_width = stem_width
        self.stem_type = stem_type
        self.avg_down = avg_down
        super().__init__(**kwargs)
```

The three important things to remember when writing your own configuration are:

- you have to inherit from `PretrainedConfig`,
- the `__init__` of your `PretrainedConfig` must accept any kwargs,
- those `kwargs` need to be passed along to the superclass `__init__`.

The inheritance is to make sure you get all the functionality from the 🤗 Transformers library, while the other two constraints come from the fact that a `PretrainedConfig` has more fields than the ones you are setting. When reloading a config with the `from_pretrained` method, those fields need to be accepted by your config and then sent along to the superclass.

Defining a `model_type` for your configuration (here `model_type = "resnet"`) is not mandatory, unless you want to register your model with the auto classes (see the last section).

Once this is done, you can easily create and save your configuration like you would with any other model configuration of the library. Here is how we can create a resnet50d config and save it:

```py
resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True)
resnet50d_config.save_pretrained("custom-resnet")
```

This will save a file named `config.json` inside the folder `custom-resnet`. You can then reload your config with the `from_pretrained` method:

```py
resnet50d_config = ResnetConfig.from_pretrained("custom-resnet")
```

You can also use any other method of the [`PretrainedConfig`] class, like [`~PretrainedConfig.push_to_hub`], to directly upload your config to the Hub.

## Writing a custom model

Now that we have our ResNet configuration, we can go on writing the model. We will actually write two: one that extracts the hidden features from a batch of images (like [`BertModel`]) and one that is suitable for image classification (like [`BertForSequenceClassification`]).

As mentioned before, we will only write a loose wrapper of the model to keep this example simple. The only thing we need to do before writing this class is a map between the block types and the actual block classes. Then the model is defined from the configuration by passing everything to the `ResNet` class:
```py
from transformers import PreTrainedModel
from timm.models.resnet import BasicBlock, Bottleneck, ResNet
from .configuration_resnet import ResnetConfig


BLOCK_MAPPING = {"basic": BasicBlock, "bottleneck": Bottleneck}


class ResnetModel(PreTrainedModel):
    config_class = ResnetConfig

    def __init__(self, config):
        super().__init__(config)
        block_layer = BLOCK_MAPPING[config.block_type]
        self.model = ResNet(
            block_layer,
            config.layers,
            num_classes=config.num_classes,
            in_chans=config.input_channels,
            cardinality=config.cardinality,
            base_width=config.base_width,
            stem_width=config.stem_width,
            stem_type=config.stem_type,
            avg_down=config.avg_down,
        )

    def forward(self, tensor):
        return self.model.forward_features(tensor)
```

For the model that will classify images, we just change the forward method:

```py
import torch


class ResnetModelForImageClassification(PreTrainedModel):
    config_class = ResnetConfig

    def __init__(self, config):
        super().__init__(config)
        block_layer = BLOCK_MAPPING[config.block_type]
        self.model = ResNet(
            block_layer,
            config.layers,
            num_classes=config.num_classes,
            in_chans=config.input_channels,
            cardinality=config.cardinality,
            base_width=config.base_width,
            stem_width=config.stem_width,
            stem_type=config.stem_type,
            avg_down=config.avg_down,
        )

    def forward(self, tensor, labels=None):
        logits = self.model(tensor)
        if labels is not None:
            # use the functional API here: `torch.nn.cross_entropy` does not exist
            loss = torch.nn.functional.cross_entropy(logits, labels)
            return {"loss": loss, "logits": logits}
        return {"logits": logits}
```

In both cases, notice how we inherit from `PreTrainedModel` and call the superclass initialization with the `config` (a bit like when you write a regular `torch.nn.Module`). The line that sets the `config_class` is not mandatory, unless you want to register your model with the auto classes (see the last section).

<Tip>

If your model is very similar to a model inside the library, you can re-use the same configuration as that model.

</Tip>

You can have your model return anything you want, but returning a dictionary like we did for `ResnetModelForImageClassification`, with the loss included when labels are passed, will make your model directly usable inside the [`Trainer`] class. Using another output format is fine as long as you are planning on using your own training loop or another library for training.

Now that we have our model class, let's create one:

```py
resnet50d = ResnetModelForImageClassification(resnet50d_config)
```

Again, you can use any of the methods of [`PreTrainedModel`], like [`~PreTrainedModel.save_pretrained`] or [`~PreTrainedModel.push_to_hub`]. We will use the second in the next section, and see how to push the model weights together with the code of our model. But first, let's load some pretrained weights inside our model.

In your own use case, you will probably be training your custom model on your own data. To go fast for this tutorial, we will use the pretrained version of resnet50d. Since our model is just a wrapper around it, it's going to be easy to transfer those weights:

```py
import timm

pretrained_model = timm.create_model("resnet50d", pretrained=True)
resnet50d.model.load_state_dict(pretrained_model.state_dict())
```

Now let's see how to make sure that when we do [`~PreTrainedModel.save_pretrained`] or [`~PreTrainedModel.push_to_hub`], the code of the model gets saved.
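Before that, a quick smoke test can confirm that the wrapped model runs end to end. This is just a minimal sketch, not part of the original tutorial: the random tensor stands in for a real batch of images, and the shapes follow the configuration defaults above (3 input channels, 1000 classes):

```py
import torch

# dummy batch of 2 RGB images at 224x224, matching input_channels=3
inputs = torch.randn(2, 3, 224, 224)

with torch.no_grad():
    outputs = resnet50d(inputs)

# with the default num_classes=1000 this prints torch.Size([2, 1000])
print(outputs["logits"].shape)
```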
## Sending the code to the Hub

<Tip warning={true}>

This API is experimental and may have some slight breaking changes in the next releases.

</Tip>

First, make sure your model is fully defined in a `.py` file. It can rely on relative imports to some other files as long as all the files are in the same directory (we don't support submodules for this feature yet). For our example, we'll define a `modeling_resnet.py` file and a `configuration_resnet.py` file in a folder of the current working directory named `resnet_model`. The configuration file contains the code for `ResnetConfig` and the modeling file contains the code of `ResnetModel` and `ResnetModelForImageClassification`.

```
.
└── resnet_model
    ├── __init__.py
    ├── configuration_resnet.py
    └── modeling_resnet.py
```

The `__init__.py` can be empty; it's just there so that Python detects `resnet_model` can be used as a module.

<Tip warning={true}>

If copying modeling files from the library, you will need to replace all the relative imports at the top of the file with imports from the `transformers` package.

</Tip>

Note that you can re-use (or subclass) an existing configuration/model.

To share your model with the community, follow these steps: first import the ResNet model and config from the newly created files:

```py
from resnet_model.configuration_resnet import ResnetConfig
from resnet_model.modeling_resnet import ResnetModel, ResnetModelForImageClassification
```

Then you have to tell the library you want to copy the code files of those objects when using the `save_pretrained` method and properly register them with a given auto class (especially for models). Just run:

```py
ResnetConfig.register_for_auto_class()
ResnetModel.register_for_auto_class("AutoModel")
ResnetModelForImageClassification.register_for_auto_class("AutoModelForImageClassification")
```

Note that there is no need to specify an auto class for the configuration (there is only one auto class for them, [`AutoConfig`]) but it's different for models. Your custom model could be suitable for many different tasks, so you have to specify which one of the auto classes is the correct one for your model.

Next, let's create the config and models as we did before:

```py
resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True)
resnet50d = ResnetModelForImageClassification(resnet50d_config)

pretrained_model = timm.create_model("resnet50d", pretrained=True)
resnet50d.model.load_state_dict(pretrained_model.state_dict())
```

Now to send the model to the Hub, make sure you are logged in. Either run in your terminal:

```bash
huggingface-cli login
```

or from a notebook:

```py
from huggingface_hub import notebook_login

notebook_login()
```

You can then push to your own namespace (or an organization you are a member of) like this:

```py
resnet50d.push_to_hub("custom-resnet50d")
```

On top of the model weights and the configuration in json format, this also copied the modeling and configuration `.py` files into the folder `custom-resnet50d` and uploaded the result to the Hub. You can check the result in this [model repo](https://huggingface.co/sgugger/custom-resnet50d).

See the [sharing tutorial](model_sharing) for more information on the push-to-Hub method.
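The same code-copying behavior applies to a plain local save as well. As a small sketch (the folder name is arbitrary): since both classes were registered for auto classes above, `save_pretrained` should also copy the custom code files next to the weights:

```py
resnet50d.save_pretrained("custom-resnet50d")
# Besides config.json and the weights, the folder should now also contain
# configuration_resnet.py and modeling_resnet.py.
```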
## Using a model with custom code

You can use any configuration, model or tokenizer with custom code files in its repository with the auto classes and the `from_pretrained` method. All files and code uploaded to the Hub are scanned for malware (refer to the [Hub security](https://huggingface.co/docs/hub/security#malware-scanning) documentation for more information), but you should still review the model code and its author to avoid executing malicious code on your machine. Set `trust_remote_code=True` to use a model with custom code:

```py
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained("sgugger/custom-resnet50d", trust_remote_code=True)
```

It is also strongly encouraged to pass a commit hash as a `revision` to make sure the author of the model did not update the code with some malicious new lines (unless you fully trust the authors of the model):

```py
commit_hash = "ed94a7c6247d8aedce4647f00f20de6875b5b292"
model = AutoModelForImageClassification.from_pretrained(
    "sgugger/custom-resnet50d", trust_remote_code=True, revision=commit_hash
)
```

Note that when browsing the commit history of the model repo on the Hub, there is a button to easily copy the commit hash of any commit.

## Registering a model with custom code to the auto classes

If you are writing a library that extends 🤗 Transformers, you may want to extend the auto classes to include your own model. This is different from pushing the code to the Hub in the sense that users will need to import your library to get the custom models (as opposed to automatically downloading the model code from the Hub).

As long as your config has a `model_type` attribute that is different from existing model types, and as long as your model classes have the right `config_class` attributes, you can just add them to the auto classes like this:

```py
from transformers import AutoConfig, AutoModel, AutoModelForImageClassification

AutoConfig.register("resnet", ResnetConfig)
AutoModel.register(ResnetConfig, ResnetModel)
AutoModelForImageClassification.register(ResnetConfig, ResnetModelForImageClassification)
```

Note that the first argument used when registering your custom config with [`AutoConfig`] needs to match the `model_type` of your custom config, and the first argument used when registering your custom models to any auto model class needs to match the `config_class` of those models.
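Once registered, the auto classes resolve your custom model like a built-in one. A minimal sketch, assuming the `custom-resnet` folder saved earlier in this tutorial is still around:

```py
from transformers import AutoConfig, AutoModelForImageClassification

# resolves to ResnetConfig via the registered model_type "resnet"
config = AutoConfig.from_pretrained("custom-resnet")

# resolves to ResnetModelForImageClassification via the registered config_class
model = AutoModelForImageClassification.from_config(config)
```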
hf_public_repos/transformers/docs/source/it/perf_hardware.md
<!---
Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Custom hardware for training

The hardware you use to run model training and inference can have a big effect on performance. For a deep dive into GPUs, make sure to check out Tim Dettmers' excellent [blog post](https://timdettmers.com/2020/09/07/which-gpu-for-deep-learning/).

Let's have a look at some practical advice for GPU setups.

## GPU

When you train bigger models you have essentially three options:

- bigger GPUs
- more GPUs
- more CPU and NVMe (offloaded to by [DeepSpeed-Infinity](main_classes/deepspeed#nvme-support))

Let's start with the case where you have a single GPU.

### Power and Cooling

If you bought an expensive high end GPU, make sure you give it the correct power and sufficient cooling.

**Power**:

Some high end consumer GPU cards have 2 and sometimes 3 PCI-E 8-pin power sockets. Make sure you have as many independent 12V PCI-E 8-pin cables plugged into the card as there are sockets. Do not use the 2 splits at one end of the same cable (also known as a pigtail cable). That is, if you have 2 sockets on the GPU, you want 2 PCI-E 8-pin cables going from your PSU to the card and not one that has 2 PCI-E 8-pin connectors at the end! You won't get the full performance out of your card otherwise.

Each PCI-E 8-pin power cable needs to be plugged into a 12V rail on the PSU side and can supply up to 150W of power. Some other cards may use PCI-E 12-pin connectors, and these can deliver up to 500-600W of power. Low end cards may use 6-pin connectors, which supply up to 75W of power.

Additionally, you want a high-end PSU that has stable voltage. Some lower quality ones may not give the card the stable voltage it needs to function at its peak. And of course the PSU needs to have enough unused wattage to power the card.

**Cooling**:

When a GPU gets overheated it will start throttling down and will not deliver full performance, and it can even shut down if it gets too hot. It's hard to tell the exact best temperature to strive for when a GPU is heavily loaded, but probably anything under +80°C is good, and lower is better - perhaps 70-75°C is an excellent range to be in. Throttling is likely to start at around 84-90°C. But other than throttling performance, a prolonged very high temperature is likely to reduce the lifespan of a GPU.

Next, let's have a look at one of the most important aspects when having multiple GPUs: connectivity.
### Multi-GPU Connectivity

If you use multiple GPUs, the way the cards are interconnected can have a huge impact on the total training time. If the GPUs are on the same physical node, you can run:

```
nvidia-smi topo -m
```

and it will tell you how the GPUs are interconnected. On a machine with dual GPUs connected with NVLink, you will most likely see something like:

```
        GPU0    GPU1    CPU Affinity    NUMA Affinity
GPU0     X      NV2     0-23            N/A
GPU1    NV2      X      0-23            N/A
```

On a different machine without NVLink we may see:

```
        GPU0    GPU1    CPU Affinity    NUMA Affinity
GPU0     X      PHB     0-11            N/A
GPU1    PHB      X      0-11            N/A
```

The report includes this legend:

```
  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks
```

So the first report, `NV2`, tells us the GPUs are interconnected with 2 NVLinks, while in the second report, `PHB`, we have a typical consumer-level PCIe+Bridge setup.

Check what type of connectivity you have on your setup. Some of these will make the communication between cards faster (e.g. NVLink), others slower (e.g. PHB).

Depending on the type of scalability solution used, the connectivity speed can have a major or a minor impact. If the GPUs need to sync rarely, as in DDP, the impact of a slower connection will be less significant. If the GPUs need to exchange messages often, as in ZeRO-DP, then faster connectivity becomes extremely important to achieve faster training.

#### NVLink

[NVLink](https://en.wikipedia.org/wiki/NVLink) is a wire-based serial multi-lane near-range communications link developed by Nvidia.

Each new generation provides faster bandwidth, e.g. here is a quote from [Nvidia Ampere GA102 GPU Architecture](https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/ampere/pdf/NVIDIA-ampere-GA102-GPU-Architecture-Whitepaper-V1.pdf):

> Third-Generation NVLink®
> GA102 GPUs utilize NVIDIA's third-generation NVLink interface, which includes four x4 links,
> with each link providing 14.0625 GB/sec bandwidth in each direction between two GPUs. Four
> links provide 56.25 GB/sec bandwidth in each direction, and 112.5 GB/sec total bandwidth
> between two GPUs. Two RTX 3090 GPUs can be connected together for SLI using NVLink.
> (Note that 3-Way and 4-Way SLI configurations are not supported.)

So the higher the `X` you get in the `NVX` report in the output of `nvidia-smi topo -m`, the better. The generation will depend on your GPU architecture.

Let's compare the execution of a gpt2 language model training over a small sample of wikitext. The results are:

| NVlink | Time |
| ------ | ---: |
| Y      | 101s |
| N      | 131s |

You can see that NVLink completes the training ~23% faster. In the second benchmark we use `NCCL_P2P_DISABLE=1` to tell the GPUs not to use NVLink.
Here is the full benchmark code and outputs:

```bash
# DDP w/ NVLink

rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 torchrun \
--nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 \
--dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train \
--output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200

{'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69}

# DDP w/o NVLink

rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 NCCL_P2P_DISABLE=1 torchrun \
--nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 \
--dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train \
--output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200

{'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69}
```

Hardware: 2x TITAN RTX, 24GB each, with 2 NVLinks (`NV2` in `nvidia-smi topo -m`)
Software: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0`
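As a side note, if you want to interrogate the NVLink side directly rather than through the topology matrix, recent NVIDIA drivers expose a per-link status query. A quick check might look like this (the exact output format varies with driver version):

```bash
# Show per-GPU NVLink link status (requires NVIDIA drivers with nvlink support)
nvidia-smi nvlink --status
```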
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/perf_train_tpu.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Training on TPU

<Tip>

Note: Most of the strategies introduced in the [single GPU section](perf_train_gpu_one) (such as mixed precision training or gradient accumulation) and the [multi-GPU section](perf_train_gpu_many) are generic and apply to training models in general, so make sure to have a look at them before diving into this section.

</Tip>

This document will soon be completed with information on how to train on TPU.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/quicktour.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Quick tour

[[open-in-colab]]

Get up and running with ๐Ÿค— Transformers! Start using [`pipeline`] for fast inference, and load a pretrained model and tokenizer with an [AutoClass](./model_doc/auto) to solve your text, image, or audio tasks.

<Tip>

All the code examples in this documentation have a button at the top left that lets you switch between PyTorch and TensorFlow. If it's not there, the code is expected to work for both backends without any change.

</Tip>

## Pipeline

[`pipeline`] is the easiest way to use a pretrained model for a given task.

<Youtube id="tiZFewofSLM"/>

The [`pipeline`] supports many common tasks:

**Text**:
* Sentiment Analysis: classify the polarity of a given text.
* Text Generation: generate text from a given input.
* Named Entity Recognition (NER): label each word with the entity it represents (person, date, location, etc.).
* Question Answering: extract the answer from a context, given some context and a question.
* Fill-mask: fill in the blanks in a text that has masked words.
* Summarization: generate a summary of a long sequence of text or a document.
* Translation: translate a text into another language.
* Feature Extraction: create a tensor representation of a text.

**Images**:
* Image Classification: classify an image.
* Image Segmentation: classify every pixel of an image.
* Object Detection: detect objects within an image.

**Audio**:
* Audio Classification: assign a label to a given audio segment.
* Automatic Speech Recognition (ASR): transcribe a given audio into text.

<Tip>

For more details about the [`pipeline`] and its associated tasks, refer to the documentation [here](./main_classes/pipelines).

</Tip>

### Pipeline usage

In the following example, you will use the [`pipeline`] for sentiment analysis.
Install the following dependencies if you haven't already:

<frameworkcontent>
<pt>

```bash
pip install torch
```
</pt>
<tf>

```bash
pip install tensorflow
```
</tf>
</frameworkcontent>

Import [`pipeline`] and specify the task you want to complete:

```py
>>> from transformers import pipeline

>>> classificatore = pipeline("sentiment-analysis", model="MilaNLProc/feel-it-italian-sentiment")
```

The pipeline downloads and caches the [pretrained model](https://huggingface.co/MilaNLProc/feel-it-italian-sentiment) and the tokenizer for sentiment analysis. If we hadn't chosen a model, the pipeline would have picked a default one. Now you can use the `classificatore` on your target text:

```py
>>> classificatore("Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.")
[{'label': 'positive', 'score': 0.9997}]
```

For more than one sentence, pass a list of sentences to the [`pipeline`], which will return a list of dictionaries:

```py
>>> risultati = classificatore(
...     ["Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.", "Speriamo te non la odierai."]
... )
>>> for risultato in risultati:
...     print(f"etichetta: {risultato['label']}, con punteggio: {round(risultato['score'], 4)}")
etichetta: positive, con punteggio: 0.9998
etichetta: negative, con punteggio: 0.9998
```

The [`pipeline`] can also iterate over an entire dataset. Start by installing the [๐Ÿค— Datasets](https://huggingface.co/docs/datasets/) library:

```bash
pip install datasets
```

Create a [`pipeline`] with the task you want to solve and the model you want to use.

```py
>>> import torch
>>> from transformers import pipeline

>>> riconoscitore_vocale = pipeline(
...     "automatic-speech-recognition", model="radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram"
... )
```

Next, load a dataset (see the ๐Ÿค— Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart) for more details) you'd like to iterate over. For example, let's load the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset:

```py
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", name="it-IT", split="train")  # doctest: +IGNORE_RESULT
```

We need to make sure that the sampling rate of the dataset matches the sampling rate `radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram` was trained with.

```py
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=riconoscitore_vocale.feature_extractor.sampling_rate))
```

The audio files are automatically loaded and resampled when we call the `"audio"` column. Let's extract the raw waveform arrays of the first 4 samples and pass them as a list to the pipeline:

```py
>>> risultato = riconoscitore_vocale(dataset[:4]["audio"])
>>> print([d["text"] for d in risultato])
['dovrei caricare dei soldi sul mio conto corrente', 'buongiorno e senza vorrei depositare denaro sul mio conto corrente come devo fare per cortesia', 'sรฌ salve vorrei depositare del denaro sul mio conto', 'e buon pomeriggio vorrei depositare dei soldi sul mio conto bancario volleo sapere come posso fare se e posso farlo online ed un altro conto o andandoo tramite bancomut']
```

For a larger dataset where the inputs are big (as in speech/audio or vision), you'll want to pass a generator instead of a list, which would otherwise load all the inputs into memory. Take a look at the [pipeline documentation](./main_classes/pipelines) for more information.
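As a rough sketch of that generator approach (reusing the `riconoscitore_vocale` pipeline and `dataset` from above; the generator and its name are illustrative), you can yield one audio sample at a time so the full dataset never sits in memory:

```py
# Sketch: stream samples into the pipeline one at a time instead of
# materializing the whole list in memory.
def audio_generator():
    for sample in dataset:
        yield sample["audio"]

for output in riconoscitore_vocale(audio_generator()):
    print(output["text"])
```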
### Use another model and tokenizer in the pipeline

The [`pipeline`] can accommodate any model from the [Model Hub](https://huggingface.co/models), making it easy to adapt the [`pipeline`] for other use-cases. For example, if you'd like a model capable of handling French text, use the tags on the Model Hub to filter for an appropriate model. The top filtered result returns a multilingual [BERT model](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) fine-tuned for sentiment analysis. Great, let's use this model!

```py
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
```

<frameworkcontent>
<pt>
Use [`AutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and its associated tokenizer (more on an `AutoClass` below):

```py
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```
</pt>
<tf>
Use [`TFAutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and its associated tokenizer (more on a `TFAutoClass` below):

```py
>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

>>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```
</tf>
</frameworkcontent>

Then you can specify the model and tokenizer in the [`pipeline`], and apply the `classifier` to your target text:

```py
>>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
>>> classifier("Nous sommes trรจs heureux de vous prรฉsenter la bibliothรจque ๐Ÿค— Transformers.")
[{'label': '5 stars', 'score': 0.7273}]
```

If you can't find a model for your use-case, you'll need to fine-tune a pretrained model on your data. Take a look at our [fine-tuning tutorial](./training) to learn how. Finally, after you've fine-tuned your pretrained model, please consider sharing it (see the tutorial [here](./model_sharing)) with the community on the Model Hub to democratize NLP! ๐Ÿค—

## AutoClass

<Youtube id="AhChOFRegn4"/>

Under the hood, the [`AutoModelForSequenceClassification`] and [`AutoTokenizer`] classes work together to power the [`pipeline`]. An [AutoClass](./model_doc/auto) is a shortcut that automatically retrieves the architecture of a pretrained model from its name or path. You only need to select the appropriate `AutoClass` for your task and its associated tokenizer with [`AutoTokenizer`].

Let's return to our example and see how you can use the `AutoClass` to replicate the results of the [`pipeline`].

### AutoTokenizer

A tokenizer is responsible for preprocessing text into a format that is understandable to the model. First, the tokenizer will split the text into words called *tokens*. There are multiple rules that govern the tokenization process, including how to split a word and at what level (learn more about tokenization [here](./tokenizer_summary)).
The most important thing to remember, though, is that you need to instantiate the tokenizer with the same model name to ensure you're using the same tokenization rules the model was pretrained with.

Load a tokenizer with [`AutoTokenizer`]:

```py
>>> from transformers import AutoTokenizer

>>> nome_del_modello = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tokenizer = AutoTokenizer.from_pretrained(nome_del_modello)
```

Next, the tokenizer converts the tokens into numbers to construct a tensor as input to the model. This is known as the model's *vocabulary*.

Pass your text to the tokenizer:

```py
>>> encoding = tokenizer("Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.")
>>> print(encoding)
{'input_ids': [101, 56821, 10132, 14407, 13019, 13007, 10120, 47201, 10330, 10106, 91686, 100, 58263, 119, 102],
 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```

The tokenizer will return a dictionary containing:

* [input_ids](./glossary#input-ids): numerical representations of your tokens.
* [attention_mask](./glossary#attention-mask): indicates which tokens should be attended to.

Like the [`pipeline`], the tokenizer will accept a list of inputs. In addition, the tokenizer can also pad and truncate the text to return a batch of uniform length:

<frameworkcontent>
<pt>

```py
>>> pt_batch = tokenizer(
...     ["Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.", "Speriamo te non la odierai."],
...     padding=True,
...     truncation=True,
...     max_length=512,
...     return_tensors="pt",
... )
```
</pt>
<tf>

```py
>>> tf_batch = tokenizer(
...     ["Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.", "Speriamo te non la odierai."],
...     padding=True,
...     truncation=True,
...     max_length=512,
...     return_tensors="tf",
... )
```
</tf>
</frameworkcontent>

Read the [preprocessing](./preprocessing) tutorial for more details about tokenization.

### AutoModel

<frameworkcontent>
<pt>
๐Ÿค— Transformers provides a simple and unified way to load pretrained instances. This means you can load an [`AutoModel`] like you would load an [`AutoTokenizer`]. The only difference is selecting the correct [`AutoModel`] for the task. Since you are doing text (or sequence) classification, load [`AutoModelForSequenceClassification`]:

```py
>>> from transformers import AutoModelForSequenceClassification

>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name)
```

<Tip>

See the [task summary](./task_summary) to check which [`AutoModel`] class to use for which task.

</Tip>

Now you can pass your preprocessed batch of inputs directly to the model. You just have to unpack the dictionary by adding `**`:

```py
>>> pt_outputs = pt_model(**pt_batch)
```

The model outputs the final activations in the `logits` attribute.
Apply the softmax function to `logits` to retrieve the probabilities:

```py
>>> from torch import nn

>>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)
>>> print(pt_predictions)
tensor([[0.0041, 0.0037, 0.0203, 0.2005, 0.7713],
        [0.3766, 0.3292, 0.1832, 0.0558, 0.0552]], grad_fn=<SoftmaxBackward0>)
```
</pt>
<tf>
๐Ÿค— Transformers provides a simple and unified way to load pretrained instances. This means you can load a [`TFAutoModel`] like you would load an [`AutoTokenizer`]. The only difference is selecting the correct [`TFAutoModel`] for the task. Since you are doing text (or sequence) classification, load [`TFAutoModelForSequenceClassification`]:

```py
>>> from transformers import TFAutoModelForSequenceClassification

>>> nome_del_modello = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(nome_del_modello)
```

<Tip>

See the [task summary](./task_summary) to check which [`AutoModel`] class to use for which task.

</Tip>

Now you can pass your preprocessed batch of inputs directly to the model by passing the dictionary keys directly to the tensors:

```py
>>> tf_outputs = tf_model(tf_batch)
```

The model outputs the final activations in the `logits` attribute. Apply the softmax function to `logits` to retrieve the probabilities:

```py
>>> import tensorflow as tf

>>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1)
>>> tf_predictions  # doctest: +IGNORE_RESULT
```
</tf>
</frameworkcontent>

<Tip>

All ๐Ÿค— Transformers models (PyTorch and TensorFlow) output the tensors *before* the final activation function (like softmax) because the final activation function is often fused with the loss.

</Tip>

Models are standard [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so you can use them in your usual training loop. However, to make things easier, ๐Ÿค— Transformers provides a [`Trainer`] class for PyTorch that adds functionality for distributed training, mixed precision, and more. For TensorFlow, you can use the `fit` method from [Keras](https://keras.io/). Refer to the [training tutorial](./training) for more details.

<Tip>

๐Ÿค— Transformers model outputs are special dataclasses, so their attributes are autocompleted in an IDE. The model outputs also behave like a tuple or a dictionary (e.g., you can index with an integer, a slice, or a string), in which case the attributes that are `None` are ignored.
</Tip>

### Save a model

<frameworkcontent>
<pt>
Once your model is fine-tuned, you can save it with its tokenizer using [`PreTrainedModel.save_pretrained`]:

```py
>>> pt_save_directory = "./pt_save_pretrained"
>>> tokenizer.save_pretrained(pt_save_directory)  # doctest: +IGNORE_RESULT
>>> pt_model.save_pretrained(pt_save_directory)
```

When you are ready to use the model again, reload it with [`PreTrainedModel.from_pretrained`]:

```py
>>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained")
```
</pt>
<tf>
Once your model is fine-tuned, you can save it with its tokenizer using [`TFPreTrainedModel.save_pretrained`]:

```py
>>> tf_save_directory = "./tf_save_pretrained"
>>> tokenizer.save_pretrained(tf_save_directory)  # doctest: +IGNORE_RESULT
>>> tf_model.save_pretrained(tf_save_directory)
```

When you are ready to use the model again, reload it with [`TFPreTrainedModel.from_pretrained`]:

```py
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained")
```
</tf>
</frameworkcontent>

One particularly cool ๐Ÿค— Transformers feature is the ability to save a model and reload it as either a PyTorch or a TensorFlow model. The `from_pt` or `from_tf` parameters can convert a model from one framework to the other:

<frameworkcontent>
<pt>

```py
>>> from transformers import AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory)
>>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True)
```
</pt>
<tf>

```py
>>> from transformers import TFAutoModel

>>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory)
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True)
```
</tf>
</frameworkcontent>
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/debugging.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Debugging

## Multi-GPU network issues debug

When training or inferencing with `DistributedDataParallel` and multiple GPUs, if you run into issues of intercommunication between processes and/or nodes, you can use the following script to diagnose network issues.

```bash
wget https://raw.githubusercontent.com/huggingface/transformers/main/scripts/distributed/torch-distributed-gpu-test.py
```

For example, to test how 2 GPUs interact, do:

```bash
python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py
```

If both processes can talk to each other and allocate GPU memory, each one will print an OK status. For more GPUs or nodes, adjust the arguments in the script.

You will find a lot more details inside the diagnostics script, including a guide on how to run it in a SLURM environment.

An additional level of debug is to add the `NCCL_DEBUG=INFO` environment variable as follows:

```bash
NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py
```

This will dump a lot of NCCL-related debug information, which you can then search online if you run into problems. Or, if you're not sure how to interpret the output, you can share the log file in an Issue.

## Underflow and overflow detection

<Tip>

This feature is currently available for PyTorch only.

</Tip>

<Tip>

For multi-GPU training it requires DDP (`torch.distributed.launch`).

</Tip>

<Tip>

This feature can be used with any `nn.Module`-based model.

</Tip>

If you start getting `loss=NaN` or the model exhibits some other abnormal behavior due to `inf` or `nan` values in activations or weights, you need to discover where the first underflow or overflow happens and what led to it. Luckily you can accomplish that easily by activating a special module that will do the detection automatically.

If you're using [`Trainer`], you just need to add:

```bash
--debug underflow_overflow
```

to the normal command line arguments, or pass `debug="underflow_overflow"` when creating the [`TrainingArguments`] object.

If you're using your own training loop or another Trainer, you can accomplish the same with:

```python
from transformers.debug_utils import DebugUnderflowOverflow

debug_overflow = DebugUnderflowOverflow(model)
```

[`~debug_utils.DebugUnderflowOverflow`] inserts hooks into the model that, immediately after each forward call, will test the input and output variables and also the corresponding module's weights.
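To give a rough idea of what these hooks do, here is a simplified sketch (an illustration, not the actual `DebugUnderflowOverflow` implementation) that registers a forward hook on every submodule and raises on the first non-finite output; it assumes a `model` like the one above:

```python
# Simplified sketch of inf/nan detection via forward hooks. The real
# DebugUnderflowOverflow also inspects inputs and weights and keeps a
# rolling buffer of frames for its report.
import torch


def check_output(module, inputs, output):
    # Flag the first module whose output contains inf/nan.
    if isinstance(output, torch.Tensor) and not torch.isfinite(output).all():
        raise ValueError(f"inf/nan detected in the output of {module.__class__.__name__}")


for name, module in model.named_modules():
    module.register_forward_hook(check_output)
```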
As soon as `inf` or `nan` is detected in at least one element of the activations or weights, the program notifies you and prints a report like the following (this one was caught with `google/mt5-small` under fp16 mixed precision):

```
Detected inf/nan during batch_number=0
Last 21 forward frames:
abs min  abs max  metadata
                  encoder.block.1.layer.1.DenseReluDense.dropout Dropout
0.00e+00 2.57e+02 input[0]
0.00e+00 2.85e+02 output
[...]
                  encoder.block.2.layer.0 T5LayerSelfAttention
6.78e-04 3.15e+03 input[0]
2.65e-04 3.42e+03 output[0]
             None output[1]
2.25e-01 1.00e+04 output[2]
                  encoder.block.2.layer.1.layer_norm T5LayerNorm
8.69e-02 4.18e-01 weight
2.65e-04 3.42e+03 input[0]
1.79e-06 4.65e+00 output
                  encoder.block.2.layer.1.DenseReluDense.wi_0 Linear
2.17e-07 4.50e+00 weight
1.79e-06 4.65e+00 input[0]
2.68e-06 3.70e+01 output
                  encoder.block.2.layer.1.DenseReluDense.wi_1 Linear
8.08e-07 2.66e+01 weight
1.79e-06 4.65e+00 input[0]
1.27e-04 2.37e+02 output
                  encoder.block.2.layer.1.DenseReluDense.dropout Dropout
0.00e+00 8.76e+03 input[0]
0.00e+00 9.74e+03 output
                  encoder.block.2.layer.1.DenseReluDense.wo Linear
1.01e-06 6.44e+00 weight
0.00e+00 9.74e+03 input[0]
3.18e-04 6.27e+04 output
                  encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense
1.79e-06 4.65e+00 input[0]
3.18e-04 6.27e+04 output
                  encoder.block.2.layer.1.dropout Dropout
3.18e-04 6.27e+04 input[0]
0.00e+00      inf output
```

The example output has been trimmed in the middle for brevity.

The second column shows the value of the absolute largest element, so if you look closely at the last few frames, the inputs and outputs were in the range of `1e4`. So when this training was done under fp16 mixed precision, the very last step overflowed (since under `fp16` the largest number before `inf` is `64e3`). To avoid overflows under `fp16`, the activations must remain way below `1e4`, because `1e4 * 1e4 = 1e8`, so any matrix multiplication with large activations will lead to a numerical overflow condition.

At the very start of the trace you can discover at which batch the problem occurred (here `Detected inf/nan during batch_number=0` means the problem occurred on the first batch).

Each reported frame starts by declaring the fully qualified entry for the corresponding module this frame is reporting for. If we look at the following frame:

```
                  encoder.block.2.layer.1.layer_norm T5LayerNorm
8.69e-02 4.18e-01 weight
2.65e-04 3.42e+03 input[0]
1.79e-06 4.65e+00 output
```

Here, `encoder.block.2.layer.1.layer_norm` indicates that it was a layer norm of the first layer, of the second block of the encoder, and the specific call of `forward` is `T5LayerNorm`.

Let's look at the last few frames of that report:

```
Detected inf/nan during batch_number=0
Last 21 forward frames:
abs min  abs max  metadata
[...]
                  encoder.block.2.layer.1.DenseReluDense.wi_0 Linear
2.17e-07 4.50e+00 weight
1.79e-06 4.65e+00 input[0]
2.68e-06 3.70e+01 output
                  encoder.block.2.layer.1.DenseReluDense.wi_1 Linear
8.08e-07 2.66e+01 weight
1.79e-06 4.65e+00 input[0]
1.27e-04 2.37e+02 output
                  encoder.block.2.layer.1.DenseReluDense.wo Linear
1.01e-06 6.44e+00 weight
0.00e+00 9.74e+03 input[0]
3.18e-04 6.27e+04 output
                  encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense
1.79e-06 4.65e+00 input[0]
3.18e-04 6.27e+04 output
                  encoder.block.2.layer.1.dropout Dropout
3.18e-04 6.27e+04 input[0]
0.00e+00      inf output
```

The last frame reports for the `Dropout.forward` function, with the first entry for the only input and the second for the only output. You can see that it was called from a `dropout` attribute inside the `DenseReluDense` class. We can see that this happened during the first layer, of the 2nd block, during the very first batch. Finally, the absolute largest input elements were `6.27e+04`, and the same for the output was `inf`.

You can see here that `T5DenseGatedGeluDense.forward` resulted in output activations whose absolute max value was around 62.7K, which is very close to fp16's top limit of 64K. In the next frame we have `Dropout`, which renormalizes the weights after zeroing some of the elements, which pushes the absolute max value to more than 64K, and we get an overflow (`inf`).

As you can see, it's the previous frames that we need to look into when the numbers start getting very large for fp16 values.

Let's match the report to the code from `models/t5/modeling_t5.py`:

```python
class T5DenseGatedGeluDense(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=False)
        self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False)
        self.wo = nn.Linear(config.d_ff, config.d_model, bias=False)
        self.dropout = nn.Dropout(config.dropout_rate)
        self.gelu_act = ACT2FN["gelu_new"]

    def forward(self, hidden_states):
        hidden_gelu = self.gelu_act(self.wi_0(hidden_states))
        hidden_linear = self.wi_1(hidden_states)
        hidden_states = hidden_gelu * hidden_linear
        hidden_states = self.dropout(hidden_states)
        hidden_states = self.wo(hidden_states)
        return hidden_states
```

Now it's easy to see the `dropout` call, and all the previous calls as well.

Since the detection happens in a forward hook, these reports are printed immediately after each `forward` returns.

Going back to the full report: to act on it and fix the problem, we need to go a few frames up, to where the numbers started to climb, and most likely switch to `fp32` mode there, so that the numbers don't overflow when multiplied or summed up. Of course, there might be other solutions.
For example, we could turn off `amp` temporarily if it's enabled, after moving the original `forward` into a helper wrapper, like so:

```python
def _forward(self, hidden_states):
    hidden_gelu = self.gelu_act(self.wi_0(hidden_states))
    hidden_linear = self.wi_1(hidden_states)
    hidden_states = hidden_gelu * hidden_linear
    hidden_states = self.dropout(hidden_states)
    hidden_states = self.wo(hidden_states)
    return hidden_states


import torch


def forward(self, hidden_states):
    if torch.is_autocast_enabled():
        with torch.cuda.amp.autocast(enabled=False):
            return self._forward(hidden_states)
    else:
        return self._forward(hidden_states)
```

Since the automatic detector only reports on inputs and outputs of full frames, once you know where to look you may also want to analyze the intermediary stages of a specific `forward` function. In such a case you can use the `detect_overflow` helper function to inject the detector where you want it, for example:

```python
from transformers.debug_utils import detect_overflow


class T5LayerFF(nn.Module):
    [...]

    def forward(self, hidden_states):
        forwarded_states = self.layer_norm(hidden_states)
        detect_overflow(forwarded_states, "after layer_norm")
        forwarded_states = self.DenseReluDense(forwarded_states)
        detect_overflow(forwarded_states, "after DenseReluDense")
        return hidden_states + self.dropout(forwarded_states)
```

You can see that we added 2 of these, and now we track whether `inf` or `nan` was detected for `forwarded_states` somewhere in between.

Actually, the detector already reports these, because each of the calls in the example above is an `nn.Module`; but if you had some local direct calculations, this is how you would do it.

Additionally, if you're instantiating the debugger in your own code, you can adjust the number of frames printed from its default, e.g.:

```python
from transformers.debug_utils import DebugUnderflowOverflow

debug_overflow = DebugUnderflowOverflow(model, max_frames_to_save=100)
```

### Specific batch absolute min and max value tracing

The same debugging class can be used for per-batch tracing with the underflow/overflow detection feature turned off.

Say you want to watch the absolute min and max values for all the ingredients of each `forward` call of a given batch, and do that only for batches 1 and 3. You instantiate this class as:

```python
debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3])
```

Now full batches 1 and 3 will be traced using the same format as the underflow/overflow detector. Batches are 0-indexed.

This is helpful if you know that the program starts misbehaving after a certain batch number, so you can fast-forward right to that area.

Here is a sample truncated output for such a configuration:

```
*** Starting batch number=1 ***
abs min  abs max  metadata
                  shared Embedding
1.01e-06 7.92e+02 weight
0.00e+00 2.47e+04 input[0]
5.36e-05 7.92e+02 output
[...]
                  decoder.dropout Dropout
1.60e-07 2.27e+01 input[0]
0.00e+00 2.52e+01 output
                  decoder T5Stack
     not a tensor output
                  lm_head Linear
1.01e-06 7.92e+02 weight
0.00e+00 1.11e+00 input[0]
6.06e-02 8.39e+01 output
                   T5ForConditionalGeneration
     not a tensor output

*** Starting batch number=3 ***
abs min  abs max  metadata
                  shared Embedding
1.01e-06 7.92e+02 weight
0.00e+00 2.78e+04 input[0]
5.36e-05 7.92e+02 output
[...]
```

Here you will get a huge number of frames dumped - as many as there were forward calls in your model - so it may or may not be what you want, but sometimes it can be easier to use for debugging than a normal debugger. For example, if a problem starts happening at batch number 150, you can dump the traces for batches 149 and 150 and compare where the numbers started to diverge.

You can also specify the batch number after which to stop the training, with:

```python
debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3], abort_after_batch_num=3)
```
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/multilingual.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Multilingual models for inference[[multilingual-models-for-inference]]

[[open-in-colab]]

There are several multilingual models in ๐Ÿค— Transformers, and their inference usage differs from monolingual models. Not *all* multilingual model usage is different, though. Some models, like [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased), can be used just like a monolingual model. This guide will show you how to use multilingual models whose usage differs for inference.

## XLM[[xlm]]

XLM has ten different checkpoints, only one of which is monolingual. The nine remaining checkpoints can be split into two categories: the checkpoints that use language embeddings and those that don't.

### XLM with language embeddings[[xlm-with-language-embeddings]]

The following XLM models use language embeddings at inference:

- `xlm-mlm-ende-1024` (Masked language modeling, English-German)
- `xlm-mlm-enfr-1024` (Masked language modeling, English-French)
- `xlm-mlm-enro-1024` (Masked language modeling, English-Romanian)
- `xlm-mlm-xnli15-1024` (Masked language modeling, 15 languages from the XNLI dataset)
- `xlm-mlm-tlm-xnli15-1024` (Masked language modeling + translation, 15 languages from the XNLI dataset)
- `xlm-clm-enfr-1024` (Causal language modeling, English-French)
- `xlm-clm-ende-1024` (Causal language modeling, English-German)

Language embeddings are represented as a tensor of the same shape as the `input_ids` passed to the model. The values in these tensors depend on the language used and are identified by the tokenizer's `lang2id` and `id2lang` attributes.

In this example, load the `xlm-clm-enfr-1024` checkpoint (causal language modeling, English-French):

```py
>>> import torch
>>> from transformers import XLMTokenizer, XLMWithLMHeadModel

>>> tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024")
>>> model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024")
```

The `lang2id` attribute of the tokenizer displays this model's languages and their ids:

```py
>>> print(tokenizer.lang2id)
{'en': 0, 'fr': 1}
```

Next, create an example input:

```py
>>> input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])  # batch size of 1
```

Set the language id to `"en"` and use it to define the language embedding. The language embedding is a tensor filled with `0` since that is the language id for English. This tensor should be the same size as `input_ids`.
```py
>>> language_id = tokenizer.lang2id["en"]  # 0
>>> langs = torch.tensor([language_id] * input_ids.shape[1])  # torch.tensor([0, 0, 0, ..., 0])

>>> # Reshape it to size (batch_size, sequence_length)
>>> langs = langs.view(1, -1)  # is now of shape [1, sequence_length] (we have a batch size of 1)
```

Now you can pass the `input_ids` and the language embedding to the model:

```py
>>> outputs = model(input_ids, langs=langs)
```

The [run_generation.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation/run_generation.py) script can generate text with language embeddings using the `xlm-clm` checkpoints.

### XLM without language embeddings[[xlm-without-language-embeddings]]

The following XLM models do not require language embeddings during inference:

- `xlm-mlm-17-1280` (Masked language modeling, 17 languages)
- `xlm-mlm-100-1280` (Masked language modeling, 100 languages)

Unlike the previous XLM checkpoints, these models are used for generic sentence representations.

## BERT[[bert]]

The following BERT models can be used for multilingual tasks:

- `bert-base-multilingual-uncased` (Masked language modeling + Next sentence prediction, 102 languages)
- `bert-base-multilingual-cased` (Masked language modeling + Next sentence prediction, 104 languages)

These models do not require language embeddings during inference. They identify the language from the context and infer accordingly.

## XLM-RoBERTa[[xlmroberta]]

The following XLM-RoBERTa models can also be used for multilingual tasks:

- `xlm-roberta-base` (Masked language modeling, 100 languages)
- `xlm-roberta-large` (Masked language modeling, 100 languages)

XLM-RoBERTa was trained on 2.5TB of newly created and cleaned CommonCrawl data in 100 languages. It provides strong gains over previously released multilingual models like mBERT or XLM on downstream tasks such as classification, sequence labeling, and question answering.

## M2M100[[m2m100]]

The following M2M100 models can be used for multilingual translation:

- `facebook/m2m100_418M` (Translation)
- `facebook/m2m100_1.2B` (Translation)

In this example, load the `facebook/m2m100_418M` checkpoint to translate Chinese to English. You can set the source language in the tokenizer:

```py
>>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

>>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger."
>>> chinese_text = "ไธ่ฆๆ’ๆ‰‹ๅทซๅธซ็š„ไบ‹ๅ‹™, ๅ› ็‚บไป–ๅ€‘ๆ˜ฏๅพฎๅฆ™็š„, ๅพˆๅฟซๅฐฑๆœƒ็™ผๆ€’."

>>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="zh")
>>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
```

Tokenize the text:

```py
>>> encoded_zh = tokenizer(chinese_text, return_tensors="pt")
```

M2M100 forces the target language id as the first generated token in order to translate to the target language.
Set `forced_bos_token_id` to `en` in the `generate` method to translate to English:

```py
>>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
'Do not interfere with the matters of the witches, because they are delicate and will soon be angry.'
```

## MBart[[mbart]]

The following MBart models can also be used for multilingual translation:

- `facebook/mbart-large-50-one-to-many-mmt` (One-to-many multilingual machine translation, 50 languages)
- `facebook/mbart-large-50-many-to-many-mmt` (Many-to-many multilingual machine translation, 50 languages)
- `facebook/mbart-large-50-many-to-one-mmt` (Many-to-one multilingual machine translation, 50 languages)
- `facebook/mbart-large-50` (Multilingual translation, 50 languages)
- `facebook/mbart-large-cc25`

In this example, load the `facebook/mbart-large-50-many-to-many-mmt` checkpoint to translate Finnish to English. You can set the source language in the tokenizer:

```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger."
>>> fi_text = "ร„lรค sekaannu velhojen asioihin, sillรค ne ovat hienovaraisia ja nopeasti vihaisia."

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="fi_FI")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
```

Tokenize the Finnish text:

```py
>>> encoded_fi = tokenizer(fi_text, return_tensors="pt")
```

MBart forces the target language id as the first generated token in order to translate to the target language. Set `forced_bos_token_id` to `en` in the `generate` method to translate to English:

```py
>>> generated_tokens = model.generate(**encoded_fi, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"Don't interfere with the wizard's affairs, because they are subtle, will soon get angry."
```

If you are using the `facebook/mbart-large-50-many-to-one-mmt` checkpoint, you don't need to force the target language id as the first generated token.
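For the many-to-one case, here is a minimal sketch (reusing the `fi_text` from above; since English is the only target for this checkpoint, no `forced_bos_token_id` is passed):

```py
>>> # Sketch: many-to-one translation into English, no forced target language id.
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-one-mmt", src_lang="fi_FI")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")

>>> encoded_fi = tokenizer(fi_text, return_tensors="pt")
>>> generated_tokens = model.generate(**encoded_fi)
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```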
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/tflite.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Export to TFLite[[export-to-tflite]]

[TensorFlow Lite](https://www.tensorflow.org/lite/guide) is a lightweight framework for deploying machine learning models on resource-constrained devices such as mobile phones, embedded systems, and Internet of Things (IoT) devices. TFLite is designed to optimize and run models efficiently on these devices with limited computational power, memory, and power consumption. A TensorFlow Lite model is represented in a special, efficient, portable format identified by the `.tflite` file extension.

๐Ÿค— Optimum offers functionality to export ๐Ÿค— Transformers models to TFLite through its `exporters.tflite` module. For the list of supported model architectures, refer to the [๐Ÿค— Optimum documentation](https://huggingface.co/docs/optimum/exporters/tflite/overview).

To export a model to TFLite, install the required dependencies:

```bash
pip install optimum[exporters-tf]
```

To check out all available arguments, refer to the [๐Ÿค— Optimum docs](https://huggingface.co/docs/optimum/main/en/exporters/tflite/usage_guides/export_a_model), or view the help in the terminal:

```bash
optimum-cli export tflite --help
```

To export a model's checkpoint from the ๐Ÿค— Hub, for example `bert-base-uncased`, run the following command:

```bash
optimum-cli export tflite --model bert-base-uncased --sequence_length 128 bert_tflite/
```

You should see logs indicating progress and showing where the resulting `model.tflite` is saved, like this:

```bash
Validating TFLite model...
	-[โœ“] TFLite model output names match reference model (logits)
	- Validating TFLite Model output "logits":
		-[โœ“] (1, 128, 30522) matches (1, 128, 30522)
		-[x] values not close enough, max diff: 5.817413330078125e-05 (atol: 1e-05)
The TensorFlow Lite export succeeded with the warning: The maximum absolute difference between the output of the reference model and the TFLite exported model is not within the set tolerance 1e-05:
- logits: max diff = 5.817413330078125e-05.
 The exported model was saved at: bert_tflite
```

The example above illustrates exporting a checkpoint from the ๐Ÿค— Hub. When exporting a local model, first make sure that you saved both the model's weights and the tokenizer files in the same directory (`local_path`). When using the CLI, pass the `local_path` to the `model` argument instead of the checkpoint name on the ๐Ÿค— Hub.
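Once exported, you can sanity-check the `model.tflite` file with TensorFlow's built-in interpreter. The snippet below is a minimal sketch (it assumes TensorFlow is installed and uses the `bert_tflite/model.tflite` path from the example above; the dummy inputs are illustrative):

```python
# Sketch: run one forward pass through the exported TFLite model.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="bert_tflite/model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed dummy inputs matching the exported shapes and dtypes.
for detail in input_details:
    dummy = np.zeros(detail["shape"], dtype=detail["dtype"])
    interpreter.set_tensor(detail["index"], dummy)

interpreter.invoke()
logits = interpreter.get_tensor(output_details[0]["index"])
print(logits.shape)  # e.g. (1, 128, 30522) for the bert-base-uncased export above
```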
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/preprocessing.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Preprocess[[preprocess]]

[[open-in-colab]]

Before you can train a model, you need to preprocess your dataset into the input format expected by the model. Whether your data is text, images, or audio, it needs to be converted and assembled into batches of tensors. ๐Ÿค— Transformers provides a set of preprocessing classes to help prepare your data for the model. In this tutorial, you'll learn that:

* Text: use a [Tokenizer](./main_classes/tokenizer) to convert text into a sequence of tokens, create a numerical representation of the tokens, and assemble them into tensors.
* Speech and audio: use a [Feature extractor](./main_classes/feature_extractor) to extract sequential features from audio waveforms and convert them into tensors.
* Image inputs: use an [ImageProcessor](./main_classes/image) to convert images into tensors.
* Multimodal inputs: use a [Processor](./main_classes/processors) to combine a tokenizer with a feature extractor or an image processor.

<Tip>

`AutoProcessor` **always** works and automatically chooses the correct class for the model you're using, whether that is a tokenizer, an image processor, a feature extractor, or a processor.

</Tip>

Before you begin, install ๐Ÿค— Datasets so you can load some datasets to experiment with:

```bash
pip install datasets
```

## Natural Language Processing[[natural-language-processing]]

<Youtube id="Yffk5aydLzg"/>

The main tool for preprocessing textual data is a [tokenizer](main_classes/tokenizer). A tokenizer splits text into *tokens* according to a set of rules. The tokens are converted into numbers, and the tensors become the model inputs. Any additional inputs required by the model are added by the tokenizer.

<Tip>

If you plan on using a pretrained model, it's important to use the associated pretrained tokenizer. This ensures the text is split the same way as the pretraining corpus, and uses the same corresponding tokens-to-index pairs (usually referred to as the *vocab*) as during pretraining.

</Tip>

To get started, load a pretrained tokenizer with the [`AutoTokenizer.from_pretrained`] method.
๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ์‚ฌ์ „ํ›ˆ๋ จ๋œ *vocab*์„ ๋‹ค์šด๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") ``` ๊ทธ ๋‹ค์Œ์œผ๋กœ ํ…์ŠคํŠธ๋ฅผ ํ† ํฌ๋‚˜์ด์ €์— ๋„ฃ์–ด์ฃผ์„ธ์š”: ```py >>> encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.") >>> print(encoded_input) {'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` ํ† ํฌ๋‚˜์ด์ €๋Š” ์„ธ ๊ฐ€์ง€ ์ค‘์š”ํ•œ ํ•ญ๋ชฉ์„ ํฌํ•จํ•œ ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: * [input_ids](glossary#input-ids)๋Š” ๋ฌธ์žฅ์˜ ๊ฐ ํ† ํฐ์— ํ•ด๋‹นํ•˜๋Š” ์ธ๋ฑ์Šค์ž…๋‹ˆ๋‹ค. * [attention_mask](glossary#attention-mask)๋Š” ํ† ํฐ์„ ์ฒ˜๋ฆฌํ•ด์•ผ ํ•˜๋Š”์ง€ ์—ฌ๋ถ€๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. * [token_type_ids](glossary#token-type-ids)๋Š” ๋‘ ๊ฐœ ์ด์ƒ์˜ ์‹œํ€€์Šค๊ฐ€ ์žˆ์„ ๋•Œ ํ† ํฐ์ด ์†ํ•œ ์‹œํ€€์Šค๋ฅผ ์‹๋ณ„ํ•ฉ๋‹ˆ๋‹ค. `input_ids`๋ฅผ ๋””์ฝ”๋”ฉํ•˜์—ฌ ์ž…๋ ฅ์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenizer.decode(encoded_input["input_ids"]) '[CLS] Do not meddle in the affairs of wizards, for they are subtle and quick to anger. [SEP]' ``` ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ๋‘ ๊ฐœ์˜ ํŠน์ˆ˜ํ•œ ํ† ํฐ(๋ถ„๋ฅ˜ ํ† ํฐ `CLS`์™€ ๋ถ„ํ•  ํ† ํฐ `SEP`)์„ ๋ฌธ์žฅ์— ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋“  ๋ชจ๋ธ์— ํŠน์ˆ˜ํ•œ ํ† ํฐ์ด ํ•„์š”ํ•œ ๊ฒƒ์€ ์•„๋‹ˆ์ง€๋งŒ, ํ•„์š”ํ•˜๋‹ค๋ฉด ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ์ž๋™์œผ๋กœ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์ „์ฒ˜๋ฆฌํ•  ๋ฌธ์žฅ์ด ์—ฌ๋Ÿฌ ๊ฐœ ์žˆ๋Š” ๊ฒฝ์šฐ์—๋Š” ๋ฆฌ์ŠคํŠธ๋กœ ํ† ํฌ๋‚˜์ด์ €์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_inputs = tokenizer(batch_sentences) >>> print(encoded_inputs) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1]]} ``` ### ํŒจ๋”ฉ[[pad]] ๋ชจ๋ธ ์ž…๋ ฅ์ธ ํ…์„œ๋Š” ๋ชจ์–‘์ด ๊ท ์ผํ•ด์•ผ ํ•˜์ง€๋งŒ, ๋ฌธ์žฅ์˜ ๊ธธ์ด๊ฐ€ ํ•ญ์ƒ ๊ฐ™์ง€๋Š” ์•Š๊ธฐ ๋•Œ๋ฌธ์— ๋ฌธ์ œ๊ฐ€ ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํŒจ๋”ฉ์€ ์งง์€ ๋ฌธ์žฅ์— ํŠน์ˆ˜ํ•œ *ํŒจ๋”ฉ ํ† ํฐ*์„ ์ถ”๊ฐ€ํ•˜์—ฌ ํ…์„œ๋ฅผ ์ง์‚ฌ๊ฐํ˜• ๋ชจ์–‘์ด ๋˜๋„๋ก ํ•˜๋Š” ์ „๋žต์ž…๋‹ˆ๋‹ค. `padding` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ `True`๋กœ ์„ค์ •ํ•˜์—ฌ ๋ฐฐ์น˜ ๋‚ด์˜ ์งง์€ ์‹œํ€€์Šค๋ฅผ ๊ฐ€์žฅ ๊ธด ์‹œํ€€์Šค์— ๋งž์ถฐ ํŒจ๋”ฉํ•ฉ๋‹ˆ๋‹ค. ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... 
>>> encoded_input = tokenizer(batch_sentences, padding=True)
>>> print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
               [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
               [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
```

The first and third sentences, which are shorter, are now padded with `0`'s.

### Truncation[[truncation]]

On the other hand, sometimes a sequence may be too long for a model to handle. In this case, you'll need to truncate the sequence to a shorter length.

Set the `truncation` parameter to `True` to truncate a sequence to the maximum length accepted by the model:

```py
>>> batch_sentences = [
...     "But what about second breakfast?",
...     "Don't think he knows about second breakfast, Pip.",
...     "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True)
>>> print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
               [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
               [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
```

<Tip>

Check out the [Padding and truncation](./pad_truncation) concept guide to learn more about the different padding and truncation arguments.

</Tip>

### Build tensors[[build-tensors]]

Finally, you want the tokenizer to return the actual tensors that get fed to the model.

Set the `return_tensors` parameter to `pt` for PyTorch, or `tf` for TensorFlow:

<frameworkcontent>
<pt>

```py
>>> batch_sentences = [
...     "But what about second breakfast?",
...     "Don't think he knows about second breakfast, Pip.",
...     "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
>>> print(encoded_input)
{'input_ids': tensor([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
                      [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
                      [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]]),
 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                           [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                           [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]),
 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                           [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                           [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]])}
```
</pt>
<tf>

```py
>>> batch_sentences = [
...     "But what about second breakfast?",
...     "Don't think he knows about second breakfast, Pip.",
...     "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf")
>>> print(encoded_input)
{'input_ids': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[  101,  1252,  1184,  1164,  1248,  6462,   136,   102,     0,     0,     0,     0,     0,     0,     0],
       [  101,  1790,   112,   189,  1341,  1119,  3520,  1164,  1248,  6462,   117, 21902,  1643,   119,   102],
       [  101,  1327,  1164,  5450, 23434,   136,   102,     0,     0,     0,     0,     0,     0,     0,     0]], dtype=int32)>,
 'token_type_ids': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>,
 'attention_mask': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
       [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>}
```
</tf>
</frameworkcontent>

## Audio[[audio]]

For audio tasks, you'll need a [feature extractor](main_classes/feature_extractor) to prepare your dataset for the model. The feature extractor is designed to extract features from raw audio data and convert them into tensors.

Load the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset (see the ๐Ÿค— [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for more details on how to load a dataset) to see how you can use a feature extractor with audio datasets:

```py
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```

Access the first element of the `audio` column to take a look at the input. Calling the `audio` column automatically loads and resamples the audio file:

```py
>>> dataset[0]["audio"]
{'array': array([ 0.        ,  0.00024414, -0.00024414, ..., -0.00024414,
         0.        ,  0.        ], dtype=float32),
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
 'sampling_rate': 8000}
```

This returns three items:

* `array` is the speech signal loaded - and potentially resampled - as a 1D array.
* `path` points to the location of the audio file.
* `sampling_rate` refers to how many data points in the speech signal are measured per second.

For this tutorial, you'll use the [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) model. Taking a look at the model card, you'll learn Wav2Vec2 is pretrained on 16kHz sampled speech audio. It's important that the sampling rate of your audio data matches the sampling rate of the dataset used to pretrain the model. If your data's sampling rate isn't the same, then you need to resample your data.

1. Use ๐Ÿค— Datasets' [`~datasets.Dataset.cast_column`] method to upsample the sampling rate to 16kHz:

```py
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
```

2.
์˜ค๋””์˜ค ํŒŒ์ผ์„ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•˜๊ธฐ ์œ„ํ•ด `audio` ์—ด์„ ๋‹ค์‹œ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค: ```py >>> dataset[0]["audio"] {'array': array([ 2.3443763e-05, 2.1729663e-04, 2.2145823e-04, ..., 3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 16000} ``` ๋‹ค์Œ์œผ๋กœ, ์ž…๋ ฅ์„ ์ •๊ทœํ™”ํ•˜๊ณ  ํŒจ๋”ฉํ•  ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”. ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ์˜ ๊ฒฝ์šฐ, ๋” ์งง์€ ์‹œํ€€์Šค์— ๋Œ€ํ•ด `0`์ด ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ์—๋„ ๊ฐ™์€ ๊ฐœ๋…์ด ์ ์šฉ๋ฉ๋‹ˆ๋‹ค. ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋Š” ๋ฐฐ์—ด์— `0`(๋ฌต์Œ์œผ๋กœ ํ•ด์„)์„ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. [`AutoFeatureExtractor.from_pretrained`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base") ``` ์˜ค๋””์˜ค `array`๋ฅผ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ์— ์ „๋‹ฌํ•˜์„ธ์š”. ๋˜ํ•œ, ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ๋Š” ์กฐ์šฉํ•œ ์˜ค๋ฅ˜(silent errors)๋ฅผ ๋” ์ž˜ ๋””๋ฒ„๊น…ํ•  ์ˆ˜ ์žˆ๋„๋ก ํŠน์„ฑ ์ถ”์ถœ๊ธฐ์— `sampling_rate` ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ```py >>> audio_input = [dataset[0]["audio"]["array"]] >>> feature_extractor(audio_input, sampling_rate=16000) {'input_values': [array([ 3.8106556e-04, 2.7506407e-03, 2.8015103e-03, ..., 5.6335266e-04, 4.6588284e-06, -1.7142107e-04], dtype=float32)]} ``` ํ† ํฌ๋‚˜์ด์ €์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๋ฐฐ์น˜ ๋‚ด์—์„œ ๊ฐ€๋ณ€์ ์ธ ์‹œํ€€์Šค๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด ํŒจ๋”ฉ ๋˜๋Š” ์ž˜๋ผ๋‚ด๊ธฐ๋ฅผ ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋‘ ๊ฐœ์˜ ์˜ค๋””์˜ค ์ƒ˜ํ”Œ์˜ ์‹œํ€€์Šค ๊ธธ์ด๋ฅผ ํ™•์ธํ•ด๋ณด์„ธ์š”: ```py >>> dataset[0]["audio"]["array"].shape (173398,) >>> dataset[1]["audio"]["array"].shape (106496,) ``` ์˜ค๋””์˜ค ์ƒ˜ํ”Œ์˜ ๊ธธ์ด๊ฐ€ ๋™์ผํ•˜๋„๋ก ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“œ์„ธ์š”. ์ตœ๋Œ€ ์ƒ˜ํ”Œ ๊ธธ์ด๋ฅผ ์ง€์ •ํ•˜๋ฉด ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๊ฐ€ ํ•ด๋‹น ๊ธธ์ด์— ๋งž์ถฐ ์‹œํ€€์Šค๋ฅผ ํŒจ๋”ฉํ•˜๊ฑฐ๋‚˜ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค: ```py >>> def preprocess_function(examples): ... audio_arrays = [x["array"] for x in examples["audio"]] ... inputs = feature_extractor( ... audio_arrays, ... sampling_rate=16000, ... padding=True, ... max_length=100000, ... truncation=True, ... ) ... return inputs ``` `preprocess_function`์„ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ฒ˜์Œ ์˜ˆ์‹œ ๋ช‡ ๊ฐœ์— ์ ์šฉํ•ด๋ณด์„ธ์š”: ```py >>> processed_dataset = preprocess_function(dataset[:5]) ``` ์ด์ œ ์ƒ˜ํ”Œ ๊ธธ์ด๊ฐ€ ๋ชจ๋‘ ๊ฐ™๊ณ  ์ง€์ •๋œ ์ตœ๋Œ€ ๊ธธ์ด์— ๋งž๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋“œ๋””์–ด ์ „์ฒ˜๋ฆฌ๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ชจ๋ธ์— ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ```py >>> processed_dataset["input_values"][0].shape (100000,) >>> processed_dataset["input_values"][1].shape (100000,) ``` ## ์ปดํ“จํ„ฐ ๋น„์ „[[computer-vision]] ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์˜ ๊ฒฝ์šฐ, ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•ด [์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ](main_classes/image_processor)๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ๋Š” ์ด๋ฏธ์ง€๋ฅผ ๋ชจ๋ธ์ด ์˜ˆ์ƒํ•˜๋Š” ์ž…๋ ฅ์œผ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ์—ฌ๋Ÿฌ ๋‹จ๊ณ„๋กœ ์ด๋ฃจ์–ด์ง‘๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋‹จ๊ณ„์—๋Š” ํฌ๊ธฐ ์กฐ์ •, ์ •๊ทœํ™”, ์ƒ‰์ƒ ์ฑ„๋„ ๋ณด์ •, ์ด๋ฏธ์ง€์˜ ํ…์„œ ๋ณ€ํ™˜ ๋“ฑ์ด ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. <Tip> ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ๋Š” ์ด๋ฏธ์ง€ ์ฆ๊ฐ• ๊ธฐ๋ฒ•์„ ๋ช‡ ๊ฐ€์ง€ ์ ์šฉํ•œ ๋’ค์— ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ ๋ฐ ์ด๋ฏธ์ง€ ์ฆ๊ฐ•์€ ๋ชจ๋‘ ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ๋ฅผ ๋ณ€ํ˜•ํ•˜์ง€๋งŒ, ์„œ๋กœ ๋‹ค๋ฅธ ๋ชฉ์ ์„ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค: * ์ด๋ฏธ์ง€ ์ฆ๊ฐ•์€ ๊ณผ์ ํ•ฉ(over-fitting)์„ ๋ฐฉ์ง€ํ•˜๊ณ  ๋ชจ๋ธ์˜ ๊ฒฌ๊ณ ํ•จ(resiliency)์„ ๋†’์ด๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๋ฐฉ์‹์œผ๋กœ ์ด๋ฏธ์ง€๋ฅผ ์ˆ˜์ •ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ๊ธฐ์™€ ์ƒ‰์ƒ ์กฐ์ •, ์ž๋ฅด๊ธฐ, ํšŒ์ „, ํฌ๊ธฐ ์กฐ์ •, ํ™•๋Œ€/์ถ•์†Œ ๋“ฑ ๋‹ค์–‘ํ•œ ๋ฐฉ๋ฒ•์œผ๋กœ ๋ฐ์ดํ„ฐ๋ฅผ ์ฆ๊ฐ•ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ฆ๊ฐ•์œผ๋กœ ์ด๋ฏธ์ง€์˜ ์˜๋ฏธ๊ฐ€ ๋ฐ”๋€Œ์ง€ ์•Š๋„๋ก ์ฃผ์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. * ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ๋Š” ์ด๋ฏธ์ง€๊ฐ€ ๋ชจ๋ธ์ด ์˜ˆ์ƒํ•˜๋Š” ์ž…๋ ฅ ํ˜•์‹๊ณผ ์ผ์น˜ํ•˜๋„๋ก ๋ณด์žฅํ•ฉ๋‹ˆ๋‹ค. ์ปดํ“จํ„ฐ ๋น„์ „ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•  ๋•Œ ์ด๋ฏธ์ง€๋Š” ๋ชจ๋ธ์ด ์ดˆ๊ธฐ์— ํ›ˆ๋ จ๋  ๋•Œ์™€ ์ •ํ™•ํžˆ ๊ฐ™์€ ๋ฐฉ์‹์œผ๋กœ ์ „์ฒ˜๋ฆฌ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์ฆ๊ฐ•์—๋Š” ์›ํ•˜๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๋ฌด์—‡์ด๋“  ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ์—๋Š” ๋ชจ๋ธ๊ณผ ์—ฐ๊ฒฐ๋œ `ImageProcessor`๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. </Tip> [food101](https://huggingface.co/datasets/food101) ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์™€์„œ ์ปดํ“จํ„ฐ ๋น„์ „ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์–ด๋–ป๊ฒŒ ์‚ฌ์šฉํ•˜๋Š”์ง€ ์•Œ์•„๋ณด์„ธ์š”. ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๋Š” ๋ฐฉ๋ฒ•์€ ๐Ÿค— [๋ฐ์ดํ„ฐ ์„ธํŠธ ํŠœํ† ๋ฆฌ์–ผ](https://huggingface.co/docs/datasets/load_hub)์„ ์ฐธ๊ณ ํ•˜์„ธ์š”. <Tip> ๋ฐ์ดํ„ฐ ์„ธํŠธ๊ฐ€ ์ƒ๋‹นํžˆ ํฌ๊ธฐ ๋•Œ๋ฌธ์— ๐Ÿค— Datasets์˜ `split` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ ์„ธํŠธ์—์„œ ์ž‘์€ ์ƒ˜ํ”Œ๋งŒ ๊ฐ€์ ธ์˜ค์„ธ์š”! </Tip> ```py >>> from datasets import load_dataset >>> dataset = load_dataset("food101", split="train[:100]") ``` ๋‹ค์Œ์œผ๋กœ, ๐Ÿค— Datasets์˜ [`image`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=image#datasets.Image)๋กœ ์ด๋ฏธ์ง€๋ฅผ ํ™•์ธํ•ด๋ณด์„ธ์š”: ```py >>> dataset[0]["image"] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vision-preprocess-tutorial.png"/> </div> [`AutoImageProcessor.from_pretrained`]๋กœ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224") ``` ๋จผ์ € ์ด๋ฏธ์ง€ ์ฆ๊ฐ• ๋‹จ๊ณ„๋ฅผ ์ถ”๊ฐ€ํ•ด ๋ด…์‹œ๋‹ค. ์•„๋ฌด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋‚˜ ์‚ฌ์šฉํ•ด๋„ ๊ดœ์ฐฎ์ง€๋งŒ, ์ด๋ฒˆ ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” torchvision์˜ [`transforms`](https://pytorch.org/vision/stable/transforms.html) ๋ชจ๋“ˆ์„ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ์ฆ๊ฐ• ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ด๋ณด๊ณ  ์‹ถ๋‹ค๋ฉด, [Albumentations](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb) ๋˜๋Š” [Kornia notebooks](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb)์—์„œ ์–ด๋–ป๊ฒŒ ์‚ฌ์šฉํ•˜๋Š”์ง€ ๋ฐฐ์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 1. [`Compose`](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html)๋กœ [`RandomResizedCrop`](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html)์™€ [`ColorJitter`](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html) ๋“ฑ ๋ณ€ํ™˜์„ ๋ช‡ ๊ฐ€์ง€ ์—ฐ๊ฒฐํ•˜์„ธ์š”. ์ฐธ๊ณ ๋กœ ํฌ๊ธฐ ์กฐ์ •์— ํ•„์š”ํ•œ ์ด๋ฏธ์ง€์˜ ํฌ๊ธฐ ์š”๊ตฌ์‚ฌํ•ญ์€ `image_processor`์—์„œ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ถ€ ๋ชจ๋ธ์€ ์ •ํ™•ํ•œ ๋†’์ด์™€ ๋„ˆ๋น„๋ฅผ ์š”๊ตฌํ•˜์ง€๋งŒ, ์ œ์ผ ์งง์€ ๋ณ€์˜ ๊ธธ์ด(`shortest_edge`)๋งŒ ์ •์˜๋œ ๋ชจ๋ธ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py >>> from torchvision.transforms import RandomResizedCrop, ColorJitter, Compose >>> size = ( ... image_processor.size["shortest_edge"] ... if "shortest_edge" in image_processor.size ... else (image_processor.size["height"], image_processor.size["width"]) ... ) >>> _transforms = Compose([RandomResizedCrop(size), ColorJitter(brightness=0.5, hue=0.5)]) ``` 2. ๋ชจ๋ธ์€ ์ž…๋ ฅ์œผ๋กœ [`pixel_values`](model_doc/visionencoderdecoder#transformers.VisionEncoderDecoderModel.forward.pixel_values)๋ฅผ ๋ฐ›์Šต๋‹ˆ๋‹ค. `ImageProcessor`๋Š” ์ด๋ฏธ์ง€ ์ •๊ทœํ™” ๋ฐ ์ ์ ˆํ•œ ํ…์„œ ์ƒ์„ฑ์„ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐฐ์น˜ ์ด๋ฏธ์ง€์— ๋Œ€ํ•œ ์ด๋ฏธ์ง€ ์ฆ๊ฐ• ๋ฐ ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ๋ฅผ ๊ฒฐํ•ฉํ•˜๊ณ  `pixel_values`๋ฅผ ์ƒ์„ฑํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> def transforms(examples): ... images = [_transforms(img.convert("RGB")) for img in examples["image"]] ... examples["pixel_values"] = image_processor(images, do_resize=False, return_tensors="pt")["pixel_values"] ... return examples ``` <Tip> ์œ„์˜ ์˜ˆ์—์„œ๋Š” ์ด๋ฏธ์ง€ ์ฆ๊ฐ• ์ค‘์— ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ์กฐ์ •ํ–ˆ๊ธฐ ๋•Œ๋ฌธ์— `do_resize=False`๋กœ ์„ค์ •ํ•˜๊ณ , ํ•ด๋‹น `image_processor`์—์„œ `size` ์†์„ฑ์„ ํ™œ์šฉํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์ฆ๊ฐ• ์ค‘์— ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ์กฐ์ •ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ ์ด ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ƒ๋žตํ•˜์„ธ์š”. ๊ธฐ๋ณธ์ ์œผ๋กœ๋Š” `ImageProcessor`๊ฐ€ ํฌ๊ธฐ ์กฐ์ •์„ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ์ฆ๊ฐ• ๋ณ€ํ™˜ ๊ณผ์ •์—์„œ ์ด๋ฏธ์ง€๋ฅผ ์ •๊ทœํ™”ํ•˜๋ ค๋ฉด `image_processor.image_mean` ๋ฐ `image_processor.image_std` ๊ฐ’์„ ์‚ฌ์šฉํ•˜์„ธ์š”. </Tip> 3. ๐Ÿค— Datasets์˜ [`set_transform`](https://huggingface.co/docs/datasets/process#format-transform)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‹ค์‹œ๊ฐ„์œผ๋กœ ๋ณ€ํ™˜์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> dataset.set_transform(transforms) ``` 4. ์ด์ œ ์ด๋ฏธ์ง€์— ์ ‘๊ทผํ•˜๋ฉด ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๊ฐ€ `pixel_values`๋ฅผ ์ถ”๊ฐ€ํ•œ ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋“œ๋””์–ด ์ฒ˜๋ฆฌ๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ชจ๋ธ์— ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ```py >>> dataset[0].keys() ``` ๋‹ค์Œ์€ ๋ณ€ํ˜•์ด ์ ์šฉ๋œ ํ›„์˜ ์ด๋ฏธ์ง€์ž…๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€๊ฐ€ ๋ฌด์ž‘์œ„๋กœ ์ž˜๋ ค๋‚˜๊ฐ”๊ณ  ์ƒ‰์ƒ ์†์„ฑ์ด ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ```py >>> import numpy as np >>> import matplotlib.pyplot as plt >>> img = dataset[0]["pixel_values"] >>> plt.imshow(img.permute(1, 2, 0)) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/preprocessed_image.png"/> </div> <Tip> `ImageProcessor`๋Š” ๊ฐ์ฒด ๊ฐ์ง€, ์‹œ๋งจํ‹ฑ ์„ธ๊ทธ๋ฉ˜ํ…Œ์ด์…˜(semantic segmentation), ์ธ์Šคํ„ด์Šค ์„ธ๊ทธ๋ฉ˜ํ…Œ์ด์…˜(instance segmentation), ํŒŒ๋†‰ํ‹ฑ ์„ธ๊ทธ๋ฉ˜ํ…Œ์ด์…˜(panoptic segmentation)๊ณผ ๊ฐ™์€ ์ž‘์—…์— ๋Œ€ํ•œ ํ›„์ฒ˜๋ฆฌ ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ฐฉ๋ฒ•์€ ๋ชจ๋ธ์˜ ์›์‹œ ์ถœ๋ ฅ์„ ๊ฒฝ๊ณ„ ์ƒ์ž๋‚˜ ์„ธ๊ทธ๋ฉ˜ํ…Œ์ด์…˜ ๋งต๊ณผ ๊ฐ™์€ ์˜๋ฏธ ์žˆ๋Š” ์˜ˆ์ธก์œผ๋กœ ๋ณ€ํ™˜ํ•ด์ค๋‹ˆ๋‹ค. </Tip> ### ํŒจ๋”ฉ[[pad]] ์˜ˆ๋ฅผ ๋“ค์–ด, [DETR](./model_doc/detr)์™€ ๊ฐ™์€ ๊ฒฝ์šฐ์—๋Š” ๋ชจ๋ธ์ด ํ›ˆ๋ จํ•  ๋•Œ ํฌ๊ธฐ ์กฐ์ • ์ฆ๊ฐ•์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด๋กœ ์ธํ•ด ๋ฐฐ์น˜ ๋‚ด ์ด๋ฏธ์ง€ ํฌ๊ธฐ๊ฐ€ ๋‹ฌ๋ผ์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`DetrImageProcessor`]์˜ [`DetrImageProcessor.pad`]๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์‚ฌ์šฉ์ž ์ •์˜ `collate_fn`์„ ์ •์˜ํ•ด์„œ ๋ฐฐ์น˜ ์ด๋ฏธ์ง€๋ฅผ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> def collate_fn(batch): ... pixel_values = [item["pixel_values"] for item in batch] ... encoding = image_processor.pad(pixel_values, return_tensors="pt") ... labels = [item["labels"] for item in batch] ... batch = {} ... batch["pixel_values"] = encoding["pixel_values"] ... batch["pixel_mask"] = encoding["pixel_mask"] ... batch["labels"] = labels ... 
return batch ``` ## ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ[[multimodal]] ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž…๋ ฅ์ด ํ•„์š”ํ•œ ์ž‘์—…์˜ ๊ฒฝ์šฐ, ๋ชจ๋ธ์— ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•œ [ํ”„๋กœ์„ธ์„œ](main_classes/processors)๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กœ์„ธ์„œ๋Š” ํ† ํฌ๋‚˜์ด์ €์™€ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ์™€ ๊ฐ™์€ ๋‘ ๊ฐ€์ง€ ์ฒ˜๋ฆฌ ๊ฐ์ฒด๋ฅผ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค. [LJ Speech](https://huggingface.co/datasets/lj_speech) ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์™€์„œ ์ž๋™ ์Œ์„ฑ ์ธ์‹(ASR)์„ ์œ„ํ•œ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ™•์ธํ•˜์„ธ์š”. (๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ ๐Ÿค— [๋ฐ์ดํ„ฐ ์„ธํŠธ ํŠœํ† ๋ฆฌ์–ผ](https://huggingface.co/docs/datasets/load_hub)์—์„œ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.) ```py >>> from datasets import load_dataset >>> lj_speech = load_dataset("lj_speech", split="train") ``` ์ž๋™ ์Œ์„ฑ ์ธ์‹(ASR)์—์„œ๋Š” `audio`์™€ `text`์—๋งŒ ์ง‘์ค‘ํ•˜๋ฉด ๋˜๋ฏ€๋กœ, ๋‹ค๋ฅธ ์—ด๋“ค์€ ์ œ๊ฑฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> lj_speech = lj_speech.map(remove_columns=["file", "id", "normalized_text"]) ``` ์ด์ œ `audio`์™€ `text`์—ด์„ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> lj_speech[0]["audio"] {'array': array([-7.3242188e-04, -7.6293945e-04, -6.4086914e-04, ..., 7.3242188e-04, 2.1362305e-04, 6.1035156e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/917ece08c95cf0c4115e45294e3cd0dee724a1165b7fc11798369308a465bd26/LJSpeech-1.1/wavs/LJ001-0001.wav', 'sampling_rate': 22050} >>> lj_speech[0]["text"] 'Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition' ``` ๊ธฐ์กด์— ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์—์„œ ์‚ฌ์šฉ๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ์ƒˆ๋กœ์šด ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๋ฅผ ์ผ์น˜์‹œํ‚ค๊ธฐ ์œ„ํ•ด ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๋ฅผ [๋ฆฌ์ƒ˜ํ”Œ๋ง](preprocessing#audio)ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค! ```py >>> lj_speech = lj_speech.cast_column("audio", Audio(sampling_rate=16_000)) ``` [`AutoProcessor.from_pretrained`]๋กœ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h") ``` 1. `array`์— ๋“ค์–ด ์žˆ๋Š” ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ๋ฅผ `input_values`๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  `text`๋ฅผ ํ† ํฐํ™”ํ•˜์—ฌ `labels`๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ์ž…๋ ฅ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```py >>> def prepare_dataset(example): ... audio = example["audio"] ... example.update(processor(audio=audio["array"], text=example["text"], sampling_rate=16000)) ... return example ``` 2. ์ƒ˜ํ”Œ์„ `prepare_dataset` ํ•จ์ˆ˜์— ์ ์šฉํ•˜์„ธ์š”: ```py >>> prepare_dataset(lj_speech[0]) ``` ์ด์ œ ํ”„๋กœ์„ธ์„œ๊ฐ€ `input_values`์™€ `labels`๋ฅผ ์ถ”๊ฐ€ํ•˜๊ณ , ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๋„ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ 16kHz๋กœ ๋‹ค์šด์ƒ˜ํ”Œ๋งํ–ˆ์Šต๋‹ˆ๋‹ค. ๋“œ๋””์–ด ์ฒ˜๋ฆฌ๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ชจ๋ธ์— ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!
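ํ•œ ์ƒ˜ํ”Œ์ด ์•„๋‹ˆ๋ผ ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด์— ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets์˜ [`~datasets.Dataset.map`]์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์œ„์—์„œ ์ •์˜ํ•œ `prepare_dataset` ํ•จ์ˆ˜๋ฅผ ๊ทธ๋Œ€๋กœ ์‚ฌ์šฉํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ด๋ฉฐ, `remove_columns`์— ์ง€์ •ํ•  ์—ด ์ด๋ฆ„์€ ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ตฌ์„ฑ์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์งˆ ์ˆ�˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```py
>>> # ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด์— prepare_dataset์„ ์ ์šฉํ•˜๊ณ ,
>>> # ๋ชจ๋ธ์— ํ•„์š” ์—†๋Š” ์›๋ณธ ์—ด(audio, text)์€ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค.
>>> processed_lj_speech = lj_speech.map(prepare_dataset, remove_columns=["audio", "text"])
```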
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/pad_truncation.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# ํŒจ๋”ฉ๊ณผ ์ž˜๋ผ๋‚ด๊ธฐ[[padding-and-truncation]]

๋ฐฐ์น˜ ์ž…๋ ฅ์€ ๊ธธ์ด๊ฐ€ ๋‹ค๋ฅธ ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์•„์„œ ๊ณ ์ • ํฌ๊ธฐ ํ…์„œ๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ํŒจ๋”ฉ๊ณผ ์ž˜๋ผ๋‚ด๊ธฐ๋Š” ๋‹ค์–‘ํ•œ ๊ธธ์ด์˜ ๋ฐฐ์น˜์—์„œ ์ง์‚ฌ๊ฐํ˜• ํ…์„œ๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ๋„๋ก ์ด ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๋Š” ์ „๋žต์ž…๋‹ˆ๋‹ค. ํŒจ๋”ฉ์€ ํŠน์ˆ˜ํ•œ **ํŒจ๋”ฉ ํ† ํฐ**์„ ์ถ”๊ฐ€ํ•˜์—ฌ ์งง์€ ์‹œํ€€์Šค๊ฐ€ ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ๊ธด ์‹œํ€€์Šค ๋˜๋Š” ๋ชจ๋ธ์—์„œ ํ—ˆ์šฉํ•˜๋Š” ์ตœ๋Œ€ ๊ธธ์ด์™€ ๋™์ผํ•œ ๊ธธ์ด๋ฅผ ๊ฐ–๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ์ž˜๋ผ๋‚ด๊ธฐ๋Š” ๊ธด ์‹œํ€€์Šค๋ฅผ ์ž˜๋ผ๋‚ด์–ด ํŒจ๋”ฉ๊ณผ ๋‹ค๋ฅธ ๋ฐฉ์‹์œผ๋กœ ์‹œํ€€์Šค์˜ ๊ธธ์ด๋ฅผ ๋™์ผํ•˜๊ฒŒ ํ•ฉ๋‹ˆ๋‹ค.

๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ ๋ฐฐ์น˜์— ๊ฐ€์žฅ ๊ธด ์‹œํ€€์Šค์˜ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๊ณ  ๋ชจ๋ธ์ด ํ—ˆ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋‚ด๋Š” ๊ฒƒ์ด ์ž˜ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ํ•„์š”ํ•˜๋‹ค๋ฉด API๊ฐ€ ์ง€์›ํ•˜๋Š” ๋” ๋งŽ์€ ์ „๋žต์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•„์š”ํ•œ ์ธ์ˆ˜๋Š” `padding`, `truncation`, `max_length` ์„ธ ๊ฐ€์ง€์ž…๋‹ˆ๋‹ค.

`padding` ์ธ์ˆ˜๋Š” ํŒจ๋”ฉ์„ ์ œ์–ดํ•ฉ๋‹ˆ๋‹ค. ๋ถˆ๋ฆฌ์–ธ ๋˜๋Š” ๋ฌธ์ž์—ด์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

- `True` ๋˜๋Š” `'longest'`: ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ๊ธด ์‹œํ€€์Šค๋กœ ํŒจ๋”ฉํ•ฉ๋‹ˆ๋‹ค(๋‹จ์ผ ์‹œํ€€์Šค๋งŒ ์ œ๊ณตํ•˜๋Š” ๊ฒฝ์šฐ ํŒจ๋”ฉ์ด ์ ์šฉ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค).
- `'max_length'`: `max_length` ์ธ์ˆ˜๊ฐ€ ์ง€์ •ํ•œ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๊ฑฐ๋‚˜, `max_length`๊ฐ€ ์ œ๊ณต๋˜์ง€ ์•Š์€ ๊ฒฝ์šฐ(`max_length=None`) ๋ชจ๋ธ์—์„œ ํ—ˆ์šฉ๋˜๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•ฉ๋‹ˆ๋‹ค. ๋‹จ์ผ ์‹œํ€€์Šค๋งŒ ์ œ๊ณตํ•˜๋Š” ๊ฒฝ์šฐ์—๋„ ํŒจ๋”ฉ์ด ์ ์šฉ๋ฉ๋‹ˆ๋‹ค.
- `False` ๋˜๋Š” `'do_not_pad'`: ํŒจ๋”ฉ์ด ์ ์šฉ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด๊ฒƒ์ด ๊ธฐ๋ณธ ๋™์ž‘์ž…๋‹ˆ๋‹ค.

`truncation` ์ธ์ˆ˜๋Š” ์ž˜๋ผ๋‚ผ ๋ฐฉ๋ฒ•์„ ์ •ํ•ฉ๋‹ˆ๋‹ค. ๋ถˆ๋ฆฌ์–ธ ๋˜๋Š” ๋ฌธ์ž์—ด์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

- `True` ๋˜๋Š” `'longest_first'`: `max_length` ์ธ์ˆ˜๊ฐ€ ์ง€์ •ํ•œ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋‚ด๊ฑฐ๋‚˜, `max_length`๊ฐ€ ์ œ๊ณต๋˜์ง€ ์•Š์€ ๊ฒฝ์šฐ(`max_length=None`) ๋ชจ๋ธ์—์„œ ํ—ˆ์šฉ๋˜๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค. ์‹œํ€€์Šค ์Œ์—์„œ ๊ฐ€์žฅ ๊ธด ์‹œํ€€์Šค์˜ ํ† ํฐ์„ ์ ์ ˆํ•œ ๊ธธ์ด์— ๋„๋‹ฌํ•  ๋•Œ๊นŒ์ง€ ํ•˜๋‚˜์”ฉ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค.
- `'only_second'`: `max_length` ์ธ์ˆ˜๊ฐ€ ์ง€์ •ํ•œ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋‚ด๊ฑฐ๋‚˜, `max_length`๊ฐ€ ์ œ๊ณต๋˜์ง€ ์•Š์€ ๊ฒฝ์šฐ(`max_length=None`) ๋ชจ๋ธ์—์„œ ํ—ˆ์šฉ๋˜๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค. ์‹œํ€€์Šค ์Œ(๋˜๋Š” ์‹œํ€€์Šค ์Œ์˜ ๋ฐฐ์น˜)๊ฐ€ ์ œ๊ณต๋œ ๊ฒฝ์šฐ ์Œ์˜ ๋‘ ๋ฒˆ์งธ ๋ฌธ์žฅ๋งŒ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค.
- `'only_first'`: `max_length` ์ธ์ˆ˜๊ฐ€ ์ง€์ •ํ•œ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋‚ด๊ฑฐ๋‚˜, `max_length`๊ฐ€ ์ œ๊ณต๋˜์ง€ ์•Š์€ ๊ฒฝ์šฐ(`max_length=None`) ๋ชจ๋ธ์—์„œ ํ—ˆ์šฉ๋˜๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค. ์‹œํ€€์Šค ์Œ(๋˜๋Š” ์‹œํ€€์Šค ์Œ์˜ ๋ฐฐ์น˜)๊ฐ€ ์ œ๊ณต๋œ ๊ฒฝ์šฐ ์Œ์˜ ์ฒซ ๋ฒˆ์งธ ๋ฌธ์žฅ๋งŒ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค.
- `False` ๋˜๋Š” `'do_not_truncate'`: ์ž˜๋ผ๋‚ด๊ธฐ๋ฅผ ์ ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด๊ฒƒ์ด ๊ธฐ๋ณธ ๋™์ž‘์ž…๋‹ˆ๋‹ค.
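์‹œํ€€์Šค ์Œ์—์„œ ๊ฐ ์ „๋žต์ด ์–ด๋–ป๊ฒŒ ๋™์ž‘ํ•˜๋Š”์ง€๋Š” ๊ฐ„๋‹จํžˆ ํ™•์ธํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” `bert-base-cased` ์ฒดํฌํฌ์ธํŠธ์™€ ์ž„์˜์˜ ์˜ˆ์‹œ ๋ฌธ์žฅ์„ ๊ฐ€์ •ํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค. ๋‘ ๊ฒฝ์šฐ ๋ชจ๋‘ ๊ฒฐ๊ณผ ๊ธธ์ด๋Š” `max_length`๋กœ ๊ฐ™์ง€๋งŒ, ์–ด๋А ์‹œํ€€์Šค์—์„œ ํ† ํฐ์ด ์ œ๊ฑฐ๋˜๋Š”์ง€๊ฐ€ ๋‹ค๋ฆ…๋‹ˆ๋‹ค:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> first = "But what about second breakfast?"
>>> second = "Don't think he knows about second breakfast, Pip."

>>> # ์Œ์˜ ์ „์ฒด ๊ธธ์ด๊ฐ€ max_length๋ฅผ ๋„˜์œผ๋ฉด ๋‘ ๋ฒˆ์งธ ๋ฌธ์žฅ๋งŒ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค.
>>> only_second = tokenizer(first, second, truncation="only_second", max_length=16)

>>> # ๋‘ ๋ฌธ์žฅ ์ค‘ ๋” ๊ธด ์ชฝ์—์„œ ํ† ํฐ์„ ํ•˜๋‚˜์”ฉ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค.
>>> longest_first = tokenizer(first, second, truncation="longest_first", max_length=16)

>>> len(only_second["input_ids"]), len(longest_first["input_ids"])
(16, 16)
```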
`max_length` ์ธ์ˆ˜๋Š” ํŒจ๋”ฉ ๋ฐ ์ž˜๋ผ๋‚ด๊ธฐ๋ฅผ ์ ์šฉํ•  ๊ธธ์ด๋ฅผ ์ œ์–ดํ•ฉ๋‹ˆ๋‹ค. ์ด ์ธ์ˆ˜๋Š” ์ •์ˆ˜ ๋˜๋Š” `None`์ผ ์ˆ˜ ์žˆ์œผ๋ฉฐ, `None`์ผ ๊ฒฝ์šฐ ๋ชจ๋ธ์ด ํ—ˆ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ๊ธฐ๋ณธ๊ฐ’์ด ์„ค์ •๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์— ํŠน์ •ํ•œ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๊ฐ€ ์—†๋Š” ๊ฒฝ์šฐ `max_length`์— ๋Œ€ํ•œ ์ž˜๋ผ๋‚ด๊ธฐ ๋˜๋Š” ํŒจ๋”ฉ์ด ๋น„ํ™œ์„ฑํ™”๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ํ‘œ์—๋Š” ํŒจ๋”ฉ ๋ฐ ์ž˜๋ผ๋‚ด๊ธฐ๋ฅผ ์„ค์ •ํ•˜๋Š” ๊ถŒ์žฅ ๋ฐฉ๋ฒ•์ด ์š”์•ฝ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ž…๋ ฅ์œผ๋กœ ์‹œํ€€์Šค ์Œ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ๋‹ค์Œ ์˜ˆ์ œ์—์„œ `truncation=True`๋ฅผ `['only_first', 'only_second', 'longest_first']`์—์„œ ์„ ํƒํ•œ `STRATEGY`, ์ฆ‰ `truncation='only_second'` ๋˜๋Š” `truncation='longest_first'`๋กœ ๋ฐ”๊พธ๋ฉด ์•ž์„œ ์„ค๋ช…ํ•œ ๋Œ€๋กœ ์Œ์˜ ๋‘ ์‹œํ€€์Šค๊ฐ€ ์ž˜๋ฆฌ๋Š” ๋ฐฉ์‹์„ ์ œ์–ดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. | ์ž˜๋ผ๋‚ด๊ธฐ | ํŒจ๋”ฉ | ์‚ฌ์šฉ ๋ฐฉ๋ฒ• | |--------------------------------------|-----------------------------------|------------------------------------------------------------------------------------------| | ์ž˜๋ผ๋‚ด๊ธฐ ์—†์Œ | ํŒจ๋”ฉ ์—†์Œ | `tokenizer(batch_sentences)` | | | ๋ฐฐ์น˜ ๋‚ด ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding=True)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, padding='longest')` | | | ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding='max_length')` | | | ํŠน์ • ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding='max_length', max_length=42)` | | | ๋‹ค์–‘ํ•œ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding=True, pad_to_multiple_of=8) | | ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋กœ ์ž˜๋ผ๋‚ด๊ธฐ | ํŒจ๋”ฉ ์—†์Œ | `tokenizer(batch_sentences, truncation=True)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, truncation=STRATEGY)` | | | ๋ฐฐ์น˜ ๋‚ด ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding=True, truncation=True)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY)` | | | ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding='max_length', truncation=True)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY)` | | | ํŠน์ • ๊ธธ์ด๋กœ ํŒจ๋”ฉ | ์‚ฌ์šฉ ๋ถˆ๊ฐ€ | | ํŠน์ • ๊ธธ์ด๋กœ ์ž˜๋ผ๋‚ด๊ธฐ | ํŒจ๋”ฉ ์—†์Œ | `tokenizer(batch_sentences, truncation=True, max_length=42)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, truncation=STRATEGY, max_length=42)` | | | ๋ฐฐ์น˜ ๋‚ด ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding=True, truncation=True, max_length=42)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY, max_length=42)` | | | ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | ์‚ฌ์šฉ ๋ถˆ๊ฐ€ | | | ํŠน์ • ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding='max_length', truncation=True, max_length=42)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY, max_length=42)` |
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/in_translation.md
<!--โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์—ด์‹ฌํžˆ ๋ฒˆ์—ญ ์ค‘์ž…๋‹ˆ๋‹ค. ์กฐ๊ธˆ ์ด๋”ฐ ๋งŒ๋‚˜์š”!
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/model_memory_anatomy.md
<!--- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # ๋ชจ๋ธ ํ•™์Šต ํ•ด๋ถ€ํ•˜๊ธฐ [[model-training-anatomy]] ๋ชจ๋ธ ํ›ˆ๋ จ ์†๋„์™€ ๋ฉ”๋ชจ๋ฆฌ ํ™œ์šฉ์˜ ํšจ์œจ์„ฑ์„ ํ–ฅ์ƒ์‹œํ‚ค๊ธฐ ์œ„ํ•ด ์ ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์„ฑ๋Šฅ ์ตœ์ ํ™” ๊ธฐ์ˆ ์„ ์ดํ•ดํ•˜๋ ค๋ฉด GPU๊ฐ€ ํ›ˆ๋ จ ์ค‘์— ์–ด๋–ป๊ฒŒ ํ™œ์šฉ๋˜๋Š”์ง€, ๊ทธ๋ฆฌ๊ณ  ์ˆ˜ํ–‰๋˜๋Š” ์—ฐ์‚ฐ์— ๋”ฐ๋ผ ์—ฐ์‚ฐ ๊ฐ•๋„๊ฐ€ ์–ด๋–ป๊ฒŒ ๋ณ€ํ•˜๋Š”์ง€์— ์ต์ˆ™ํ•ด์ ธ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ € GPU ํ™œ์šฉ๊ณผ ๋ชจ๋ธ ํ›ˆ๋ จ ์‹คํ–‰์— ๋Œ€ํ•œ ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋ฐ๋ชจ๋ฅผ ์œ„ํ•ด ๋ช‡๋ช‡ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์„ค์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash pip install transformers datasets accelerate nvidia-ml-py3 ``` `nvidia-ml-py3` ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” Python ๋‚ด๋ถ€์—์„œ ๋ชจ๋ธ์˜ ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ๋Ÿ‰์„ ๋ชจ๋‹ˆํ„ฐ๋งํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•ด์ค๋‹ˆ๋‹ค. ํ„ฐ๋ฏธ๋„์˜ `nvidia-smi` ๋ช…๋ น์–ด์— ์ต์ˆ™ํ•  ์ˆ˜ ์žˆ๋Š”๋ฐ, ์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” Python์—์„œ ์ง์ ‘ ๋™์ผํ•œ ์ •๋ณด์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•ด์ค๋‹ˆ๋‹ค. ๊ทธ ๋‹ค์Œ, 100๊ณผ 30000 ์‚ฌ์ด์˜ ๋ฌด์ž‘์œ„ ํ† ํฐ ID์™€ ๋ถ„๋ฅ˜๊ธฐ๋ฅผ ์œ„ํ•œ ์ด์ง„ ๋ ˆ์ด๋ธ”์ธ ๋”๋ฏธ ๋ฐ์ดํ„ฐ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๊ธธ์ด๊ฐ€ ๊ฐ๊ฐ 512์ธ ์ด 512๊ฐœ์˜ ์‹œํ€€์Šค๋ฅผ ๊ฐ€์ ธ์™€ PyTorch ํ˜•์‹์˜ [`~datasets.Dataset`]์— ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. ```py >>> import numpy as np >>> from datasets import Dataset >>> seq_len, dataset_size = 512, 512 >>> dummy_data = { ... "input_ids": np.random.randint(100, 30000, (dataset_size, seq_len)), ... "labels": np.random.randint(0, 1, (dataset_size)), ... } >>> ds = Dataset.from_dict(dummy_data) >>> ds.set_format("pt") ``` GPU ํ™œ์šฉ ๋ฐ [`Trainer`]๋กœ ์‹คํ–‰ํ•œ ํ›ˆ๋ จ ๊ณผ์ •์— ๋Œ€ํ•œ ์š”์•ฝ ํ†ต๊ณ„๋ฅผ ์ถœ๋ ฅํ•˜๊ธฐ ์œ„ํ•ด ๋‘ ๊ฐœ์˜ ๋„์šฐ๋ฏธ ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from pynvml import * >>> def print_gpu_utilization(): ... nvmlInit() ... handle = nvmlDeviceGetHandleByIndex(0) ... info = nvmlDeviceGetMemoryInfo(handle) ... print(f"GPU memory occupied: {info.used//1024**2} MB.") >>> def print_summary(result): ... print(f"Time: {result.metrics['train_runtime']:.2f}") ... print(f"Samples/second: {result.metrics['train_samples_per_second']:.2f}") ... print_gpu_utilization() ``` ์‹œ์ž‘ํ•  ๋•Œ GPU ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ๋น„์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•ด ๋ด…์‹œ๋‹ค: ```py >>> print_gpu_utilization() GPU memory occupied: 0 MB. ``` ์ข‹์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๊ธฐ ์ „์—๋Š” ์˜ˆ์ƒ๋Œ€๋กœ GPU ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ์ ์œ ๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. ๊ทธ๋ ‡์ง€ ์•Š๋‹ค๋ฉด ์‚ฌ์šฉ์ž์˜ ๊ธฐ๊ธฐ์—์„œ GPU ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ชจ๋“  ํ”„๋กœ์„ธ์Šค๋ฅผ ์ค‘๋‹จํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์‚ฌ์šฉ์ž๋Š” ๋ชจ๋“  ์—ฌ์œ  GPU ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋Š” ์—†์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด GPU์— ๋กœ๋“œ๋  ๋•Œ ์ปค๋„๋„ ๋กœ๋“œ๋˜๋ฏ€๋กœ 1-2GB์˜ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์ฐจ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์–ผ๋งˆ๋‚˜ ๋˜๋Š”์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด GPU์— ์ž‘์€ ํ…์„œ๋ฅผ ๋กœ๋“œํ•˜์—ฌ ์ปค๋„์ด ๋กœ๋“œ๋˜๋„๋ก ํŠธ๋ฆฌ๊ฑฐํ•ฉ๋‹ˆ๋‹ค. ```py >>> import torch >>> torch.ones((1, 1)).to("cuda") >>> print_gpu_utilization() GPU memory occupied: 1343 MB. ``` ์ปค๋„๋งŒ์œผ๋กœ๋„ GPU ๋ฉ”๋ชจ๋ฆฌ์˜ 1.3GB๋ฅผ ์ฐจ์ง€ํ•ฉ๋‹ˆ๋‹ค. 
์ด์ œ ๋ชจ๋ธ์ด ์–ผ๋งˆ๋‚˜ ๋งŽ์€ ๊ณต๊ฐ„์„ ์‚ฌ์šฉํ•˜๋Š”์ง€ ํ™•์ธํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ## ๋ชจ๋ธ ๋กœ๋“œ [[load-model]] ์šฐ์„ , `bert-large-uncased` ๋ชจ๋ธ์„ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ๊ฐ€์ค‘์น˜๋ฅผ ์ง์ ‘ GPU์— ๋กœ๋“œํ•ด์„œ ๊ฐ€์ค‘์น˜๋งŒ์ด ์–ผ๋งˆ๋‚˜ ๋งŽ์€ ๊ณต๊ฐ„์„ ์ฐจ์ง€ํ•˜๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("bert-large-uncased").to("cuda") >>> print_gpu_utilization() GPU memory occupied: 2631 MB. ``` ๋ชจ๋ธ์˜ ๊ฐ€์ค‘์น˜๋งŒ์œผ๋กœ๋„ GPU ๋ฉ”๋ชจ๋ฆฌ๋ฅผ 1.3 GB ์ฐจ์ง€ํ•˜๋Š” ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ •ํ™•ํ•œ ์ˆซ์ž๋Š” ์‚ฌ์šฉํ•˜๋Š” GPU์— ๋”ฐ๋ผ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ์ตœ์‹  GPU์—์„œ๋Š” ๋ชจ๋ธ ์‚ฌ์šฉ ์†๋„๋ฅผ ๋†’์ด๋Š” ์ตœ์ ํ™”๋œ ๋ฐฉ์‹์œผ๋กœ ๊ฐ€์ค‘์น˜๊ฐ€ ๋กœ๋“œ๋˜๋ฏ€๋กœ, ๋ชจ๋ธ์ด ๋” ๋งŽ์€ ๊ณต๊ฐ„์„ ์ฐจ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ `nvidia-smi` CLI์™€ ๋™์ผํ•œ ๊ฒฐ๊ณผ๋ฅผ ์–ป๋Š”์ง€ ๋น ๋ฅด๊ฒŒ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash nvidia-smi ``` ```bash Tue Jan 11 08:58:05 2022 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 460.91.03 Driver Version: 460.91.03 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla V100-SXM2... On | 00000000:00:04.0 Off | 0 | | N/A 37C P0 39W / 300W | 2631MiB / 16160MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N/A N/A 3721 C ...nvs/codeparrot/bin/python 2629MiB | +-----------------------------------------------------------------------------+ ``` ์ด์ „๊ณผ ๋™์ผํ•œ ์ˆซ์ž๊ฐ€ ์ถœ๋ ฅ๋˜๊ณ  16GB ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ๊ฐ€์ง„ V100 GPU๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋‹ค๋Š” ๊ฒƒ๋„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋ฏ€๋กœ ์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜์—ฌ GPU ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ๋Ÿ‰์ด ์–ด๋–ป๊ฒŒ ๋‹ฌ๋ผ์ง€๋Š”์ง€ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์šฐ์„  ๋ช‡๋ช‡ ํ‘œ์ค€ ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค: ```py default_args = { "output_dir": "tmp", "evaluation_strategy": "steps", "num_train_epochs": 1, "log_level": "error", "report_to": "none", } ``` <Tip> ์—ฌ๋Ÿฌ ์‹คํ—˜์„ ์‹คํ–‰ํ•  ๊ณ„ํš์ด๋ผ๋ฉด, ์‹คํ—˜ ๊ฐ„์— ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์ œ๋Œ€๋กœ ๋น„์šฐ๊ธฐ ์œ„ํ•ด์„œ Python ์ปค๋„์„ ์‹คํ—˜ ์‚ฌ์ด๋งˆ๋‹ค ์žฌ์‹œ์ž‘ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ## ๊ธฐ๋ณธ ํ›ˆ๋ จ์—์„œ์˜ ๋ฉ”๋ชจ๋ฆฌ ํ™œ์šฉ [[memory-utilization-at-vanilla-training]] [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ, GPU ์„ฑ๋Šฅ ์ตœ์ ํ™” ๊ธฐ์ˆ ์„ ์‚ฌ์šฉํ•˜์ง€ ์•Š๊ณ  ๋ฐฐ์น˜ ํฌ๊ธฐ๊ฐ€ 4์ธ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ค๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import TrainingArguments, Trainer, logging >>> logging.set_verbosity_error() >>> training_args = TrainingArguments(per_device_train_batch_size=4, **default_args) >>> trainer = Trainer(model=model, args=training_args, train_dataset=ds) >>> result = trainer.train() >>> print_summary(result) ``` ``` Time: 57.82 Samples/second: 8.86 GPU memory occupied: 14949 MB. ``` ์šฐ๋ฆฌ๋Š” ๋น„๊ต์  ์ž‘์€ ๋ฐฐ์น˜ ํฌ๊ธฐ๋กœ๋„ ์ „์ฒด GPU ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ๊ฑฐ์˜ ๋‹ค ์ฐจ์ง€ํ•˜๋Š” ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฌ๋‚˜ ๋ฐฐ์น˜ ํฌ๊ธฐ๊ฐ€ ํด์ˆ˜๋ก ๋ชจ๋ธ ์ˆ˜๋ ด ์†๋„๊ฐ€ ๋นจ๋ผ์ง€๊ณ  ์ตœ์ข… ์„ฑ๋Šฅ์ด ํ–ฅ์ƒ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ž˜์„œ ์ด์ƒ์ ์œผ๋กœ๋Š” GPU ์ œํ•œ์ด ์•„๋‹Œ ์šฐ๋ฆฌ ๋ชจ๋ธ์˜ ์š”๊ตฌ์‚ฌํ•ญ์— ๋งž๊ฒŒ ๋ฐฐ์น˜ ํฌ๊ธฐ๋ฅผ ์กฐ์ •ํ•˜๋ ค๊ณ  ํ•ฉ๋‹ˆ๋‹ค. ํฅ๋ฏธ๋กญ๊ฒŒ๋„ ์šฐ๋ฆฌ๋Š” ๋ชจ๋ธ์˜ ํฌ๊ธฐ๋ณด๋‹ค ํ›จ์”ฌ ๋” ๋งŽ์€ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์™œ ์ด๋Ÿฐ ํ˜„์ƒ์ด ๋ฐœ์ƒํ•˜๋Š”์ง€ ์กฐ๊ธˆ ๋” ์ž˜ ์ดํ•ดํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋ธ์˜ ์—ฐ์‚ฐ๊ณผ ๋ฉ”๋ชจ๋ฆฌ ์š”๊ตฌ ์‚ฌํ•ญ์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ## ๋ชจ๋ธ์˜ ์—ฐ์‚ฐ ํ•ด๋ถ€ํ•˜๊ธฐ [[anatomy-of-models-operations]] ํŠธ๋žœ์Šคํฌ๋จธ ์•„ํ‚คํ…์ฒ˜์—๋Š” ์—ฐ์‚ฐ ๊ฐ•๋„(compute-intensity)์— ๋”ฐ๋ผ ๊ทธ๋ฃนํ™”๋œ 3๊ฐ€์ง€ ์ฃผ์š” ์—ฐ์‚ฐ ๊ทธ๋ฃน์ด ์žˆ์Šต๋‹ˆ๋‹ค. 1. **ํ…์„œ ์ถ•์•ฝ(Tensor Contractions)** ์„ ํ˜• ๋ ˆ์ด์–ด์™€ ๋ฉ€ํ‹ฐํ—ค๋“œ ์–ดํ…์…˜์˜ ๊ตฌ์„ฑ ์š”์†Œ๋Š” ๋ชจ๋‘ **ํ–‰๋ ฌ-ํ–‰๋ ฌ ๊ณฑ์…ˆ(matrix-matrix multiplications)**์„ ์ผ๊ด„์ ์œผ๋กœ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ์ด ์—ฐ์‚ฐ์€ ํŠธ๋žœ์Šคํฌ๋จธ ํ›ˆ๋ จ์—์„œ ๊ฐ€์žฅ ์—ฐ์‚ฐ ๊ฐ•๋„๊ฐ€ ๋†’์€ ๋ถ€๋ถ„์ž…๋‹ˆ๋‹ค. 2. **ํ†ต๊ณ„ ์ •๊ทœํ™”(Statistical Normalizations)** ์†Œํ”„ํŠธ๋งฅ์Šค์™€ ๋ ˆ์ด์–ด ์ •๊ทœํ™”๋Š” ํ…์„œ ์ถ•์•ฝ๋ณด๋‹ค ์—ฐ์‚ฐ ๊ฐ•๋„๊ฐ€ ๋‚ฎ์Šต๋‹ˆ๋‹ค. ํ•˜๋‚˜ ์ด์ƒ์˜ **๊ฐ์†Œ ์—ฐ์‚ฐ(reduction operations)**์„ ํฌํ•จํ•˜๋ฉฐ, ๊ทธ ๊ฒฐ๊ณผ๋Š” map์„ ํ†ตํ•ด ์ ์šฉ๋ฉ๋‹ˆ๋‹ค. 3. **์›์†Œ๋ณ„ ์—ฐ์‚ฐ์ž(Element-wise Operators)** ๊ทธ ์™ธ ์—ฐ์‚ฐ์ž๋“ค, **ํŽธํ–ฅ(biases), ๋“œ๋กญ์•„์›ƒ(dropout), ํ™œ์„ฑํ™” ํ•จ์ˆ˜(activations), ์ž”์ฐจ ์—ฐ๊ฒฐ(residual connections)**์ด ์—ฌ๊ธฐ์— ํ•ด๋‹นํ•ฉ๋‹ˆ๋‹ค. ์ด ์—ฐ์‚ฐ๋“ค์€ ์—ฐ์‚ฐ ๊ฐ•๋„๊ฐ€ ๊ฐ€์žฅ ๋‚ฎ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์ง€์‹์€ ์„ฑ๋Šฅ ๋ณ‘๋ชฉ ํ˜„์ƒ์„ ๋ถ„์„ํ•  ๋•Œ ๋„์›€์ด ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋‚ด์šฉ์€ [Data Movement Is All You Need: A Case Study on Optimizing Transformers 2020](https://arxiv.org/abs/2007.00072)์„ ์ฐธ๊ณ ํ•˜์˜€์Šต๋‹ˆ๋‹ค. ## ๋ชจ๋ธ์˜ ๋ฉ”๋ชจ๋ฆฌ ๊ตฌ์กฐ [[anatomy-of-models-memory]] ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ค๋Š” ๋ฐ๋Š” ๋‹จ์ˆœํžˆ GPU์— ๋ชจ๋ธ์„ ์˜ฌ๋ฆฌ๋Š” ๊ฒƒ๋ณด๋‹ค ํ›จ์”ฌ ๋” ๋งŽ์€ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค๋Š” ๊ฒƒ์„ ๋ณด์•˜์Šต๋‹ˆ๋‹ค. ์ด๋Š” ํ›ˆ๋ จ ์ค‘ GPU ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋งŽ์€ ๊ตฌ์„ฑ ์š”์†Œ๊ฐ€ ์žˆ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. GPU ๋ฉ”๋ชจ๋ฆฌ์˜ ๊ตฌ์„ฑ ์š”์†Œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: 1. ๋ชจ๋ธ ๊ฐ€์ค‘์น˜ 2. ์˜ตํ‹ฐ๋งˆ์ด์ € ์ƒํƒœ 3. ๊ทธ๋ผ๋””์–ธํŠธ 4. ๊ทธ๋ผ๋””์–ธํŠธ ๊ณ„์‚ฐ์„ ์œ„ํ•ด ์ €์žฅ๋œ ์ˆœ๋ฐฉํ–ฅ ํ™œ์„ฑํ™” 5. ์ž„์‹œ ๋ฒ„ํผ 6. ๊ธฐ๋Šฅ๋ณ„ ๋ฉ”๋ชจ๋ฆฌ AdamW๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ˜ผํ•ฉ ์ •๋ฐ€๋„๋กœ ํ›ˆ๋ จ๋œ ์ผ๋ฐ˜์ ์ธ ๋ชจ๋ธ์€ ๋ชจ๋ธ ํŒŒ๋ผ๋ฏธํ„ฐ๋‹น 18 ๋ฐ”์ดํŠธ์™€ ํ™œ์„ฑํ™” ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ถ”๋ก  ๋‹จ๊ณ„์—์„œ๋Š” ์˜ตํ‹ฐ๋งˆ์ด์ €์™€ ๊ทธ๋ผ๋””์–ธํŠธ๊ฐ€ ํ•„์š”ํ•˜์ง€ ์•Š์œผ๋ฏ€๋กœ ์ด๋“ค์€ ์ œ์™ธํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ์ถ”๋ก ์˜ ๊ฒฝ์šฐ ๋ชจ๋ธ ๋งค๊ฐœ๋ณ€์ˆ˜๋‹น 6 ๋ฐ”์ดํŠธ์™€ ํ™œ์„ฑํ™” ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ž์„ธํžˆ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. **๋ชจ๋ธ ๊ฐ€์ค‘์น˜:** - fp32 ํ›ˆ๋ จ์˜ ๊ฒฝ์šฐ ๋งค๊ฐœ ๋ณ€์ˆ˜ ์ˆ˜ * 4 ๋ฐ”์ดํŠธ - ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ํ›ˆ๋ จ์˜ ๊ฒฝ์šฐ ๋งค๊ฐœ ๋ณ€์ˆ˜ ์ˆ˜ * 6 ๋ฐ”์ดํŠธ (๋ฉ”๋ชจ๋ฆฌ์— fp32์™€ fp16 ๋‘ ๊ฐ€์ง€ ๋ชจ๋ธ์„ ์œ ์ง€) **์˜ตํ‹ฐ๋งˆ์ด์ € ์ƒํƒœ:** - ์ผ๋ฐ˜ AdamW์˜ ๊ฒฝ์šฐ ๋งค๊ฐœ ๋ณ€์ˆ˜ ์ˆ˜ * 8 ๋ฐ”์ดํŠธ (2๊ฐ€์ง€ ์ƒํƒœ ์œ ์ง€) - [bitsandbytes](https://github.com/TimDettmers/bitsandbytes)์™€ ๊ฐ™์€ 8๋น„ํŠธ AdamW ์˜ตํ‹ฐ๋งˆ์ด์ €์˜ ๊ฒฝ์šฐ ๋งค๊ฐœ ๋ณ€์ˆ˜ ์ˆ˜ * 2 ๋ฐ”์ดํŠธ - Momentum์„ ๊ฐ€์ง„ SGD์™€ ๊ฐ™์€ ์˜ตํ‹ฐ๋งˆ์ด์ €์˜ ๊ฒฝ์šฐ ๋งค๊ฐœ ๋ณ€์ˆ˜ ์ˆ˜ * 4 ๋ฐ”์ดํŠธ (ํ•˜๋‚˜์˜ ์ƒํƒœ๋งŒ ์œ ์ง€) **๊ทธ๋ผ๋””์–ธํŠธ** - fp32 ๋˜๋Š” ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ํ›ˆ๋ จ์˜ ๊ฒฝ์šฐ ๋งค๊ฐœ ๋ณ€์ˆ˜ ์ˆ˜ * 4 ๋ฐ”์ดํŠธ (๊ทธ๋ผ๋””์–ธํŠธ๋Š” ํ•ญ์ƒ fp32์œผ๋กœ ์œ ์ง€๋ฉ๋‹ˆ๋‹ค.) 
**์ˆœ๋ฐฉํ–ฅ ํ™œ์„ฑํ™”**

- ํฌ๊ธฐ๋Š” ์—ฌ๋Ÿฌ ์š”์ธ์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์ง€๋ฉฐ, ์ฃผ์š” ์š”์ธ์€ ์‹œํ€€์Šค ๊ธธ์ด, ์€๋‹‰ ์ƒํƒœ์˜ ํฌ๊ธฐ ๋ฐ ๋ฐฐ์น˜ ํฌ๊ธฐ์ž…๋‹ˆ๋‹ค.

์ˆœ๋ฐฉํ–ฅ ๋ฐ ์—ญ๋ฐฉํ–ฅ ํ•จ์ˆ˜์—์„œ ์ „๋‹ฌ ๋ฐ ๋ฐ˜ํ™˜๋˜๋Š” ์ž…๋ ฅ๊ณผ ์ถœ๋ ฅ์ด ์žˆ์œผ๋ฉฐ, ๊ทธ๋ผ๋””์–ธํŠธ ๊ณ„์‚ฐ์„ ์œ„ํ•ด ์ €์žฅ๋œ ์ˆœ๋ฐฉํ–ฅ ํ™œ์„ฑํ™”๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค.

**์ž„์‹œ ๋ฉ”๋ชจ๋ฆฌ**

๋”๋ถˆ์–ด ๋ชจ๋“  ์ข…๋ฅ˜์˜ ์ž„์‹œ ๋ณ€์ˆ˜๋Š” ์—ฐ์‚ฐ์ด ์™„๋ฃŒ๋˜๋ฉด ๊ณง๋ฐ”๋กœ ํ•ด์ œ๋˜์ง€๋งŒ, ๊ทธ ์ˆœ๊ฐ„์—๋Š” ์ถ”๊ฐ€ ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ํ•„์š”ํ•  ์ˆ˜ ์žˆ๊ณ  OOM์„ ์œ ๋ฐœํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ฝ”๋”ฉํ•  ๋•Œ ์ด๋Ÿฌํ•œ ์ž„์‹œ ๋ณ€์ˆ˜์— ๋Œ€ํ•ด ์ „๋žต์ ์œผ๋กœ ์ƒ๊ฐํ•˜๊ณ  ๋•Œ๋กœ๋Š” ๋” ์ด์ƒ ํ•„์š” ์—†๋Š” ์ž„์‹œ ๋ณ€์ˆ˜๋ฅผ ์ฆ‰์‹œ ๋ช…์‹œ์ ์œผ๋กœ ๋ฉ”๋ชจ๋ฆฌ์—์„œ ์ œ๊ฑฐํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค.

**๊ธฐ๋Šฅ๋ณ„ ๋ฉ”๋ชจ๋ฆฌ**

์‚ฌ์šฉํ•˜๋Š” ์†Œํ”„ํŠธ์›จ์–ด์— ๋”ฐ๋ผ ํŠน๋ณ„ํ•œ ๋ฉ”๋ชจ๋ฆฌ ์š”๊ตฌ ์‚ฌํ•ญ์ด ์žˆ์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋น” ๊ฒ€์ƒ‰์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•  ๋•Œ ์†Œํ”„ํŠธ์›จ์–ด๋Š” ์ž…๋ ฅ๊ณผ ์ถœ๋ ฅ ์‚ฌ๋ณธ์„ ์—ฌ๋Ÿฌ ๊ฐœ ์œ ์ง€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.

**`forward` vs `backward` ์‹คํ–‰ ์†๋„**

ํ•ฉ์„ฑ๊ณฑ๊ณผ ์„ ํ˜• ๋ ˆ์ด์–ด์˜ ๊ฒฝ์šฐ ์ˆœ๋ฐฉํ–ฅ์— ๋น„ํ•ด ์—ญ๋ฐฉํ–ฅ์—์„œ๋Š” 2๋ฐฐ์˜ ํ”Œ๋กญ์Šค๊ฐ€ ํ•„์š”ํ•˜๋ฏ€๋กœ ์ผ๋ฐ˜์ ์œผ๋กœ ์•ฝ 2๋ฐฐ ์ •๋„ ๋А๋ ค์ง‘๋‹ˆ๋‹ค(์—ญ๋ฐฉํ–ฅ ์—ฐ์‚ฐ์˜ ํ…์„œ ํฌ๊ธฐ๊ฐ€ ๋ถ€์ž์—ฐ์Šค๋Ÿฌ์šด ๊ฒฝ์šฐ์—๋Š” ๋”์šฑ ๋А๋ฆด ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค). ํ™œ์„ฑํ™”๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๋Œ€์—ญํญ์ด ์ œํ•œ๋˜์–ด ์žˆ์œผ๋ฉฐ, ์ผ๋ฐ˜์ ์œผ๋กœ ์ˆœ๋ฐฉํ–ฅ๋ณด๋‹ค ์—ญ๋ฐฉํ–ฅ์—์„œ ๋” ๋งŽ์€ ๋ฐ์ดํ„ฐ๋ฅผ ์ฝ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. (์˜ˆ๋ฅผ ๋“ค์–ด, ์ˆœ๋ฐฉํ–ฅ์—์„œ๋Š” ํ™œ์„ฑํ™”๋ฅผ ํ•œ ๋ฒˆ์”ฉ ์ฝ๊ณ  ์“ฐ์ง€๋งŒ, ์—ญ๋ฐฉํ–ฅ์—์„œ๋Š” gradOutput๊ณผ ์ˆœ๋ฐฉํ–ฅ์˜ ์ถœ๋ ฅ์„ ํ•ฉ์ณ ๋‘ ๋ฒˆ ์ฝ๊ณ , gradInput์— ํ•œ ๋ฒˆ ์”๋‹ˆ๋‹ค.)

๋ณด๋‹ค์‹œํ”ผ, GPU ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์ ˆ์•ฝํ•˜๊ฑฐ๋‚˜ ์ž‘์—… ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ GPU ํ™œ์šฉ๊ณผ ๊ณ„์‚ฐ ์†๋„์— ์˜ํ–ฅ์„ ์ฃผ๋Š” ๊ฒƒ์ด ๋ฌด์—‡์ธ์ง€๋ฅผ ์ดํ•ดํ–ˆ์œผ๋ฏ€๋กœ, [Methods and tools for efficient training on a single GPU](perf_train_gpu_one) ๋ฌธ์„œ ํŽ˜์ด์ง€๋ฅผ ์ฐธ์กฐํ•˜์—ฌ ์„ฑ๋Šฅ ์ตœ์ ํ™” ๊ธฐ๋ฒ•์— ๋Œ€ํ•ด ์•Œ์•„๋ณด์„ธ์š”.
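๋ง๋ถ™์ด์ž๋ฉด, ์œ„ "๋ชจ๋ธ์˜ ๋ฉ”๋ชจ๋ฆฌ ๊ตฌ์กฐ"์—์„œ ์ •๋ฆฌํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜๋‹น ๋ฐ”์ดํŠธ ์ˆ˜์น˜๋ฅผ ๊ทธ๋Œ€๋กœ ์ฝ”๋“œ๋กœ ์˜ฎ๊ธฐ๋ฉด ํ›ˆ๋ จ ๋ฉ”๋ชจ๋ฆฌ์˜ ๊ณ ์ • ๋ถ€๋ถ„์„ ์–ด๋ฆผ์ž‘์•„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์•ž์„œ ๋กœ๋“œํ•œ `model` ๊ฐ์ฒด๊ฐ€ ์žˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ด๋ฉฐ, ์ˆœ๋ฐฉํ–ฅ ํ™œ์„ฑํ™”์™€ ์ž„์‹œ ๋ฒ„ํผ๋Š” ๊ณ„์‚ฐ์— ํฌํ•จ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค:

```py
>>> # ํ˜ผํ•ฉ ์ •๋ฐ€๋„ + AdamW ๊ธฐ์ค€: ๊ฐ€์ค‘์น˜ 6๋ฐ”์ดํŠธ + ์˜ตํ‹ฐ๋งˆ์ด์ € ์ƒํƒœ 8๋ฐ”์ดํŠธ + ๊ทธ๋ผ๋””์–ธํŠธ 4๋ฐ”์ดํŠธ
>>> num_params = sum(p.numel() for p in model.parameters())
>>> static_gb = num_params * (6 + 8 + 4) / 1024**3
>>> print(f"{num_params / 1e6:.0f}M ๋งค๊ฐœ๋ณ€์ˆ˜ -> ์•ฝ {static_gb:.1f} GB (+ ์ˆœ๋ฐฉํ–ฅ ํ™œ์„ฑํ™” ๋ฉ”๋ชจ๋ฆฌ)")
```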
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/model_sharing.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋ชจ๋ธ ๊ณต์œ ํ•˜๊ธฐ[[share-a-model]] ์ง€๋‚œ ๋‘ ํŠœํ† ๋ฆฌ์–ผ์—์„œ ๋ถ„์‚ฐ ์„ค์ •์„ ์œ„ํ•ด PyTorch, Keras ๋ฐ ๐Ÿค— Accelerate๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์•˜์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ๋ชจ๋ธ์„ ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค! Hugging Face๋Š” ์ธ๊ณต์ง€๋Šฅ์˜ ๋ฏผ์ฃผํ™”๋ฅผ ์œ„ํ•ด ๋ชจ๋‘์—๊ฒŒ ์ง€์‹๊ณผ ์ž์›์„ ๊ณต๊ฐœ์ ์œผ๋กœ ๊ณต์œ ํ•ด์•ผ ํ•œ๋‹ค๊ณ  ๋ฏฟ์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค์ด ์‹œ๊ฐ„๊ณผ ์ž์›์„ ์ ˆ์•ฝํ•  ์ˆ˜ ์žˆ๋„๋ก ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์„ ๊ณ ๋ คํ•ด ๋ณด์„ธ์š”. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ [Model Hub](https://huggingface.co/models)์—์„œ ํ›ˆ๋ จ๋˜๊ฑฐ๋‚˜ ๋ฏธ์„ธ ์กฐ์ • ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜๋Š” ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์•Œ์•„๋ด…์‹œ๋‹ค: - API๋ฅผ ํ†ตํ•ด ํŒŒ์ผ์„ Hub์— ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค. - ์›น์‚ฌ์ดํŠธ๋ฅผ ํ†ตํ•ด ํŒŒ์ผ์„ Hub๋กœ ๋Œ์–ด๋‹ค ๋†“์Šต๋‹ˆ๋‹ค. <iframe width="560" height="315" src="https://www.youtube.com/embed/XvSGPZFEjDY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> <Tip> ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜๋ ค๋ฉด, [huggingface.co](https://huggingface.co/join)์— ๊ณ„์ •์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ์กด ์กฐ์ง์— ๊ฐ€์ž…ํ•˜๊ฑฐ๋‚˜ ์ƒˆ๋กœ ๋งŒ๋“ค ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ## ์ €์žฅ์†Œ ํŠน์ง•[[repository-features]] ๋ชจ๋ธ ํ—ˆ๋ธŒ์˜ ๊ฐ ์ €์žฅ์†Œ๋Š” ์ผ๋ฐ˜์ ์ธ GitHub ์ €์žฅ์†Œ์ฒ˜๋Ÿผ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ์ €์žฅ์†Œ๋Š” ๋ฒ„์ „ ๊ด€๋ฆฌ, ์ปค๋ฐ‹ ๊ธฐ๋ก, ์ฐจ์ด์  ์‹œ๊ฐํ™” ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ํ—ˆ๋ธŒ์— ๋‚ด์žฅ๋œ ๋ฒ„์ „ ๊ด€๋ฆฌ๋Š” git ๋ฐ [git-lfs](https://git-lfs.github.com/)๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, ํ•˜๋‚˜์˜ ๋ชจ๋ธ์„ ํ•˜๋‚˜์˜ ์ €์žฅ์†Œ๋กœ ์ทจ๊ธ‰ํ•˜์—ฌ ์ ‘๊ทผ ์ œ์–ด ๋ฐ ํ™•์žฅ์„ฑ์ด ํ–ฅ์ƒ๋ฉ๋‹ˆ๋‹ค. ๋ฒ„์ „ ์ œ์–ด๋Š” ์ปค๋ฐ‹ ํ•ด์‹œ, ํƒœ๊ทธ ๋˜๋Š” ๋ธŒ๋žœ์น˜๋กœ ๋ชจ๋ธ์˜ ํŠน์ • ๋ฒ„์ „์„ ๊ณ ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์ธ *revision*์„ ํ—ˆ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ `revision` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŠน์ • ๋ชจ๋ธ ๋ฒ„์ „์„ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> model = AutoModel.from_pretrained( ... "julien-c/EsperBERTo-small", revision="v2.0.1" # tag name, or branch name, or commit hash ... ) ``` ๋˜ํ•œ ์ €์žฅ์†Œ์—์„œ ํŒŒ์ผ์„ ์‰ฝ๊ฒŒ ํŽธ์ง‘ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ปค๋ฐ‹ ๊ธฐ๋ก๊ณผ ์ฐจ์ด๋ฅผ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ![vis_diff](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vis_diff.png) ## ์„ค์ •[[setup]] ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜๊ธฐ ์ „์— Hugging Face ์ž๊ฒฉ ์ฆ๋ช…์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ํ„ฐ๋ฏธ๋„์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ, ๐Ÿค— Transformers๊ฐ€ ์„ค์น˜๋œ ๊ฐ€์ƒ ํ™˜๊ฒฝ์—์„œ ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฌ๋ฉด Hugging Face ์บ์‹œ ํด๋”(๊ธฐ๋ณธ์ ์œผ๋กœ `~/.cache/`)์— ์•ก์„ธ์Šค ํ† ํฐ์„ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค: ```bash huggingface-cli login ``` Jupyter ๋˜๋Š” Colaboratory์™€ ๊ฐ™์€ ๋…ธํŠธ๋ถ์„ ์‚ฌ์šฉ ์ค‘์ธ ๊ฒฝ์šฐ, [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด API๋กœ ํ—ˆ๋ธŒ์™€ ์ƒํ˜ธ ์ž‘์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash pip install huggingface_hub ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ `notebook_login`๋กœ ํ—ˆ๋ธŒ์— ๋กœ๊ทธ์ธํ•˜๊ณ , [์—ฌ๊ธฐ](https://huggingface.co/settings/token) ๋งํฌ์—์„œ ๋กœ๊ทธ์ธํ•  ํ† ํฐ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## ํ”„๋ ˆ์ž„์›Œํฌ ๊ฐ„ ๋ชจ๋ธ ๋ณ€ํ™˜ํ•˜๊ธฐ[[convert-a-model-for-all-frameworks]] ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ์ž‘์—…ํ•˜๋Š” ์‚ฌ์šฉ์ž๊ฐ€ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•˜๋ ค๋ฉด, PyTorch ๋ฐ TensorFlow ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ชจ๋‘ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ณ€ํ™˜ํ•˜๊ณ  ์—…๋กœ๋“œํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ด ๋‹จ๊ณ„๋ฅผ ๊ฑด๋„ˆ๋›ฐ์–ด๋„ ์‚ฌ์šฉ์ž๋Š” ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ์—์„œ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์ง€๋งŒ, ๐Ÿค— Transformers๊ฐ€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ฆ‰์„์—์„œ ๋ณ€ํ™˜ํ•ด์•ผ ํ•˜๋ฏ€๋กœ ์†๋„๊ฐ€ ๋А๋ ค์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ๊ฒƒ์€ ์‰ฝ์Šต๋‹ˆ๋‹ค. PyTorch ๋ฐ TensorFlow๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•œ ๋‹ค์Œ(์„ค์น˜ ์ง€์นจ์€ [์—ฌ๊ธฐ](installation) ์ฐธ์กฐ) ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ์—์„œ ์ž‘์—…์— ๋Œ€ํ•œ ํŠน์ • ๋ชจ๋ธ์„ ์ฐพ์Šต๋‹ˆ๋‹ค. <frameworkcontent> <pt> ์ฒดํฌํฌ์ธํŠธ๋ฅผ TensorFlow์—์„œ PyTorch๋กœ ๋ณ€ํ™˜ํ•˜๋ ค๋ฉด `from_tf=True`๋ฅผ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> pt_model = DistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_tf=True) >>> pt_model.save_pretrained("path/to/awesome-name-you-picked") ``` </pt> <tf> ์ฒดํฌํฌ์ธํŠธ๋ฅผ PyTorch์—์„œ TensorFlow๋กœ ๋ณ€ํ™˜ํ•˜๋ ค๋ฉด `from_pt=True`๋ฅผ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_pt=True) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ƒˆ๋กœ์šด ์ฒดํฌํฌ์ธํŠธ์™€ ํ•จ๊ป˜ ์ƒˆ๋กœ์šด TensorFlow ๋ชจ๋ธ์„ ์ €์žฅํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> tf_model.save_pretrained("path/to/awesome-name-you-picked") ``` </tf> <jax> Flax์—์„œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, PyTorch์—์„œ Flax๋กœ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณ€ํ™˜ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained( ... "path/to/awesome-name-you-picked", from_pt=True ... ) ``` </jax> </frameworkcontent> ## ํ›ˆ๋ จ ์ค‘ ๋ชจ๋ธ ํ‘ธ์‹œํ•˜๊ธฐ[[push-a-model-during-training]] <frameworkcontent> <pt> <Youtube id="Z1-XMy-GNLQ"/> ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์€ ์ถ”๊ฐ€ ๋งค๊ฐœ๋ณ€์ˆ˜๋‚˜ ์ฝœ๋ฐฑ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ๋งŒํผ ๊ฐ„๋‹จํ•ฉ๋‹ˆ๋‹ค. [๋ฏธ์„ธ ์กฐ์ • ํŠœํ† ๋ฆฌ์–ผ](training)์—์„œ [`TrainingArguments`] ํด๋ž˜์Šค๋Š” ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ์™€ ์ถ”๊ฐ€ ํ›ˆ๋ จ ์˜ต์…˜์„ ์ง€์ •ํ•˜๋Š” ๊ณณ์ด๋ผ๋Š” ๊ฒƒ์„ ๊ธฐ์–ตํ•˜์„ธ์š”. ์ด๋Ÿฌํ•œ ํ›ˆ๋ จ ์˜ต์…˜ ์ค‘ ํ•˜๋‚˜๋Š” ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ๋กœ ์ง์ ‘ ํ‘ธ์‹œํ•˜๋Š” ๊ธฐ๋Šฅ์„ ํฌํ•จํ•ฉ๋‹ˆ๋‹ค. [`TrainingArguments`]์—์„œ `push_to_hub=True`๋ฅผ ์„ค์ •ํ•˜์„ธ์š”: ```py >>> training_args = TrainingArguments(output_dir="my-awesome-model", push_to_hub=True) ``` ํ‰์†Œ์™€ ๊ฐ™์ด ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... 
) ``` ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•œ ํ›„, [`Trainer`]์—์„œ [`~transformers.Trainer.push_to_hub`]๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ๋กœ ํ‘ธ์‹œํ•˜์„ธ์š”. ๐Ÿค— Transformers๋Š” ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ, ํ›ˆ๋ จ ๊ฒฐ๊ณผ ๋ฐ ํ”„๋ ˆ์ž„์›Œํฌ ๋ฒ„์ „์„ ๋ชจ๋ธ ์นด๋“œ์— ์ž๋™์œผ๋กœ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค! ```py >>> trainer.push_to_hub() ``` </pt> <tf> [`PushToHubCallback`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜๋ ค๋ฉด, [`PushToHubCallback`]์— ๋‹ค์Œ ์ธ์ˆ˜๋ฅผ ์ •์˜ํ•˜์„ธ์š”: - ์ถœ๋ ฅ๋œ ๋ชจ๋ธ์˜ ํŒŒ์ผ ๊ฒฝ๋กœ - ํ† ํฌ๋‚˜์ด์ € - `{Hub ์‚ฌ์šฉ์ž ์ด๋ฆ„}/{๋ชจ๋ธ ์ด๋ฆ„}` ํ˜•์‹์˜ `hub_model_id` ```py >>> from transformers import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="./your_model_save_path", tokenizer=tokenizer, hub_model_id="your-username/my-awesome-model" ... ) ``` [`fit`](https://keras.io/api/models/model_training_apis/)์— ์ฝœ๋ฐฑ์„ ์ถ”๊ฐ€ํ•˜๋ฉด, ๐Ÿค— Transformers๊ฐ€ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ๋กœ ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback) ``` </tf> </frameworkcontent> ## `push_to_hub` ํ•จ์ˆ˜ ์‚ฌ์šฉํ•˜๊ธฐ[[use-the-pushtohub-function]] ๋ชจ๋ธ์—์„œ ์ง์ ‘ `push_to_hub`๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. `push_to_hub`์— ๋ชจ๋ธ ์ด๋ฆ„์„ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> pt_model.push_to_hub("my-awesome-model") ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์‚ฌ์šฉ์ž ์ด๋ฆ„ ์•„๋ž˜์— ๋ชจ๋ธ ์ด๋ฆ„ `my-awesome-model`๋กœ ์ €์žฅ์†Œ๊ฐ€ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ์ด์ œ ์‚ฌ์šฉ์ž๋Š” `from_pretrained` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModel >>> model = AutoModel.from_pretrained("your_username/my-awesome-model") ``` ์กฐ์ง์— ์†ํ•˜๊ณ  ๋ชจ๋ธ์„ ์กฐ์ง ์ด๋ฆ„์œผ๋กœ ๋Œ€์‹  ํ‘ธ์‹œํ•˜๋ ค๋ฉด `repo_id`์— ์ถ”๊ฐ€ํ•˜์„ธ์š”: ```py >>> pt_model.push_to_hub("my-awesome-org/my-awesome-model") ``` `push_to_hub` ํ•จ์ˆ˜๋Š” ๋ชจ๋ธ ์ €์žฅ์†Œ์— ๋‹ค๋ฅธ ํŒŒ์ผ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐ์—๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋ชจ๋ธ ์ €์žฅ์†Œ์— ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> tokenizer.push_to_hub("my-awesome-model") ``` ๋˜๋Š” ๋ฏธ์„ธ ์กฐ์ •๋œ PyTorch ๋ชจ๋ธ์˜ TensorFlow ๋ฒ„์ „์„ ์ถ”๊ฐ€ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> tf_model.push_to_hub("my-awesome-model") ``` ์ด์ œ Hugging Face ํ”„๋กœํ•„๋กœ ์ด๋™ํ•˜๋ฉด, ์ƒˆ๋กœ ์ƒ์„ฑํ•œ ๋ชจ๋ธ ์ €์žฅ์†Œ๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. **Files** ํƒญ์„ ํด๋ฆญํ•˜๋ฉด ์ €์žฅ์†Œ์— ์—…๋กœ๋“œํ•œ ๋ชจ๋“  ํŒŒ์ผ์ด ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. ์ €์žฅ์†Œ์— ํŒŒ์ผ์„ ๋งŒ๋“ค๊ณ  ์—…๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ ํ—ˆ๋ธŒ ์„ค๋ช…์„œ [์—ฌ๊ธฐ](https://huggingface.co/docs/hub/how-to-upstream)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ## ์›น ์ธํ„ฐํŽ˜์ด์Šค๋กœ ์—…๋กœ๋“œํ•˜๊ธฐ[[upload-with-the-web-interface]] ์ฝ”๋“œ ์—†๋Š” ์ ‘๊ทผ ๋ฐฉ์‹์„ ์„ ํ˜ธํ•˜๋Š” ์‚ฌ์šฉ์ž๋Š” ํ—ˆ๋ธŒ์˜ ์›น ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ํ†ตํ•ด ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [huggingface.co/new](https://huggingface.co/new)๋ฅผ ๋ฐฉ๋ฌธํ•˜์—ฌ ์ƒˆ๋กœ์šด ์ €์žฅ์†Œ๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ![new_model_repo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/new_model_repo.png) ์—ฌ๊ธฐ์„œ ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋ช‡ ๊ฐ€์ง€ ์ •๋ณด๋ฅผ ์ถ”๊ฐ€ํ•˜์„ธ์š”: - ์ €์žฅ์†Œ์˜ **์†Œ์œ ์ž**๋ฅผ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ์‚ฌ์šฉ์ž ๋˜๋Š” ์‚ฌ์šฉ์ž๊ฐ€ ์†ํ•œ ์กฐ์ง์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ์ €์žฅ์†Œ ์ด๋ฆ„์ด ๋  ๋ชจ๋ธ์˜ ์ด๋ฆ„์„ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. - ๋ชจ๋ธ์ด ๊ณต๊ฐœ์ธ์ง€ ๋น„๊ณต๊ฐœ์ธ์ง€ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. - ๋ชจ๋ธ์˜ ๋ผ์ด์„ผ์Šค ์‚ฌ์šฉ์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. 
์ด์ œ **Files** ํƒญ์„ ํด๋ฆญํ•˜๊ณ  **Add file** ๋ฒ„ํŠผ์„ ํด๋ฆญํ•˜์—ฌ ์ƒˆ๋กœ์šด ํŒŒ์ผ์„ ์ €์žฅ์†Œ์— ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ์—…๋กœ๋“œํ•  ํŒŒ์ผ์„ ๋Œ์–ด๋‹ค ๋†“๊ณ  ์ปค๋ฐ‹ ๋ฉ”์‹œ์ง€๋ฅผ ์ถ”๊ฐ€ํ•˜์„ธ์š”. ![upload_file](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/upload_file.png) ## ๋ชจ๋ธ ์นด๋“œ ์ถ”๊ฐ€ํ•˜๊ธฐ[[add-a-model-card]] ์‚ฌ์šฉ์ž๊ฐ€ ๋ชจ๋ธ์˜ ๊ธฐ๋Šฅ, ์ œํ•œ, ์ž ์žฌ์  ํŽธํ–ฅ ๋ฐ ์œค๋ฆฌ์  ๊ณ ๋ ค ์‚ฌํ•ญ์„ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋„๋ก ์ €์žฅ์†Œ์— ๋ชจ๋ธ ์นด๋“œ๋ฅผ ์ถ”๊ฐ€ํ•˜์„ธ์š”. ๋ชจ๋ธ ์นด๋“œ๋Š” `README.md` ํŒŒ์ผ์— ์ •์˜๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ฐฉ๋ฒ•์œผ๋กœ ๋ชจ๋ธ ์นด๋“œ๋ฅผ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: * `README.md` ํŒŒ์ผ์„ ์ˆ˜๋™์œผ๋กœ ์ƒ์„ฑํ•˜์—ฌ ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. * ๋ชจ๋ธ ์ €์žฅ์†Œ์—์„œ **Edit model card** ๋ฒ„ํŠผ์„ ํด๋ฆญํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ์นด๋“œ์— ํฌํ•จํ•  ์ •๋ณด ์œ ํ˜•์— ๋Œ€ํ•œ ์ข‹์€ ์˜ˆ๋Š” DistilBert [๋ชจ๋ธ ์นด๋“œ](https://huggingface.co/distilbert-base-uncased)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ๋ชจ๋ธ์˜ ํƒ„์†Œ ๋ฐœ์ž๊ตญ์ด๋‚˜ ์œ„์ ฏ ์˜ˆ์‹œ ๋“ฑ `README.md` ํŒŒ์ผ์—์„œ ์ œ์–ดํ•  ์ˆ˜ ์žˆ๋Š” ๋‹ค๋ฅธ ์˜ต์…˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [์—ฌ๊ธฐ](https://huggingface.co/docs/hub/models-cards) ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/create_a_model.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋งž์ถคํ˜• ์•„ํ‚คํ…์ฒ˜ ๋งŒ๋“ค๊ธฐ[[create-a-custom-architecture]] [`AutoClass`](model_doc/auto)๋Š” ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ž๋™์œผ๋กœ ์ถ”๋ก ํ•˜๊ณ  ๋ฏธ๋ฆฌ ํ•™์Šต๋œ configuration๊ณผ ๊ฐ€์ค‘์น˜๋ฅผ ๋‹ค์šด๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์ฒดํฌํฌ์ธํŠธ์— ๊ตฌ์• ๋ฐ›์ง€ ์•Š๋Š” ์ฝ”๋“œ๋ฅผ ์ƒ์„ฑํ•˜๋ ค๋ฉด `AutoClass`๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ํŠน์ • ๋ชจ๋ธ ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ๋ณด๋‹ค ์„ธ๋ฐ€ํ•˜๊ฒŒ ์ œ์–ดํ•˜๊ณ ์ž ํ•˜๋Š” ์‚ฌ์šฉ์ž๋Š” ๋ช‡ ๊ฐ€์ง€ ๊ธฐ๋ณธ ํด๋ž˜์Šค๋งŒ์œผ๋กœ ์ปค์Šคํ…€ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๐Ÿค— Transformers ๋ชจ๋ธ์„ ์—ฐ๊ตฌ, ๊ต์œก ๋˜๋Š” ์‹คํ—˜ํ•˜๋Š” ๋ฐ ๊ด€์‹ฌ์ด ์žˆ๋Š” ๋ชจ๋“  ์‚ฌ์šฉ์ž์—๊ฒŒ ํŠนํžˆ ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” 'AutoClass'๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๊ณ  ์ปค์Šคํ…€ ๋ชจ๋ธ์„ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: - ๋ชจ๋ธ configuration์„ ๊ฐ€์ ธ์˜ค๊ณ  ์‚ฌ์šฉ์ž ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. - ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. - ํ…์ŠคํŠธ์— ์‚ฌ์šฉํ•  ๋А๋ฆฌ๊ฑฐ๋‚˜ ๋น ๋ฅธ ํ† ํฐํ™”๊ธฐ๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. - ๋น„์ „ ์ž‘์—…์„ ์œ„ํ•œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. - ์˜ค๋””์˜ค ์ž‘์—…์„ ์œ„ํ•œ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. - ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž‘์—…์šฉ ํ”„๋กœ์„ธ์„œ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ## Configuration[[configuration]] [configuration](main_classes/configuration)์€ ๋ชจ๋ธ์˜ ํŠน์ • ์†์„ฑ์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ๊ฐ ๋ชจ๋ธ ๊ตฌ์„ฑ์—๋Š” ์„œ๋กœ ๋‹ค๋ฅธ ์†์„ฑ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋ชจ๋“  NLP ๋ชจ๋ธ์—๋Š” `hidden_size`, `num_attention_heads`, `num_hidden_layers` ๋ฐ `vocab_size` ์†์„ฑ์ด ๊ณตํ†ต์œผ๋กœ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์†์„ฑ์€ ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•  attention heads ๋˜๋Š” hidden layers์˜ ์ˆ˜๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. [DistilBERT](model_doc/distilbert) ์†์„ฑ์„ ๊ฒ€์‚ฌํ•˜๊ธฐ ์œ„ํ•ด [`DistilBertConfig`]์— ์ ‘๊ทผํ•˜์—ฌ ์ž์„ธํžˆ ์‚ดํŽด๋ด…๋‹ˆ๋‹ค: ```py >>> from transformers import DistilBertConfig >>> config = DistilBertConfig() >>> print(config) DistilBertConfig { "activation": "gelu", "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "transformers_version": "4.16.2", "vocab_size": 30522 } ``` [`DistilBertConfig`]๋Š” ๊ธฐ๋ณธ [`DistilBertModel`]์„ ๋นŒ๋“œํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋˜๋Š” ๋ชจ๋“  ๊ธฐ๋ณธ ์†์„ฑ์„ ํ‘œ์‹œํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ์†์„ฑ์€ ์ปค์Šคํ„ฐ๋งˆ์ด์ง•์ด ๊ฐ€๋Šฅํ•˜๋ฏ€๋กœ ์‹คํ—˜์„ ์œ„ํ•œ ๊ณต๊ฐ„์„ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๊ธฐ๋ณธ ๋ชจ๋ธ์„ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ปค์Šคํ„ฐ๋งˆ์ด์ฆˆํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - `activation` ํŒŒ๋ผ๋ฏธํ„ฐ๋กœ ๋‹ค๋ฅธ ํ™œ์„ฑํ™” ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด์„ธ์š”. - `attention_dropout` ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์–ดํ…์…˜ ํ™•๋ฅ ์— ๋” ๋†’์€ ๋“œ๋กญ์•„์›ƒ ๋น„์œจ์„ ์‚ฌ์šฉํ•˜์„ธ์š”. 
```py >>> my_config = DistilBertConfig(activation="relu", attention_dropout=0.4) >>> print(my_config) DistilBertConfig { "activation": "relu", "attention_dropout": 0.4, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "transformers_version": "4.16.2", "vocab_size": 30522 } ``` ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ ์†์„ฑ์€ [`~PretrainedConfig.from_pretrained`] ํ•จ์ˆ˜์—์„œ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> my_config = DistilBertConfig.from_pretrained("distilbert-base-uncased", activation="relu", attention_dropout=0.4) ``` ๋ชจ๋ธ ๊ตฌ์„ฑ์ด ๋งŒ์กฑ์Šค๋Ÿฌ์šฐ๋ฉด [`~PretrainedConfig.save_pretrained`]๋กœ ์ €์žฅํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์„ค์ • ํŒŒ์ผ์€ ์ง€์ •๋œ ์ž‘์—… ๊ฒฝ๋กœ์— JSON ํŒŒ์ผ๋กœ ์ €์žฅ๋ฉ๋‹ˆ๋‹ค: ```py >>> my_config.save_pretrained(save_directory="./your_model_save_path") ``` configuration ํŒŒ์ผ์„ ์žฌ์‚ฌ์šฉํ•˜๋ ค๋ฉด [`~PretrainedConfig.from_pretrained`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json") ``` <Tip> configuration ํŒŒ์ผ์„ ๋”•์…”๋„ˆ๋ฆฌ๋กœ ์ €์žฅํ•˜๊ฑฐ๋‚˜ ์‚ฌ์šฉ์ž ์ •์˜ configuration ์†์„ฑ๊ณผ ๊ธฐ๋ณธ configuration ์†์„ฑ์˜ ์ฐจ์ด์ ๋งŒ ์ €์žฅํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค! ์ž์„ธํ•œ ๋‚ด์šฉ์€ [configuration](main_classes/configuration) ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ๋ชจ๋ธ[[model]] ๋‹ค์Œ ๋‹จ๊ณ„๋Š” [๋ชจ๋ธ(model)](main_classes/models)์„ ๋งŒ๋“œ๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋А์Šจํ•˜๊ฒŒ ์•„ํ‚คํ…์ฒ˜๋ผ๊ณ ๋„ ๋ถˆ๋ฆฌ๋Š” ๋ชจ๋ธ์€ ๊ฐ ๊ณ„์ธต์ด ์ˆ˜ํ–‰ํ•˜๋Š” ๋™์ž‘๊ณผ ๋ฐœ์ƒํ•˜๋Š” ์ž‘์—…์„ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. configuration์˜ `num_hidden_layers`์™€ ๊ฐ™์€ ์†์„ฑ์€ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ •์˜ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ๋ชจ๋ธ์€ ๊ธฐ๋ณธ ํด๋ž˜์Šค [`PreTrainedModel`]๊ณผ ์ž…๋ ฅ ์ž„๋ฒ ๋”ฉ ํฌ๊ธฐ ์กฐ์ • ๋ฐ ์…€ํ”„ ์–ดํ…์…˜ ํ—ค๋“œ ๊ฐ€์ง€ ์น˜๊ธฐ์™€ ๊ฐ™์€ ๋ช‡ ๊ฐ€์ง€ ์ผ๋ฐ˜์ ์ธ ๋ฉ”์†Œ๋“œ๋ฅผ ๊ณต์œ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ ๋ชจ๋“  ๋ชจ๋ธ์€ [`torch.nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html), [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) ๋˜๋Š” [`flax.linen.Module`](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html)์˜ ์„œ๋ธŒํด๋ž˜์Šค์ด๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, ๋ชจ๋ธ์€ ๊ฐ ํ”„๋ ˆ์ž„์›Œํฌ์˜ ์‚ฌ์šฉ๋ฒ•๊ณผ ํ˜ธํ™˜๋ฉ๋‹ˆ๋‹ค. <frameworkcontent> <pt> ์‚ฌ์šฉ์ž ์ง€์ • configuration ์†์„ฑ์„ ๋ชจ๋ธ์— ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import DistilBertModel >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json") >>> model = DistilBertModel(my_config) ``` ์ด์ œ ์‚ฌ์ „ ํ•™์Šต๋œ ๊ฐ€์ค‘์น˜ ๋Œ€์‹  ์ž„์˜์˜ ๊ฐ’์„ ๊ฐ€์ง„ ๋ชจ๋ธ์ด ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์ „๊นŒ์ง€๋Š” ์œ ์šฉํ•˜๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ์€ ๋น„์šฉ๊ณผ ์‹œ๊ฐ„์ด ๋งŽ์ด ์†Œ์š”๋˜๋Š” ํ”„๋กœ์„ธ์Šค์ž…๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ํ›ˆ๋ จ์— ํ•„์š”ํ•œ ๋ฆฌ์†Œ์Šค์˜ ์ผ๋ถ€๋งŒ ์‚ฌ์šฉํ•˜๋ฉด์„œ ๋” ๋‚˜์€ ๊ฒฐ๊ณผ๋ฅผ ๋” ๋นจ๋ฆฌ ์–ป์œผ๋ ค๋ฉด ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ [`~PreTrainedModel.from_pretrained`]๋กœ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> model = DistilBertModel.from_pretrained("distilbert-base-uncased") ``` ๐Ÿค— Transformers์—์„œ ์ œ๊ณตํ•œ ๋ชจ๋ธ์˜ ์‚ฌ์ „ ํ•™์Šต๋œ ๊ฐ€์ค‘์น˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ ๋ชจ๋ธ configuration์„ ์ž๋™์œผ๋กœ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฌ๋‚˜ ์›ํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ ๋ชจ๋ธ configuration ์†์„ฑ์˜ ์ผ๋ถ€ ๋˜๋Š” ์ „๋ถ€๋ฅผ ์‚ฌ์šฉ์ž ์ง€์ •์œผ๋กœ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> model = DistilBertModel.from_pretrained("distilbert-base-uncased", config=my_config) ``` </pt> <tf> ์‚ฌ์šฉ์ž ์ง€์ • configuration ์†์„ฑ์„ ๋ชจ๋ธ์— ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import TFDistilBertModel >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json") >>> tf_model = TFDistilBertModel(my_config) ``` ์ด์ œ ์‚ฌ์ „ ํ•™์Šต๋œ ๊ฐ€์ค‘์น˜ ๋Œ€์‹  ์ž„์˜์˜ ๊ฐ’์„ ๊ฐ€์ง„ ๋ชจ๋ธ์ด ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์ „๊นŒ์ง€๋Š” ์œ ์šฉํ•˜๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ์€ ๋น„์šฉ๊ณผ ์‹œ๊ฐ„์ด ๋งŽ์ด ์†Œ์š”๋˜๋Š” ํ”„๋กœ์„ธ์Šค์ž…๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ํ›ˆ๋ จ์— ํ•„์š”ํ•œ ๋ฆฌ์†Œ์Šค์˜ ์ผ๋ถ€๋งŒ ์‚ฌ์šฉํ•˜๋ฉด์„œ ๋” ๋‚˜์€ ๊ฒฐ๊ณผ๋ฅผ ๋” ๋นจ๋ฆฌ ์–ป์œผ๋ ค๋ฉด ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ [`~TFPreTrainedModel.from_pretrained`]๋กœ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> tf_model = TFDistilBertModel.from_pretrained("distilbert-base-uncased") ``` ๐Ÿค— Transformers์—์„œ ์ œ๊ณตํ•œ ๋ชจ๋ธ์˜ ์‚ฌ์ „ ํ•™์Šต๋œ ๊ฐ€์ค‘์น˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ ๋ชจ๋ธ configuration์„ ์ž๋™์œผ๋กœ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์›ํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ ๋ชจ๋ธ configuration ์†์„ฑ์˜ ์ผ๋ถ€ ๋˜๋Š” ์ „๋ถ€๋ฅผ ์‚ฌ์šฉ์ž ์ง€์ •์œผ๋กœ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> tf_model = TFDistilBertModel.from_pretrained("distilbert-base-uncased", config=my_config) ``` </tf> </frameworkcontent> ### ๋ชจ๋ธ ํ—ค๋“œ[[model-heads]] ์ด ์‹œ์ ์—์„œ *์€๋‹‰ ์ƒํƒœ(hidden state)*๋ฅผ ์ถœ๋ ฅํ•˜๋Š” ๊ธฐ๋ณธ DistilBERT ๋ชจ๋ธ์„ ๊ฐ–๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ์€๋‹‰ ์ƒํƒœ๋Š” ์ตœ์ข… ์ถœ๋ ฅ์„ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋ธ ํ—ค๋“œ์— ์ž…๋ ฅ์œผ๋กœ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” ๋ชจ๋ธ์ด ํ•ด๋‹น ์ž‘์—…์„ ์ง€์›ํ•˜๋Š” ํ•œ ๊ฐ ์ž‘์—…๋งˆ๋‹ค ๋‹ค๋ฅธ ๋ชจ๋ธ ํ—ค๋“œ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค(์ฆ‰, ๋ฒˆ์—ญ๊ณผ ๊ฐ™์€ ์‹œํ€€์Šค ๊ฐ„ ์ž‘์—…์—๋Š” DistilBERT๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์Œ). <frameworkcontent> <pt> ์˜ˆ๋ฅผ ๋“ค์–ด, [`DistilBertForSequenceClassification`]์€ ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๊ฐ€ ์žˆ๋Š” ๊ธฐ๋ณธ DistilBERT ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๋Š” ํ’€๋ง๋œ ์ถœ๋ ฅ ์œ„์— ์žˆ๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด์ž…๋‹ˆ๋‹ค. ```py >>> from transformers import DistilBertForSequenceClassification >>> model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` ๋‹ค๋ฅธ ๋ชจ๋ธ ํ—ค๋“œ๋กœ ์ „ํ™˜ํ•˜์—ฌ ์ด ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‹ค๋ฅธ ์ž‘์—…์— ์‰ฝ๊ฒŒ ์žฌ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์งˆ์˜์‘๋‹ต ์ž‘์—…์˜ ๊ฒฝ์šฐ, [`DistilBertForQuestionAnswering`] ๋ชจ๋ธ ํ—ค๋“œ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์งˆ์˜์‘๋‹ต ํ—ค๋“œ๋Š” ์ˆจ๊ฒจ์ง„ ์ƒํƒœ ์ถœ๋ ฅ ์œ„์— ์„ ํ˜• ๋ ˆ์ด์–ด๊ฐ€ ์žˆ๋‹ค๋Š” ์ ์„ ์ œ์™ธํ•˜๋ฉด ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ์™€ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import DistilBertForQuestionAnswering >>> model = DistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased") ``` </pt> <tf> ์˜ˆ๋ฅผ ๋“ค์–ด, [`TFDistilBertForSequenceClassification`]์€ ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๊ฐ€ ์žˆ๋Š” ๊ธฐ๋ณธ DistilBERT ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๋Š” ํ’€๋ง๋œ ์ถœ๋ ฅ ์œ„์— ์žˆ๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด์ž…๋‹ˆ๋‹ค. ```py >>> from transformers import TFDistilBertForSequenceClassification >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` ๋‹ค๋ฅธ ๋ชจ๋ธ ํ—ค๋“œ๋กœ ์ „ํ™˜ํ•˜์—ฌ ์ด ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‹ค๋ฅธ ์ž‘์—…์— ์‰ฝ๊ฒŒ ์žฌ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์งˆ์˜์‘๋‹ต ์ž‘์—…์˜ ๊ฒฝ์šฐ, [`TFDistilBertForQuestionAnswering`] ๋ชจ๋ธ ํ—ค๋“œ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์งˆ์˜์‘๋‹ต ํ—ค๋“œ๋Š” ์ˆจ๊ฒจ์ง„ ์ƒํƒœ ์ถœ๋ ฅ ์œ„์— ์„ ํ˜• ๋ ˆ์ด์–ด๊ฐ€ ์žˆ๋‹ค๋Š” ์ ์„ ์ œ์™ธํ•˜๋ฉด ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ์™€ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import TFDistilBertForQuestionAnswering >>> tf_model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased") ``` </tf> </frameworkcontent> ## ํ† ํฌ๋‚˜์ด์ €[[tokenizer]] ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ์— ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ๋งˆ์ง€๋ง‰์œผ๋กœ ํ•„์š”ํ•œ ๊ธฐ๋ณธ ํด๋ž˜์Šค๋Š” ์›์‹œ ํ…์ŠคํŠธ๋ฅผ ํ…์„œ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” [ํ† ํฌ๋‚˜์ด์ €](main_classes/tokenizer)์ž…๋‹ˆ๋‹ค. ๐Ÿค— Transformers์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ํ† ํฌ๋‚˜์ด์ €๋Š” ๋‘ ๊ฐ€์ง€ ์œ ํ˜•์ด ์žˆ์Šต๋‹ˆ๋‹ค: - [`PreTrainedTokenizer`]: ํŒŒ์ด์ฌ์œผ๋กœ ๊ตฌํ˜„๋œ ํ† ํฌ๋‚˜์ด์ €์ž…๋‹ˆ๋‹ค. - [`PreTrainedTokenizerFast`]: Rust ๊ธฐ๋ฐ˜ [๐Ÿค— Tokenizer](https://huggingface.co/docs/tokenizers/python/latest/) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋กœ ๋งŒ๋“ค์–ด์ง„ ํ† ํฌ๋‚˜์ด์ €์ž…๋‹ˆ๋‹ค. ์ด ํ† ํฌ๋‚˜์ด์ €๋Š” Rust๋กœ ๊ตฌํ˜„๋˜์–ด ๋ฐฐ์น˜ ํ† ํฐํ™”์—์„œ ํŠนํžˆ ๋น ๋ฆ…๋‹ˆ๋‹ค. ๋น ๋ฅธ ํ† ํฌ๋‚˜์ด์ €๋Š” ํ† ํฐ์„ ์›๋ž˜ ๋‹จ์–ด๋‚˜ ๋ฌธ์ž์— ๋งคํ•‘ํ•˜๋Š” *์˜คํ”„์…‹ ๋งคํ•‘*๊ณผ ๊ฐ™์€ ์ถ”๊ฐ€ ๋ฉ”์†Œ๋“œ๋„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ๋‘ ํ† ํฌ๋‚˜์ด์ € ๋ชจ๋‘ ์ธ์ฝ”๋”ฉ ๋ฐ ๋””์ฝ”๋”ฉ, ์ƒˆ ํ† ํฐ ์ถ”๊ฐ€, ํŠน์ˆ˜ ํ† ํฐ ๊ด€๋ฆฌ์™€ ๊ฐ™์€ ์ผ๋ฐ˜์ ์ธ ๋ฐฉ๋ฒ•์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. <Tip warning={true}> ๋ชจ๋“  ๋ชจ๋ธ์ด ๋น ๋ฅธ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ง€์›ํ•˜๋Š” ๊ฒƒ์€ ์•„๋‹™๋‹ˆ๋‹ค. ์ด [ํ‘œ](index#supported-frameworks)์—์„œ ๋ชจ๋ธ์˜ ๋น ๋ฅธ ํ† ํฌ๋‚˜์ด์ € ์ง€์› ์—ฌ๋ถ€๋ฅผ ํ™•์ธํ•˜์„ธ์š”. </Tip> ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ง์ ‘ ํ•™์Šตํ•œ ๊ฒฝ์šฐ, *์–ดํœ˜(vocabulary)* ํŒŒ์ผ์—์„œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import DistilBertTokenizer >>> my_tokenizer = DistilBertTokenizer(vocab_file="my_vocab_file.txt", do_lower_case=False, padding_side="left") ``` ์‚ฌ์šฉ์ž ์ง€์ • ํ† ํฌ๋‚˜์ด์ €์˜ ์–ดํœ˜๋Š” ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์˜ ํ† ํฌ๋‚˜์ด์ €์—์„œ ์ƒ์„ฑ๋œ ์–ดํœ˜์™€ ๋‹ค๋ฅผ ์ˆ˜ ์žˆ๋‹ค๋Š” ์ ์„ ๊ธฐ์–ตํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์˜ ์–ดํœ˜๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋ฉฐ, ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด ์ž…๋ ฅ์ด ์˜๋ฏธ๋ฅผ ๊ฐ–์ง€ ๋ชปํ•ฉ๋‹ˆ๋‹ค. [`DistilBertTokenizer`] ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์˜ ์–ดํœ˜๋กœ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import DistilBertTokenizer >>> slow_tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased") ``` [`DistilBertTokenizerFast`] ํด๋ž˜์Šค๋กœ ๋น ๋ฅธ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import DistilBertTokenizerFast >>> fast_tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased") ``` <Tip> [`AutoTokenizer`]๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ๋น ๋ฅธ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์˜ค๋ ค๊ณ  ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋™์ž‘์„ ๋น„ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด `from_pretrained`์—์„œ `use_fast=False`๋ฅผ ์„ค์ •ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. </Tip> ## ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ[[image-processor]] ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ(image processor)๋Š” ๋น„์ „ ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ๋ณธ [`~image_processing_utils.ImageProcessingMixin`] ํด๋ž˜์Šค์—์„œ ์ƒ์†ํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์‚ฌ์šฉ ์ค‘์ธ ๋ชจ๋ธ๊ณผ ์—ฐ๊ฒฐ๋œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด, ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์— [ViT](model_doc/vit)๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ [`ViTImageProcessor`]๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import ViTImageProcessor >>> vit_extractor = ViTImageProcessor() >>> print(vit_extractor) ViTImageProcessor { "do_normalize": true, "do_resize": true, "feature_extractor_type": "ViTImageProcessor", "image_mean": [ 0.5, 0.5, 0.5 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": 2, "size": 224 } ``` <Tip> ์‚ฌ์šฉ์ž ์ง€์ •์„ ์›ํ•˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ `from_pretrained` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์˜ ๊ธฐ๋ณธ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๋ฉด ๋ฉ๋‹ˆ๋‹ค. </Tip> ์‚ฌ์šฉ์ž ์ง€์ • ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์ƒ์„ฑํ•˜๋ ค๋ฉด [`ViTImageProcessor`] ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ˆ˜์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import ViTImageProcessor >>> my_vit_extractor = ViTImageProcessor(resample="PIL.Image.BOX", do_normalize=False, image_mean=[0.3, 0.3, 0.3]) >>> print(my_vit_extractor) ViTImageProcessor { "do_normalize": false, "do_resize": true, "feature_extractor_type": "ViTImageProcessor", "image_mean": [ 0.3, 0.3, 0.3 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": "PIL.Image.BOX", "size": 224 } ``` ## ํŠน์„ฑ ์ถ”์ถœ๊ธฐ[[feature-extractor]] ํŠน์„ฑ ์ถ”์ถœ๊ธฐ(feature extractor)๋Š” ์˜ค๋””์˜ค ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ๋ณธ [`~feature_extraction_utils.FeatureExtractionMixin`] ํด๋ž˜์Šค์—์„œ ์ƒ์†๋˜๋ฉฐ, ์˜ค๋””์˜ค ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด [`SequenceFeatureExtractor`] ํด๋ž˜์Šค์—์„œ ์ƒ์†ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์‚ฌ์šฉ ์ค‘์ธ ๋ชจ๋ธ๊ณผ ์—ฐ๊ฒฐ๋œ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์˜ค๋””์˜ค ๋ถ„๋ฅ˜์— [Wav2Vec2](model_doc/wav2vec2)๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ [`Wav2Vec2FeatureExtractor`]๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import Wav2Vec2FeatureExtractor >>> w2v2_extractor = Wav2Vec2FeatureExtractor() >>> print(w2v2_extractor) Wav2Vec2FeatureExtractor { "do_normalize": true, "feature_extractor_type": "Wav2Vec2FeatureExtractor", "feature_size": 1, "padding_side": "right", "padding_value": 0.0, "return_attention_mask": false, "sampling_rate": 16000 } ``` <Tip> ์‚ฌ์šฉ์ž ์ง€์ •์ด ํ•„์š”ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ `from_pretrained` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์˜ ๊ธฐ๋ณธ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ ใ…๊ฐœ๋ณ€์ˆ˜๋ฅผ ๋ถˆ๋Ÿฌ ์˜ค๋ฉด ๋ฉ๋‹ˆ๋‹ค. </Tip> ์‚ฌ์šฉ์ž ์ง€์ • ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ๋งŒ๋“ค๋ ค๋ฉด [`Wav2Vec2FeatureExtractor`] ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ˆ˜์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import Wav2Vec2FeatureExtractor >>> w2v2_extractor = Wav2Vec2FeatureExtractor(sampling_rate=8000, do_normalize=False) >>> print(w2v2_extractor) Wav2Vec2FeatureExtractor { "do_normalize": false, "feature_extractor_type": "Wav2Vec2FeatureExtractor", "feature_size": 1, "padding_side": "right", "padding_value": 0.0, "return_attention_mask": false, "sampling_rate": 8000 } ``` ## ํ”„๋กœ์„ธ์„œ[[processor]] ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž‘์—…์„ ์ง€์›ํ•˜๋Š” ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ, ๐Ÿค— Transformers๋Š” ํŠน์„ฑ ์ถ”์ถœ๊ธฐ ๋ฐ ํ† ํฌ๋‚˜์ด์ €์™€ ๊ฐ™์€ ์ฒ˜๋ฆฌ ํด๋ž˜์Šค๋ฅผ ๋‹จ์ผ ๊ฐ์ฒด๋กœ ํŽธ๋ฆฌํ•˜๊ฒŒ ๋ž˜ํ•‘ํ•˜๋Š” ํ”„๋กœ์„ธ์„œ ํด๋ž˜์Šค๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์ž๋™ ์Œ์„ฑ ์ธ์‹ ์ž‘์—…(Automatic Speech Recognition task (ASR))์— [`Wav2Vec2Processor`]๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์ž๋™ ์Œ์„ฑ ์ธ์‹ ์ž‘์—…์€ ์˜ค๋””์˜ค๋ฅผ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•˜๋ฏ€๋กœ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ์™€ ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. 
์˜ค๋””์˜ค ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•  ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> from transformers import Wav2Vec2FeatureExtractor >>> feature_extractor = Wav2Vec2FeatureExtractor(padding_value=1.0, do_normalize=True) ``` ํ…์ŠคํŠธ ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•  ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> from transformers import Wav2Vec2CTCTokenizer >>> tokenizer = Wav2Vec2CTCTokenizer(vocab_file="my_vocab_file.txt") ``` [`Wav2Vec2Processor`]์—์„œ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ์™€ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import Wav2Vec2Processor >>> processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer) ``` configuration๊ณผ ๋ชจ๋ธ์ด๋ผ๋Š” ๋‘ ๊ฐ€์ง€ ๊ธฐ๋ณธ ํด๋ž˜์Šค์™€ ์ถ”๊ฐ€ ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค(ํ† ํฌ๋‚˜์ด์ €, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ, ํŠน์„ฑ ์ถ”์ถœ๊ธฐ ๋˜๋Š” ํ”„๋กœ์„ธ์„œ)๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๐Ÿค— Transformers์—์„œ ์ง€์›ํ•˜๋Š” ๋ชจ๋“  ๋ชจ๋ธ์„ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ฐ ๊ธฐ๋ณธ ํด๋ž˜์Šค๋Š” ๊ตฌ์„ฑ์ด ๊ฐ€๋Šฅํ•˜๋ฏ€๋กœ ์›ํ•˜๋Š” ํŠน์ • ์†์„ฑ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•™์Šต์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ์‰ฝ๊ฒŒ ์„ค์ •ํ•˜๊ฑฐ๋‚˜ ๊ธฐ์กด์˜ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์ˆ˜์ •ํ•˜์—ฌ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/tf_xla.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # TensorFlow ๋ชจ๋ธ์„ ์œ„ํ•œ XLA ํ†ตํ•ฉ [[xla-integration-for-tensorflow-models]] [[open-in-colab]] XLA(Accelerated Linear Algebra)๋Š” TensorFlow ๋ชจ๋ธ์˜ ์‹คํ–‰ ์‹œ๊ฐ„์„ ๊ฐ€์†ํ™”ํ•˜๊ธฐ ์œ„ํ•œ ์ปดํŒŒ์ผ๋Ÿฌ์ž…๋‹ˆ๋‹ค. [๊ณต์‹ ๋ฌธ์„œ](https://www.tensorflow.org/xla)์— ๋”ฐ๋ฅด๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: XLA(Accelerated Linear Algebra)๋Š” ์„ ํ˜• ๋Œ€์ˆ˜๋ฅผ ์œ„ํ•œ ๋„๋ฉ”์ธ ํŠนํ™” ์ปดํŒŒ์ผ๋Ÿฌ๋กœ, TensorFlow ๋ชจ๋ธ์„ ์†Œ์Šค ์ฝ”๋“œ ๋ณ€๊ฒฝ ์—†์ด ๊ฐ€์†ํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. TensorFlow์—์„œ XLA๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์€ ๊ฐ„๋‹จํ•ฉ๋‹ˆ๋‹ค. XLA๋Š” `tensorflow` ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ๋‚ด์— ํŒจํ‚ค์ง€๋กœ ์ œ๊ณต๋˜๋ฉฐ, [`tf.function`](https://www.tensorflow.org/guide/intro_to_graphs)๊ณผ ๊ฐ™์€ ๊ทธ๋ž˜ํ”„ ์ƒ์„ฑ ํ•จ์ˆ˜์—์„œ `jit_compile` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ™œ์„ฑํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `fit()` ๋ฐ `predict()`์™€ ๊ฐ™์€ Keras ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, `jit_compile` ์ธ์ˆ˜๋ฅผ `model.compile()`์— ์ „๋‹ฌํ•˜์—ฌ XLA๋ฅผ ๊ฐ„๋‹จํ•˜๊ฒŒ ํ™œ์„ฑํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ XLA๋Š” ์ด๋Ÿฌํ•œ ๋ฉ”์†Œ๋“œ์— ๊ตญํ•œ๋˜์ง€ ์•Š๊ณ  ์ž„์˜์˜ `tf.function`์„ ๊ฐ€์†ํ™”ํ•˜๋Š” ๋ฐ์—๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers์—์„œ๋Š” [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2), [T5](https://huggingface.co/docs/transformers/model_doc/t5), [OPT](https://huggingface.co/docs/transformers/model_doc/opt)์™€ ๊ฐ™์€ ๋ชจ๋ธ์˜ ํ…์ŠคํŠธ ์ƒ์„ฑ, ๊ทธ๋ฆฌ๊ณ  [Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)์™€ ๊ฐ™์€ ๋ชจ๋ธ์˜ ์Œ์„ฑ ์ฒ˜๋ฆฌ๋ฅผ ํฌํ•จํ•˜์—ฌ ์—ฌ๋Ÿฌ TensorFlow ๋ฉ”์†Œ๋“œ๊ฐ€ XLA์™€ ํ˜ธํ™˜๋˜๋„๋ก ๋‹ค์‹œ ์ž‘์„ฑ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ •ํ™•ํ•œ ์†๋„ ํ–ฅ์ƒ์€ ๋ชจ๋ธ์— ๋”ฐ๋ผ ๋‹ค๋ฅด์ง€๋งŒ, ๐Ÿค— Transformers ๋‚ด์˜ TensorFlow ํ…์ŠคํŠธ ์ƒ์„ฑ ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ ์ตœ๋Œ€ 100๋ฐฐ์˜ ์†๋„ ํ–ฅ์ƒ์„ ํ™•์ธํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ฌธ์„œ์—์„œ๋Š” ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์— ๋Œ€ํ•ด XLA๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ตœ๋Œ€ ์„ฑ๋Šฅ์„ ์–ป๋Š” ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ XLA ํ†ตํ•ฉ์˜ ๋ฒค์น˜๋งˆํฌ ๋ฐ ๋””์ž์ธ ์ฒ ํ•™์— ๋Œ€ํ•œ ์ถ”๊ฐ€ ์ž๋ฃŒ ๋งํฌ๋„ ์ œ๊ณตํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## XLA๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ TF ํ•จ์ˆ˜ ์‹คํ–‰ํ•˜๊ธฐ [[running-tf-functions-with-xla]] TensorFlow์—์„œ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ชจ๋ธ์„ ๊ณ ๋ คํ•ด ๋ด…์‹œ๋‹ค: ```py import tensorflow as tf model = tf.keras.Sequential( [tf.keras.layers.Dense(10, input_shape=(10,), activation="relu"), tf.keras.layers.Dense(5, activation="softmax")] ) ``` ์œ„ ๋ชจ๋ธ์€ ์ฐจ์›์ด `(10, )`์ธ ์ž…๋ ฅ์„ ๋ฐ›์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ˆœ์ „ํŒŒ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py # ๋ชจ๋ธ์— ๋Œ€ํ•œ ์ž„์˜์˜ ์ž…๋ ฅ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. batch_size = 16 input_vector_dim = 10 random_inputs = tf.random.normal((batch_size, input_vector_dim)) # ์ˆœ์ „ํŒŒ๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. 
_ = model(random_inputs) ``` XLA๋กœ ์ปดํŒŒ์ผ๋œ ํ•จ์ˆ˜๋กœ ์ˆœ์ „ํŒŒ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py xla_fn = tf.function(model, jit_compile=True) _ = xla_fn(random_inputs) ``` `model`์˜ ๊ธฐ๋ณธ `call()` ํ•จ์ˆ˜๋Š” XLA ๊ทธ๋ž˜ํ”„๋ฅผ ์ปดํŒŒ์ผํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋‹ค๋ฅธ ๋ชจ๋ธ ํ•จ์ˆ˜๋ฅผ XLA๋กœ ์ปดํŒŒ์ผํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```py my_xla_fn = tf.function(model.my_xla_fn, jit_compile=True) ``` ## ๐Ÿค— Transformers์—์„œ XLA๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ TF ํ…์ŠคํŠธ ์ƒ์„ฑ ๋ชจ๋ธ ์‹คํ–‰ํ•˜๊ธฐ [[running-a-tf-text-generation-model-with-xla-from-transformers]] ๐Ÿค— Transformers์—์„œ XLA๋กœ ๊ฐ€์†ํ™”๋œ ์ƒ์„ฑ์„ ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด ์ตœ์‹  ๋ฒ„์ „์˜ `transformers`๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์„ค์น˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pip install transformers --upgrade ``` ๊ทธ๋ฆฌ๊ณ  ๋‹ค์Œ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForCausalLM # ์ตœ์†Œ ๋ฒ„์ „์˜ Transformers๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ์ง€ ์•Š๋‹ค๋ฉด ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. from transformers.utils import check_min_version check_min_version("4.21.0") tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>") model = TFAutoModelForCausalLM.from_pretrained("gpt2") input_string = ["TensorFlow is"] # XLA ์ƒ์„ฑ ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค๊ธฐ ์œ„ํ•œ ํ•œ ์ค„ xla_generate = tf.function(model.generate, jit_compile=True) tokenized_input = tokenizer(input_string, return_tensors="tf") generated_tokens = xla_generate(**tokenized_input, num_beams=2) decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True) print(f"Generated -- {decoded_text}") # Generated -- TensorFlow is an open-source, open-source, distributed-source application # framework for the ``` ์•Œ ์ˆ˜ ์žˆ๋“ฏ์ด, `generate()`์—์„œ XLA๋ฅผ ํ™œ์„ฑํ™”ํ•˜๋Š” ๊ฒƒ์€ ๋‹จ ํ•œ ์ค„์˜ ์ฝ”๋“œ์ž…๋‹ˆ๋‹ค. ์ฝ”๋“œ์˜ ๋‚˜๋จธ์ง€ ๋ถ€๋ถ„์€ ๋ณ€๊ฒฝ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์œ„ ์ฝ”๋“œ ์Šค๋‹ˆํŽซ์—์„œ๋Š” XLA์— ํŠน์ •ํ•œ ๋ช‡ ๊ฐ€์ง€ ์ฃผ์˜ํ•  ์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค. XLA๊ฐ€ ๊ฐ€์ ธ๋‹ค์ค„ ์†๋„ ํ–ฅ์ƒ์„ ์‹คํ˜„ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ์ด๋ฅผ ์•Œ๊ณ  ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์„น์…˜์—์„œ ์ด์— ๋Œ€ํ•ด ๋…ผ์˜ํ•ฉ๋‹ˆ๋‹ค. ## ์ฃผ์˜ํ•  ์  [[gotchas-to-be-aware-of]] XLA ํ™œ์„ฑํ™” ํ•จ์ˆ˜(`xla_generate()`์™€ ๊ฐ™์€)๋ฅผ ์ฒ˜์Œ ์‹คํ–‰ํ•  ๋•Œ ๋‚ด๋ถ€์ ์œผ๋กœ ๊ณ„์‚ฐ ๊ทธ๋ž˜ํ”„๋ฅผ ์ถ”๋ก ํ•˜๋ ค๊ณ  ํ•˜๋ฉฐ, ์ด๋Š” ์‹œ๊ฐ„์ด ์†Œ์š”๋ฉ๋‹ˆ๋‹ค. ์ด ๊ณผ์ •์€ [โ€œ์ถ”์ (tracing)โ€](https://www.tensorflow.org/guide/intro_to_graphs#when_is_a_function_tracing)์ด๋ผ๊ณ  ์•Œ๋ ค์ ธ ์žˆ์Šต๋‹ˆ๋‹ค. ์ƒ์„ฑ ์‹œ๊ฐ„์ด ๋น ๋ฅด์ง€ ์•Š๋‹ค๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. `xla_generate()`(๋˜๋Š” ๋‹ค๋ฅธ XLA ํ™œ์„ฑํ™” ํ•จ์ˆ˜)์˜ ์—ฐ์† ํ˜ธ์ถœ์€ ํ•จ์ˆ˜์— ์ „๋‹ฌ๋œ ์ž…๋ ฅ์ด ์ดˆ๊ธฐ์— ๊ตฌ์ถ•๋œ ๊ณ„์‚ฐ ๊ทธ๋ž˜ํ”„์™€ ๋™์ผํ•œ ํ˜•ํƒœ๋ฅผ ๋”ฐ๋ฅธ๋‹ค๋ฉด, ๊ณ„์‚ฐ ๊ทธ๋ž˜ํ”„๋ฅผ ์ถ”๋ก ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ์ด๋Š” ์ž…๋ ฅ ํ˜•ํƒœ๊ฐ€ ๊ณ ์ •๋œ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ(์˜ˆ: ์ด๋ฏธ์ง€)์—๋Š” ๋ฌธ์ œ๊ฐ€ ๋˜์ง€ ์•Š์ง€๋งŒ, ๊ฐ€๋ณ€ ์ž…๋ ฅ ํ˜•ํƒœ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ(์˜ˆ: ํ…์ŠคํŠธ)๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ ์ฃผ์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. `xla_generate()`๊ฐ€ ํ•ญ์ƒ ๋™์ผํ•œ ์ž…๋ ฅ ํ˜•ํƒœ๋กœ ๋™์ž‘ํ•˜๋„๋ก ํ•˜๋ ค๋ฉด, ํ† ํฌ๋‚˜์ด์ €๋ฅผ ํ˜ธ์ถœํ•  ๋•Œ `padding` ์ธ์ˆ˜๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>") model = TFAutoModelForCausalLM.from_pretrained("gpt2") input_string = ["TensorFlow is"] xla_generate = tf.function(model.generate, jit_compile=True) # ์—ฌ๊ธฐ์„œ, padding ์˜ต์…˜์ด ์žˆ๋Š” ํ† ํฌ๋‚˜์ด์ €๋ฅผ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค. tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf") generated_tokens = xla_generate(**tokenized_input, num_beams=2) decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True) print(f"Generated -- {decoded_text}") ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด `xla_generate()`์— ๋Œ€ํ•œ ์ž…๋ ฅ์ด ํ•ญ์ƒ ์ถ”์ ๋œ ํ˜•ํƒœ๋กœ ์ „๋‹ฌ๋˜์–ด ์ƒ์„ฑ ์‹œ๊ฐ„์ด ๊ฐ€์†ํ™”๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์ฝ”๋“œ๋กœ ์ด๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py import time import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>") model = TFAutoModelForCausalLM.from_pretrained("gpt2") xla_generate = tf.function(model.generate, jit_compile=True) for input_string in ["TensorFlow is", "TensorFlow is a", "TFLite is a"]: tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf") start = time.time_ns() generated_tokens = xla_generate(**tokenized_input, num_beams=2) end = time.time_ns() print(f"Execution time -- {(end - start) / 1e6:.1f} ms\n") ``` Tesla T4 GPU์—์„œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์ถœ๋ ฅ์„ ์˜ˆ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash Execution time -- 30819.6 ms Execution time -- 79.0 ms Execution time -- 78.9 ms ``` `xla_generate()`์˜ ์ฒซ ๋ฒˆ์งธ ํ˜ธ์ถœ์€ ์ถ”์  ๋•Œ๋ฌธ์— ์‹œ๊ฐ„์ด ์˜ค๋ž˜ ๊ฑธ๋ฆฌ์ง€๋งŒ, ์—ฐ์† ํ˜ธ์ถœ์€ ๋ช‡ ๋ฐฐ๋‚˜ ๋น ๋ฆ…๋‹ˆ๋‹ค. ์ƒ์„ฑ ์˜ต์…˜์— ๋Œ€ํ•œ ์–ด๋–ค ๋ณ€๊ฒฝ์ด๋“  ๋‹ค์‹œ ์ถ”์ ์„ ์œ ๋ฐœํ•˜๋ฏ€๋กœ ์ƒ์„ฑ ์‹œ๊ฐ„์ด ๋А๋ ค์งˆ ์ˆ˜ ์žˆ์Œ์„ ๋ช…์‹ฌํ•˜์„ธ์š”. ์ด ๋ฌธ์„œ์—์„œ๋Š” ๐Ÿค— Transformers์—์„œ ์ œ๊ณตํ•˜๋Š” ๋ชจ๋“  ํ…์ŠคํŠธ ์ƒ์„ฑ ์˜ต์…˜์„ ๋‹ค๋ฃจ์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. ๊ณ ๊ธ‰ ์‚ฌ์šฉ ์‚ฌ๋ก€์— ๋Œ€ํ•ด ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. ## ์ถ”๊ฐ€ ์ž๋ฃŒ [[additional-resources]] ์—ฌ๊ธฐ์— ๐Ÿค— Transformers์™€ XLA์— ๋Œ€ํ•ด ๋” ์ž์„ธํžˆ ์•Œ๊ณ  ์‹ถ์€ ๊ฒฝ์šฐ ๋„์›€์ด ๋  ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ์ถ”๊ฐ€ ์ž๋ฃŒ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. * [์ด Colab ๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/91_tf_xla_generate.ipynb)์€ XLA์™€ ํ˜ธํ™˜๋˜๋Š” ์ธ์ฝ”๋”-๋””์ฝ”๋”([T5](https://huggingface.co/docs/transformers/model_doc/t5)์™€ ๊ฐ™์€) ๋ฐ ๋””์ฝ”๋” ์ „์šฉ([GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)์™€ ๊ฐ™์€) ํ…์ŠคํŠธ ์ƒ์„ฑ ๋ชจ๋ธ์„ ์‹คํ—˜ํ•ด ๋ณผ ์ˆ˜ ์žˆ๋Š” ๋Œ€ํ™”ํ˜• ๋ฐ๋ชจ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. * [์ด ๋ธ”๋กœ๊ทธ ๊ธ€](https://huggingface.co/blog/tf-xla-generate)์€ TensorFlow์—์„œ XLA์— ๋Œ€ํ•œ ์นœ์ ˆํ•œ ์†Œ๊ฐœ์™€ ํ•จ๊ป˜ XLA์™€ ํ˜ธํ™˜๋˜๋Š” ๋ชจ๋ธ์˜ ๋น„๊ต ๋ฒค์น˜๋งˆํฌ์— ๋Œ€ํ•œ ๊ฐœ์š”๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. * [์ด ๋ธ”๋กœ๊ทธ ๊ธ€](https://blog.tensorflow.org/2022/11/how-hugging-face-improved-text-generation-performance-with-xla.html)์€ ๐Ÿค— Transformers์˜ TensorFlow ๋ชจ๋ธ์— XLA ์ง€์›์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์— ๋Œ€ํ•œ ๋””์ž์ธ ์ฒ ํ•™์„ ๋…ผ์˜ํ•ฉ๋‹ˆ๋‹ค. 
* XLA์™€ TensorFlow ๊ทธ๋ž˜ํ”„์— ๋Œ€ํ•ด ๋” ์ž์„ธํžˆ ์•Œ๊ณ  ์‹ถ์€ ๊ฒฝ์šฐ ์ถ”์ฒœํ•˜๋Š” ๊ธ€: * [XLA: ๊ธฐ๊ณ„ ํ•™์Šต์„ ์œ„ํ•œ ์ตœ์ ํ™” ์ปดํŒŒ์ผ๋Ÿฌ](https://www.tensorflow.org/xla) * [๊ทธ๋ž˜ํ”„ ๋ฐ tf.function ์†Œ๊ฐœ](https://www.tensorflow.org/guide/intro_to_graphs) * [tf.function์œผ๋กœ ์„ฑ๋Šฅ ํ–ฅ์ƒํ•˜๊ธฐ](https://www.tensorflow.org/guide/function)
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/installation.md
<!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์„ค์น˜๋ฐฉ๋ฒ•[[installation]] ๐Ÿค— Transformers๋ฅผ ์‚ฌ์šฉ ์ค‘์ธ ๋”ฅ๋Ÿฌ๋‹ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ๋งž์ถฐ ์„ค์น˜ํ•˜๊ณ , ์บ์‹œ๋ฅผ ๊ตฌ์„ฑํ•˜๊ฑฐ๋‚˜ ์„ ํƒ์ ์œผ๋กœ ์˜คํ”„๋ผ์ธ์—์„œ๋„ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ๋„๋ก ๐Ÿค— Transformers๋ฅผ ์„ค์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ฐฐ์šฐ๊ฒ ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+ ๋ฐ Flax์—์„œ ํ…Œ์ŠคํŠธ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋”ฅ๋Ÿฌ๋‹ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์„ค์น˜ํ•˜๋ ค๋ฉด ์•„๋ž˜ ๋งํฌ๋œ ์ €๋งˆ๋‹ค์˜ ๊ณต์‹ ์‚ฌ์ดํŠธ๋ฅผ ์ฐธ๊ณ ํ•ด์ฃผ์„ธ์š”. * [PyTorch](https://pytorch.org/get-started/locally/) ์„ค์น˜ํ•˜๊ธฐ * [TensorFlow 2.0](https://www.tensorflow.org/install/pip) ์„ค์น˜ํ•˜๊ธฐ * [Flax](https://flax.readthedocs.io/en/latest/) ์„ค์น˜ํ•˜๊ธฐ ## pip์œผ๋กœ ์„ค์น˜ํ•˜๊ธฐ[[install-with-pip]] ๐Ÿค— Transformers๋ฅผ [๊ฐ€์ƒ ํ™˜๊ฒฝ](https://docs.python.org/3/library/venv.html)์— ์„ค์น˜ํ•˜๋Š” ๊ฒƒ์„ ์ถ”์ฒœ๋“œ๋ฆฝ๋‹ˆ๋‹ค. Python ๊ฐ€์ƒ ํ™˜๊ฒฝ์— ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, ์ด [๊ฐ€์ด๋“œ](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”. ๊ฐ€์ƒ ํ™˜๊ฒฝ์„ ์‚ฌ์šฉํ•˜๋ฉด ์„œ๋กœ ๋‹ค๋ฅธ ํ”„๋กœ์ ํŠธ๋“ค์„ ๋ณด๋‹ค ์‰ฝ๊ฒŒ ๊ด€๋ฆฌํ•  ์ˆ˜ ์žˆ๊ณ , ์˜์กด์„ฑ ๊ฐ„์˜ ํ˜ธํ™˜์„ฑ ๋ฌธ์ œ๋ฅผ ๋ฐฉ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋จผ์ € ํ”„๋กœ์ ํŠธ ๋””๋ ‰ํ† ๋ฆฌ์—์„œ ๊ฐ€์ƒ ํ™˜๊ฒฝ์„ ๋งŒ๋“ค์–ด ์ค๋‹ˆ๋‹ค. ```bash python -m venv .env ``` ๊ฐ€์ƒ ํ™˜๊ฒฝ์„ ํ™œ์„ฑํ™”ํ•ด์ฃผ์„ธ์š”. Linux๋‚˜ MacOS์˜ ๊ฒฝ์šฐ: ```bash source .env/bin/activate ``` Windows์˜ ๊ฒฝ์šฐ: ```bash .env/Scripts/activate ``` ์ด์ œ ๐Ÿค— Transformers๋ฅผ ์„ค์น˜ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ช…๋ น์„ ์ž…๋ ฅํ•ด์ฃผ์„ธ์š”. ```bash pip install transformers ``` CPU๋งŒ ์จ๋„ ๋œ๋‹ค๋ฉด, ๐Ÿค— Transformers์™€ ๋”ฅ๋Ÿฌ๋‹ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๋‹จ 1์ค„๋กœ ์„ค์น˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๐Ÿค— Transformers์™€ PyTorch์˜ ๊ฒฝ์šฐ: ```bash pip install transformers[torch] ``` ๐Ÿค— Transformers์™€ TensorFlow 2.0์˜ ๊ฒฝ์šฐ: ```bash pip install transformers[tf-cpu] ``` ๐Ÿค— Transformers์™€ Flax์˜ ๊ฒฝ์šฐ: ```bash pip install transformers[flax] ``` ๋งˆ์ง€๋ง‰์œผ๋กœ ๐Ÿค— Transformers๊ฐ€ ์ œ๋Œ€๋กœ ์„ค์น˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•  ์ฐจ๋ก€์ž…๋‹ˆ๋‹ค. ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ๋‹ค์šด๋กœ๋“œํ•˜๋Š” ์ฝ”๋“œ์ž…๋‹ˆ๋‹ค. ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))" ``` ๋ผ๋ฒจ๊ณผ ์ ์ˆ˜๊ฐ€ ์ถœ๋ ฅ๋˜๋ฉด ์ž˜ ์„ค์น˜๋œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ```bash [{'label': 'POSITIVE', 'score': 0.9998704791069031}] ``` ## ์†Œ์Šค์—์„œ ์„ค์น˜ํ•˜๊ธฐ[[install-from-source]] ๐Ÿค— Transformers๋ฅผ ์†Œ์Šค์—์„œ ์„ค์น˜ํ•˜๋ ค๋ฉด ์•„๋ž˜ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์„ธ์š”. ```bash pip install git+https://github.com/huggingface/transformers ``` ์œ„ ๋ช…๋ น์€ ์ตœ์‹ ์ด์ง€๋งŒ (์•ˆ์ •์ ์ธ) `stable` ๋ฒ„์ „์ด ์•„๋‹Œ ์‹คํ—˜์„ฑ์ด ์ง™์€ `main` ๋ฒ„์ „์„ ์„ค์น˜ํ•ฉ๋‹ˆ๋‹ค. `main` ๋ฒ„์ „์€ ๊ฐœ๋ฐœ ํ˜„ํ™ฉ๊ณผ ๋ฐœ๋งž์ถ”๋Š”๋ฐ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. 
์˜ˆ์‹œ๋กœ ๋งˆ์ง€๋ง‰ ๊ณต์‹ ๋ฆด๋ฆฌ์Šค ์ดํ›„ ๋ฐœ๊ฒฌ๋œ ๋ฒ„๊ทธ๊ฐ€ ํŒจ์น˜๋˜์—ˆ์ง€๋งŒ, ์ƒˆ ๋ฆด๋ฆฌ์Šค๋กœ ์•„์ง ๋กค์•„์›ƒ๋˜์ง€๋Š” ์•Š์€ ๊ฒฝ์šฐ๋ฅผ ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐ”๊ฟ” ๋งํ•˜๋ฉด `main` ๋ฒ„์ „์ด ์•ˆ์ •์„ฑ๊ณผ๋Š” ๊ฑฐ๋ฆฌ๊ฐ€ ์žˆ๋‹ค๋Š” ๋œป์ด๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค. ์ €ํฌ๋Š” `main` ๋ฒ„์ „์„ ์‚ฌ์šฉํ•˜๋Š”๋ฐ ๋ฌธ์ œ๊ฐ€ ์—†๋„๋ก ๋…ธ๋ ฅํ•˜๊ณ  ์žˆ์œผ๋ฉฐ, ๋Œ€๋ถ€๋ถ„์˜ ๋ฌธ์ œ๋Š” ๋Œ€๊ฐœ ๋ช‡ ์‹œ๊ฐ„์ด๋‚˜ ํ•˜๋ฃจ ์•ˆ์— ํ•ด๊ฒฐ๋ฉ๋‹ˆ๋‹ค. ๋งŒ์•ฝ ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉด [์ด์Šˆ](https://github.com/huggingface/transformers/issues)๋ฅผ ์—ด์–ด์ฃผ์‹œ๋ฉด ๋” ๋นจ๋ฆฌ ํ•ด๊ฒฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ „๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๐Ÿค— Transformers๊ฐ€ ์ œ๋Œ€๋กœ ์„ค์น˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•  ์ฐจ๋ก€์ž…๋‹ˆ๋‹ค. ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))" ``` ## ์ˆ˜์ • ๊ฐ€๋Šฅํ•œ ์„ค์น˜[[editable-install]] ์ˆ˜์ • ๊ฐ€๋Šฅํ•œ ์„ค์น˜๊ฐ€ ํ•„์š”ํ•œ ๊ฒฝ์šฐ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. * `main` ๋ฒ„์ „์˜ ์†Œ์Šค ์ฝ”๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด * ๐Ÿค— Transformers์— ๊ธฐ์—ฌํ•˜๊ณ  ์‹ถ์–ด์„œ ์ฝ”๋“œ์˜ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ํ…Œ์ŠคํŠธํ•˜๊ธฐ ์œ„ํ•ด ๋ฆฌํฌ์ง€ํ„ฐ๋ฆฌ๋ฅผ ๋ณต์ œํ•˜๊ณ  ๐Ÿค— Transformers๋ฅผ ์„ค์น˜ํ•˜๋ ค๋ฉด ๋‹ค์Œ ๋ช…๋ น์„ ์ž…๋ ฅํ•ด์ฃผ์„ธ์š”. ```bash git clone https://github.com/huggingface/transformers.git cd transformers pip install -e . ``` ์œ„ ๋ช…๋ น์€ ๋ฆฌํฌ์ง€ํ„ฐ๋ฆฌ๋ฅผ ๋ณต์ œํ•œ ์œ„์น˜์˜ ํด๋”์™€ Python ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๊ฒฝ๋กœ๋ฅผ ์—ฐ๊ฒฐ์‹œํ‚ต๋‹ˆ๋‹ค. Python์ด ์ผ๋ฐ˜ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ๊ฒฝ๋กœ ์™ธ์— ๋ณต์ œํ•œ ํด๋” ๋‚ด๋ถ€๋ฅผ ํ™•์ธํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด Python ํŒจํ‚ค์ง€๊ฐ€ ์ผ๋ฐ˜์ ์œผ๋กœ `~/anaconda3/envs/main/lib/python3.7/site-packages/`์— ์„ค์น˜๋˜์–ด ์žˆ๋Š”๋ฐ, ๋ช…๋ น์„ ๋ฐ›์€ Python์ด ์ด์ œ ๋ณต์ œํ•œ ํด๋”์ธ `~/transformers/`๋„ ๊ฒ€์ƒ‰ํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. <Tip warning={true}> ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๊ณ„์† ์‚ฌ์šฉํ•˜๋ ค๋ฉด `transformers` ํด๋”๋ฅผ ๊ผญ ์œ ์ง€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ๋ณต์ œ๋ณธ์€ ์ตœ์‹  ๋ฒ„์ „์˜ ๐Ÿค— Transformers๋กœ ์‰ฝ๊ฒŒ ์—…๋ฐ์ดํŠธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash cd ~/transformers/ git pull ``` Python ํ™˜๊ฒฝ์„ ๋‹ค์‹œ ์‹คํ–‰ํ•˜๋ฉด ์—…๋ฐ์ดํŠธ๋œ ๐Ÿค— Transformers์˜ `main` ๋ฒ„์ „์„ ์ฐพ์•„๋‚ผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## conda๋กœ ์„ค์น˜ํ•˜๊ธฐ[[install-with-conda]] `huggingface` conda ์ฑ„๋„์—์„œ ์„ค์น˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash conda install -c huggingface transformers ``` ## ์บ์‹œ ๊ตฌ์„ฑํ•˜๊ธฐ[[cache-setup]] ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์€ ๋‹ค์šด๋กœ๋“œ๋œ ํ›„ ๋กœ์ปฌ ๊ฒฝ๋กœ `~/.cache/huggingface/hub`์— ์บ์‹œ๋ฉ๋‹ˆ๋‹ค. ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜ `TRANSFORMERS_CACHE`์˜ ๊ธฐ๋ณธ ๋””๋ ‰ํ„ฐ๋ฆฌ์ž…๋‹ˆ๋‹ค. Windows์˜ ๊ฒฝ์šฐ ๊ธฐ๋ณธ ๋””๋ ‰ํ„ฐ๋ฆฌ๋Š” `C:\Users\username\.cache\huggingface\hub`์ž…๋‹ˆ๋‹ค. ์•„๋ž˜์˜ ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ (์šฐ์„  ์ˆœ์œ„) ์ˆœ์„œ๋Œ€๋กœ ๋ณ€๊ฒฝํ•˜์—ฌ ๋‹ค๋ฅธ ์บ์‹œ ๋””๋ ‰ํ† ๋ฆฌ๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 1. ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜ (๊ธฐ๋ณธ): `HUGGINGFACE_HUB_CACHE` ๋˜๋Š” `TRANSFORMERS_CACHE` 2. ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜: `HF_HOME` 3. ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜: `XDG_CACHE_HOME` + `/huggingface` <Tip> ๊ณผ๊ฑฐ ๐Ÿค— Transformers์—์„œ ์“ฐ์˜€๋˜ ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜ `PYTORCH_TRANSFORMERS_CACHE` ๋˜๋Š” `PYTORCH_PRETRAINED_BERT_CACHE`์ด ์„ค์ •๋˜์žˆ๋‹ค๋ฉด, ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜ `TRANSFORMERS_CACHE`์„ ์ง€์ •ํ•˜์ง€ ์•Š๋Š” ํ•œ ์šฐ์„  ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. </Tip> ## ์˜คํ”„๋ผ์ธ ๋ชจ๋“œ[[offline-mode]] ๐Ÿค— Transformers๋ฅผ ๋กœ์ปฌ ํŒŒ์ผ๋งŒ ์‚ฌ์šฉํ•˜๋„๋ก ํ•ด์„œ ๋ฐฉํ™”๋ฒฝ ๋˜๋Š” ์˜คํ”„๋ผ์ธ ํ™˜๊ฒฝ์—์„œ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด `TRANSFORMERS_OFFLINE=1` ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์„ค์ •ํ•˜์„ธ์š”. <Tip> `HF_DATASETS_OFFLINE=1` ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์„ค์ •ํ•˜์—ฌ ์˜คํ”„๋ผ์ธ ํ›ˆ๋ จ ๊ณผ์ •์— [๐Ÿค— Datasets](https://huggingface.co/docs/datasets/)์„ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
</Tip>

예를 들어 외부 기기 사이에 방화벽을 둔 일반 네트워크에서 평소처럼 프로그램을 다음과 같이 실행할 수 있습니다.

```bash
python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```

오프라인 기기에서 동일한 프로그램을 다음과 같이 실행할 수 있습니다.

```bash
HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \
python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```

이제 스크립트는 로컬 파일에 한해서만 검색할 것이므로, 스크립트가 중단되거나 시간이 초과될 때까지 멈춰 있지 않고 잘 실행될 것입니다.

### 오프라인용 모델 및 토크나이저 만들어두기[[fetch-models-and-tokenizers-to-use-offline]]

🤗 Transformers를 오프라인으로 사용하는 또 다른 방법은 파일을 미리 다운로드한 다음, 오프라인일 때 사용할 로컬 경로를 지정해두는 것입니다. 3가지 중 편한 방법을 고르세요.

* [Model Hub](https://huggingface.co/models)의 UI를 통해 파일을 다운로드하려면 ↓ 아이콘을 클릭하세요.

    ![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/download-icon.png)

* [`PreTrainedModel.from_pretrained`]와 [`PreTrainedModel.save_pretrained`] 워크플로를 활용하세요.

    1. 미리 [`PreTrainedModel.from_pretrained`]로 파일을 다운로드해두세요.

    ```py
    >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
    >>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")
    ```

    2. [`PreTrainedModel.save_pretrained`]로 지정된 경로에 파일을 저장해두세요.

    ```py
    >>> tokenizer.save_pretrained("./your/path/bigscience_t0")
    >>> model.save_pretrained("./your/path/bigscience_t0")
    ```

    3. 이제 오프라인일 때 [`PreTrainedModel.from_pretrained`]로 저장해뒀던 파일을 지정된 경로에서 다시 불러오세요.

    ```py
    >>> tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0")
    >>> model = AutoModel.from_pretrained("./your/path/bigscience_t0")
    ```

* [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub) 라이브러리를 활용해서 파일을 다운로드하세요.

    1. 가상환경에 `huggingface_hub` 라이브러리를 설치하세요.

    ```bash
    python -m pip install huggingface_hub
    ```

    2. [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub) 함수로 파일을 특정 위치에 다운로드할 수 있습니다. 예를 들어 아래 명령은 [T0](https://huggingface.co/bigscience/T0_3B) 모델의 `config.json` 파일을 지정된 경로에 다운로드합니다.

    ```py
    >>> from huggingface_hub import hf_hub_download

    >>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./your/path/bigscience_t0")
    ```

파일을 다운로드하고 로컬에 캐시해놓고 나면, 나중에 불러와 사용할 수 있도록 로컬 경로를 지정해두세요.
```py >>> from transformers import AutoConfig >>> config = AutoConfig.from_pretrained("./your/path/bigscience_t0/config.json") ``` <Tip> Hub์— ์ €์žฅ๋œ ํŒŒ์ผ์„ ๋‹ค์šด๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋” ์ž์„ธํžˆ ์•Œ์•„๋ณด๋ ค๋ฉด [Hub์—์„œ ํŒŒ์ผ ๋‹ค์šด๋กœ๋“œํ•˜๊ธฐ](https://huggingface.co/docs/hub/how-to-downstream) ์„น์…˜์„ ์ฐธ๊ณ ํ•ด์ฃผ์„ธ์š”. </Tip>
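환경 변수 대신 코드에서 직접 로컬 파일만 사용하도록 강제하고 싶다면, `from_pretrained`의 `local_files_only` 인수를 사용할 수도 있습니다. 아래는 위에서 저장해둔 경로를 그대로 가정한 간단한 예시입니다:

```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> # local_files_only=True로 설정하면 Hub에 접속하지 않고 로컬 캐시나 지정한 경로의 파일만 사용합니다.
>>> tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0", local_files_only=True)
>>> model = AutoModelForSeq2SeqLM.from_pretrained("./your/path/bigscience_t0", local_files_only=True)
```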
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/hpo_train.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Trainer API를 사용한 하이퍼파라미터 탐색 [[hyperparameter-search-using-trainer-api]]

🤗 Transformers에서는 🤗 Transformers 모델을 학습시키는 데 최적화된 [`Trainer`] 클래스를 제공하기 때문에, 사용자는 직접 훈련 루프를 작성할 필요 없이 더욱 간편하게 학습을 시킬 수 있습니다. 또한, [`Trainer`]는 하이퍼파라미터 탐색을 위한 API를 제공합니다. 이 문서에서 이 API를 활용하는 방법을 예시와 함께 보여드리겠습니다.

## 하이퍼파라미터 탐색 백엔드 [[hyperparameter-search-backend]]

[`Trainer`]는 현재 아래 4가지 하이퍼파라미터 탐색 백엔드를 지원합니다:
[optuna](https://optuna.org/)와 [sigopt](https://sigopt.com/), [raytune](https://docs.ray.io/en/latest/tune/index.html), [wandb](https://wandb.ai/site/sweeps)입니다.

하이퍼파라미터 탐색 백엔드로 사용하기 전에, 사용할 백엔드에 맞는 라이브러리를 아래 명령어로 설치하세요.

```bash
pip install optuna
pip install sigopt
pip install wandb
pip install "ray[tune]"
```

## 예제에서 하이퍼파라미터 탐색을 활성화하는 방법 [[how-to-enable-hyperparameter-search-in-example]]

하이퍼파라미터 탐색 공간을 정의하세요. 하이퍼파라미터 탐색 백엔드마다 서로 다른 형식이 필요합니다.

sigopt의 경우, 해당 [object_parameter](https://docs.sigopt.com/ai-module-api-references/api_reference/objects/object_parameter) 문서를 참조하여 아래와 같이 작성하세요:

```py
>>> def sigopt_hp_space(trial):
...     return [
...         {"bounds": {"min": 1e-6, "max": 1e-4}, "name": "learning_rate", "type": "double"},
...         {
...             "categorical_values": ["16", "32", "64", "128"],
...             "name": "per_device_train_batch_size",
...             "type": "categorical",
...         },
...     ]
```

optuna의 경우, 해당 [object_parameter](https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/002_configurations.html#sphx-glr-tutorial-10-key-features-002-configurations-py) 문서를 참조하여 아래와 같이 작성하세요:

```py
>>> def optuna_hp_space(trial):
...     return {
...         "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True),
...         "per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [16, 32, 64, 128]),
...     }
```

raytune의 경우, 해당 [object_parameter](https://docs.ray.io/en/latest/tune/api/search_space.html) 문서를 참조하여 아래와 같이 작성하세요:

```py
>>> def ray_hp_space(trial):
...     return {
...         "learning_rate": tune.loguniform(1e-6, 1e-4),
...         "per_device_train_batch_size": tune.choice([16, 32, 64, 128]),
...     }
```

wandb의 경우, 해당 [object_parameter](https://docs.wandb.ai/guides/sweeps/configuration) 문서를 참조하여 아래와 같이 작성하세요:

```py
>>> def wandb_hp_space(trial):
...     return {
...         "method": "random",
...         "metric": {"name": "objective", "goal": "minimize"},
...         "parameters": {
...             "learning_rate": {"distribution": "uniform", "min": 1e-6, "max": 1e-4},
...             "per_device_train_batch_size": {"values": [16, 32, 64, 128]},
...         },
...     }
```

`model_init` 함수를 정의하고 이를 [`Trainer`]에 전달하세요. 아래는 그 예시입니다.

```py
>>> def model_init(trial):
...     return AutoModelForSequenceClassification.from_pretrained(
...         model_args.model_name_or_path,
...         from_tf=bool(".ckpt" in model_args.model_name_or_path),
...         config=config,
...         cache_dir=model_args.cache_dir,
...         revision=model_args.model_revision,
...         token=True if model_args.use_auth_token else None,
...     )
```

아래와 같이 `model_init` 함수, 훈련 인수, 훈련 및 테스트 데이터셋, 그리고 평가 함수를 사용하여 [`Trainer`]를 생성하세요:

```py
>>> trainer = Trainer(
...     model=None,
...     args=training_args,
...     train_dataset=small_train_dataset,
...     eval_dataset=small_eval_dataset,
...     compute_metrics=compute_metrics,
...     tokenizer=tokenizer,
...     model_init=model_init,
...     data_collator=data_collator,
... )
```

하이퍼파라미터 탐색을 호출하고, 최적의 시험 매개변수를 가져오세요. 백엔드는 `"optuna"`/`"sigopt"`/`"wandb"`/`"ray"` 중에서 선택할 수 있습니다. 방향은 `"minimize"` 또는 `"maximize"` 중 선택하며, 목표를 최소화할 것인지 최대화할 것인지를 결정합니다.

자신만의 `compute_objective` 함수를 정의할 수 있습니다. 만약 이 함수를 정의하지 않으면, 기본 `compute_objective`가 호출되고, f1과 같은 평가 지표의 합이 목푯값으로 반환됩니다.

```py
>>> best_trial = trainer.hyperparameter_search(
...     direction="maximize",
...     backend="optuna",
...     hp_space=optuna_hp_space,
...     n_trials=20,
...     compute_objective=compute_objective,
... )
```

## DDP 미세 조정을 위한 하이퍼파라미터 탐색 [[hyperparameter-search-for-ddp-finetune]]

현재, DDP(Distributed Data Parallelism; 분산 데이터 병렬처리)를 위한 하이퍼파라미터 탐색은 optuna와 sigopt에서 가능합니다. 최상위 프로세스가 하이퍼파라미터 탐색 과정을 시작하고 그 결과를 다른 프로세스에 전달합니다.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/big_models.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ํฐ ๋ชจ๋ธ ์ธ์Šคํ„ด์Šคํ™” [[instantiating-a-big-model]] ๋งค์šฐ ํฐ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด, RAM ์‚ฌ์šฉ์„ ์ตœ์†Œํ™”ํ•ด์•ผ ํ•˜๋Š” ๊ณผ์ œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์ธ PyTorch ์›Œํฌํ”Œ๋กœ์šฐ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: 1. ๋ฌด์ž‘์œ„ ๊ฐ€์ค‘์น˜๋กœ ๋ชจ๋ธ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. 2. ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜๋ฅผ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค. 3. ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜๋ฅผ ๋ฌด์ž‘์œ„ ๋ชจ๋ธ์— ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. 1๋‹จ๊ณ„์™€ 2๋‹จ๊ณ„ ๋ชจ๋‘ ๋ชจ๋ธ์˜ ์ „์ฒด ๋ฒ„์ „์„ ๋ฉ”๋ชจ๋ฆฌ์— ์ ์žฌํ•ด์•ผ ํ•˜๋ฉฐ, ๋Œ€๋ถ€๋ถ„ ๋ฌธ์ œ๊ฐ€ ์—†์ง€๋งŒ ๋ชจ๋ธ์ด ๊ธฐ๊ฐ€๋ฐ”์ดํŠธ๊ธ‰์˜ ์šฉ๋Ÿ‰์„ ์ฐจ์ง€ํ•˜๊ธฐ ์‹œ์ž‘ํ•˜๋ฉด ๋ณต์‚ฌ๋ณธ 2๊ฐœ๊ฐ€ RAM์„ ์ดˆ๊ณผํ•˜์—ฌ ๋ฉ”๋ชจ๋ฆฌ ๋ถ€์กฑ ์ด์Šˆ๋ฅผ ์•ผ๊ธฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋” ์‹ฌ๊ฐํ•œ ๋ฌธ์ œ๋Š” ๋ถ„์‚ฐ ํ•™์Šต์„ ์œ„ํ•ด `torch.distributed`๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ํ”„๋กœ์„ธ์Šค๋งˆ๋‹ค ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๊ณ  ๋ณต์‚ฌ๋ณธ์„ 2๊ฐœ์”ฉ RAM์— ์ €์žฅํ•œ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. <Tip> ๋ฌด์ž‘์œ„๋กœ ์ƒ์„ฑ๋œ ๋ชจ๋ธ์€ "๋น„์–ด ์žˆ๋Š”" (์ฆ‰ ๊ทธ๋•Œ ๋ฉ”๋ชจ๋ฆฌ์— ์žˆ๋˜ ๊ฒƒ์œผ๋กœ ์ด๋ค„์ง„) ํ…์„œ๋กœ ์ดˆ๊ธฐํ™”๋˜๋ฉฐ ๋ฉ”๋ชจ๋ฆฌ ๊ณต๊ฐ„์„ ์ฐจ์ง€ํ•ฉ๋‹ˆ๋‹ค. ์ดˆ๊ธฐํ™”๋œ ๋ชจ๋ธ/ํŒŒ๋ผ๋ฏธํ„ฐ์˜ ์ข…๋ฅ˜์— ์ ํ•ฉํ•œ ๋ถ„ํฌ(์˜ˆ: ์ •๊ทœ ๋ถ„ํฌ)์— ๋”ฐ๋ฅธ ๋ฌด์ž‘์œ„ ์ดˆ๊ธฐํ™”๋Š” ๊ฐ€๋Šฅํ•œ ํ•œ ๋น ๋ฅด๊ฒŒ ํ•˜๊ธฐ ์œ„ํ•ด ์ดˆ๊ธฐํ™”๋˜์ง€ ์•Š์€ ๊ฐ€์ค‘์น˜์— ๋Œ€ํ•ด 3๋‹จ๊ณ„ ์ดํ›„์—๋งŒ ์ˆ˜ํ–‰๋ฉ๋‹ˆ๋‹ค! </Tip> ์ด ์•ˆ๋‚ด์„œ์—์„œ๋Š” Transformers๊ฐ€ ์ด ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด ์ œ๊ณตํ•˜๋Š” ์†”๋ฃจ์…˜์„ ์‚ดํŽด๋ด…๋‹ˆ๋‹ค. ์ฃผ์˜ํ•  ์ ์€ ์•„์ง ํ™œ๋ฐœํžˆ ๊ฐœ๋ฐœ ์ค‘์ธ ๋ถ„์•ผ์ด๋ฏ€๋กœ ์—ฌ๊ธฐ์„œ ์„ค๋ช…ํ•˜๋Š” API๊ฐ€ ์•ž์œผ๋กœ ์•ฝ๊ฐ„ ๋ณ€๊ฒฝ๋  ์ˆ˜ ์žˆ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## ์ƒค๋”ฉ๋œ ์ฒดํฌํฌ์ธํŠธ [[sharded-checkpoints]] 4.18.0 ๋ฒ„์ „ ์ดํ›„, 10GB ์ด์ƒ์˜ ๊ณต๊ฐ„์„ ์ฐจ์ง€ํ•˜๋Š” ๋ชจ๋ธ ์ฒดํฌํฌ์ธํŠธ๋Š” ์ž๋™์œผ๋กœ ์ž‘์€ ์กฐ๊ฐ๋“ค๋กœ ์ƒค๋”ฉ๋ฉ๋‹ˆ๋‹ค. `model.save_pretrained(save_dir)`๋ฅผ ์‹คํ–‰ํ•  ๋•Œ ํ•˜๋‚˜์˜ ๋‹จ์ผ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๊ฐ€์ง€๊ฒŒ ๋  ๋Œ€์‹ , ์—ฌ๋Ÿฌ ๋ถ€๋ถ„ ์ฒดํฌํฌ์ธํŠธ(๊ฐ๊ฐ์˜ ํฌ๊ธฐ๋Š” 10GB ๋ฏธ๋งŒ)์™€ ๋งค๊ฐœ๋ณ€์ˆ˜ ์ด๋ฆ„์„ ํ•ด๋‹น ํŒŒ์ผ์— ๋งคํ•‘ํ•˜๋Š” ์ธ๋ฑ์Šค๊ฐ€ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. `max_shard_size` ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ ์ƒค๋”ฉ ์ „ ์ตœ๋Œ€ ํฌ๊ธฐ๋ฅผ ์ œ์–ดํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ, ์ด ์˜ˆ์ œ๋ฅผ ์œ„ํ•ด ์ƒค๋“œ ํฌ๊ธฐ๊ฐ€ ์ž‘์€ ์ผ๋ฐ˜ ํฌ๊ธฐ์˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ์ „ํ†ต์ ์ธ BERT ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด ๋ด…์‹œ๋‹ค. ```py from transformers import AutoModel model = AutoModel.from_pretrained("bert-base-cased") ``` [`~PreTrainedModel.save_pretrained`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ์ €์žฅํ•˜๋ฉด, ๋ชจ๋ธ์˜ ๊ตฌ์„ฑ๊ณผ ๊ฐ€์ค‘์น˜๊ฐ€ ๋“ค์–ด์žˆ๋Š” ๋‘ ๊ฐœ์˜ ํŒŒ์ผ์ด ์žˆ๋Š” ์ƒˆ ํด๋”๊ฐ€ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค: ```py >>> import os >>> import tempfile >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir) ... 
print(sorted(os.listdir(tmp_dir))) ['config.json', 'pytorch_model.bin'] ``` ์ด์ œ ์ตœ๋Œ€ ์ƒค๋“œ ํฌ๊ธฐ๋ฅผ 200MB๋กœ ์‚ฌ์šฉํ•ด ๋ด…์‹œ๋‹ค: ```py >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... print(sorted(os.listdir(tmp_dir))) ['config.json', 'pytorch_model-00001-of-00003.bin', 'pytorch_model-00002-of-00003.bin', 'pytorch_model-00003-of-00003.bin', 'pytorch_model.bin.index.json'] ``` ๋ชจ๋ธ์˜ ๊ตฌ์„ฑ์— ๋”ํ•ด, ์„ธ ๊ฐœ์˜ ๋‹ค๋ฅธ ๊ฐ€์ค‘์น˜ ํŒŒ์ผ๊ณผ ํŒŒ๋ผ๋ฏธํ„ฐ ์ด๋ฆ„๊ณผ ํ•ด๋‹น ํŒŒ์ผ์˜ ๋งคํ•‘์ด ํฌํ•จ๋œ `index.json` ํŒŒ์ผ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์ฒดํฌํฌ์ธํŠธ๋Š” [`~PreTrainedModel.from_pretrained`] ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์™„์ „ํžˆ ๋‹ค์‹œ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... new_model = AutoModel.from_pretrained(tmp_dir) ``` ํฐ ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ ์ด๋Ÿฌํ•œ ๋ฐฉ์‹์œผ๋กœ ์ฒ˜๋ฆฌํ•˜๋Š” ์ฃผ๋œ ์žฅ์ ์€ ์œ„์—์„œ ๋ณด์—ฌ์ค€ ํ๋ฆ„์˜ 2๋‹จ๊ณ„์—์„œ, ๊ฐ ์ƒค๋“œ๊ฐ€ ์ด์ „ ์ƒค๋“œ ๋‹ค์Œ์— ๋กœ๋“œ๋˜๋ฏ€๋กœ ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ๋Ÿ‰์ด ๋ชจ๋ธ ํฌ๊ธฐ์™€ ๊ฐ€์žฅ ํฐ ์ƒค๋“œ์˜ ํฌ๊ธฐ๋ฅผ ์ดˆ๊ณผํ•˜์ง€ ์•Š๋Š”๋‹ค๋Š” ์ ์ž…๋‹ˆ๋‹ค. ์ด ์ธ๋ฑ์Šค ํŒŒ์ผ์€ ํ‚ค๊ฐ€ ์ฒดํฌํฌ์ธํŠธ์— ์žˆ๋Š”์ง€, ๊ทธ๋ฆฌ๊ณ  ํ•ด๋‹น ๊ฐ€์ค‘์น˜๊ฐ€ ์–ด๋””์— ์ €์žฅ๋˜์–ด ์žˆ๋Š”์ง€๋ฅผ ๊ฒฐ์ •ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ์ด ์ธ๋ฑ์Šค๋ฅผ json๊ณผ ๊ฐ™์ด ๋กœ๋“œํ•˜๊ณ  ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> import json >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... with open(os.path.join(tmp_dir, "pytorch_model.bin.index.json"), "r") as f: ... index = json.load(f) >>> print(index.keys()) dict_keys(['metadata', 'weight_map']) ``` ๋ฉ”ํƒ€๋ฐ์ดํ„ฐ๋Š” ํ˜„์žฌ ๋ชจ๋ธ์˜ ์ด ํฌ๊ธฐ๋งŒ ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. ์•ž์œผ๋กœ ๋‹ค๋ฅธ ์ •๋ณด๋ฅผ ์ถ”๊ฐ€ํ•  ๊ณ„ํš์ž…๋‹ˆ๋‹ค: ```py >>> index["metadata"] {'total_size': 433245184} ``` ๊ฐ€์ค‘์น˜ ๋งต์€ ์ด ์ธ๋ฑ์Šค์˜ ์ฃผ์š” ๋ถ€๋ถ„์œผ๋กœ, ๊ฐ ๋งค๊ฐœ๋ณ€์ˆ˜ ์ด๋ฆ„(PyTorch ๋ชจ๋ธ `state_dict`์—์„œ ๋ณดํ†ต ์ฐพ์„ ์ˆ˜ ์žˆ๋Š”)์„ ํ•ด๋‹น ํŒŒ์ผ์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค: ```py >>> index["weight_map"] {'embeddings.LayerNorm.bias': 'pytorch_model-00001-of-00003.bin', 'embeddings.LayerNorm.weight': 'pytorch_model-00001-of-00003.bin', ... ``` ๋งŒ์•ฝ [`~PreTrainedModel.from_pretrained`]๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๊ณ  ๋ชจ๋ธ ๋‚ด์—์„œ ์ด๋Ÿฌํ•œ ์ƒค๋”ฉ๋œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ง์ ‘ ๊ฐ€์ ธ์˜ค๋ ค๋ฉด (์ „์ฒด ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์œ„ํ•ด `model.load_state_dict()`๋ฅผ ์ˆ˜ํ–‰ํ•˜๋Š” ๊ฒƒ์ฒ˜๋Ÿผ), [`~modeling_utils.load_sharded_checkpoint`]๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers.modeling_utils import load_sharded_checkpoint >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... load_sharded_checkpoint(model, tmp_dir) ``` ## ์ €(ไฝŽ)๋ฉ”๋ชจ๋ฆฌ ๋กœ๋”ฉ [[low-memory-loading]] ์ƒค๋”ฉ๋œ ์ฒดํฌํฌ์ธํŠธ๋Š” ์œ„์—์„œ ์–ธ๊ธ‰ํ•œ ์ž‘์—… ํ๋ฆ„์˜ 2๋‹จ๊ณ„์—์„œ ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ๋Ÿ‰์„ ์ค„์ด์ง€๋งŒ, ์ €(ไฝŽ)๋ฉ”๋ชจ๋ฆฌ ์„ค์ •์—์„œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์šฐ๋ฆฌ์˜ Accelerate ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ ๋„๊ตฌ๋ฅผ ํ™œ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ž์„ธํ•œ ์‚ฌํ•ญ์€ ๋‹ค์Œ ๊ฐ€์ด๋“œ๋ฅผ ์ฐธ์กฐํ•ด์ฃผ์„ธ์š”: [Accelerate๋กœ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ ๊ฐ€์ ธ์˜ค๊ธฐ (์˜๋ฌธ)](../en/main_classes/model#large-model-loading)
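예를 들어, Accelerate가 설치되어 있다면 `from_pretrained`의 `low_cpu_mem_usage` 인수로 최대 메모리 사용량을 줄일 수 있습니다. 아래는 위에서 사용한 BERT 체크포인트를 가정한 간단한 스케치입니다:

```py
>>> from transformers import AutoModel

>>> # low_cpu_mem_usage=True는 가중치를 불러오는 동안 모델 복사본을 하나만 유지하여
>>> # 최대 RAM 사용량이 모델 크기 수준을 넘지 않도록 합니다 (Accelerate 필요).
>>> model = AutoModel.from_pretrained("bert-base-cased", low_cpu_mem_usage=True)
```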
0