| category | title | question_link | question_body | answer_html | __index_level_0__ |
|---|---|---|---|---|---|
question answering
|
What all can be said when you say that the CPU is 32 bit?
|
https://cs.stackexchange.com/questions/63001/what-all-can-be-said-when-you-say-that-the-cpu-is-32-bit
|
<p>When someone tells a computer scientist that a CPU is, say, 32 bits, what all does he/she infer from this information?</p>
<p>I know that it means that the physical address has 32 bits. This means that the physical memory can't hold more than 2^32 bytes of RAM. This also means that the word size is 32 bits, or 4 bytes. Please correct me if I am wrong, and also tell me what more we can infer from this.</p>
<p>I have searched over the internet only to get websites answering the above question to layman. Can anyone answer it from the technical point of view?</p>
|
<p>An $n$-bit processor is a processor for which the preferred integer size is $n$ bits. That's usually the size of the integer or general-purpose registers (a processor may not have exposed registers in its ISA, a processor may have some other kind of registers of a different width, a processor may provide instructions to do some integer operations at a different -- smaller or wider -- width, and the width of the buses used in an ISA implementation does not define its architectural width: there may be several implementations using buses of several sizes).</p>
<p>Data of that size is usually called a <em>word</em>, but when an ISA exists as a family and is extended to provide wider registers, the term <em>word</em> tends to continue to refer to data of the width appropriate for the first member of the family (thus the continued use of "word" for 16-bit quantities in the x86 world, which has now grown into a 64-bit ISA).</p>
<p>The address space size is determined by the ISA width only if the ISA uses the same registers for address computation as for integer computation. That's a very common property of later architectures, but it has not always been the case (the 8086 used 20-bit addresses but its word size was 16 bits; the Cray had 64-bit data registers but its address registers were 24 bits, IIRC). Even when the registers used are the same, the amount of addressable memory may be different, either because some bits are not used for addresses (the 68000, for instance, and programmers making use of that caused issues for their successors when all of the bits were taken into consideration), or because virtual memory allows a process's $n$-bit address space to be mapped into a wider physical address space (you could consider the 20-bit addresses of the 8086 a special case of that).</p>
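Since address width and word size need not coincide, the sizes involved are easy to check numerically. A throwaway Python sketch (my own illustration, not part of the answer):

```python
# How many bytes are reachable with n address bits.
# Note this is independent of the processor's "bitness" (its word size).
def addressable_bytes(n_bits):
    return 2 ** n_bits

assert addressable_bytes(32) == 4 * 1024**3   # 32-bit addresses reach 4 GiB
assert addressable_bytes(20) == 1024**2       # the 8086's 20-bit physical addresses: 1 MiB
assert addressable_bytes(16) == 64 * 1024     # a single 16-bit register alone: 64 KiB
```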
| 700
|
question answering
|
Providing an algorithm for a given PDA
|
https://cs.stackexchange.com/questions/75673/providing-an-algorithm-for-a-given-pda
|
<p>I was asked the following question in my homework assignment:
I just want to make sure that I fully understand what is required of me.
Am I being asked to find an algorithm which decides whether the language accepted by the PDA is finite? And if not, then what is it? Any initial intuition?</p>
<p><strong>I don't need help with answering it, just with understanding the question.</strong>
I'm asked to show an algorithm that, given a PDA $A$, decides whether there exists a word $w$ accepted by the PDA for which there exists a decomposition
$w=uvxyz$ such that the length of $vy$ is at least 1 and $u(v^i)x(y^i)z$ is accepted by the PDA for every $i\ge0$.</p>
| 701
|
|
question answering
|
Chordal graph question
|
https://cs.stackexchange.com/questions/109141/chordal-graph-question
|
<p>In the image below, the graph is being triangulated (added edges are in red). My question is simple:<br>
<strong>Is the red edge between nodes 7 and 10 necessary in order to obtain a chordal graph?</strong></p>
<p><a href="https://i.sstatic.net/AXKEV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AXKEV.png" alt="enter image description here"></a></p>
<p>(this image comes from Yaroslav Bulatov's excellent answer on <a href="https://cstheory.stackexchange.com/questions/5018/the-origin-of-the-notion-of-treewidth/5020#5020">https://cstheory.stackexchange.com/questions/5018/the-origin-of-the-notion-of-treewidth/5020#5020</a>)</p>
|
<p>No, it's not necessary.</p>
<p>Let <span class="math-container">$G$</span> be the graph made by deleting the edge <span class="math-container">$\{7,10\}$</span> from the graph in the question. Any cycle in <span class="math-container">$G$</span> that includes <span class="math-container">$7$</span> and <span class="math-container">$10$</span> must include both <span class="math-container">$6$</span> and <span class="math-container">$8$</span>, because such a cycle must be the union of two disjoint <span class="math-container">$7$</span>–<span class="math-container">$10$</span> paths, and <span class="math-container">$\{6,8\}$</span> is a cut that separates <span class="math-container">$7$</span> from <span class="math-container">$10$</span>. However, <span class="math-container">$\{6,8\}$</span> is an edge, so any cycle in <span class="math-container">$G$</span> that includes <span class="math-container">$7$</span> and <span class="math-container">$10$</span> already has <span class="math-container">$\{6,8\}$</span> as a chord, so it doesn't need <span class="math-container">$\{7,10\}$</span> as well.</p>
| 702
|
question answering
|
How to figure out the minimal number of colors needed to color specific given graphs?
|
https://cs.stackexchange.com/questions/29038/how-to-figure-out-the-minimal-number-of-colors-needed-to-color-specific-given-gr
|
<p>I found this question on the net and I'm wondering what is the process for answering such questions? I assume there is some formula that works for all graphs?</p>
<p><strong>1.a.</strong>
Consider the undirected graph with vertices $A$, $B$, $C$, $D$, $E$, $F$ and edges $AB$, $AC$, $BD$, $CE$, $DF$ and $EF$ (i.e., the graph is the 6-cycle $ABDFECA$). What is the
minimal number of colours needed to colour this graph?</p>
<p><strong>1.b.</strong>
Show how when considering the ordering $A$, $B$, $C$, $D$, $E$, $F$ of the vertices in
the above graph, a greedy algorithm will find this minimal number, and find
one other ordering where it will not.</p>
|
<p>I'm not sure what you mean by a "formula that works for all graphs" – what would the variables of such a formula represent? Since it's <strong>NP</strong>-hard to determine the chromatic number of a graph (the minimum number of colours required), there's unlikely to be any simple way of doing it in general.</p>
<p>For the specific graph in the question, the easiest way is to find a colouring with some number of colours and then prove that no smaller number of colours can work. In this case, it's easy to find a 2-colouring of the graph and there's clearly no 1-colouring, since every vertex is adjacent to at least one other vertex.</p>
<p>For part b, are you familiar with greedy colouring algorithms? In particular, if the next vertex to be considered has no neighbours that have already been coloured, that vertex will receive colour $1$. So, one way to produce an ordering that makes the greedy algorithm use a suboptimal number of colours is to find two non-adjacent vertices $X$ and $Y$ that must always have different colours in an optimal colouring, and begin your ordering $X,Y, \dots\;$. Doing this requires understanding what optimal colourings look like, which is fairly simple for the given graph but, as I stated above, is hard in general.</p>
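To make part b concrete, here is a small Python sketch of greedy colouring run on the 6-cycle from the question; the ordering A, B, C, D, E, F is the one given, while A, F, B, D, C, E is one illustrative "bad" ordering starting with two non-adjacent vertices that must differ in any 2-colouring:

```python
# Edges of the 6-cycle ABDFECA from part 1.a.
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "E"), ("D", "F"), ("E", "F")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def greedy_colouring(order):
    colour = {}
    for v in order:
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:          # smallest colour not used by an already-coloured neighbour
            c += 1
        colour[v] = c
    return colour

good = greedy_colouring("ABCDEF")  # finds an optimal 2-colouring
bad = greedy_colouring("AFBDCE")   # A and F get the same colour first, forcing a 3rd colour
```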
| 703
|
question answering
|
Is there an online preprocessing algorithm for Range Minimum Queries (RMQ)?
|
https://cs.stackexchange.com/questions/130850/is-there-an-online-preprocessing-algorithm-for-range-minimum-queries-rmq
|
<p>Is there a linear-time online version of the preprocessing RMQ algorithm? That is, an algorithm that allows updating the data structure when appending additional elements at the end of the input array in O(1) (worst-case or amortized) time per element, while still allowing arbitrary queries to be answered in constant time?</p>
<p>I am aware of the <a href="http://wcipeg.com/wiki/Sliding_range_minimum_query" rel="nofollow noreferrer">sliding queries algorithm</a>, which is online by nature but restricts the type of allowed queries. I've also seen <a href="https://cs.stackexchange.com/questions/106002/lower-bound-on-online-range-minimum-query-with-element-value-modification">this</a> more general question, which is still unanswered by the time of writing this post.</p>
| 704
|
|
question answering
|
Could AI be used to detect when a human is picking survey response options randomly?
|
https://cs.stackexchange.com/questions/136928/could-ai-be-used-to-detect-when-a-human-is-picking-survey-response-options-rando
|
<p>Context: I am a clinical psych researcher dabbling in machine learning.</p>
<p>Humans cannot be truly random. Therefore, could machine learning be used to analyze a string of numbers and determine the probability that said string was generated by a human or by a computer? Taking it a step further, could machine learning be used to look at survey responses and determine the likelihood as to whether the participant was answering questions honestly, or whether they "Christmas-tree'd" either part of or all of the survey? If so, would this be dependent on the specific questions that each survey item asked, or simply on the type of survey question (e.g., Likert-type, sliding scale, ruler, yes/no or true/false questions, free response, etc.)?</p>
|
<p>AI is probably not the best tool for this job. Several classical techniques in survey design include:</p>
<ul>
<li><p>Consistency check: ask the same question in several ways, spread out across the survey, and check if they've answered consistently.</p>
</li>
<li><p>Open-ended questions: ask an open-ended question, see if they write nonsense or the bare minimum.</p>
</li>
<li><p>Attention check questions: e.g., "What is your favorite color? Regardless of your favorite color, please pick the second option." Beware that these are controversial and have pros and cons.</p>
</li>
<li><p>Recruitment: recruit subjects who are less likely to try to cheat you. Pay them a reasonable wage for their time.</p>
</li>
</ul>
<p>My experience is that it is easy to be overly worried about respondents answering dishonestly; if you follow basic good practices, the overwhelming majority of respondents will try to be helpful and won't try to cheat you.</p>
| 705
|
question answering
|
Is there a concept of probabilistic quantum computers?
|
https://cs.stackexchange.com/questions/136178/is-there-a-concept-of-probabilistic-quantum-computers
|
<p>In answer to <a href="https://cstheory.stackexchange.com/q/48527/61557">my question</a>, <a href="https://cstheory.stackexchange.com/users/3532/yonatan-n">Yonatan N</a> made a statement from which it follows that there are computable functions whose quantum time complexity is strictly above polynomial.</p>
<p>According to <a href="https://www.quora.com/Are-quantum-computers-really-probabilistic-rather-than-deterministic" rel="nofollow noreferrer">a Quora answer</a>:</p>
<blockquote>
<p>Quantum computation is fully deterministic. A given computation applied to a given starting state will always produce the same final state, every time. What is potentially non-deterministic is extracting the output in classical terms.</p>
</blockquote>
<p>So it looks like it makes sense to introduce probabilistic quantum computation (as opposed to "fully deterministic"). Does it make sense? Does such a thing as probabilistic quantum computation exist in physics?</p>
<p>And perhaps, for probabilistic quantum computation, there is no known proof that there are computable functions of (probabilistic) quantum time complexity strictly above polynomial, is there?</p>
|
<p>It is true that the unitary gates used in quantum algorithms (and indeed any unitary evolution in quantum mechanics generally) are deterministic, and that measurements are the only non-deterministic elements in a quantum algorithm (and indeed in quantum mechanics generally). However, it is not true that measurement is always the final step in a quantum algorithm; see for example my answer to <a href="https://quantumcomputing.stackexchange.com/questions/15349/are-there-any-algorithms-that-take-measurements-in-an-intermediate-step">this question</a> for a few prominent instances of the use of intermediate measurements. The misconception might originate in the <a href="https://en.wikipedia.org/wiki/Deferred_Measurement_Principle" rel="noreferrer"><em>principle of deferred measurement</em></a>, which says that any quantum algorithm with intermediate measurements is equivalent to one where all measurements are moved to the final step. In particular, the principle enables us to replace classical control of quantum gates based on results of intermediate measurements with quantum controlled gates.</p>
<p>Returning to the question of probabilistic quantum computation, note that any probabilistic quantum algorithm (understood here as one that chooses gates to apply based on random bits) can be simulated by a quantum algorithm with intermediate measurements. This follows from the fact that measurement of the state <span class="math-container">$|+\rangle = (|0\rangle + |1\rangle)/\sqrt{2}$</span> in the computational basis generates a random bit which can subsequently be used to make random choices about which quantum gates to apply. By the principle of deferred measurement we can turn such a probabilistic quantum algorithm into a quantum algorithm where all measurements are terminal. In conclusion, probabilistic quantum computing is subsumed by regular quantum computing. This is why no distinction is generally made between probabilistic and deterministic quantum algorithms.</p>
| 706
|
question answering
|
Looking for interesting unanswered questions within complexity theory for a project
|
https://cs.stackexchange.com/questions/48552/looking-for-interesting-unanswered-questions-within-complexity-theory-for-a-proj
|
<p>I'm looking for interesting open questions in complexity theory that someone with an undergraduate degree in math and comp/sci could theoretically tackle. I have strong interest in the polynomial hierarchy, and the study of probabilistic classes like RP, co-RP, ZPP, BPP, and also their logarithmic counterparts. I also have some interest in quantum computing complexity theory, but I'm afraid my knowledge of quantum isn't that strong. (Only the basics from my modern physics class, which included some quantum). Important: must be new research or new expansion on previous research, a question that either hasn't been asked or hasn't been answered successfully that I might be able to say something interesting about, even if I don't ultimately solve it. I'm willing to learn any necessary programming languages/programs/methods/etc to the end of answering this question (or trying to)! </p>
<p>Note: this is for a senior year undergraduate thesis in math / comp sci. </p>
<p>Thanks for any suggestions!</p>
<p>Edit: An example might be to analyze the complexity of a particular problem that hasn't been (formally) analyzed yet but may yield interesting results.</p>
|
<p>Do something interesting with <a href="https://www.cis.upenn.edu/~alur/nw.html" rel="nofollow noreferrer">visibly</a> <a href="http://madhu.cs.illinois.edu/vpa/" rel="nofollow noreferrer">pushdown languages</a>. They are a relatively recent topic with many useful practical applications. For example, I would find it interesting to better understand the role of synchronization here, compared to the role of synchronization in the <a href="http://rads.stackoverflow.com/amzn/click/052188831X" rel="nofollow noreferrer">theory of codes</a>. Already the simplest case could be interesting, where the visibly pushdown language just distinguishes between inner symbols and separator symbols, and the considered codes are just <a href="https://en.wikipedia.org/wiki/Prefix_code#Related_concepts" rel="nofollow noreferrer">bifix codes</a>.</p>
<p>David Richerby is right that this question invites opinion-based answers. The origins of the opinions voiced in this answer can be found <a href="https://cs.stackexchange.com/questions/24574/what-are-appropriate-isomorphisms-between-formal-languages/43564#43564">here</a>.</p>
| 707
|
question answering
|
A question about Fleury's algorithm
|
https://cs.stackexchange.com/questions/113122/a-question-about-fleurys-algorithm
|
<p>The following is the Problem 1.4 in [1]:</p>
<p><strong>Finding an Eulerian path.</strong> Show that if a connected graph has two vertices of odd degree and we start at one of them, Fleury's algorithm will produce an Eulerian path, and that if all vertices have even degree, it (Fleury's algorithm) will produce an Eulerian cycle no matter where we start.</p>
<p>Reference</p>
<p>[1] C. Moore and S. Mertens, <em>The Nature of Computation</em>, Oxford University Press, 2015.</p>
<hr>
<p>I have tried to answer this question for a long time, but I don't have any idea. By the way, this question is not my homework, I am just interested in solving this question.</p>
| 708
|
|
question answering
|
Showing that tournament sort requires O(n log n) comparisons
|
https://cs.stackexchange.com/questions/29900/showing-that-tournament-sort-requrires-on-log-n-comparisons
|
<p>I wish I could think of a better way to word my question. Maybe someone here could offer a suggestion for that as well.</p>
<p>On to my question. Before I do: this is a class question that has been asked, answered, and considered closed; however, I'm struggling to accept the answer. For this reason, I'm here hoping someone can word it in a way that I can understand.</p>
<p>The problem is as follows:</p>
<pre><code>Show that if n is a power of 2, tournament sort requires O(n lg n) comparisons.
</code></pre>
<p>I refer to a rooted tree graph of six levels (0-5) with the number of vertices doubling at each level: i.e., 1, 2, 4, 8, 16, 32. I see a pattern very similar to binary. I'm not implying the problem is binary-related, but it does involve powers of 2 and I'm very comfortable with binary.</p>
<p>Using the fact that 2^5 = 32, n = 32; therefore, n is a power of 2. No argument here. Now I read the problem to say that 32 log 32 (n log n) will give the number of comparisons. The aide answering my question said "the calculated results of the n log n were immaterial. I just needed to know there were 5 levels down, and therefore 5 levels up." Additionally, he kept referring to 5 log 5, from which I still don't get a result that makes sense to me.</p>
<p>As I study the question, I read it to say that n log n should give me the number of comparisons, based on n. I cannot make that happen with the known values.</p>
<p>Can someone please help me to follow this better?</p>
|
<p>Initially you place the elements you want to sort in the leaves of the tournament tree. Then you fill all the internal nodes with the bigger of the two elements in their respective children. This takes $n-1$ comparisons.</p>
<p>After that, you have the largest element in the root. So you can remove it and place it in the output. Now all the comparisons where this element was involved have to be redone. This is one comparison per level of the tree, making $(\log n) -1$ comparisons (-1, since on the lowest level, there is only one candidate element left, so you don't need to compare.)</p>
<p>Now the second largest element is at the top and you can repeat the procedure. And so on. In the end you will have made up to $\log n$, i.e. $O(\log n)$ comparisons for each of the $n$ elements.</p>
<p>Summing up, we have $n-1 + n\cdot O(\log n)$ comparisons. By the rules of $O$-notation, that is in $O(n\log n)$.</p>
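The count described above can be checked with a Python sketch of tournament sort (my own illustrative implementation, assuming n is a power of two). Unlike the answer, it also replays the comparison at the lowest level, so its total is exactly (n-1) + n·log n comparisons, which is still O(n log n):

```python
import random

def tournament_sort(items):
    """Tournament sort for len(items) a power of two.
    Returns (elements in descending order, number of comparisons made)."""
    n = len(items)
    vals = list(items)
    tree = [0] * (2 * n)              # tree[n + i] holds leaf i; tree[1] is the root
    comparisons = 0

    def winner(a, b):
        nonlocal comparisons
        comparisons += 1
        return a if vals[a] >= vals[b] else b

    for i in range(n):
        tree[n + i] = i
    for node in range(n - 1, 0, -1):  # building the tree: n - 1 comparisons
        tree[node] = winner(tree[2 * node], tree[2 * node + 1])

    out = []
    for _ in range(n):
        w = tree[1]                   # current largest element sits at the root
        out.append(vals[w])
        vals[w] = float("-inf")       # remove the winner
        node = (n + w) // 2
        while node >= 1:              # redo one comparison per level: log2(n) of them
            tree[node] = winner(tree[2 * node], tree[2 * node + 1])
            node //= 2

    return out, comparisons

data = list(range(32))
random.shuffle(data)
out, comps = tournament_sort(data)    # n = 32: 31 build + 32 * 5 replay comparisons
```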
| 709
|
question answering
|
Multiple knapsack problem with equal profit and different weight
|
https://cs.stackexchange.com/questions/109775/multiple-knapsack-problem-with-equal-profit-and-different-weight
|
<p>I am doing research on the load-balancing problem in 5G systems, but I am not sure whether my problem is NP-complete.</p>
<p>The problem is:</p>
<ul>
<li>given a set of n items and a set of m knapsacks</li>
<li>the capacities of the knapsacks are equal</li>
<li>the weight of item j in knapsack i is w[i][j]; that is, the weight of an item can differ from knapsack to knapsack</li>
<li>all items have equal profit</li>
</ul>
<p>I am not trying to fit all items into the fewest knapsacks, as in the bin packing problem.
I have seen some similar questions answered, but none is identical to this case.
Here, the goal is to pack as many items as possible into the m knapsacks.
Is this problem NP-complete?</p>
|
<p>This problem can be shown to be NP-complete via reduction from <a href="https://en.wikipedia.org/wiki/Partition_problem" rel="nofollow noreferrer">PARTITION</a>. Simply take <span class="math-container">$m=2$</span>, the weights of each item to be the same across both knapsacks, and the capacities of each knapsack to be half the total weight across all items.</p>
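The reduction can be sketched in a few lines of Python (function and variable names are mine, for illustration only): all n items fit into the two knapsacks iff the weights can be split into two halves of equal sum.

```python
def partition_to_knapsacks(weights):
    """Map a PARTITION instance to the question's problem: m = 2 knapsacks,
    equal capacities of half the total weight, and each item weighing the
    same in both knapsacks."""
    total = sum(weights)
    if total % 2 != 0:
        return None                            # trivially a "no" instance of PARTITION
    m = 2
    capacity = total // 2
    w = [list(weights) for _ in range(m)]      # w[i][j]: weight of item j in knapsack i
    return m, capacity, w

m, cap, w = partition_to_knapsacks([3, 1, 1, 2, 2, 1])   # total weight 10
```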
| 710
|
question answering
|
Question regarding $O(n^2)$ efficiency
|
https://cs.stackexchange.com/questions/87328/question-regarding-on2-efficiency
|
<p>I'm going through a video of EDX course which talks about Big O notation. At the end of the video they have some questions but the <span class="math-container">$O(n^2)$</span> answer is confusing me. It feels like a mistake, but I just want to make sure.</p>
<p>The question is :</p>
<blockquote>
<p>Imagine that we have a data set of 10 items. We run an algorithm on that data set, and it performs 10 operations.<br />
Now imagine that we doubled the size of the data set to 20 items. Approximately how many operations might now be required if the algorithm is of...</p>
<ul>
<li><p>Linear order? 20</p>
</li>
<li><p>Constant Order? 10</p>
</li>
<li><p>Quadratic Order? 40</p>
</li>
</ul>
</blockquote>
<p>I don't understand why the answer is 40 if it's quadratic.</p>
<p>At first I thought it would be 400, because $n^2 = 20^2$ is 400. But the drop-down answer menu doesn't have 400. The highest it has is 100.</p>
<p>So I thought it might be 100, because if 10 items take 100 operations, then increasing by 10 would increase by 100. However, 100 seems to be wrong as well.</p>
<p>So why is the answer 40?</p>
<p>The course in question is here:</p>
<p><a href="https://www.edx.org/course/introduction-computing-using-python-gtx-cs1301x" rel="nofollow noreferrer">https://www.edx.org/course/introduction-computing-using-python-gtx-cs1301x</a>
Chapter 5.2</p>
|
<p>Say the time spent is some constant factor $c$ times $n^2$. Then we have:</p>
<p>$$c \cdot 10^2 = 10$$
$$c = 0.1$$</p>
<p>Thus $c\cdot 20^2 = 40$.</p>
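The same fit can be checked numerically with a throwaway Python sketch:

```python
# Fit the constant from the first data point (10 items -> 10 operations),
# then predict the operation count at n = 20 under each growth order.
c_quad = 10 / 10**2      # quadratic model c * n^2  =>  c = 0.1
c_lin = 10 / 10          # linear model c * n       =>  c = 1

assert c_quad * 20**2 == 40   # quadratic order: 40 operations
assert c_lin * 20 == 20       # linear order: 20 operations
```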
| 711
|
question answering
|
Algorithm Selection for Classification problem
|
https://cs.stackexchange.com/questions/72018/algorithm-selection-for-classification-problem
|
<p>I've been working on developing a product-selection network for my workplace. I work with lots of chemicals, and the clients don't always know what they want/need, so most of the time I have to ask a bunch of questions, collect useful (and ignore useless) information, then make a selection from there. Discussions take place over the phone.</p>
<p>Ideal situation in terms of information collection/flow:</p>
<ol>
<li>I ask a probing question which specifies which feature I am referring to.</li>
<li>The client answers the question and a speech recogniser converts voice to text, filling in the feature input.</li>
<li>A text summariser searches the feature and reduces it down to specific key words. For example -- Me: "What kind of application are you looking to perform?" Client: "I'm wanting to adhere two pieces of wood together." Feature summarised to: adhere, wood.</li>
<li>Once the feature vector has enough information, the network recommends the most suitable product.</li>
</ol>
<p>Problems:</p>
<ol>
<li>Clients tend to waffle and give useless information, so the network will need lots of training data.</li>
<li>Once a question is asked, the client may not directly answer it and may incidentally answer another feature question.</li>
</ol>
<p>I would think the logical place to start would be a speech-recognition RNN -- I have written a weak TensorFlow one, but I think I'll just look to tap into Google's cloud speech recognition API here. This is where I get stuck: should I just use a simple forward/back-propagation network from here and treat it as a classification problem, or is there another way to do it?</p>
<p>Any direction pointers would be greatly appreciated.</p>
<p>Kind Regards, Andy</p>
|
<p>I don't think existing state-of-the-art technology will be adequate to solve this task effectively. The current state of knowledge in NLP and AI probably isn't good enough to build something that will work well in practice. Instead, I think you'll need to use humans. Perhaps you can hire someone and train them -- that will probably be both cheaper and more effective than anything feasible with AI today.</p>
<p>(It seems impractical to obtain the large training sets typically needed for deep learning. Also, your task requires deep understanding of spoken text at a semantic level, which is currently hard. Finally, your task requires domain knowledge about the kinds of application tasks that clients are likely to ask about.)</p>
| 712
|
question answering
|
'if' and 'while' statements within a SIMD architecture and its memory architecture
|
https://cs.stackexchange.com/questions/86154/if-and-while-statements-within-a-simd-architecture-and-its-memory-architectu
|
<p>I am doing some past exam papers and I found two interesting questions which I cannot answer due to them not being extensively covered in the lecture notes given to me by my lecturer. </p>
<p>The questions go like this:</p>
<blockquote>
<p><strong>"Explain the mechanism a SIMD computer uses to support conditional statements like if and while."</strong></p>
</blockquote>
<p>SIMD is essentially a class of parallel computers in Flynn's taxonomy, describing computers with multiple processing elements that perform the same operation on multiple data points simultaneously. SIMD is also referred to as vector processing, which leads me to believe that I should have all my computations handle vectors as input rather than anything else -- or am I thinking too much in a 'sequential' manner?</p>
<p>and:</p>
<blockquote>
<p><strong>"Would you regard SIMD as a shared or a distributed memory architecture? Explain your choice."</strong></p>
</blockquote>
<p>It is a shared memory architecture, as it uses shared data and instruction pools from which it performs its processing. I am struggling with answering this question as I cannot come up with a proper way of expressing myself in order to answer it.</p>
| 713
|
|
question answering
|
How can the 4-bit page number and 8-bit offset as well as the page table be used to a answer a question like, "how big is each page"? etc
|
https://cs.stackexchange.com/questions/123912/how-can-the-4-bit-page-number-and-8-bit-offset-as-well-as-the-page-table-be-used
|
<p>No more information required. I would just like to know how to use the information provided.</p>
<p><a href="https://i.sstatic.net/xIOZk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xIOZk.png" alt="enter image description here"></a></p>
|
<p>4 bits of page number + 8 bits of offset = 12 address bits: A[11:0]</p>
<ul>
<li>A[7:0] selects an offset within a page</li>
<li>A[11:8] selects the page</li>
<li>1792 = 7*256 : A=0111_00000000</li>
<li>2304 = 9*256 : A=1001_00000000</li>
<li>2816 = 11*256 : A=1011_00000000</li>
<li>1024 = 4*256 : A=0100_00000000</li>
</ul>
<p>What is the problem? You get 8 address bits within each page. So what is the page size?</p>
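The split of a 12-bit address into page number and offset can be sketched in Python (an illustration of the bit arithmetic only):

```python
PAGE_BITS = 8                                # A[7:0]: offset within a page

def split(addr):
    page = addr >> PAGE_BITS                 # A[11:8]: page number
    offset = addr & ((1 << PAGE_BITS) - 1)   # A[7:0]: offset
    return page, offset

# Example addresses that sit exactly on page boundaries:
assert split(1792) == (7, 0)
assert split(2304) == (9, 0)
assert split(2816) == (11, 0)
assert split(1024) == (4, 0)

page_size = 1 << PAGE_BITS                   # 2^8 = 256 bytes per page
```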
| 714
|
question answering
|
SAT solvers for use in $P^{NP}$ and $NP^{NP}$
|
https://cs.stackexchange.com/questions/139188/sat-solvers-for-use-in-pnp-and-npnp
|
<p>The original question I am answering:</p>
<blockquote>
<p>Can you use SAT solvers to solve problems complete in <span class="math-container">$\Sigma_2^P,\Pi_2^P,\Delta_2^P$</span>?</p>
</blockquote>
<p>My first thought:</p>
<ul>
<li>Venn Diagrams show that the PH encloses NP, but does not equal NP. Therefore, there should be some problems that cannot be solvable using SAT?</li>
</ul>
<p>My second thought:</p>
<ul>
<li>To be able to use a SAT solver on one of those PH classes, it would imply that they are reducible to SAT, and so <span class="math-container">$\Sigma_2^P = \Pi_2^P = \Delta_2^P = \mathsf{NP}$</span>.</li>
</ul>
<p>This would imply that PH collapses to NP, which we know not to be true.</p>
<p>Are these thoughts correct?</p>
|
<p>We don't know that PH doesn't collapse to NP. We don't even know that PH doesn't collapse all the way to P.</p>
<p>The best you can say is that you can use SAT solvers to solve problems in one of these classes iff PH collapses to NP.</p>
| 715
|
question answering
|
Comparative study between Deep neural nets and Bayesian Networks
|
https://cs.stackexchange.com/questions/60390/comparative-study-between-deep-neural-nets-and-bayesian-networks
|
<p>Is there any comparative study that showcases the powers of Bayesian Networks and Deep learning in their respective favorable setup and how they compare?</p>
<p>I tried going through blogs but couldn't find any experimental study where the respective models were described based on an example.</p>
<p>Rationale for asking this question: I am planning to work on Bayesian networks, and I know they are great for inference and answering complex queries. But I couldn't find any convincing experimental results where they seemed more favourable compared to other sophisticated machine learning tools like deep neural nets or convolutional neural nets.</p>
|
<p>They're not directly comparable. They do different things. They solve different problems.</p>
<p>A Bayesian network is a probabilistic model of the relationship between multiple random variables. It is a <a href="https://en.wikipedia.org/wiki/Generative_model" rel="nofollow">generative model</a>. It builds a model of the joint probability distribution between multiple random variables. It typically requires some priors or assumptions about the structure of the joint distribution. While it could be used for classification, it normally isn't.</p>
<p>Deep learning is typically used for classification: supervised learning. It is a <a href="https://en.wikipedia.org/wiki/Discriminative_model" rel="nofollow">discriminative model</a>. It does not try to model/estimate the joint probability distribution between multiple random variables. It typically does not require you to specify priors or assumptions about the structure of the joint distribution.</p>
| 716
|
question answering
|
return a key of a node with maximum value within a range of keys in B+ tree
|
https://cs.stackexchange.com/questions/86155/return-a-key-of-a-node-with-maximum-value-within-a-range-of-keys-in-b-tree
|
<p>I've been asked a question about B+ trees.</p>
<p><strong>The question is:</strong> Suppose we have objects of the following type:</p>
<pre><code>class Obj {
private:
    int value;
    int key;
public:
    Obj(int uniq_key, int value);
};
</code></pre>
<p>and I am creating a generic B+ tree which sorts the objects in its leaf nodes by keys and not by values. (The keys are unique; the values aren't.)
Now I want to return the key of the node containing the <strong>maximum value</strong> within a range of keys.
For example:</p>
<p>If I have the following objects inserted in my B+ tree:</p>
<p><code>Obj(1,10), Obj(2,5), Obj(3,0)</code></p>
<p>then by calling this method I will get the return value <strong>1</strong>.</p>
<p>I thought of having an extra rank tree with the same data stored in it and a pointer from each leaf node in the B+ tree to its equivalent node in the rank tree, but I think there is a solution that involves adding extra data to the nodes of the B+ tree which can solve this issue.</p>
<p><strong>P.S.</strong> I don't know if this post is a duplicate; I couldn't find anything else answering this kind of question.</p>
<p>Thank you,
Michael</p>
|
<p>Augment each node to contain the key of the node with maximum value, among all nodes that are underneath it (among all of its descendants). You can easily maintain/update this augmented information each time you modify the tree, by using the fact that the maximum for any node can be recomputed using just the information in its direct children (you don't need to look at its grandchildren etc.).</p>
<p>Now any range can be expressed as the union of $O(\log n)$ subtrees. In other words, you can find $O(\log n)$ nodes such that their descendants cover the range exactly. So, the max value of any node within that range can be obtained by looking at the values in those nodes, and taking the max of them. In this way each query can be answered in $O(\log n)$ time.</p>
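A minimal sketch of the augmentation (the `Node` class here is a hypothetical stand-in for real B+-tree nodes; only the maintenance of the augmented field is shown, not splitting or merging):

```python
class Node:
    # A stand-in for a B+-tree node, augmented with the (key, value) of the
    # maximum value stored anywhere in its subtree.
    def __init__(self, entries=None, children=None):
        self.entries = entries or []     # leaf: list of (key, value) pairs
        self.children = children or []   # internal node: child subtrees
        self.max_key = self.max_val = None
        self.update_max()

    def update_max(self):
        # The augmented field depends only on the direct children (or the
        # leaf's own entries), so it can be refreshed bottom-up along the
        # root-to-leaf path after every insert or delete.
        if self.children:
            best = max(self.children, key=lambda c: c.max_val)
            self.max_key, self.max_val = best.max_key, best.max_val
        elif self.entries:
            self.max_key, self.max_val = max(self.entries, key=lambda e: e[1])

# The example from the question: (1,10), (2,5), (3,0) -> answer key is 1.
root = Node(children=[Node(entries=[(1, 10), (2, 5)]), Node(entries=[(3, 0)])])
```

A range query then takes the maximum of the augmented fields of the $O(\log n)$ covering nodes, descending into the two partially covered subtrees at the range's ends.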
| 717
|
question answering
|
decider for a question not clear
|
https://cs.stackexchange.com/questions/127098/decider-for-a-question-not-clear
|
<p>This question was asked and answered but I cannot understand the solution.</p>
<ol>
<li>Why is it sufficient to test all strings of |Q| + 1 length?</li>
<li>Why should special state q be found?</li>
</ol>
<p>the original question:
<a href="https://cs.stackexchange.com/questions/43892/show-that-the-set-of-all-tms-that-move-only-to-the-right-and-loop-for-some-input?rq=1">Show that the set of all TMs that move only to the right and loop for some input is decidable</a></p>
<p>L2={ M | M is a TM and there exists an input w such that in the computation of M(w) the head only moves right and M never stops}</p>
|
<p>The pigeonhole principle. If you go through <span class="math-container">$|Q|+1$</span> states, then there must be a state you have visited twice already. Since the head only moves right, once it passes the end of the input it reads only the blank symbol <span class="math-container">$\sqcup$</span>; repeating a state while reading blanks means we are stuck in a loop and the machine won't halt.</p>
| 718
|
question answering
|
A question about the paper "Ensembling Ten Math Information Retrieval Systems"
|
https://cs.stackexchange.com/questions/144236/a-question-about-the-paper-ensembling-ten-math-information-retrieval-systems
|
<p>My question is about the paper <a href="http://ceur-ws.org/Vol-2936/paper-06.pdf" rel="nofollow noreferrer">Ensembling Ten Math Information Retrieval Systems</a>.</p>
<p>I am interested in the task of finding answers.</p>
<p>Which of the ten system are able to answer questions using <strong>only</strong> dot products?</p>
<p>(I think CompuBERT is one of them, <a href="https://github.com/MIR-MU/CompuBERT/blob/master/sentence-transformers/sentence_transformers/question_responder.py" rel="nofollow noreferrer">look</a>)</p>
|
<p>All ten systems are able to retrieve answers using only the dot product:</p>
<ul>
<li>All eight MSM systems (MG, PZ, MH, LM, MP, JK, AM, and VS) use <a href="https://en.wikipedia.org/wiki/Vector_space_model" rel="nofollow noreferrer">the standard vector space model (VSM)</a> with either the BM25 or TF-IDF weighting. The document similarity measure in the VSM can be computed as a <em>sparse</em> dot product for a pair of documents and as a <em>sparse</em> matrix product for a pair of corpora.</li>
<li>The MIRMU – SCM system uses <a href="https://arxiv.org/abs/1808.09407v1" rel="nofollow noreferrer">the soft vector space model (soft VSM)</a>. The document similarity measure in the soft VSM can also be computed as a <em>sparse</em> dot product for a pair of documents, see <a href="https://arxiv.org/pdf/1808.09407v1.pdf#page=3" rel="nofollow noreferrer">Theorem 4.2</a>, although <a href="https://github.com/RaRe-Technologies/gensim/blob/fe8e2042f0c8c16abc502220f5a4f88c72d2b31d/gensim/similarities/termsim.py#L580" rel="nofollow noreferrer">our implementation</a> uses the following formula: <span class="math-container">$x^T\cdot S\cdot y.$</span></li>
<li>The MIRMU – CompuBERT system produces 768-dimensional document embeddings. The document similarity measure is the <em>dense</em> dot product, see <a href="https://drive.google.com/drive/folders/1bxYwWzDX3z81S4TwUaTvqZBHtiMOngez" rel="nofollow noreferrer">method <code>TrainedIRSystem.search()</code> in file <code>eval_arqmath.py</code></a>. Note that <a href="https://github.com/MIR-MU/CompuBERT" rel="nofollow noreferrer">your link in the original post</a> points to an outdated version of the system used in ARQMath 2020.</li>
</ul>
| 719
|
question answering
|
Binary Integer Programming question - what graph problem is represented
|
https://cs.stackexchange.com/questions/51377/binary-integer-programming-question-what-graph-problem-is-represented
|
<p>I'm dealing with a BIP question that represents a graph problem.
The goal is identifying the graph problem.</p>
<p>I've spent a lot of time trying to solve this question but I couldn't find the answer.</p>
<p>All I'm given is the set of constraints and the objective function:</p>
<p>I'd really appreciate your help, no full answer needed, just a direction.</p>
<p>$$
\begin{align*}
\min & \sum_{ijk} z_{ijk} c_{ij} \\
\text{s.t.}\; & \sum_j x_{ij} = 1 \qquad \forall i=0\ldots n-1 \\
& \sum_i x_{ij} = 1 \qquad \forall j=0\ldots n-1 \\
& z_{ijk} \geq x_{ik} + x_{j(k+1\,\mathrm{mod}\,n)} \qquad \forall i,j,k=0\ldots n-1 \\
& x_{ij},z_{ijk} \in \{0,1\}
\end{align*}
$$</p>
|
<p>This answer assumes that $c_{ij} \geq 0$.</p>
<p>The first two sets of equations guarantee that $x_{ij}$ is a permutation matrix. It defines a permutation $\pi$ on $\{0,\ldots,n-1\}$ in the following way: $\pi(j) = i$ if $x_{ij} = 1$.</p>
<p>The set of inequalities is a logical implication: if $x_{ik}=x_{j(k+1)}=1$ then $z_{ijk} = 1$. That is, if $\pi(k) = i$ and $\pi(k+1) = j$ then $z_{ijk}=1$.</p>
<p>Since $c_{ij} \geq 0$ and the objective is to minimize $\sum_{ijk} z_{ijk} c_{ij}$, we want to have $z_{ijk} = 0$ unless we are forced to take $z_{ijk} = 1$. Therefore $z_{ijk} = x_{ik} \land x_{j(k+1)}$. This means that the objective function is
$$
\min_\pi \sum_{k=0}^{n-1} c_{\pi(k)\pi(k+1)}.
$$
This is the problem known as <em>minimum directed Hamiltonian circuit</em>.</p>
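As a sanity check of the final objective, the cost of the circuit induced by a permutation can be computed directly (the 3-city cost matrix below is made up for illustration):

```python
def tour_cost(c, pi):
    # Objective value for permutation pi: the cost of the directed circuit
    # pi(0) -> pi(1) -> ... -> pi(n-1) -> pi(0).
    n = len(pi)
    return sum(c[pi[k]][pi[(k + 1) % n]] for k in range(n))

# Hypothetical 3-city cost matrix; the circuit 0 -> 1 -> 2 -> 0 costs 1+2+3.
c = [[0, 1, 9],
     [9, 0, 2],
     [3, 9, 0]]
```

Minimizing `tour_cost` over all permutations is exactly the minimum directed Hamiltonian circuit problem.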
| 720
|
question answering
|
Go Back N ARQ Question
|
https://cs.stackexchange.com/questions/59724/go-back-n-arq-question
|
<p>I am a computer science undergraduate student, and was going through some Go Back N ARQ (Computer Networking) videos on YouTube, and have a doubt about a question which, according to me, should have a different answer than the one the instructor in the video arrives at (given that no other comment in the comments section of the video raises the same doubt, I am pretty sure I have had some problem in understanding the protocol). I would be grateful for any help. The question goes as follows:</p>
<p>Given a connection oriented communication between two hosts that are following the Go back N protocol, with <code>sender's window = 3</code>, and assuming that every <code>5th</code> packet transmitted by the sender is lost (no acknowledgements are lost), what are the total number of transmissions required by the sender host? Assume that the packets to be sent are numbered <code>1-10</code>.</p>
<p>The sequence of transmissions by the sender that I am getting is (final answer = 16):</p>
<pre><code>1,2,3; 4,5,6; 5,6,7; 8,9,10; 8,9,10; 10
</code></pre>
<p>Meanwhile, the instructor in the video gets it like this (final answer = 18):</p>
<pre><code>1,2,3,4,5,6,7,5,6,7,8,9,7,8,9,10,9,10
</code></pre>
<p>I would appreciate it if someone could point out where I am going wrong in understanding the protocol.
Thanks!</p>
|
<p><code>1-2-3</code> is correctly sent and acknowledged so the sender's window is now over <code>4-5-6</code>.</p>
<p><code>4</code> is received correctly and so it is acknowledged. This makes the window (currently over <code>4-5-6</code>) slide to <code>5-6-7</code>. However, since <code>5</code> was lost, <code>5</code> will not be acknowledged. Therefore, despite <code>6</code> being received correctly, <code>6</code> will not be acknowledged either. Meanwhile the sender sends <code>7</code> because its window allows it to (it looks like you assumed the window sliding would not trigger the sender to send <code>7</code>). It is received correctly but the recipient doesn't acknowledge it because it's still waiting for <code>5</code> to arrive. The sender's timer goes off and therefore it sends <code>5-6-7</code> again... (etc.).</p>
<p>If you haven't seen it already, <a href="https://youtu.be/9BuaeEjIeQI" rel="noreferrer">this</a> animation is really helpful.</p>
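The key point above is that the window slides as soon as a cumulative ACK arrives, so the sender keeps transmitting new frames until the window is exhausted, and only then times out and goes back to the oldest unacknowledged frame. A minimal Python model of this (assuming instantaneous, never-lost cumulative ACKs, a timeout exactly when the window is exhausted, and a receiver that discards out-of-order frames; every 5th transmission is lost) reproduces the instructor's 18-transmission trace:

```python
def gbn_transmissions(n=10, window=3, lose_every=5):
    # Sender state: base = oldest unacked frame, nextseq = next frame to send.
    # Receiver state: expected = next in-order frame (out-of-order discarded).
    base = nextseq = expected = 1
    tx = 0
    log = []
    while base <= n:
        if nextseq < base + window and nextseq <= n:
            tx += 1
            log.append(nextseq)
            lost = (tx % lose_every == 0)
            if not lost and nextseq == expected:
                expected += 1        # delivered in order: cumulative ACK
                base = expected      # the ACK slides the window immediately
            nextseq += 1
        else:
            nextseq = base           # timeout: go back to oldest unacked
    return tx, log
```

This yields 18 transmissions with the log `1,2,3,4,5,6,7,5,6,7,8,9,7,8,9,10,9,10`, matching the instructor; it also shows why `7` is sent before the first timeout: the ACK for `4` slides the window to `5-6-7` even though `5` was lost.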
| 721
|
question answering
|
Simple question COQ
|
https://cs.stackexchange.com/questions/32707/simple-question-coq
|
<p>I'm a beginner in the Coq proof assistant, so sorry if my question is silly. I would like to prove properties of a mathematical object. For clarity I will describe an over-simplified version of my object. Intuitively, the object has
three sets A, B, C. The set A is of the form
$$A= \{(0,x_1), (0,x_2), \ldots, (0,x_n)\},$$ i.e., all pairs consist of the number zero and an arbitrary number. Analogously, the set B is of the form
$$B = \{(1,y_1), (1,y_2), \ldots, (1,y_m)\},$$ and the set $C$ is such that $C = A \cup B$.
For concreteness the sets $A,B,C$ can be defined as lists, if that is more convenient in Coq.</p>
<p>So the simplified object would be of the following form: </p>
<p>Object</p>
<p>A : Set of elements of the form (0,x) where x is some number </p>
<p>B : Set of elements of the form (1,y) where y is some number </p>
<p>C : Set such that C = A U B </p>
<p>The Condition that the structure must satisfy is:</p>
<p>If (0,a) belongs to A then (1,a) belongs to B.</p>
<p><strong>Questions:</strong></p>
<p>1) How do I define a type consisting of pairs in which the
first element is 0 and the second an arbitrary natural number?
(Obs: This was answered by @KonstantinWeitz but his answer received
a minus. Why wouldn't Konstantin's answer be satisfactory in Coq?)</p>
<p>2) How do I define the object above in Coq? I tried to do it with records,
but the problem is that I have no idea how to define a type as in question 1. </p>
<p>3) How do I impose the condition that this object is valid only if for
each (0,x_n) in A there is a (1,y_n) in B with y_n = x_n? And the condition
that $C = A \cup B$? </p>
|
<p>Set-theoretic thinking is creating trouble, as you are trying
to do things in non-Coq ways. Let me show you a solution which
works better, and then you can explain what your actual non-simplified
problem is -- we can probably optimize that one too.</p>
<p>If we have two lists <code>A</code> and <code>B</code> then we do not have to tag the elements
of the first list with 0 and the second one with 1. That is, rather
than having</p>
<blockquote>
<p><code>A = [(0,x_1), ..., (0,x_n)]</code> and <code>B = [(1,y_1), ..., (1, y_m)]</code></p>
</blockquote>
<p>we can equivalently have just</p>
<blockquote>
<p><code>A = [x_1, ..., x_n]</code> and <code>B = [y_1, ..., y_m]</code>.</p>
</blockquote>
<p>So, we are really trying to define:</p>
<blockquote>
<p><em>A pair of lists such that the elements of the first one are contained in the second one.</em></p>
</blockquote>
<p>We are going to use the standard library for lists (by the way
if you are simulating sets by using lists, you shouldn't).</p>
<pre><code>Require Import List.
(* We define what it means for the elements of list X to be
contained in list Y. *)
Definition contained_in {T} (X : list T) (Y : list T) :=
forall x, In x X -> In x Y.
(* Now we define our data structure. It is a Record with three fields. *)
Record MyLists := {
A : list nat;
B : list nat;
valid : contained_in A B
}.
(** For example, suppose we want to construct A = [1,2,3] and B = [2,3,3,1,5]. *)
Definition example : MyLists.
Proof.
(* We give Coq the fields we know about. *)
refine {| A := 1::2::3::nil ; B := 2::3::3::1::5::nil |}.
(* Coq tells us we also have to provide the 'valid' field. *)
(* We tell Coq to do it itself. *)
firstorder.
Defined.
</code></pre>
| 722
|
question answering
|
How would I simulate a network to explore the percolation threshold of a network connected by the knight's move?
|
https://cs.stackexchange.com/questions/47992/how-would-i-simulate-a-network-to-explore-the-percolation-threshold-of-a-network
|
<p>"If we consider the squares of an infinite chess board as nodes of our graph and consider each to be connected to the other eight squares that are a knight's move away from it what is the percolation threshold of this graph?"</p>
<p>Note: One way I have thought about this problem is to try to use vectors: we can think of a knight's move as a vector of the form $\langle\pm1,\pm2\rangle$ or $\langle\pm2,\pm1\rangle$ for the eight cases.</p>
<p>In answering this question I think I need to create a simulation. How would I create a program to simulate an infinite board, for instance, and remove random nodes to create a graph with a percolation value.</p>
|
<p>First of all, you don't simulate an infinite board. You simulate larger and larger boards, until the percolation threshold seems to stabilize. For a given size of board, you need to decide on what event signifies that percolation happens. One common option is that the top of the board is connected to the bottom of the board. For each probability $p$, you estimate the probability that this happens by doing many samplings. You then use binary search to estimate the probability $p_c$.</p>
<p>Here are some more details. Suppose you've decided on some board size. The first step is to compute all the edges, that is all pairs of vertices connected by a knight's move. Given a probability $p$, you run the following experiment many times. Put in each edge with probability $p$ independently. Then check (using DFS/BFS or equivalent) whether the top of the board is connected to the bottom of the board (that is, add a new "top" vertex connected to all vertices at the top of the board, add a similar "bottom" vertex, and check whether the two are connected). Do this many times, and estimate the probability $\theta(p)$ that "bottom" is connected to "top". Then use binary search on $p$ to find a value of $p$ such that $\theta(p) \approx 1/2$, say. This is your estimate for the critical probability.</p>
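The experiment described above can be sketched as follows (a hedged Python sketch; the function names and the top-to-bottom connectivity criterion are choices made here, not the only option): keep each knight's-move edge independently with probability $p$ and test with BFS whether the top row reaches the bottom row.

```python
import random
from collections import deque

KNIGHT = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]

def percolates(n, p, rng=random):
    # Keep each knight's-move edge on an n x n board with probability p,
    # then test whether the top row is connected to the bottom row.
    adj = {(r, c): [] for r in range(n) for c in range(n)}
    for (r, c) in adj:
        for dr, dc in KNIGHT:
            nr, nc = r + dr, c + dc
            # the (r,c) < (nr,nc) check gives each undirected edge one coin flip
            if (r, c) < (nr, nc) and 0 <= nr < n and 0 <= nc < n and rng.random() < p:
                adj[(r, c)].append((nr, nc))
                adj[(nr, nc)].append((r, c))
    seen = {(0, c) for c in range(n)}      # the virtual "top" vertex
    queue = deque(seen)
    while queue:
        v = queue.popleft()
        if v[0] == n - 1:                  # reached the "bottom"
            return True
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

def theta(n, p, trials=200, rng=random):
    # Monte Carlo estimate of the percolation probability at p.
    return sum(percolates(n, p, rng) for _ in range(trials)) / trials
```

Binary search on `p` for `theta(n, p) ≈ 1/2`, for increasing `n`, then gives the estimate of the critical probability described above.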
| 723
|
question answering
|
Half Clique Property question
|
https://cs.stackexchange.com/questions/159653/half-clique-property-question
|
<p>Hey I had this question and am stuck on part (b).</p>
<p>I don't see how it's possible to find a graph with 7 vertices and 15 edges that does <strong>not</strong> have the half-clique property. If there is a way, could someone share their thought process rather than the answer, as I would like to figure it out on my own.</p>
<p>Thanks a lot!</p>
<p><a href="https://i.sstatic.net/4qr0a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4qr0a.png" alt="enter image description here" /></a></p>
|
<p>Consider a clique <span class="math-container">$C$</span> with vertex-set <span class="math-container">$V = \{1,2,3,4,5,6,7\}$</span>. <span class="math-container">$C$</span> has <span class="math-container">$\binom{7}{2} = \frac{6 \cdot 7}{2} = 21$</span> edges.</p>
<p>For any <span class="math-container">$S \subseteq V$</span> with <span class="math-container">$|S| \ge \lceil |V|/2\rceil = 4$</span>, either:</p>
<ul>
<li><span class="math-container">$S$</span> contains at least two vertices in <span class="math-container">$\{1,2,3\}$</span>; or</li>
<li><span class="math-container">$S$</span> contains at least <span class="math-container">$2$</span> vertices in <span class="math-container">$\{4,5,6\}$</span>.</li>
</ul>
<p>From the above it should be easy to construct the desired graph.</p>
<blockquote class="spoiler">
<p> You can simply delete all the edges between vertices that are both in <span class="math-container">$\{1,2,3\}$</span> or both in <span class="math-container">$\{4,5,6\}$</span> to obtain a graph that has <span class="math-container">$21- 2 \cdot \binom{3}{2} = 21 - 2 \cdot 3 = 15$</span> edges and does not have the half-clique property.</p>
</blockquote>
| 724
|
question answering
|
Analyzing parallel performance question
|
https://cs.stackexchange.com/questions/139303/analyzing-parallel-performance-question
|
<p>I was reviewing for my CS class and came across this question and answer combo that didn't have any explanation why it was correct. I'm confused on how they got the answer:</p>
<blockquote>
<p>We have a system to which we can instantaneously add and remove cores
-- adding more cores never leads to slowdown from things like false sharing, thread overhead, context switching, etc</p>
<p>When the program foo() is executed to completion with a single core in
the system, it completes in 20 minutes. When foo() is run with a total
of three cores in the system, it completes in 10 minutes.</p>
<p>If 100% of foo() is parallelizable, with 3 cores it would take
20/3=6.66 minutes. Since it instead takes 10 minutes, what fraction of
foo() is parallelizable?</p>
</blockquote>
<p>ANSWER GIVEN: 0.75</p>
<blockquote>
<p>How many minutes would it take to execute foo on this magical system
as the number of cores approaches infinity?</p>
</blockquote>
<p>ANSWER GIVEN: 5</p>
<p>Could someone explain how the staff got these answers?</p>
|
<p>Suppose that the fraction of <code>foo()</code> that is parallelizable is <span class="math-container">$\alpha$</span>. Then the total execution time with <span class="math-container">$n$</span> cores is:</p>
<ul>
<li>the non-parallelizable execution, done by only one core: <span class="math-container">$(1- \alpha)\times 20$</span></li>
<li>the parallelizable execution: <span class="math-container">$\alpha \times 20 \times \frac{1}{n}$</span></li>
</ul>
<p>Then, for the first question, we have to consider that the number of cores is <span class="math-container">$3$</span>, and the total time is <span class="math-container">$10$</span>. The equation becomes:</p>
<p><span class="math-container">$(1-\alpha)\times 20 + \alpha\times \frac{20}{3} = 10\Leftrightarrow \alpha = 0.75$</span>.</p>
<p>For the second question, we assume the answer to the first question is known: <span class="math-container">$\alpha = 0.75$</span>, and we let <span class="math-container">$n \to \infty$</span>. The total execution time becomes:</p>
<p><span class="math-container">$0.25 \times 20 + 0.75\times \frac{20}{\infty} = 0.25 \times 20 = 5$</span>.</p>
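The two computations can be checked directly; a small Python sketch of the reasoning above:

```python
def amdahl_time(t1, alpha, n):
    # Serial part runs on one core; the parallel part is split across n cores.
    return (1 - alpha) * t1 + alpha * t1 / n

# Solve (1 - a)*20 + a*20/3 = 10 for a:
alpha = (20 - 10) / (20 - 20 / 3)      # 0.75
# As n -> infinity the parallel term vanishes, leaving the serial part:
limit = (1 - alpha) * 20               # 5 minutes
```

The `limit` line is the asymptote of `amdahl_time(20, alpha, n)` as `n` grows.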
| 725
|
question answering
|
Example of preservation failing in Java - follow up question
|
https://cs.stackexchange.com/questions/156781/example-of-preservation-failing-in-java-follow-up-question
|
<p>This is a <a href="https://cs.stackexchange.com/questions/156721/example-of-progress-and-preservation-failing-in-a-commonly-used-programming-lang">follow-up question to my previous question</a></p>
<p>I have been reading <a href="https://medium.com/hackernoon/java-is-unsound-28c84cb2b3f#.xs8voadvf" rel="nofollow noreferrer">this post</a> and it comes up with the following example showing how Java type system is unsound:</p>
<pre><code>interface IFoo<T> {
Number foo(T t);
}
class Foo<T extends Number> implements IFoo<T> {
public Number foo(T t) { return t; }
}
Number bar(IFoo<String> foos) { return foos.foo("NaN"); }
</code></pre>
<p>And in the same post it talks about how we can never instantiate such a <code>Foo<String></code>. But does it really prove that the type system is unsound?</p>
<p>Also, based on the <a href="https://cs.stackexchange.com/a/156767/9397">answer to the previous question</a>, to prove that type "Preservation" fails, we need to prove the type system is unsound. But this example can never be "evaluated" so that we could check whether the type is preserved after evaluation, because there is no such <code>Foo<String></code>. I am very confused.</p>
|
<p>The Java type system is strong. There are simply some errors that could, in theory, be reported at compile time that the Java Language Specification does not explicitly prohibit, and thus they are reported at run-time instead of compile time.</p>
<p>This really isn't that complicated, and it's all spelled out in the Java Language Specification and the Java Virtual Machine Specification. Please read those, then any of your remaining questions should be answerable.</p>
<p><a href="https://docs.oracle.com/javase/specs/" rel="nofollow noreferrer">https://docs.oracle.com/javase/specs/</a></p>
| 726
|
question answering
|
Are there any research papers rethinking browser architecture?
|
https://cs.stackexchange.com/questions/57152/are-there-any-research-papers-rethinking-browser-architecture
|
<p>I am interested in any research that reviews the state of affairs when it comes to browsers today, be it their concurrency models, their performance, or anything relevant to such topics. Specifically, I am interested in whether any efforts are being taken in academia to take on the shortcomings of browser design currently used in the wild.</p>
<p>[<strong>Update, some more context</strong>]: This question stemmed from a reading of the original Erlang paper, in which the descriptions provided for fault-tolerance and strong process isolation made me think of how browsers work (some do provide process/tab isolation, but this is inherently tied to their own implementations, most of which can only do process isolation that relies on OS primitives). So, another way of answering parts of my question is pointing me to any <em>implementations</em> that might have gone a different way, be it adopting functional languages or something similar. Now that I have clarified my question a bit more I found this link [1], which actually does not mention any other implementation but merely describes why C++ rules the browser field. Therefore I am still interested in any theoretical takes on the topic.</p>
<p>Perhaps I am asking in the wrong SE forum, in which case please advise me on where to post my question. </p>
<p>[1] <a href="https://softwareengineering.stackexchange.com/questions/41883/why-are-most-browsers-developed-in-c">https://softwareengineering.stackexchange.com/questions/41883/why-are-most-browsers-developed-in-c</a></p>
| 727
|
|
question answering
|
Finding the timestamps of processes implementing Lamport's clocks
|
https://cs.stackexchange.com/questions/126393/finding-the-timestamps-of-processes-implementing-lamports-clocks
|
<p>I have been asked this question, but don't know how to go about answering it. </p>
<p>Three processes, which are implementing Lamport's clocks, are running and a lot of events are taking place, including some messages being sent between the processes. The arrows and circles represent in-process events and messages being sent between processes. Assume all clocks start at 0 and time goes from left to right. Provide the logical timestamps associated with each event.</p>
<p><a href="https://i.sstatic.net/2DCk7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2DCk7.png" alt="image"></a></p>
<p>To my understanding, each circle means a +1 increase in the process's clock, and an arrow means the receiving process should take the sending process's time plus 1. Is this a correct understanding of the task or am I missing something?</p>
| 728
|
|
question answering
|
Coefficients in cost function in A-star
|
https://cs.stackexchange.com/questions/114842/coefficients-in-cost-function-in-a-star
|
<p>I'd like to expand on this question : </p>
<p><a href="https://stackoverflow.com/questions/52420788/why-does-the-a-star-algorithm-need-gn">https://stackoverflow.com/questions/52420788/why-does-the-a-star-algorithm-need-gn</a></p>
<p>Dijkstra's algorithm uses cost function <span class="math-container">$f(n) = g(n)$</span>
whereas A* uses cost function <span class="math-container">$f(n) = g(n) + h(n)$</span>, with <span class="math-container">$g(n)$</span> being the cost of the path from the start node to node <span class="math-container">$n$</span>, and <span class="math-container">$h(n)$</span> is a heuristic function that estimates the cost of the cheapest path from node <span class="math-container">$n$</span> to the goal.</p>
<p>It is clear from the linked question's answer that A* needs its <span class="math-container">$g(n)$</span> function in the cost function.
My question however is the following. Can one use the cost function :</p>
<p><span class="math-container">$f(n) = \alpha g(n) + (1-\alpha)h(n)$</span></p>
<p>for some alpha <span class="math-container">$0<\alpha<1$</span> ?</p>
<p>I ask because in some cases I observed it can be much faster to prioritize (through a coefficient) estimated cost over already traversed cost. I am not sure however if this still results in an optimal path?</p>
<p>EDIT : multiplying the heuristic <span class="math-container">$h(n)$</span> by some alpha <span class="math-container">$0<\alpha<1$</span> is allowed, since this operation still underestimates if <span class="math-container">$h(n)$</span> already did (which is necessary to obtain the resulting optimal path). I am more concerned about the multiplying of <span class="math-container">$g(n)$</span>.</p>
|
<p>For A* to get the optimal path it requires that <span class="math-container">$f(n) \leq g(goal)$</span>. In other words that the heuristic underestimates the cost from the node to the goal.</p>
<p>Multiplying a valid heuristic by <span class="math-container">$0 \lt\alpha\lt 1$</span> will not violate this requirement.</p>
<p>Multiplying <span class="math-container">$g(n)$</span> is not allowed because you can end up with <span class="math-container">$f(goal) = \alpha g(goal) < f(n)$</span> which would violate the requirement for getting the optimal path.</p>
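To see concretely why scaling down $g$ is dangerous, here is a small self-contained sketch (the tiny graph and heuristic values are made up for illustration): with $\alpha = 0.5$ the expansion order is identical to ordinary A* (both terms are scaled equally, which never reorders the priority queue) and the optimal cost 2 is found, while $\alpha = 0.1$ pops the goal via the direct edge of cost 3 first.

```python
import heapq

def astar(graph, h, start, goal, alpha=0.5):
    # Best-first search with priority f(n) = alpha*g(n) + (1-alpha)*h(n).
    frontier = [((1 - alpha) * h[start], 0, start)]
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g                      # return the true (unscaled) path cost
        if g > best_g.get(node, float('inf')):
            continue                      # stale queue entry
        for nxt, w in graph.get(node, []):
            g2 = g + w
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (alpha * g2 + (1 - alpha) * h[nxt], g2, nxt))
    return None

# Hypothetical graph: S->A->G costs 1+1=2 (optimal); S->G directly costs 3.
# h is admissible: h(S)=2 <= 2, h(A)=1 <= 1, h(G)=0.
graph = {'S': [('A', 1), ('G', 3)], 'A': [('G', 1)]}
h = {'S': 2, 'A': 1, 'G': 0}
```

With `alpha=0.1`, the direct edge to `G` gets priority $0.1 \cdot 3 = 0.3$, beating the cheaper path through `A`, so the search returns the suboptimal cost 3.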
| 729
|
question answering
|
A question on decidability
|
https://cs.stackexchange.com/questions/140104/a-question-on-decidability
|
<p>I have a homework question that is as follows:</p>
<p><em>L(P) is a language of ASCII input strings for which a given program, P, returns "yes". Is the set of all input strings P decidable, such that P is a decision program and L(P) is decidable?</em></p>
<p>My intuition leads me to believe that the set is, in fact, decidable but I am having a tough time proving my answer.</p>
<p>Would appreciate any help on this. Thanks</p>
|
<p>Let <span class="math-container">$L=\{\langle M \rangle \mid M \text{ is a TM such that } L(M)\in R\}$</span>, where <span class="math-container">$\langle M\rangle$</span> is the encoding of a TM <span class="math-container">$M$</span>, and <span class="math-container">$R$</span> is the set of all decidable languages. This is the language in question.</p>
<p>Notice that <span class="math-container">$R\neq \emptyset$</span> and also since there are languages not in <span class="math-container">$R$</span> (like the halting problem), then <span class="math-container">$R\neq RE$</span>. Simply put, <span class="math-container">$R$</span> is <em>not</em> trivial (obviously).</p>
<p>Apply Rice's theorem on the property <span class="math-container">$"R"$</span>, to directly get that <span class="math-container">$L$</span> is not decidable.</p>
| 730
|
question answering
|
in order of binary search tree
|
https://cs.stackexchange.com/questions/115218/in-order-of-binary-search-tree
|
<p>This is what I got for the in-order of the bst but it's wrong because I'm answering some questions about some successors of some of the letters and I got them wrong. so I'm wondering where in this in-order i've gone wrong? <a href="https://i.sstatic.net/VRRAB.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VRRAB.jpg" alt="enter image description here"></a></p>
<p>d, b, m, h, i, e, a, j, k, f, g, c</p>
<p>(Sorry if these questions aren't allowed here, please let me know where I can ask it if not!)</p>
|
<p>Remember that an in-order traversal lists the elements from left-to-right, descending the tree: <code>Left</code>, <code>Root</code>, and <code>Right</code>. This means that starting from the root (<code>a</code>), we will traverse the whole left branch, <em>then</em> print <code>a</code>, and finally traverse the right branch.</p>
<p>You started off correctly; however, your mistake was that you put <code>m</code> before <code>h</code>. It is an easy mistake to make, so you have to be careful. The <code>m</code> node is the <em>right</em> child of <code>h</code>, so the traversal for that subtree will be <code>h, m</code>. The same goes for <code>i</code> and <code>e</code> in your traversal---they should be flipped.</p>
<p>The correct answer for this in-order traversal is <code>d b h m e i a j f k c g</code>.</p>
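The rule can be checked mechanically; a minimal sketch (the `Node` class is hypothetical, and only the `h`/`m` subtree discussed above is reconstructed, since the full tree is in the image):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder(node):
    # Left, Root, Right
    if node is None:
        return []
    return inorder(node.left) + [node.key] + inorder(node.right)

# The subtree where the mistake happened: m is the RIGHT child of h,
# so h must be printed before m.
sub = Node('h', right=Node('m'))
```

`inorder(sub)` gives `['h', 'm']`, confirming the correction.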
| 731
|
question answering
|
Help with a question on write-through and no-write allocate in caches
|
https://cs.stackexchange.com/questions/155996/help-with-a-question-on-write-through-and-no-write-allocate-in-caches
|
<p>I am struggling with this question as I am not sure whether the answer that has been provided is correct or not. The image should be sufficient to convey the question.
<br>The attached image is of the answer. For the question, imagine all but the first two columns are empty.<a href="https://i.sstatic.net/7Y0Ii.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7Y0Ii.png" alt="enter image description here" /></a>
<br>Here's what I am struggling with:
<br>1. Why is it a "miss" when the address 8500 is referenced? From my understanding, since it shares the same tag and index value as the address 8496, it should be a "hit".
<br>2. Why is it a "miss" when the address 304 is referenced again? Since it was a "miss" the first time, it would have been loaded into the cache, and when referenced again it should be a "hit". Is this not how it works?</p>
| 732
|
|
question answering
|
Can an NP-hard problem be polynomial on average?
|
https://cs.stackexchange.com/questions/28466/can-an-np-hard-problem-be-polynomial-on-average
|
<p>I'm wondering if there are any $NP$-hard problems which are "polynomial" in the average case. I think there are two ways to interpret this:</p>
<ul>
<li>If $P \neq NP$, can there be an algorithm solving an $NP$-hard problem with amortized (average case) running time of $O(n^k)$ for a constant $k$?</li>
<li>Are there any problems which are $NP$-hard which are also in $BPP$, or even $PP$?</li>
</ul>
<p>Can anyone answer or provide a reference answering either of these questions?</p>
|
<p>It would seem that the question has been answered at <a href="https://cstheory.stackexchange.com/questions/496/are-there-np-complete-problems-with-polynomial-expected-time-solutions">CSTheory.SE</a>.</p>
<p>Summary: it is, indeed, possible.</p>
<p>For example, the Max 2-CSP problem is NP hard with an $O(n)$ expected time algorithm.</p>
<p>This makes sense, I guess. Sometimes only a small subset of instances is needed to make a problem $NP$-hard, like SAT vs 3SAT. But you can expand the problem, and as long as it still contains the hard instances, it will be NP-hard, but the probability of success with a fast algorithm will be raised.</p>
| 733
|
question answering
|
Formal invalidation of question about self-referential partial halting problem solver
|
https://cs.stackexchange.com/questions/78057/formal-invalidation-of-question-about-self-referential-partial-halting-problem-s
|
<p>How do you formally invalidate a question about the decidability of a partial halting problem solver that answers correctly with the following kind of input: Turing machines that don't use the partial halting problem solver inside?</p>
<p>I made this question and it was marked as unclear. I deleted it. This is my question:</p>
<p>"Let H(M, i) be the halting set of M Turing machines with i input that don't have an H solver inside. Is there a way to prove this invalid or undecidable?</p>
<p>H is inside of a machine if by extracting some code from it you can solve H with that extracted code."</p>
| 734
|
|
question answering
|
Associativity Question , Computer organization
|
https://cs.stackexchange.com/questions/68701/associativity-question-computer-organization
|
<p>An access sequence of cache block addresses has length N and contains n unique addresses. The number of unique block addresses between two consecutive accesses to the same block address is bounded above by k. What is the miss ratio if the access sequence is passed through a cache of associativity A &gt;= k exercising an LRU replacement policy? The answer is n/N, but how?</p>
<p>Problem: I don't understand what is meant by "an access sequence of cache block addresses of length N". Please describe it and explain the answer to the whole question.</p>
|
<p>An <em>access sequence</em> is a sequence of accesses. The length of the sequence is the number of accesses it contains. In this case, each access is an access to an address. So that should help you decode what is meant by "an access sequence of cache block address".</p>
<p>As far as solving the problem, it's your exercise so I'll let you have the joy of solving it. I suggest that you try working through some examples with small values of $N$ and $k$. For instance, try $k=1$ and $N=10$ and try to work out some examples of access sequences, and for each compute what the miss ratio is (by hand). See if you spot a pattern.</p>
<p>A tip on how to parse the language: They are talking about an access sequence of length N, where each element in the sequence is a cache block address. Don't read it as "cache block addresses of length N": I don't know what that would mean either, and it's probably not what they intended.</p>
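<p>If you want to check your hand-worked examples, a minimal fully-associative LRU simulator can help (a Python sketch of my own; it models the whole cache as a single LRU set, which is enough for small examples):</p>

```python
from collections import OrderedDict

def miss_ratio(accesses, associativity):
    """Simulate one fully-associative LRU set and return the miss ratio."""
    cache = OrderedDict()
    misses = 0
    for addr in accesses:
        if addr in cache:
            cache.move_to_end(addr)        # hit: refresh LRU order
        else:
            misses += 1
            if len(cache) >= associativity:
                cache.popitem(last=False)  # evict the least recently used
            cache[addr] = True             # insert as most recent
    return misses / len(accesses)

# Example: N = 10 accesses, n = 2 unique addresses, k = 1, A = 2 >= k
print(miss_ratio(list("ABABABABAB"), 2))   # 0.2, i.e. n/N
```

<p>Try varying the sequence and the associativity to see when the n/N ratio holds and when it breaks down.</p>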
| 735
|
question answering
|
Question Regarding Design Constraints in Software Engineering Exam
|
https://cs.stackexchange.com/questions/161912/question-regarding-design-constraints-in-software-engineering-exam
|
<p>I recently had a software engineering exam, and there's a particular question that's causing some confusion between my professor and me. I'd like to get some insights from the community to better understand this issue. If I can convince my professor I was right, I could get a higher grade.</p>
<p>The Question:</p>
<p>The question in the exam was as follows:</p>
<p>"The design constraints imposed on the sorting system are:</p>
<p>a. Programming language and algorithms</p>
<p>b. Security and system interaction</p>
<p>c. Platform and schedule</p>
<p>d. Usability and performance"</p>
<p>In our textbook, "Essentials of Software Engineering" by Frank F. Tsui, it states that: "The thinking process related to design constraints can be summarized as follows:</p>
<ul>
<li>User-interface</li>
<li>Typical and maximum input sizes</li>
<li>Platforms</li>
<li>Schedule requirements"</li>
</ul>
<p>My professor said that the correct answer is c), based on this section of the book.</p>
<p>Then the book goes:</p>
<p>"The steps and thoughts related to design decisions for the sorting problem can be summarized as follows:</p>
<p>Programming language: Typically this will be a technical design decision, although it is not uncommon to be given as a design constraint.</p>
<p>Algorithms : ... Algorithms are usually design decisions, but they can be given as design constraints or even considered functional requirements. "</p>
<p>My professor marked my answer as incorrect and insisted that the correct answer is "c. Platform and schedule." However, my perspective is that "a. Programming language and algorithms" is a valid choice as well, based on the information from our textbook. His response was : "Design constraints (limit) and design decisions (decisions) are two different ones which are different in chapters 1.1.4 and 1.1.5 respectively. The question on the test was about design constraints, and programming language and algorithms are design decisions."</p>
<p>I've discussed this with my professor, but we couldn't come to an agreement. I believe that the textbook's explanation supports my answer.</p>
<p>I would greatly appreciate it if someone could provide their insights on this matter. Am I interpreting the book correctly? Is "Programming language and algorithms" a valid choice as design constraints based on the provided book excerpt? If so, how can I approach this situation with my professor to clarify my perspective?</p>
<p>I want to ensure that I have a clear understanding of the material and the exam evaluation. Your input would be valuable in resolving this issue.</p>
<p>Thank you in advance for your assistance.</p>
<p><a href="https://i.sstatic.net/heIZo.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/heIZo.jpg" alt="Design Constraints" /></a>
<a href="https://i.sstatic.net/DgUQP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DgUQP.png" alt="Sorting problem" /></a></p>
|
<p>I would argue that the answer is "all of the above".</p>
<ul>
<li>Programming language and algorithms: is it worth the cost of a new compiler and associated tools to use the newest whiz-bang language, or do you make do with the language and tools you have already purchased? What algorithms will allow the application to perform in a timely manner?</li>
<li>Security and system interaction: does the computer that runs the application have enough resources (both compute power for security and external interfaces for system interaction) to successfully perform the job?</li>
<li>Platform and schedule: does a development system need to be purchased to run on the proposed platform? Does the proposed platform have the resources to successfully run the application? Can the project be completed within the scheduled time frame?</li>
<li>Usability and performance: a big constraint here; if your application runs too slowly or has a terrible user interface, it will never sell.</li>
</ul>
| 736
|
question answering
|
How to remember NFA's choice on a certain computation?
|
https://cs.stackexchange.com/questions/49755/how-to-remember-nfas-choice-on-a-certain-computation
|
<p>I'm working on solving the question answered at this page, but with different values in the table; my alphabet is {a,b,c}:
<a href="https://cs.stackexchange.com/questions/1467/words-that-have-the-same-right-and-left-associative-product">Words that have the same right- and left-associative product</a></p>
<p>Currently I'm at the stage where I have drawn the DFA of the multiplication table and found its reversal, which is an NFA. </p>
<p>Here is the NFA I got by reversing the multiplication table's DFA
<a href="https://i.sstatic.net/N6L4X.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N6L4X.png" alt="enter image description here"></a></p>
<p>I apologize for the messy drawing, but I hope it's readable.</p>
<p>Now I have taken the input "abcb" and applied it on the above NFA, and I have gone through this tree <a href="https://i.sstatic.net/35q3A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/35q3A.png" alt="enter image description here"></a></p>
<p>As you can see, the input is fully consumed on branch "C", where I could reach the final state. Could someone explain how I can backtrack from that branch ("C" in this case) and indicate that "C" is the state which shall be marked as the final state of my NFA?</p>
|
<p>Hint: Let the input be $x_1,\ldots,x_n$. As you mention, it is easy to compute the left-associative product $L_n$ "as you go": $L_{m+1} = L_m x_{m+1}$. The right-associative product recurrence has the wrong direction: $R_m = x_{m+1} R_{m+1}$. To cope with that, at each step you have to <em>guess</em> the value of $R_{m+1}$; it needs to be a value which conforms to your earlier guess of $R_m$ and to the value of $x_{m+1}$. At the end, compare your initial guess of $R_1$ to the value of $L_n$. Arrange things so that you only need to keep track of finitely many values of products.</p>
| 737
|
question answering
|
Rational agent question from Russell and Norvig
|
https://cs.stackexchange.com/questions/14351/rational-agent-question-from-russell-and-norvig
|
<p>Question from <em>Artificial Intelligenge: A Modern Approach</em> by Russell and Norvig (Exercise 2.1).</p>
<blockquote>
<p>Suppose that the performance measure is concerned with just the first
$T$ time steps of the environment and ignores everything thereafter.
Show that a rational agent's action may depend not just on the state
of the environment but also on the time step it has reached.</p>
</blockquote>
<p>This question is extremely confusing to me. My initial thought is that this is obvious. A rational agent wants to maximize its performance, and the first $T$ time steps are a factor in the performance measure. So for instance, if the environment is in state $A$ at time step 1, the performance measure can be different than being in state $A$ at step 2 since the state of the environment in step 1 is relevant to the performance measure in the latter case. Thus as the performance measures can be different, the rational agent may make different actions.</p>
<p>Perhaps that is the answer, but I am still confused on why it matters that the performance measure is concerned with only a finite sequence of initial time steps. My interpretation of the question seems to make that irrelevant. Only the fact that the performance measure has some historical factor is of any concern.</p>
<p>Can anyone help clarify what is happening in this question?</p>
|
<p>As you said, the question is fairly obvious; its purpose is to ensure that the reader has understood a part of the chapter. You are right that the agent's actions may differ within the time period T. They will also differ after this period, because from then on the agent's actions have no value.</p>
<p>One example is a car agent that has to cover as much distance as possible with limited fuel. If only the distance reached during the first hour counts, the car could go at full throttle (exhausting the fuel very fast). However, if there were no time limit, the car agent should choose the optimal speed that maximizes the distance covered per unit of fuel spent.</p>
| 738
|
question answering
|
Regular Language - Context Free Language
|
https://cs.stackexchange.com/questions/119254/regular-language-context-free-language
|
<p>I know this is not a question answer posting site but for the sake of explaining my doubt I will like to post a question</p>
<blockquote>
<p>Let <span class="math-container">$A$</span> be a regular language and <span class="math-container">$B$</span> a CFL over the alphabet
<span class="math-container">$\Sigma$</span>. Which of the following statements about the language
<span class="math-container">$R=\overline{A}-B$</span> is true?</p>
<p>a. <span class="math-container">$R$</span> is necessarily a CFL but not necessarily regular</p>
<p>b. <span class="math-container">$R$</span> is necessarily regular but infinite</p>
<p>c. <span class="math-container">$R$</span> is necessarily non-regular</p>
<p>d. None</p>
<p>e. <span class="math-container">$\phi$</span></p>
</blockquote>
<hr>
<p>Now, I have approached this problem in two ways and I am getting two different results. </p>
<hr>
<p>The first way I approached it: since <span class="math-container">$A'-B=A'\cap B'$</span>, this is regular <span class="math-container">$\cap$</span> CSL <span class="math-container">$=$</span> CSL (the complement of a CFL is a CSL), so the answer is None.</p>
<hr>
<p>On the other hand, I thought of it like this: since <span class="math-container">$A$</span> is regular, its complement is also regular. Now we know that</p>
<p><a href="https://i.sstatic.net/nqLc9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nqLc9.png" alt="enter image description here"></a> </p>
<p>So a regular language, being a subset of the CFLs, must give us <span class="math-container">$\phi$</span> when we do <span class="math-container">$A'-B$</span>, so this time I get <span class="math-container">$\phi$</span> as the answer.</p>
<hr>
<p>My question is: which of my approaches is correct? Is the first one correct? If so, why is the second one wrong?</p>
<p>Or is the second one correct? If so, why is the first one wrong?</p>
<hr>
<p>I believe that my second method is wrong: say we have the regular language <span class="math-container">$A=\phi$</span>, so <span class="math-container">$\overline{A}=\Sigma^*$</span>, and say <span class="math-container">$B=\{a^nb^n\}$</span>;</p>
<p>then <span class="math-container">$\overline{A}-B$</span> contains the strings <span class="math-container">$a^xb^y$</span> with <span class="math-container">$x\neq y$</span>, so it is a CFL but not <span class="math-container">$\phi$</span>.</p>
<p>So where did I go wrong in my second proof using the Chomsky hierarchy?</p>
<hr>
<p>Downvoters (if any), please mention the reason for the downvote in a comment. Thank you.</p>
|
<p>Your second method of arriving at <span class="math-container">$\phi$</span> is bogus: consider the case where <span class="math-container">$B$</span> is the empty language. The confusion is that you are looking at the difference between the set of all regular languages and the set of all context-free languages, instead of the difference between the two concrete languages.</p>
<p>The correct answer is d. None (you can't say anything)</p>
<p>Because </p>
<p><span class="math-container">$R=\overline{A}-B = \overline{A \bigcup B} $</span></p>
<p>Since <span class="math-container">$A \bigcup B$</span> is clearly a <span class="math-container">$CFL$</span>, <a href="https://stackoverflow.com/a/34247059/1319284">this answer</a> applies which states that the complement of a <span class="math-container">$CFL$</span> is not necessarily <span class="math-container">$CFL$</span> itself (<span class="math-container">$CFL$</span> are not closed under complement).</p>
| 739
|
question answering
|
Half precision floating point question -- smallest non-zero number
|
https://cs.stackexchange.com/questions/140115/half-precision-floating-point-question-smallest-non-zero-number
|
<p>There's a floating point question that popped up and I'm confused about the solution. It states that</p>
<blockquote>
<p>IEEE 754-2008 introduces half precision, which is a binary
floating-point representation that uses 16 bits: 1 sign bit, 5
exponent bits (with a bias of 15) and 10 significand bits. This format
uses the same rules for special numbers that IEEE754 uses. Considering
this half-precision floating point format, answer the following
questions: ....</p>
<p>What is the smallest positive non-zero number it can represent?</p>
</blockquote>
<p>The answer says: bias = 15, and the binary representation is
<span class="math-container">$0 \, 00000 \, 0000000001 = 2^{-14} \times 2^{-10}=2^{-24}$</span></p>
<p>I've understood the binary-representation part, but how does it arrive at those powers of 2?</p>
|
<p>In this example, <span class="math-container">$2^{-10}$</span> is the mantissa, and <span class="math-container">$2^{-14}$</span> is the exponent.</p>
<p>For a fuller explanation of subnormal numbers in IEEE-754 floating point, see <a href="https://cs.stackexchange.com/questions/131754/how-to-represent-zero-as-floating-point-number/131758#131758">this previous answer</a>.</p>
<p>Your example binary16 (i.e. half-precision) floating point number is a subnormal number because the exponent field is the "all zeroes" pattern. This means:</p>
<ul>
<li>The significand field contains the fractional part of the mantissa, with an implicit "0" to the left of the binary point.</li>
<li>The exponent is set to <span class="math-container">$2^{-14}$</span>. For binary32 (i.e. single precision) this would be <span class="math-container">$2^{-126}$</span> and for binary64 (i.e. double precision) it would be <span class="math-container">$2^{-1022}$</span>.</li>
</ul>
<p>So the number is <span class="math-container">$+0.0000000001_2 \times 2^{-14} = 2^{-24}$</span>.</p>
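<p>You can verify this with Python's <code>struct</code> module, which supports the IEEE 754 binary16 format via the <code>'e'</code> format character (a quick sketch):</p>

```python
import struct

# The bit pattern 0 00000 0000000001 is the 16-bit integer 0x0001.
# Decoding it as binary16 gives the smallest positive subnormal value.
bits = 0x0001
value, = struct.unpack('<e', bits.to_bytes(2, 'little'))

print(value)             # 5.960464477539063e-08
print(value == 2**-24)   # True
```

<p>The decoded value is exactly <span class="math-container">$2^{-24}$</span>, matching the hand calculation above.</p>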
| 740
|
question answering
|
Is $k$-Clique NP-hard?
|
https://cs.stackexchange.com/questions/119801/is-k-clique-np-hard
|
<p>In my lecture notes it was written that "Finding a clique of size <span class="math-container">$k$</span> in a graph is NP".</p>
<p>Later in an example for reduction the following was written:</p>
<p>"Assume we know how to answer "Is there a clique of size <span class="math-container">$k$</span> in a graph?". Then each time we will hide one node of the graph, and on the newly created graph we will check whether there is a clique of size <span class="math-container">$k$</span> in the graph:</p>
<p>If there is a clique of size <span class="math-container">$k$</span> and the node is not a part of the clique</p>
<p>If there is no clique of size <span class="math-container">$k$</span> and the node is part of the clique</p>
<p>I have read more about it, and wanted to be sure I have understood this example.</p>
<ol>
<li><p>Given a graph, answering the question whereas or not it has clique of size <span class="math-container">$k$</span> is NP</p></li>
<li><p>Given a graph, printing a clique of size <span class="math-container">$k$</span> is P</p></li>
</ol>
<p>If this is correct, then is this an example showing that an NP problem (1) is one whose verifier (2) runs in P?</p>
| 741
|
|
question answering
|
Question about data path dependencies in a program
|
https://cs.stackexchange.com/questions/171698/question-about-data-path-dependencies-in-a-program
|
<p>I cannot understand the solution to problem 5.5 in "<strong>Computer Systems: A Programmer's Perspective</strong>". This chapter covers microarchitecture-based optimizations and data-path dependencies. As a reference it uses this machine:
<a href="https://i.sstatic.net/bZHZVFpU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZHZVFpU.png" alt="enter image description here" /></a></p>
<p>I cannot understand the answer about this program's inner loop:</p>
<pre><code>double poly(double a[], double x, long degree)
{
long i;
double result = a[0];
double xpwr = x; /* Equals x^i at start of loop */
for (i = 1; i <= degree; i++)
{
result += a[i] * xpwr;
xpwr = x * xpwr;
}
return result;
}
</code></pre>
<p>The question is: "On our reference machine, with arithmetic operations having the latencies
shown in Figure 5.12, we measure the CPE for this function to be 5.00. Explain
how this CPE arises based on the data dependencies formed between
iterations due to the operations implementing lines 7–8 of the function."</p>
<p>The provided answer:
"We can see that the performance-limiting computation here is the repeated
computation of the expression xpwr = x * xpwr. This requires a floating-point
multiplication (5 clock cycles), and the computation for one iteration
cannot begin until the one for the previous iteration has completed. The
updating of result only requires a floating-point addition (3 clock cycles)
between successive iterations."</p>
<p><strong>My question:</strong></p>
<p>I do not understand why <code>result += a[i] * xpwr</code> doesn't form a critical path of 8 cycles. We also need <code>result</code> from the previous iteration, and to calculate it we need one floating-point addition and one floating-point multiplication.</p>
|
<blockquote>
<p>We also need result from previous iteration and to calculate it we need 1 floating addition and 1 floating multiplication.</p>
</blockquote>
<p>Only the addition depends on the previous <code>result</code>.</p>
<p>What happens here is that <code>a[i] * xpwr</code> is computed, and only <em>then</em> we need to wait for the previous <code>result</code> to be ready to be able to do the addition.</p>
<p>The multiplication <code>a[i] * xpwr</code> together with the addition <code>result += [whatever value we just got]</code> add up to a latency of 8 cycles, but the result doesn't go into the multiplication of the next iteration, it goes into the addition. The additions are chained together in a loop-carried dependency chain (with a lower latency per iteration than the loop-carried dependency chain created by <code>xpwr = x * xpwr</code>, so it's not important), but the <code>a[i] * xpwr</code> multiplications come into that chain from the side, they're not <em>in</em> the chain.</p>
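<p>A back-of-the-envelope model of the two loop-carried chains (my own toy sketch, not from the book) reproduces the measured CPE:</p>

```python
ADD_LAT, MUL_LAT = 3, 5   # floating-point latencies from Figure 5.12

def cpe(iterations):
    # Track the cycle at which each value becomes ready, assuming only
    # data dependencies (not functional-unit contention) limit the schedule.
    xpwr_ready = 0      # when the previous iteration's xpwr is ready
    result_ready = 0    # when the previous iteration's result is ready
    for _ in range(iterations):
        prod_ready = xpwr_ready + MUL_LAT                  # a[i] * xpwr
        result_ready = max(prod_ready, result_ready) + ADD_LAT
        xpwr_ready += MUL_LAT                              # xpwr = x * xpwr
    return max(xpwr_ready, result_ready) / iterations

print(cpe(1))                # 8.0: one iteration really is mul + add
print(round(cpe(10000), 2))  # 5.0: in steady state the xpwr chain dominates
```

<p>Note that <code>cpe(1)</code> is 8.0: the 8-cycle mul-plus-add path is real, but it is the latency of a single iteration, not the per-iteration cost of any loop-carried chain, which is what limits throughput over many iterations.</p>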
| 742
|
question answering
|
Why is it not possible to prove the equivalence of nondeterministic and deterministic Turing Machines the same way as for NFAs and DFAs?
|
https://cs.stackexchange.com/questions/114808/why-is-it-not-possible-to-prove-the-equivalence-of-nondeterministic-and-determin
|
<p>I found an exercise asking this question.
I know that for proving the equivalence of NFAs and DFAs we can use the subset construction, and that for proving the equivalence of nondeterministic TMs and deterministic ones we can build a 3-tape deterministic TM M which emulates the steps of a given nondeterministic one, let's call it N, proceeding this way:</p>
<ol>
<li>It copies the input string of N on the first tape </li>
<li>On the 2nd tape, for the <span class="math-container">$i$</span>th step of computation of N it produces at most <span class="math-container">$d^i$</span> strings of length <span class="math-container">$i$</span>, each a sequence of numbers between <span class="math-container">$1$</span> and <span class="math-container">$d$</span> representing the possible nondeterministic computations of N, where <span class="math-container">$d$</span> is the nondeterminism degree of N</li>
<li>For each of the above strings, it copies the content of the 1st tape onto the 3rd tape and tries the nondeterministic choices represented by its symbols, one by one</li>
</ol>
<p>Basically M does a breadth first search on the tree of the computations of N, so if N has an accepting configuration on a computation path, surely M will find it because it will travel that path sooner or later.</p>
<p>So, to answer the question, I thought that the main reason is that a Turing machine has a tape whereas a finite state automaton does not, so we can't use the same subset construction used for converting an NFA into a DFA. Is this a sufficient answer? Should I also mention that a Turing machine isn't guaranteed to halt on every input? How would you answer? Thanks in advance.</p>
| 743
|
|
question answering
|
A Question relating to a Turing Machine with a useless state
|
https://cs.stackexchange.com/questions/636/a-question-relating-to-a-turing-machine-with-a-useless-state
|
<p>OK, so here is a question from a past test in my Theory of Computation class:</p>
<blockquote>
<p>A useless state in a TM is one that is never entered on any input string. Let $$\mathrm{USELESS}_{\mathrm{TM}} = \{\langle M, q \rangle \mid q \text{ is a useless state in }M\}.$$
Prove that $\mathrm{USELESS}_{\mathrm{TM}}$ is undecidable. </p>
</blockquote>
<p>I think I have an answer, but I'm not sure if it is correct. Will include it in the answer section.</p>
|
<p>This is clearly reducible from the Halting Problem. If a machine $M$ does not stop on input $x$ then any final state is "useless". Given an input $M,x$ for the Halting Problem, it is easy to construct $M_x$ that halts on every input (thus its final state is not useless) if and only if $M$ halts on $x$. That way you can decide the Halting Problem if you can decide $\mathrm{USELESS}_{\mathrm{TM}}$, which yields a contradiction.</p>
| 744
|
question answering
|
Need help with previous "Automata / Theory Of Computation" exam question
|
https://cs.stackexchange.com/questions/119421/need-help-with-previous-automata-theory-of-computation-exam-question
|
<p>I came across this question in a previous exam while studying for "Automata / Theory of Computation" and I am struggling to find the answer. I would appreciate it if someone could help me with it:</p>
<p>This is the question:</p>
<p>a)On the basis of what was covered in class, draw the Venn diagram representing the following sets:</p>
<p>1. REXP: the set of languages defined by all regular expressions</p>
<p>2. DFSA: the set of all languages recognized by deterministic FSAs</p>
<p>3. NFSA: the set of all languages recognized by non-deterministic FSAs</p>
<p>4. CFG: the set of all languages generated by context-free grammars</p>
<p>5. PDA: the set of all languages recognized by PDAs</p>
|
<p>The relevant set-theoretic relations are
<span class="math-container">$$
\mathsf{REXP} = \mathsf{DFSA} = \mathsf{NFSA} \subsetneq \mathsf{CFG} = \mathsf{PDA}.
$$</span>
The corresponding Venn diagram consists of two circles, one inside the other.</p>
| 745
|
question answering
|
L = { <M>, M is a DFA accepting all strings except finitely many }
|
https://cs.stackexchange.com/questions/88036/l-m-m-is-a-dfa-accepting-all-strings-except-finitely-many
|
<p>Is $L$ = { $<M>$, $M$ is a DFA accepting all strings except finitely many } decidable?<br><br> I am somewhat confused about the question: what exactly does $M$ accept? What do those finitely many strings look like for a particular $DFA$? It is known that a $DFA$ accepts an infinite number of strings if it accepts at least one string of length $\geq k$, where $k$ is the number of states. Does it mean that $M$ doesn't accept any strings of length less than $k$, since that number would be finite?
<br><br>I came up with a
decider for $L$, call it $I$, which basically checks whether $M$ accepts any strings of length less than $k$. <br>
I = “On input $<M>$, where $M$ is a $DFA$:<br>
1. Let $k$ be the number of states of $M$<br>
2. Construct $k$ $DFA's$ denote them $D_i$, where $i$ goes from $0$ to $k-1$, $D_i$ accepts all strings of length exactly $i$ <br>
3. Construct a $DFA$ call it $T_i$ such that $L_i(T) = L(M) ∩ L(D_i)$.<br>
4. Test $L_i(T) = ∅$ using the $E_{DFA}$ decider<br>
5. If $E_{DFA}$ accepts for every $T_i$ and $E_{DFA}$ rejects on $M$ => accept; otherwise reject.<br>
<br>
Is there a simpler way of answering the question?</p>
|
<p>It is indeed decidable, but your solution is not quite correct.</p>
<p>To restate the problem, we're given an encoding $\langle M\rangle$ of a DFA $M$, and we want to decide whether it accepts every possible string except a finite number of them. So for each DFA encoding in this language $L$, there's an associated $x_{M} \in \mathbb{N}$ (which we don't know) and $M$ rejects only $x_{M}$ strings.</p>
<p>So for example, a DFA $A$ that accepts everything (i.e. $\Sigma^{\ast}$) would be in $L$, because $x_{A} = 0$, which is definitely a finite number. Similarly if we have a DFA $B$ which rejects the string $a$, but accepts everything else, this would be in $L$ as $x_{B} = 1$, which is also finite.</p>
<p>Right, now on to showing that this is decidable for every DFA $M$. To do this we need to make a key observation, similar to what you were proposing, but sort of reversed: if a DFA rejects only finitely many strings, they must all have length at most $k-1$, where $k$ is the number of states. Why is this true? Suppose that it rejects a string with length greater than $k-1$; then to process this string the DFA must go around a cycle (either a loop or a longer path, or some combination thereof), but if it can go around the cycle once, it can do it twice, or three times, or any number of times. So if there's at least one long rejected string, there must be an infinite number of rejected strings.</p>
<p>Using this, we can see that if it rejects a string of length somewhere between $k$ and $2k$ (a single go around the cycle can use each state at most twice), then it must reject an infinite number of strings.</p>
<p>Finally we can get to our decider:</p>
<ul>
<li>On input $\langle M \rangle$ where $M$ is a DFA:
<ol>
<li>One by one, enumerate all strings of length $k$ to $2k$.
<ol>
<li>Simulate $M$ on each string.</li>
<li>If $M$ rejects any tested string, reject.</li>
</ol></li>
<li>If $M$ rejects no strings of length between $k$ and $2k$, accept.</li>
</ol></li>
</ul>
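<p>The decider above can be sketched in a few lines of Python (the encoding of the DFA as a transition dictionary is my own choice for illustration):</p>

```python
from itertools import product

def accepts_all_but_finitely_many(delta, start, accepting, alphabet, k):
    """Decider sketch: True iff the DFA rejects only finitely many strings.
    delta maps (state, symbol) -> state; k is the number of states.
    By the argument above, it suffices to test all lengths k..2k."""
    def accepts(word):
        state = start
        for sym in word:
            state = delta[(state, sym)]
        return state in accepting

    return all(accepts(w)
               for length in range(k, 2 * k + 1)
               for w in product(alphabet, repeat=length))

# A 2-state DFA accepting every nonempty string: in L
d1 = {(0, 'a'): 1, (0, 'b'): 1, (1, 'a'): 1, (1, 'b'): 1}
print(accepts_all_but_finitely_many(d1, 0, {1}, 'ab', 2))   # True

# A 2-state DFA accepting only even-length strings: not in L
d2 = {(0, 'a'): 1, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 0}
print(accepts_all_but_finitely_many(d2, 0, {0}, 'ab', 2))   # False
```

<p>The brute-force enumeration is exponential in $k$, but it is still a decider: it always terminates with the correct answer.</p>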
| 746
|
question answering
|
Question regarding coin change algorithm (DP and greedy)
|
https://cs.stackexchange.com/questions/64900/question-regarding-coin-change-algorithm-dp-and-greedy
|
<p>The question goes something like this:</p>
<p>Suppose you are living in a country where coins have values that are powers of a number p; for example, V = [1, 3, 9, 27]. How do you think the dynamic programming and greedy approaches would compare?</p>
<p>Intuitively I want to answer that DP will be faster, because greedy runs the same number of comparisons regardless of the relationship between the elements of V, while DP recurses on previous elements, so the fact that there is always a ratio of p between consecutive denominations of V would suggest that DP ends up making fewer recursive calls. Can anyone confirm my answer or tell me why I'm wrong?</p>
|
<p>In terms of running time, the greedy algorithm is still going to be faster than the DP algorithm. DP will always produce the optimal solution regardless of values in V. Greedy, on the other hand, takes advantage of extra structure present in some value choices that allow it to effectively ignore possible ways of getting to the goal value. A coin system where the values are powers of p would allow the greedy algorithm to also consistently produce the optimal solution.</p>
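<p>A quick experiment (a Python sketch) supports this: for V = [1, 3, 9, 27] the greedy choice always matches the DP optimum, while doing only a constant amount of work per denomination instead of filling a table up to the target amount:</p>

```python
def greedy(amount, coins):
    """Greedy: repeatedly take as many of the largest coin as fit."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count

def dp(amount, coins):
    """DP: minimum number of coins for every value up to amount."""
    best = [0] + [float('inf')] * amount
    for v in range(1, amount + 1):
        best[v] = min(best[v - c] + 1 for c in coins if c <= v)
    return best[amount]

coins = [1, 3, 9, 27]   # powers of p = 3
print(greedy(80, coins), dp(80, coins))   # 8 8: greedy matches the optimum
assert all(greedy(n, coins) == dp(n, coins) for n in range(1, 200))
```

<p>With a non-canonical system like [1, 3, 4], greedy would fail (e.g. for 6), which is exactly the extra structure the power-of-p values provide.</p>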
| 747
|
question answering
|
Logic Question - Why is This an Implication?
|
https://cs.stackexchange.com/questions/21512/logic-question-why-is-this-an-implication
|
<p>I have a question about predicate logic. Suppose we have the following predicates:</p>
<p>$\text{Study}(x,y)$: x studies y</p>
<p>$\text{Comp}(x)$: x is a computing student</p>
<p>I want to encode the following sentence in predicate logic: "Some, but not all computer students study logic."</p>
<p>A potential answer is:</p>
<p>$$\exists x(\text{Comp}(x)\land \text{Study}(x,l))\land\neg \forall x(\text{Comp}(x)\implies \text{Study}(x,l))$$</p>
<p>Why is there an $\implies$ and not a $\land$? Is this formulation correct?</p>
|
<p>Because $\neg\forall x\,(\text{Comp}(x) \wedge \text{Study}(x,l))$ means "It is not true that every individual is both a computing student and studies logic." In particular, that formula would be true whenever there is at least one individual who is not a computing student, regardless of whether all computing students do or do not study logic.</p>
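<p>The difference can be checked by brute force on a tiny model (an illustrative Python sketch; encoding each individual as an <code>(is_computing, studies_logic)</code> pair is my own assumption):</p>

```python
# Evaluate "some, but not all, computing students study logic" on a
# finite domain, using => vs. "and" inside the negated universal.
def with_implication(domain):   # the correct formulation
    return (any(c and s for c, s in domain)
            and not all((not c) or s for c, s in domain))

def with_conjunction(domain):   # the same formula with "and" instead of =>
    return (any(c and s for c, s in domain)
            and not all(c and s for c, s in domain))

# One computing student who studies logic, plus one non-computing
# individual. The sentence is FALSE here (every computing student
# does study logic), but the conjunction version is fooled:
domain = [(True, True), (False, False)]
print(with_implication(domain))   # False: correct
print(with_conjunction(domain))   # True: fooled by the non-student
```

<p>The conjunction version is satisfied by any model containing a non-computing individual, which is exactly the failure described above.</p>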
| 748
|
text generation
|
Second-order Markov text generation?
|
https://cs.stackexchange.com/questions/77465/second-order-markov-text-generation
|
<p>Looking at <a href="https://youtu.be/WyAtOqfCiBw?t=1m45s" rel="nofollow noreferrer">this</a> video starting at 1:45, the author claims to be using a second-order approximation for a Markov text generation. He has one letter which he outputs followed by another letter which tells him which state to go to - so in other words, he might first pick AC, so he outputs A and then goes to state C, where he picks CB, so he outputs C and goes to state B, etc. So he has a uni-gram output followed by a uni-gram transition piece of information.</p>
<p>But from my understanding, a second-order approximation would be like <a href="http://www.decontextualize.com/teaching/dwwp/topics-n-grams-and-markov-chains/" rel="nofollow noreferrer">this</a>, with a bigram followed by a unigram as the "transition" state.</p>
<blockquote>
<p>This is the same character-level order-2 n-gram analysis of the (very
brief) text “condescendences” as above, but this time keeping track of
all characters that follow each n-gram:</p>
<p>co n</p>
<p>on d</p>
<p>nd e, e</p>
<p>de s, n</p>
<p>es c, (end of text)</p>
<p>sc e</p>
<p>ce n, s</p>
<p>en d, c</p>
<p>nc e</p>
<p>The table above doesn’t just give us some interesting statistical
data. It also allows us to reconstruct the underlying text—or, at
least, generate a text that is statistically similar to the original
text. Here’s how we’ll do it: (1) start with the initial n-gram
(co)—those are the first two characters of our output. (2) Now, look
at the last n characters of output, where n is the order of the
n-grams in our table, and find those characters in the “n-grams”
column. (3) Choose randomly among the possibilities in the
corresponding “next” column, and append that letter to the output.
(Sometimes, as with co, there’s only one possibility). (4) If you
chose “end of text,” then the algorithm is over. Otherwise, repeat the
process starting with (2).</p>
</blockquote>
<p>I'm extremely confused because both these sources consider themselves to be of order 2 in approximation, but it seems to me that the second source is of a higher order than the first. Is this true, or am I just completely misunderstanding what's happening?</p>
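<p>For concreteness, the quoted procedure can be sketched as follows (my own illustration; <code>END</code> is a sentinel standing for "end of text"):</p>

```python
import random

# Sketch of the quoted order-2 procedure: build a table mapping each
# 2-character n-gram to the characters that follow it, then regenerate text.
END = object()  # sentinel for "end of text"

def build_table(text, n=2):
    table = {}
    for i in range(len(text) - n + 1):
        gram = text[i:i + n]
        nxt = text[i + n] if i + n < len(text) else END
        table.setdefault(gram, []).append(nxt)
    return table

def generate(text, n=2, seed=0):
    random.seed(seed)
    table = build_table(text, n)
    out = text[:n]                    # step (1): start with the initial n-gram
    while True:
        choices = table[out[-n:]]     # step (2): look up the last n characters
        c = random.choice(choices)    # step (3): pick a following character
        if c is END:                  # step (4): stop at "end of text"
            return out
        out += c

print(generate("condescendences"))
```

<p>Running this on "condescendences" regenerates strings that start with "co", end after an "es", and whose every bigram occurs in the original text.</p>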
| 749
|
|
text generation
|
Phrase generation approaches
|
https://cs.stackexchange.com/questions/67772/phrase-generation-approaches
|
<p>In generating reports, sometimes there is a need to produce quite involved phrases in one of natural languages given numerical or boolean parameters.</p>
<p>To get a feel of it, it is enough to take a look at <a href="https://stackoverflow.com/questions/8982163/how-do-i-tell-python-to-convert-integers-into-words">convert integers into words</a> or <a href="https://stackoverflow.com/questions/3177836/how-to-format-time-since-xxx-e-g-4-minutes-ago-similar-to-stack-exchange-site">time ago</a> algorithms.</p>
<p>These solutions use (procedural/OO/functional) programming and Turing-complete computations, but the question is whether there is some declarative, or hybrid, approach that could be more generic for this class of algorithms. By hybrid I mean an approach decomposed into a preparation phase (calculating or defining all necessary units, tuples and conditions) and a phrase generation phase that works with declarative definitions (some kind of domain-specific language).</p>
<p>The current approach I see in Open Source projects is like the above-mentioned questions: conditional statements interspersed with string interpolations or concatenations. I suspect there are better ones.</p>
<p>What I am searching for is the right term for the problem. I am sure there is a lot of research on the topic, but I have so far failed to come up with good keywords to find a suitable theory. This is something in between constant phrases and arbitrary text generation. Ideally, the declarative part should be something easy for domain experts, like a grammar.</p>
<p>In practice, there is also table-driven approach, but what about theories?</p>
<p>I also found this older work, <a href="http://www.aclweb.org/anthology/W98-1425" rel="nofollow noreferrer">A flexible shallow approach to text generation</a> by S. Busemann and H. Horacek, but it moves too quickly into overly specific cases.</p>
|
<p>I've found a survey on <a href="https://arxiv.org/pdf/1703.09902.pdf" rel="nofollow noreferrer">natural-language generation approaches</a> by Albert Gatt, Emiel Krahmer, 2018. According to that, my question is about <em>linguistic realization</em>. And the paper mentions three most common ones:</p>
<ul>
<li>human-crafted templates</li>
<li>human-crafted grammar-based systems</li>
<li>statistical approaches</li>
</ul>
<p>If I remember correctly, there may also be rules-based NLG systems, which may be overlapping with the grammar-based in the above classification.</p>
<p>And I am sure there may be more approaches.</p>
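<p>As a concrete illustration of the human-crafted template style (my own sketch, not from the survey), a "time ago" realiser can keep the domain knowledge in a declarative table and leave only generic selection logic in code:</p>

```python
# Hypothetical table-driven "time ago" realiser: the domain knowledge lives
# in a declarative table of (threshold_seconds, unit_seconds, singular,
# plural); the generic code below just selects a row and fills the template.
RULES = [
    (60,      1,     "second", "seconds"),
    (3600,    60,    "minute", "minutes"),
    (86400,   3600,  "hour",   "hours"),
    (2592000, 86400, "day",    "days"),
]

def time_ago(seconds):
    for threshold, unit, singular, plural in RULES:
        if seconds < threshold:
            n = max(1, seconds // unit)
            word = singular if n == 1 else plural
            return f"{n} {word} ago"
    return "a long time ago"

print(time_ago(45))    # 45 seconds ago
print(time_ago(90))    # 1 minute ago
print(time_ago(7200))  # 2 hours ago
```

<p>Domain experts can then edit the table (add units, change wording, localise) without touching the selection logic, which is the separation the question asks about.</p>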
| 750
|
text generation
|
Advice needed. NLP and ML. Where to start?
|
https://cs.stackexchange.com/questions/66309/advice-needed-nlp-and-ml-where-to-start
|
<p>Hi fellow computer scientists,</p>
<p>I just began my journey to the world of ML and NLP so please bear with me. I'm hoping to find some guidance here. I would be very grateful if anyone could point me in the right direction (reading materials, lectures, specific algorithms, tools, etc.) for solving the following list of problems:</p>
<ol>
<li>Spell checking, grammar checking, proofreading with ML </li>
<li>Text generation on a given topic with ML</li>
<li>"Artistic Style Transfer" for articles (if it is even possible), i.e. "transfer" Shakespeare's writing style onto a given text. </li>
</ol>
<p>I've done some learning already but none of these helped:</p>
<ol>
<li>ML lectures by Andrew Ng</li>
<li>Hacker's guide to Machine learning</li>
<li>Udacity's Introduction to Deep Learning </li>
</ol>
|
<p>As far as I can see, your reading list lacks specific NLP introductions. A really good starting point is Dan Jurafsky and Chris Manning's Coursera course (for example here <a href="https://www.youtube.com/watch?v=nfoudtpBV68" rel="nofollow noreferrer">https://www.youtube.com/watch?v=nfoudtpBV68</a> ). This specifically covers spell checking in one of the first videos. </p>
<p>In general, spell checking is a rather easy task, while (convincing) text generation is a lot harder. Grammar checking should be easy in theory, but I'm not sure about that. Concerning proofreading, I'm not completely sure what you mean by that. Does it go beyond spell and grammar checking, in your view?</p>
<p>Artistic style transfer is possible, although I'm not sure what the current state of the art is. You might have more luck diving into stylometry (<a href="https://en.wikipedia.org/wiki/Stylometry" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Stylometry</a>).</p>
<p>A really good NLP text book to start with is Jurafsky and Martin: Speech and Language Processing (specifically covering generation). 1st edition is publicly available as pdf (just google). But there is also a 2nd edition and they are currently working on a 3rd (the chapters they are done writing are freely available as well).</p>
| 751
|
text generation
|
How did the first computer models display text?
|
https://cs.stackexchange.com/questions/91344/how-did-the-first-computer-model-display-text
|
<p>I have a question I've not been able to explain, and I still can't find the answer looking through computer history. I want to know how the first letters were programmed into the first computer.</p>
<p>Let me explain more.</p>
<p>When I research computer history, it discusses the machines and how they work and when DOS was invented, and I learned how they used light bulbs for an 8-bit calculator. But then it just kind of skips my question in the timeline of it all and says the next generation uses DOS!</p>
<p>Well, I understand how DOS works, but how do you program what each letter looks like? How did the computer know how to draw a T on the screen? Or a 1 or a 0? How did they solder a circuit board and somehow get text to appear on a screen in characters we understand? That blows my mind. Can anybody help?</p>
| 752
|
|
text generation
|
How to implement a maximal munch lexical analyzer by simulating NFA or running DFA?
|
https://cs.stackexchange.com/questions/97374/how-to-implement-a-maximal-munch-lexical-analyzer-by-simulating-nfa-or-running-d
|
<p>I'm planning to implement a lexical analyzer by either simulating NFA or running DFA using the input text. The trouble is, the input may arrive in small chunks and the memory may not be enough to hold one very long token in the memory.</p>
<p>Let's assume I have three tokens, "ab", "abcd" and "abce". The NFA I obtained is this:
<a href="https://i.sstatic.net/8dDT8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8dDT8.png" alt="enter image description here"></a></p>
<p>And the DFA I obtained is this:
<a href="https://i.sstatic.net/bZ40r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZ40r.png" alt="enter image description here"></a></p>
<p>Now if the input is "abcf", the correct action would be to read the token "ab" according to the maximal munch rule and then produce a lexer error token. However, both the DFA and the NFA have state transitions even after "ab" has been read. Thus, the maximal munch rule encourages to keep on reading after "ab" and read the "c" as well.</p>
<p>How do maximal munch lexers solve this issue? Do they store the entire token in memory and do backtracking from "abc" to "ab"?</p>
<p>One possibility would be to run the DFA with a "generation index", potentially multiple generations and multiple branches within generation at a time. So, the DFA would go from:</p>
<pre><code>{0(gen=0,read=0..0)},
</code></pre>
<p>read "a",</p>
<pre><code>{1(gen=0,read=0..1)},
</code></pre>
<p>read "b",</p>
<pre><code>{2+(gen=0,read=0..2,frozen), 2+(gen=0,read=0..2), 0(gen=1,read=2..2)},
</code></pre>
<p>read "c",</p>
<pre><code>{2+(gen=0,read=0..2,frozen), 3(gen=0,read=0..3)},
</code></pre>
<p>read "f",</p>
<pre><code>{2+(gen=0,read=0..2,frozen)}.
</code></pre>
<p>Then the lexer would report state 2+, and since there is no option to continue, would report an error state. Not sure how well this idea would work...</p>
<p>For "abcd", it would work like this:</p>
<pre><code>{0(gen=0,read=0..0)},
</code></pre>
<p>read "a",</p>
<pre><code>{1(gen=0,read=0..1)},
</code></pre>
<p>read "b",</p>
<pre><code>{2+(gen=0,read=0..2,frozen), 2+(gen=0,read=0..2), 0(gen=1,read=2..2)},
</code></pre>
<p>read "c",</p>
<pre><code>{2+(gen=0,read=0..2,frozen), 3(gen=0,read=0..3)},
</code></pre>
<p>read "d",</p>
<pre><code>{2+(gen=0,read=0..2,frozen), 4+(gen=0,read=0..4,frozen), 4+(gen=0,read=0..4), 0(gen=1,read=4..4)}.
</code></pre>
<p>Now of these, it's possible to drop the first (there is a longer match) and the third (there are no state transitions out), leaving:</p>
<pre><code>{4+(gen=0,read=0..4,frozen), 0(gen=1,read=4..4)}.
</code></pre>
<p>Then the lexer would indicate "match: 4+" and continue reading input from state 0 using generation index 1.</p>
<p>Is this idea of mine, running DFAs nondeterministically, how maximal munch lexical analyzers work?</p>
|
<p>There are two ways to handle this issue:</p>
<ol>
<li><p>The most common implementation (the one used in lex, flex and other similar scanner generators) is to always recall the last accept position and state (or accept code). When no more transitions are possible, the input is backed up to the last accept position and the last accept state is reported as the accepted token.</p>
<p>If you're trying to do streaming input, you will need a fallback buffer to handle this case.</p></li>
<li><p>Alternatively, if the scan reaches an accepting state but another transition is available, we can start performing two scans in parallel: one on the assumption that the transition will be taken, and the other on the assumption that it will not. The second thread may need to fork again, although there is a maximum number of forks, as with generalised LR parsing. In this model, we need to keep a buffer of possible "future" tokens which will be processed if the optimistic thread fails.</p></li>
</ol>
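<p>A minimal sketch of the first strategy (my own illustration, using the three-token example from the question): run the DFA, remember the most recent accepting position, and back the input up to it when the DFA gets stuck:</p>

```python
# Maximal munch with last-accept backtracking (flex-style strategy 1).
# The DFA below recognises the question's tokens "ab", "abcd" and "abce".
DFA = {0: {'a': 1}, 1: {'b': 2}, 2: {'c': 3}, 3: {'d': 4, 'e': 5}}
ACCEPT = {2: 'AB', 4: 'ABCD', 5: 'ABCE'}

def tokenize(text):
    tokens, pos = [], 0
    while pos < len(text):
        state, i = 0, pos
        last_accept = None                    # (token name, end position)
        while i < len(text) and text[i] in DFA.get(state, {}):
            state = DFA[state][text[i]]
            i += 1
            if state in ACCEPT:
                last_accept = (ACCEPT[state], i)
        if last_accept is None:
            tokens.append(('ERROR', text[pos]))
            pos += 1                          # skip one char and resynchronise
        else:
            name, end = last_accept
            tokens.append((name, text[pos:end]))
            pos = end                         # back up to the last accept
    return tokens

print(tokenize("abcf"))  # [('AB', 'ab'), ('ERROR', 'c'), ('ERROR', 'f')]
print(tokenize("abcd"))  # [('ABCD', 'abcd')]
```

<p>On "abcf" the DFA reads through "abc" before getting stuck, then backs up to the accept recorded after "ab"; the "c" and "f" are then rescanned from the start state, which is exactly the quadratic-in-the-worst-case rescanning discussed below.</p>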
<p>I don't know of a practical implementation of the second strategy in a general purpose scanner generator, although there are some papers about how you might do it. Apparently it can be done in time and space linear to the size of the input, which is (in theory) better than the quadratic time consumption of backtracking.</p>
<p>However, it is pretty rare that you find a token grammar which needs to allow unrestricted backtracking. The most common cause of unrestricted backtracking is failing to take into account the fact that things like quoted strings might not be correctly terminated in an incorrect program, so you end up with just the rule:</p>
<pre><code>["]([^"]|\\.)*["] { Accept a string }
</code></pre>
<p>instead of the pair of rules</p>
<pre><code>["]([^"]|\\.)*["] { Accept a string. }
["]([^"]|\\.)* { Reject an unterminated string. }
</code></pre>
<p>(Maximal munch will guarantee that the second rule will only be used if the first rule cannot match.)</p>
<p>So while the second strategy may have some theoretical appeal, it seems to me that it's of little practical use. Flex even has some options which will help you to identify rules which could back up on failure, and this can help you craft your lexical grammar to avoid the problem. It's not always easy to eliminate 100% of backing up (although it often is, and if you manage to do so, flex will reward you by generating a faster lexer), but it's pretty rare to find a lexical grammar which requires more than a few characters of back-up, and the cost of a small fallback buffer is really not worth worrying about, in comparison with the complexity of the alternative (which, of course, also needs extra memory).</p>
<p>I have seen intermediate strategies for particular grammars. If you know your grammar well enough, you could hand-build the speculative tokenisation in order to avoid backing up. I've seen that, years ago, in SGML lexers which eliminate the rescan of <code>></code> following a tagname by including a redundant rule which recognised a tag immediately followed by a <code>></code> and handled both tokens at once. That must have saved a few cycles, but it's hard to believe that it really made a huge difference, and the difference would likely be even less significant today. Still, if you are the type who obsesses about saving every possible cycle, you could do it. </p>
| 753
|
text generation
|
Is there any scenario whereby randomly shuffling a sequence improves its compressibility?
|
https://cs.stackexchange.com/questions/114998/is-there-any-scenario-whereby-randomly-shufflying-a-sequence-improves-its-compr
|
<p>I'm performing some correlation assessment à la NIST <a href="https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-90B.pdf" rel="nofollow noreferrer">Recommendation for the Entropy Sources Used for Random Bit Generation</a>, § 5.1. </p>
<p>You take a test sequence and compress it with a standard compression algorithm. You then shuffle that sequence randomly using a PRNG, and re-compress. We expect the randomly shuffled sequence to be harder to compress, as any and all redundancy and correlations will have been destroyed. Its entropy will have increased. </p>
<p>So if there is any auto correlation, <span class="math-container">$ \frac{\text{size compressed shuffled}} {\text{size compressed original}} > 1$</span> .</p>
<p>This works using NIST's recommended bz2 algorithm, and on my data samples, the ratio is ~1.03. This indicates a slight correlation within the data. When I switch to LZMA, the ratio is ~0.99 which is < 1. And this holds over hundreds of runs so it's not just a stochastic fluke.</p>
<p>What would cause the LZMA algorithm to repetitively compress a randomly shuffled sequence (slightly) better than a non shuffled one?</p>
|
<p>I now think that it's because of this:-</p>
<blockquote>
<p>ZPAQ has 5 compression levels from fast to best. At all but the best level, it uses the statistics of the order-1 prediction table used for deduplication to test whether the input appears random. If so, it is stored <strong>without compression</strong> as a speed optimization. </p>
</blockquote>
<p>... from <a href="https://en.wikipedia.org/wiki/ZPAQ#Compression" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/ZPAQ#Compression</a></p>
<p>I'm referring to LZMA in my question, but I'm not familiar with how it's implemented in code. If it followed the same speed optimisation strategy as ZPAQ, the hypothesis would be consistent with my observations. You can imagine edge cases, where <span class="math-container">$ \frac{\text{size compressed shuffled}} {\text{size compressed original}} = 1 - \epsilon $</span>, in which an order-<span class="math-container">$n$</span> predictor decides that there is insufficient advantage to be had, given the necessary compression encoding overhead. My <span class="math-container">$\alpha \approx 0.01$</span>, but recently I've seen it as high as 0.04.</p>
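<p>For what it's worth, the effect the question relies on can be reproduced in a few lines (a sketch with synthetic correlated data, not the NIST procedure itself):</p>

```python
import bz2
import random

# Sketch of the compression test described in the question: compress the
# original byte sequence, shuffle it, compress again, and compare sizes.
# A correlated source should compress worse after shuffling.
random.seed(1)
# Deliberately correlated sample: each symbol is repeated in a run of four.
data = bytes(b for b in random.choices(range(8), k=2000) for _ in range(4))

original = len(bz2.compress(data))
shuffled_list = list(data)
random.shuffle(shuffled_list)
shuffled = len(bz2.compress(bytes(shuffled_list)))

print(shuffled / original > 1)  # True: the runs are destroyed by shuffling
```

<p>Swapping <code>bz2</code> for <code>lzma</code> in this sketch is one way to probe whether LZMA shows the same small reversal on nearly-incompressible inputs.</p>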
| 754
|
text generation
|
Are there any neural NLG systems which don't generate in left-to-right order?
|
https://cs.stackexchange.com/questions/99987/are-there-any-neural-nlg-systems-which-dont-generate-in-left-to-right-order
|
<p>For a while, all classification tasks in natural language processing were based on simple RNN's, which operate in a very word-by-word order. Adding gating mechanisms increased ability to "look back", and the newer addition of context vectors which can train attention to different words during the task have made classification of text less about "left-to-right" reading and more about selective focusing.</p>
<p>However, I have never seen a seq2seq or any other <strong>natural language generation</strong> system (machine translation, image2seq, etc) which generates the desired sequential output not in sequential order. It seems this would be very powerful. Are there any examples of using attention not only in encoders, but also in decoders?</p>
| 755
|
|
text generation
|
Generating graphs with partially overlapping cliques
|
https://cs.stackexchange.com/questions/146645/generating-graphs-with-partially-overlapping-cliques
|
<p>Currently, I am working on a research project where I will utilise reinforcement learning for the diversified top-<span class="math-container">$k$</span> clique search problem. To train the reinforcement learning algorithm, I need to generate graphs that have similar properties, such as average degree and overlapping cliques, to the ones used in the paper <a href="https://www.sciencedirect.com/science/article/pii/S0305054819303090" rel="nofollow noreferrer">Local search for diversified Top-k clique search problem</a>, which you can find on this <a href="https://networkrepository.com/" rel="nofollow noreferrer">website</a>.</p>
<h2>What is the diversified top-<span class="math-container">$k$</span> clique search problem (DTKC)?</h2>
<p>DTKC is a lesser-known combinatorial optimisation problem. The goal of DTKC is to maximise the size of the coverage of a found clique set <span class="math-container">$D=\left \{C_1, C_2, \dots, C_{k-1}, C_k \right \}$</span> in a given graph <span class="math-container">$G$</span>. Each <span class="math-container">$C \in D$</span> should be a maximal clique, and the set can contain at most <span class="math-container">$k$</span> cliques, i.e. <span class="math-container">$ \left | D \right | \leq k$</span> must always hold. As previously stated, the goal of DTKC is to maximise the coverage of the clique set, which is calculated by:
<span class="math-container">$$
\text{Cov}(D)=\bigcup_{C \in D}C
$$</span>
The coverage set <span class="math-container">$\text{Cov}(D)$</span> will contain all the unique nodes contained in the cliques. The goal function is then to maximise <span class="math-container">$|\text{Cov}(D)|$</span>.</p>
<h2>What do I need?</h2>
<p>At the start of my question, I stated that I needed to generate graphs as training data for my reinforcement learning algorithm. However, I tried several popular models, like the Barabási-Albert and Erdős–Rényi models, which either created cliques that were too small (with almost all found cliques having 3 to 5 nodes) or cliques that overlapped too much. I am looking for a model that can generate undirected graphs with partially overlapping cliques, up to around 100,000 nodes, and an average degree between 10 and 100. By partially overlapping cliques, I mean that two or more cliques share a certain number of nodes. For instance, <a href="https://opus.lib.uts.edu.au/bitstream/10453/43795/1/%5B2015%20VLDBJ%5D%20Diversified%20Top-K%20Clique%20Search.pdf" rel="nofollow noreferrer">one of the papers</a> on DTKC describes how the largest maximal cliques in a graph will likely contain most of the same nodes (the paper shows an excellent example figure of this happening, but I am afraid to include it because of copyright issues).</p>
<p>Any suggestions are welcome about how I should handle the generation. Information that would help is, for instance, proposing lesser-known graph generation models or how to compare graph generation models to real-world data graphs to see which input variables are needed to generate graphs with similar properties to those graphs. I prefer to use already implemented models, like those in <a href="https://networkx.org/documentation/stable/reference/generators.html" rel="nofollow noreferrer">Networkx</a>, but I can also implement them myself if needed, of course.</p>
<p>Hopefully, someone can help me with this specific and complex problem, but please do not be afraid to comment if you do not understand DTKC completely. If you know about a graph generator algorithm that can quickly generate graphs with overlapping cliques, it would already be of great help.</p>
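<p>To illustrate the kind of generator I have in mind (not an established model; just a sketch of planting cliques by hand), one could chain cliques so that each shares a fixed number of nodes with its predecessor:</p>

```python
import itertools
import random

# Sketch of planting partially overlapping cliques by hand: consecutive
# cliques share `overlap` nodes; the rest of each clique is fresh nodes.
def planted_overlapping_cliques(n_cliques=10, clique_size=8, overlap=3, seed=0):
    rng = random.Random(seed)
    edges, prev, next_node = set(), [], 0
    for _ in range(n_cliques):
        shared = rng.sample(prev, overlap) if prev else []
        fresh = list(range(next_node, next_node + clique_size - len(shared)))
        next_node += len(fresh)
        members = shared + fresh
        edges.update(tuple(sorted(e)) for e in itertools.combinations(members, 2))
        prev = members
    return edges

edges = planted_overlapping_cliques()
nodes = {v for e in edges for v in e}
print(len(nodes), len(edges))
```

<p>The resulting edge set can be loaded into Networkx via <code>nx.Graph(list(edges))</code>, and the parameters (clique size, overlap, number of cliques) tuned against the degree statistics of the real-world graphs.</p>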
<h3>Sources</h3>
<p><a href="https://www.sciencedirect.com/science/article/pii/S0305054819303090" rel="nofollow noreferrer">Local search for diversified Top-k clique search problem</a> by Jun Wu, Chu-Min Li, Lu Jiang, Junping Zhou, Minghao Yin</p>
<p><a href="https://opus.lib.uts.edu.au/bitstream/10453/43795/1/%5B2015%20VLDBJ%5D%20Diversified%20Top-K%20Clique%20Search.pdf" rel="nofollow noreferrer">Diversified Top-K Clique Search</a> by Long Yuan, Lu Qin, Xuemin Lin, Lijun Chang, and Wenjie Zhang.</p>
| 756
|
|
text generation
|
What does it mean to be "closed" under beta reduction?
|
https://cs.stackexchange.com/questions/41051/what-does-it-mean-to-be-closed-under-beta-reduction
|
<p>I am reading the paper <a href="http://research.microsoft.com/en-us/um/people/akenn/sml/CompilingWithContinuationsContinued.pdf" rel="nofollow"><em>Compiling with Continuations, Continued</em></a>, and in section 2.4, <em>Comparison with ANF</em>, the author draws attention to the fact that ANF is not closed under beta reduction. The snippet of text in question follows:</p>
<blockquote>
<p>As Flanagan et al. (1993) suggest, the “back end of an A-normal form
compiler can employ the same code generation techniques that a CPS
compiler uses”. However, as we mentioned in the Introduction, it is
not so apparent whether ANF is ideally suited to optimization. After
all, it is not even closed under the usual rule for β reduction (λx.A)
v → A[v/x].</p>
</blockquote>
<p>What does the author mean by this last sentence?
(My guess is that he means ANF beta reduction can introduce new, unbound variables into the reduced term. If that is the case, I am having a hard time visualizing when this would occur and would appreciate outside confirmation that my interpretation is correct.)</p>
|
<p>The claim is that after applying β-reduction to an expression in A-normal form you can be left with an expression no longer in A-normal form.</p>
<p>The only explicit definition I can find of A-normal form is not consistent with the definition Kennedy seems to be (implicitly) using in this paper. <a href="http://en.wikipedia.org/wiki/A-normal_form">Wikipedia</a> defines <em>A-normal form</em> as the subset of lambda calculus expressions where only constants, $\lambda$-terms, and variables can be arguments of function applications, and then (vaguely) says that results of non-trivial expressions must be captured by let-bound variables.</p>
<p>That is: <code>f(g(x))</code> is <em>not</em> in A-normal form because the argument to the application of <code>f</code> is another application (<code>g(x)</code>) rather than a constant, $\lambda$-term, or variable. This expression in A-normal form would be something like <code>let y=g(x) in f(y)</code>.</p>
<p>Kennedy uses the vague "definition"</p>
<blockquote>
<p>a let construct assigns names to every intermediate computation. (Section 1.1.ANF).</p>
</blockquote>
<p>But then in Section 1.2 he gives an example of an A-normal form not being preserved under β-reduction</p>
<blockquote>
<p>Consider the ANF term <code>let x = (λy.let z = a b in c) d in e</code>. Now naïve β-reduction produces <code>let x = (let z = a b in c) in e</code> which is not in normal form. The ‘fix’ is to define a more complex
notion of β-reduction that re-normalizes let constructs (Sabry and
Wadler 1997), in this case producing the normal form <code>let z = a b in (let x = c in e)</code>.</p>
</blockquote>
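<p>The renormalising rule in the quote can be made concrete with a small sketch (my own tuple encoding, not Kennedy's notation, and ignoring variable capture for brevity):</p>

```python
# Terms are strings (atoms) or ("let", var, rhs, body). A let whose
# right-hand side is itself a let gets rotated outward:
#   let x = (let z = e1 in e2) in e3  ==>  let z = e1 in (let x = e2 in e3)
# Variable capture is ignored here for brevity.
def normalize(term):
    if isinstance(term, tuple) and term[0] == "let":
        _, x, rhs, body = term
        rhs, body = normalize(rhs), normalize(body)
        if isinstance(rhs, tuple) and rhs[0] == "let":
            _, z, e1, e2 = rhs
            return normalize(("let", z, e1, ("let", x, e2, body)))
        return ("let", x, rhs, body)
    return term

# The paper's example: let x = (let z = a b in c) in e
term = ("let", "x", ("let", "z", "a b", "c"), "e")
print(normalize(term))  # ('let', 'z', 'a b', ('let', 'x', 'c', 'e'))
```

<p>After normalisation no <code>let</code> has another <code>let</code> as its right-hand side, which restores the shape the quoted passage calls normal form.</p>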
<p>So apparently Kennedy's variant of A-normal form places some kind of restriction on what can be in the assignment part of a <code>let</code> clause, but I can't figure out what (or why) that restriction is. In addition to Kennedy's paper you linked, I looked in several of his references:</p>
<blockquote>
<p>Amr Sabry; Philip Wadler: A reflection on call-by-value. <em>ACM T. Prog. Lang. and Sys. (TOPLAS)</em>, 19(6):916-941, 1997.</p>
<p>Amr Sabry; Matthias Felleisen: Reasoning about Programs in Continuation-Passing Style. <em>Lisp and Symbolic Computation</em> 6(3-4):289-360, 1993.</p>
<p>Flanagan, Cormac; Sabry, Amr; Duba, Bruce F.; Felleisen, Matthias: The Essence of Compiling with Continuations. <em>Proc. ACM SIGPLAN Conf. on Programming Language Design and Implementation</em>, (PLDI):237-247, 1993.</p>
</blockquote>
| 757
|
text generation
|
Help understanding formal language notation
|
https://cs.stackexchange.com/questions/32347/help-understanding-formal-language-notation
|
<p>I am reading this text and it is making absolutely no sense to me. It is as if it assumes I will understand. Not to mention the writer apparently had a book made and his grammar is poor. Some of the plain English sentences do not even make sense or have the letter s on the end of words where they should not be. Please help me understand this:</p>
<p>Language generators. A language generator is a device that can be used to generate the sentences of a language. A generator seems to be a device of limited usefulness as a language descriptor. Generators can more easily read and understand them. A language generation method constructs a language generator G and capable of generating a string of language by a process called derivation. For example, consider a language L over alphabet</p>
<blockquote>
<p>∑ = {a, b}</p>
<p>L= {a^n, b^n|n ≥ 1}</p>
<p>G = { s -> asb s -> ab s -> 3(i need to turn this symbol the
opposite direction)}</p>
</blockquote>
<p>W = aaa bbb</p>
<p>Derivation:</p>
<p>S ⇒ asb ⇒ aasbb ⇒ aaabbb</p>
<p>i.e., S ⇒ aaabbb It means we have derived aaabbb starting from start symbol S using the rules of language generator G in finite number of steps.</p>
|
<p>The problem seems to be that they assume you have a background in formal language theory.
Here are the basics.</p>
<ol>
<li><p>$\Sigma$ is the symbol which is traditionally used for an alphabet. When you're talking about strings, you always have a finite set of symbols that can be in those strings. Here we use $\Sigma=\{a,b\}$ to say that all our strings are binary, that is, they only contain the letters $a$ and $b$. We could have just as easily chosen $0,1$ or $\vee, \wedge$ or any other symbols we like. We just need a finite set of symbols.</p></li>
<li><p>$\epsilon$ is the empty string, the string of length 0. Sometimes you'll see this as $\lambda$, which is confusing, so be prepared for either. In a programming language like C, you'd see this written as <code>""</code>. It has the property that when you append it to the beginning or end of a word, it gives you that same word.</p></li>
<li><p>This is a description of context-free grammars. It's basically a concise way of defining a set of strings over some alphabet. We say that a string is in the set if there's some set of derivation rules we can apply to get that string.</p></li>
</ol>
<p>The derivation rules are basically a way of generating strings. The idea is that you start with a special start symbol, usually called $S$. Uppercase letters are called non-terminals and lowercase letters are called terminals. We keep applying rules until only lowercase letters are left.</p>
<p>The X -> Y things you see are the derivation rules. Each one says that, wherever the symbol on the left appears in a string, you can replace it with the string on the right. The idea is that you keep doing this until you get a word with only lowercase letters.</p>
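<p>As a tiny illustration (my own, using plain string rewriting), the derivation in your question is just repeated application of the rule S -> aSb followed by one application of S -> ab (the third rule, the mirrored 3, is S -> &#949;, the empty string):</p>

```python
# Derive a^n b^n from the grammar S -> aSb | ab by string rewriting:
# apply S -> aSb (n-1) times, then finish with S -> ab.
def derive(n):
    steps = ["S"]
    for _ in range(n - 1):
        steps.append(steps[-1].replace("S", "aSb", 1))  # apply S -> aSb
    steps.append(steps[-1].replace("S", "ab", 1))       # finish with S -> ab
    return steps

print(" => ".join(derive(3)))  # S => aSb => aaSbb => aaabbb
```

<p>The printed chain is exactly the derivation S &#8658; aSb &#8658; aaSbb &#8658; aaabbb from the question.</p>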
| 758
|
text generation
|
How is sound input and output data converted to use with machine learning networks?
|
https://cs.stackexchange.com/questions/11720/how-is-sound-input-and-output-data-converted-to-use-with-machine-learning-networ
|
<p>Suppose one has a couple of <em>.wav</em> files with English spoken words, multiple ones for each word, and for each such set there exists a transcription of their right output, the pronunciation as <em>ascii text</em>.</p>
<p>As far as I know, machine learning neural networks use arrays of floats as input and output, and also internally.</p>
<p>What does one do in machine learning in order to convert such 'real world' data formats/data sets into another data structure that is meaningful and suitable for the machine learning neural networks? </p>
<p>Furthermore, what classifies a particular data structure as 'suitable', except the fact that it can be expressed as arrays of integers (what fits every digital data)?</p>
<p>(I suppose it could be more sophisticated than stripping the headers and feeding the uncompressed binary data in as integers, or is it?)</p>
<hr>
<p><strong>edit</strong>: in an other SE site's <a href="https://stats.stackexchange.com/questions/7224/detecting-a-given-face-in-a-database-of-facial-images">question</a> (regarding how to filter out an image of Justin Bieber), an answer asserts that one <em>"has some method of feature generation to transform the raw images into features that are useful for machine learning purposes"</em>, but it doesn't explain how this is done, or how does one begin to create a method for such a feature conversion.</p>
|
<p>Probably the most common way to represent audio for speech recognition is using the <a href="http://en.wikipedia.org/wiki/Mel-frequency_cepstrum" rel="nofollow">Mel-frequency cepstrum</a> coefficients. If you're interested in finding out more about state of the art neural network based systems for speech recognition, I'd recommend checking out some of the recent work by <a href="http://www.cs.toronto.edu/~gdahl/" rel="nofollow">George Dahl</a>.</p>
<p>In regard to the other, more general portion of your question, using "binary" data in the way you describe is a very bad idea (if you mean what I think you mean). The binary representations used to store information in a computer are primarily designed for efficiency, both in terms of operations and space. Thus, this sort of representation is not necessarily representative of the actual characteristics of the data. </p>
<p>An example of this is the difference between signed and unsigned integers. If you're using 8 bit unsigned integers, then 11111111 > 00000001 but if you're using signed (2's complement) integers then 11111111 < 00000001. But, if you just represented the data as binary vectors, then you're essentially forcing a machine learning system to not only figure out whatever relationship you're trying to model, but also whatever encoding scheme you've used. You can avoid this by using representations that respect the characteristics (real valued, categorical, etc) of the data.</p>
<p>Lastly, if you're trying to predict 1 of $k$ mutually exclusive classes (such as characters), then the most common approach is to represent the class label (target) as a $k$-dimensional vector with a 1 at the $i$th position, corresponding to the class, and 0 everywhere else. You'll frequently see this referred to as a 1-of-$k$ or one-hot representation. You'll also want to use an appropriate output function for your neural net, most likely the softmax function given by</p>
<p>$$
f(x_i) = \frac{\exp(x_i)}{\sum_{j=1}^k \exp(x_j)},
$$
where $x_i$, $i = 1, \ldots, k$, is the total input to the $i$th output unit. You can interpret this type of output as representing the conditional probability of each class given the input. The derivative is also particularly simple under the log-loss, which is what you should probably be using in this scenario. </p>
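<p>For concreteness, a minimal sketch of the 1-of-$k$ target encoding and the softmax output (subtracting the maximum input for numerical stability, a standard trick not spelled out above):</p>

```python
import math

# One-hot (1-of-k) target encoding and a numerically stable softmax output.
def one_hot(i, k):
    return [1.0 if j == i else 0.0 for j in range(k)]

def softmax(xs):
    m = max(xs)                      # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

target = one_hot(2, 4)               # class 2 of 4: [0, 0, 1, 0]
probs = softmax([1.0, 2.0, 3.0, 0.5])
print(target)
print(round(sum(probs), 6))          # the probabilities sum to 1
```

<p>The softmax outputs can then be compared against the one-hot target under the log-loss mentioned above.</p>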
<p>In conclusion, neural networks are not magic black boxes. Just like any other ML technique, if you just throw data at them without considering the choices (inputs, outputs, loss function, etc) you're making and why, you'll probably have little success.</p>
| 759
|
text generation
|
Computing the unique triangles and edges from vertex connectivity of a Delaunay triangulation
|
https://cs.stackexchange.com/questions/152993/computing-the-unique-triangles-and-edges-from-vertex-connectivity-of-a-delaunay
|
<p>I am currently studying triangular mesh generation in 2D and, in that connection, the Delaunay triangulation of a list of vertices <span class="math-container">$\{ v_1, v_2, ..., v_N \}$</span>. The divide and conquer algorithm of Lee & Schachter (from their paper "Two Algorithms for Constructing a Delaunay Triangulation", 1980) constructs the Delaunay triangulation efficiently, but represents the triangulation in a way, which I need to translate to a different representation for my further computations. To be specific, for each vertex, <span class="math-container">$v_n$</span>, Lee & Schachter's algorithm gives a list of vertices, <span class="math-container">$\{ v_{n1}, v_{n2}, ..., v_{n k_n} \}$</span>, to which <span class="math-container">$v_n$</span> is connected by an edge of the triangulation and which are in counterclockwise order. What I would like to have is the following:</p>
<ol>
<li><p>A table of unique triangles in the triangulation, i.e. an array of dimension <span class="math-container">$(\text{Number of unique triangles}) \times 3$</span> which in its <span class="math-container">$p$</span>th row contains the three indices of the vertices making up the <span class="math-container">$p$</span>th unique triangle.</p>
</li>
<li><p>A table of unique edges in the triangulation, i.e. an array of dimension <span class="math-container">$(\text{Number of unique edges}) \times 2$</span> which in its <span class="math-container">$q$</span>th row contains the two indices of the vertices making up the <span class="math-container">$q$</span>th unique edge.</p>
</li>
</ol>
<p>I do not care about the ordering of the two tables. The only thing that matters is that the tables contain all unique triangles and edges.</p>
<p>It is straightforward to compute these two tables using a brute force approach. For example, to construct the table of unique triangles one could for <span class="math-container">$n = 1, 2, ..., N$</span> add the triangles <span class="math-container">$\{ v_n, v_{n1}, v_{n2} \}$</span>, <span class="math-container">$\{ v_n, v_{n2}, v_{n3} \}$</span>,..., <span class="math-container">$\{ v_n, v_{n k_n}, v_{n1} \}$</span> to a list ultimately containing all possible triangles of the triangulation, and then sort the list and choose the unique ones. This strategy, however, appears quite wasteful in terms of both computational work and memory to me, since all triangles will initially appear three times in the list. Unfortunately, I have not been able to come up with a better idea so far.</p>
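For concreteness, the brute-force strategy above (collect all candidate triangles, then deduplicate) can be sketched as follows; storing each triangle and edge as a sorted tuple in a set collapses the repeated copies. This is a hedged sketch: the `neighbors` input format (a dict mapping each vertex index to its counterclockwise neighbor list) and the mutual-adjacency guard (which filters the wraparound pair at convex-hull vertices) are my own assumptions:

```python
def triangles_and_edges(neighbors):
    # neighbors[n]: counterclockwise list of vertices adjacent to vertex n
    adjacency = {n: set(ring) for n, ring in neighbors.items()}
    triangles, edges = set(), set()
    for n, ring in neighbors.items():
        k = len(ring)
        for i in range(k):
            a, b = ring[i], ring[(i + 1) % k]
            edges.add(tuple(sorted((n, a))))
            # sorted triples collapse the three duplicate copies; the
            # mutual-adjacency check (an extra guard, my assumption)
            # filters the wraparound pair at convex-hull vertices
            if b in adjacency[a]:
                triangles.add(tuple(sorted((n, a, b))))
    return sorted(triangles), sorted(edges)
```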
<p>My question is therefore the following: How do I efficiently construct the above tables from the triangulation representation of Lee & Schachter?</p>
| 760
|
|
text generation
|
Local type argument synthesis when type variable does not appear in arguments
|
https://cs.stackexchange.com/questions/73831/local-type-argument-synthesis-when-type-variable-does-not-appear-in-arguments
|
<p>I am implementing the techniques described in the classic <a href="https://www.cis.upenn.edu/~bcpierce/papers/lti.pdf" rel="noreferrer">Local Type Inference</a> paper. Specifically, I am implementing the type argument synthesis algorithm from section 3.</p>
<p>My algorithm seems to mostly work, but it doesn’t seem to produce reasonable results when a quantified type variable appears in the <em>result</em> of a function, but not in its arguments. For context, I’ve reproduced the $\text{App-InfAlg}$ rule here:</p>
<p>$$
\dfrac{\begin{align}\tt\Gamma \vdash f \in All(\overline{X}) \overline{T} \rightarrow R \qquad &\tt \Gamma \vdash \overline{e} \in \overline{S} \qquad \lvert\overline{X}\rvert > 0\\
\tt\emptyset \vdash_\overline{X}\overline{S} <: \overline{T} \Rightarrow \overline{C}&\qquad\tt \sigma \in \bigwedge \overline{C} \Downarrow R
\end{align}}
{\tt\Gamma \vdash f(\overline{e}) \in \sigma R \Rightarrow f[\sigma \overline{X}](\overline{e})}
(\text{App-InfAlg})
$$</p>
<p>The most important piece here is the $\tt\emptyset \vdash_\overline{X}\overline{S} <: \overline{T} \Rightarrow \overline{C}$ premise, which invokes the constraint generation algorithm. Importantly, though, it <em>only</em> generates constraints using $\tt\overline{S}$ and $\tt\overline{T}$, which correspond to the argument types (that is, the types to the left of the arrow). This is problematic for types like this, which include type variables that only appear in the result:</p>
<p>$$
\tt All(X, Y)(X) \rightarrow Y
$$</p>
<p>Or, even more simply:</p>
<p>$$
\tt All(X)() \rightarrow X
$$</p>
<p>In this case, my implementation happily infers the type of the above two functions to be $\tt Y$ and $\tt X$, respectively, which are clearly not valid types, since they are type variables that have escaped their scope!</p>
<p>My guess is that my implementation is wrong, and the algorithm accounts for this case. In that situation, I would expect the algorithm to either reject the applications or infer $\tt Bot$ as the result type. However, I don’t see how this could possibly be accounted for, since the algorithmic inference rule only uses $\tt R$ for the purposes of turning the constraints $\overline{C}$ into the substitution set $\sigma$.</p>
<p>How does the algorithm handle this situation?</p>
|
<p>As a prelude, there is some terminological confusion in your question. The issue is about a type variable occurring in a result <em>type</em> of a function. This is fairly minor. A more serious one is when you say "my implementation happily infers the types of the above two functions to be ...". What functions? Functions are terms like (in this case) $\tt fun(x:T)e$ but the rule you quote is actually about function application, i.e. it applies to terms like $\tt f(e)$. Either way, the two things you present are types not terms. What you seem to want to say is: given a function $\mathtt{f}\in\tt All(X)()\to X$, the expression $\tt f()$ seems to have the inferred type $\tt X$.</p>
<p>$\tt \sigma \in \bigwedge \overline{C} \Downarrow R$ implies that $\sigma$ is a map for all the variables in $\tt R$ or else it wouldn't even make sense. Maybe it doesn't make sense, but we see that in the rule you listed the constraint generation step produces a $\mathtt{\bar X}/V$ constraint and the substitution algorithm will thus produce a $\mathtt{\bar X}/V$ substitution which, by definition, has a mapping for each type variable in $\tt \bar X$. I assume your confusion is that you look at the minimal substitution generating algorithm $\sigma_{C\tt R}$ at the bottom of page 11, note that in your second case, for example, the constraint generation step produces an empty set of constraints and thus the "for each constraint" iterates over an empty set producing a substitution with no mappings which presumably would act as the identity. However, this would violate the definition of an $\mathtt{\bar X}/V$ substitution. The detail here is the definition of the empty constraint set. At the beginning of section 3.3 on page 9 it states:</p>
<blockquote>
<p>The empty $\mathtt{\bar X}/V$ constraint set, written $\emptyset$,
contains the trivial constraint $\tt Bot <: X_\mathit{i} <: Top$ for
each variable $\mathtt{X}_i$.</p>
</blockquote>
<p>Since the result type is covariant (in a top-level context), we'll get $\tt Bot$ in those cases. In particular, given that $\mathtt{f}\in\tt All(X)()\to X$, the expression $\tt f()$ will have the inferred type $\tt Bot$. The produced substitution $\sigma_{\emptyset\tt R}$ will map $\tt X$ to $\tt Bot$.</p>
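To illustrate that last point, here is a hedged toy sketch (not the paper's algorithm verbatim, just the idea): with only the trivial constraints $\tt Bot <: X <: Top$, the minimal substitution picks the lower bound $\tt Bot$ for a variable that occurs covariantly in the result type, and the upper bound for a contravariant occurrence. The dictionary encoding of constraints and variances is my own simplification:

```python
def minimal_substitution(constraints, covariant_in_result):
    # constraints: var -> (lower, upper) bounds; for the *empty*
    # X-bar/V constraint set this is the trivial ('Bot', 'Top') pair
    # for each variable. A variable occurring covariantly in the
    # result type gets its lower bound (the minimal choice);
    # a contravariant occurrence gets the upper bound.
    return {var: (lo if covariant_in_result.get(var, True) else hi)
            for var, (lo, hi) in constraints.items()}

# Given f in All(X)() -> X: constraint generation yields no constraints,
# but the empty constraint set still carries Bot <: X <: Top, so the
# covariant result variable X is mapped to Bot.
empty = {'X': ('Bot', 'Top')}
sigma = minimal_substitution(empty, {'X': True})
```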
| 761
|
text generation
|
How to transform an arbitrary graph into a fixed vector representation?
|
https://cs.stackexchange.com/questions/112767/how-to-transform-an-arbitrary-graph-into-a-fixed-vector-representation
|
<p>Currently I work in computer vision, specifically on a problem known as "scene graph modeling." This problem aims to convert an image <span class="math-container">$I$</span> into a graph <span class="math-container">$G=(V,E)$</span> where the nodes <span class="math-container">$V$</span> represent the objects (and their features) in the scene and the edges <span class="math-container">$E$</span> the relationships between objects. An interesting paper on this topic is <a href="https://arxiv.org/pdf/1808.00191.pdf" rel="nofollow noreferrer">Graph R-CNN for Scene Graph Generation</a> (note that, unlike merely detecting the objects in an image, a scene graph aims to capture the contextual information of the image). A graph is a mathematical structure rich in information, and it would be very interesting to integrate graphs into a machine learning approach. To achieve this, it is necessary to transform a graph into a vector representation. Some works that intend to solve this problem are the following:</p>
<ul>
<li><a href="https://pdfs.semanticscholar.org/1762/baa638866a13dcc6d146fd5a49b36cbd9c30.pdf?_ga=2.208708955.832419040.1565783111-1394282387.1538378021" rel="nofollow noreferrer">SEMI-SUPERVISED CLASSIFICATION WITH
GRAPH CONVOLUTIONAL NETWORKS</a>: The problem with this algorithm is that it assumes a fixed number of nodes. After training, this algorithm takes a graph <span class="math-container">$G=(V,E)$</span> as input (with <span class="math-container">$N$</span> nodes, that is, <span class="math-container">$|V|=N$</span>) and outputs a fixed vector representation.</li>
<li><a href="http://www.mlgworkshop.org/2017/paper/MLG2017_paper_21.pdf" rel="nofollow noreferrer">graph2vec: Learning Distributed Representations of
Graphs</a>: This algorithm is flexible because it permits building a vector representation from a graph <span class="math-container">$G$</span> without restricting the number of nodes. However, it needs to know the whole graph space. That is, given a set <span class="math-container">$G=\{g_{1},g_{2},\dots,g_{i},\dots,g_{m}\}$</span>, where <span class="math-container">$g_{i}$</span> is the i-th graph, this algorithm builds a vector representation <span class="math-container">$V=\{v_{1},v_{2},\dots,v_{i},\dots,v_{m}\}$</span>, where <span class="math-container">$v_{i}$</span> is the i-th vector associated with the graph <span class="math-container">$g_{i}$</span>. This algorithm was originally proposed for text analysis, where the node features are low-dimensional; I do not know whether it can work with high-dimensional node features. </li>
</ul>
<p>I would like to know if there is another simple algorithm that allows me to convert any graph into a fixed vector representation.</p>
| 762
|
|
text generation
|
Open Problem: Structural Learnability of Pseudo-Random Boolean Circuits
|
https://cs.stackexchange.com/questions/171777/open-problem-structural-learnability-of-pseudo-random-boolean-circuits
|
<p>I would like to propose an open problem at the intersection of computational complexity, pseudorandomness, and circuit theory. This problem has potential implications for cryptography, AI model analysis, and the theory of explainability in stochastic systems.</p>
<p><strong>Informal formulation</strong></p>
<p>Let us define a class of pseudo-random boolean circuits generated by constrained stochastic algorithms (e.g. mutation-guided optimization or architecture-limited randomness). These circuits are not purely random, but produced via limited algorithmic stochasticity.</p>
<p>Core Question:
Does there exist a deterministic polynomial-time algorithm that, with high probability over the choice of such a circuit, can extract significant structural information—such as symmetry, repeated logical patterns, or simplifiable blocks—that was not explicitly encoded in the generator?</p>
<p>In short: Can pseudo-random circuit structure be "de-randomized" in a meaningful and efficient way?</p>
<p><strong>Tentative Formalization</strong></p>
<p>Let <span class="math-container">$G(n)$</span> be a family of stochastic generators that output Boolean circuits <span class="math-container">$S$</span> of size <span class="math-container">$O(n^k)$</span> for some <span class="math-container">$k \geq 1$</span>, under architectural or computational constraints (e.g., circuit depth, gate fan-in, local search heuristics).</p>
<p>Let <span class="math-container">$S = G(n)$</span> be a randomly generated Boolean circuit from this family.</p>
<p>We hypothesize: there does not exist a deterministic polynomial-time algorithm <span class="math-container">$A$</span> such that, with probability approaching <span class="math-container">$1$</span> as <span class="math-container">$n \to \infty$</span>, <span class="math-container">$A(S)$</span> is able to recover non-trivial structural regularities in <span class="math-container">$S$</span> that allow compression, simplification, or more efficient evaluation beyond what is achievable by generic (e.g., brute-force) analysis. That is, the structural learnability of such constrained pseudo-random circuits may be computationally intractable.</p>
<p><strong>Why This Might Matter</strong></p>
<ul>
<li>In cryptography, the security of certain primitives relies on pseudorandom functions or structures being indistinguishable from true randomness. If they admit structural analysis, this could open attack vectors.</li>
<li>In machine learning, large models often resemble constrained stochastic circuits. Understanding whether their structure can be efficiently inferred may inform the field of explainability or model compression.</li>
<li>In complexity theory, this probes the boundary between true randomness, pseudorandomness, and structural compressibility.</li>
</ul>
<p>I welcome any references, feedback, refinements, or alternative perspectives. Thank you for your time and thoughts!</p>
<p>P.S. Possible example to clarify the problem
To make the question more concrete, consider the following randomized generation process:</p>
<p>Let <span class="math-container">$G_n$</span> be a randomized generator that constructs a Boolean circuit <span class="math-container">$C_n$</span> with <span class="math-container">$n$</span> inputs and size <span class="math-container">$O(n^2)$</span> by iteratively combining subcircuits with randomly chosen logical gates (AND, OR, NOT). The process proceeds as:</p>
<p>Start with <span class="math-container">$n$</span> input variables <span class="math-container">$x_1, \dots, x_n$</span>.</p>
<p>At each step, randomly select two subcircuits <span class="math-container">$C_i$</span> and <span class="math-container">$C_j$</span> and form a new gate using:
<span class="math-container">$$
C_t = C_i \land C_j, \quad C_i \lor C_j, \quad \text{or} \quad \neg C_i.
$$</span></p>
<p>Repeat until the circuit reaches the target size.
This circuit may compute a simple function, e.g., <span class="math-container">$\text{XOR}(x_1, \dots, x_n)$</span> or <span class="math-container">$\text{Majority}(x_1, \dots, x_n)$</span>, but due to the randomized construction, its structure will appear noisy and unintelligible.</p>
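The generation process above can be sketched as follows (a minimal sketch; the tuple-based circuit representation and the `target_size`/`seed` parameters are my own assumptions, not part of the problem statement):

```python
import random

def generate_circuit(n, target_size, seed=None):
    # Randomly combine subcircuits with AND/OR/NOT gates, as described.
    # Each subcircuit is a nested tuple over the input indices.
    rng = random.Random(seed)
    pool = [('x', i) for i in range(n)]          # start from the n inputs
    while len(pool) < target_size:
        op = rng.choice(['and', 'or', 'not'])
        if op == 'not':
            pool.append(('not', rng.choice(pool)))
        else:
            pool.append((op, rng.choice(pool), rng.choice(pool)))
    return pool[-1]                              # last-built gate as output

def evaluate(c, assignment):
    # evaluate the tuple circuit on a list of booleans
    if c[0] == 'x':
        return assignment[c[1]]
    if c[0] == 'not':
        return not evaluate(c[1], assignment)
    a, b = evaluate(c[1], assignment), evaluate(c[2], assignment)
    return (a and b) if c[0] == 'and' else (a or b)
```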
<p>Main question:
Can a polynomial-time algorithm, given only the final circuit <span class="math-container">$C_n$</span>, recognize the underlying function (or determine that it is structurally "simple") with high probability over the randomness of <span class="math-container">$G_n$</span>?</p>
<p>This highlights a possible inherent gap between functional simplicity and structural interpretability, and may suggest fundamental limits on algorithmic circuit analysis.</p>
| 763
|
|
text generation
|
Will this algorithm always solve a constrained sudoku puzzle in quadratic time?
|
https://cs.stackexchange.com/questions/107183/will-this-algorithm-always-solve-a-constrained-sudoku-puzzle-in-quadratic-time
|
<h1>Constrained Puzzle Generation:</h1>
<p>Let us say a sudoku puzzle is generated with the following procedure:</p>
<ol>
<li>Gather a sequence input of 9 unique numbers in the range <span class="math-container">$[1 .. 9]$</span>. Call it <span class="math-container">$S$</span>.</li>
<li>Map <span class="math-container">$S$</span> to a <span class="math-container">$3 \times 3$</span> grid <span class="math-container">$G$</span> as follows:
<span class="math-container">$$G_{i,j} = \begin{cases}
S_{j} & i = 0\\
S_{j + 3} & i = 1\\
S_{j + 6} & i = 2
\end{cases}$$</span></li>
<li>Let's now call <span class="math-container">$M$</span> the sudoku board composed of 9 smaller <span class="math-container">$3 \times 3$</span> grids. (For instance <span class="math-container">$G$</span> will be one of these grids in the board). Define it as follows:</li>
</ol>
<p><span class="math-container">$$M_{i,j} = \text{shift}(G, i + 3 j)$$</span></p>
<p>Where <span class="math-container">$\text{shift}(G, 1)$</span> is defined as:</p>
<ul>
<li>Move <span class="math-container">$G_{0,0}$</span> to <span class="math-container">$G_{0,1}$</span></li>
<li>Move <span class="math-container">$G_{0,1}$</span> to <span class="math-container">$G_{0,2}$</span></li>
<li>Move <span class="math-container">$G_{0,2}$</span> to <span class="math-container">$G_{1,0}$</span></li>
<li>Move <span class="math-container">$G_{1,0}$</span> to <span class="math-container">$G_{1,1}$</span></li>
<li>Move <span class="math-container">$G_{1,1}$</span> to <span class="math-container">$G_{1,2}$</span></li>
<li>Move <span class="math-container">$G_{1,2}$</span> to <span class="math-container">$G_{2,0}$</span></li>
<li>Move <span class="math-container">$G_{2,0}$</span> to <span class="math-container">$G_{2,1}$</span></li>
<li>Move <span class="math-container">$G_{2,1}$</span> to <span class="math-container">$G_{2,2}$</span></li>
<li>Move <span class="math-container">$G_{2,2}$</span> to <span class="math-container">$G_{0,0}$</span></li>
</ul>
<p>Then define <span class="math-container">$\text{shift}(G, n) = \text{shift}(\text{shift}(G, n-1), 1)$</span>. Basically a "shift" is moving everything one cell to the right when possible or else move it down to the leftmost position in the next row.</p>
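Since a shift moves every entry one cell forward in row-major order (with wraparound from the last cell to the first), it is equivalent to rotating the flattened grid one position to the right. A minimal sketch of this observation:

```python
def shift(grid, n):
    # flatten row-major, rotate right by n (mod 9), reshape to 3x3
    flat = [v for row in grid for v in row]
    n %= 9
    rotated = flat[-n:] + flat[:-n] if n else flat
    return [rotated[3 * i:3 * i + 3] for i in range(3)]
```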
<ol start="4">
<li>Now, for all present entries in a difficult puzzle (let's say <a href="https://gizmodo.com/can-you-solve-the-10-hardest-logic-puzzles-ever-created-1064112665" rel="nofollow noreferrer">world's hardest puzzle</a>) we make the entries in <span class="math-container">$M$</span> present in the final output.</li>
</ol>
<hr>
<h1>Example</h1>
<ol>
<li>Let's say our input is <span class="math-container">$S = [8,5,9,6,1,2,4,3,7]$</span>.</li>
<li>We map <span class="math-container">$S$</span> to <span class="math-container">$G$</span> and get:</li>
</ol>
<p><span class="math-container">$$G = \begin{bmatrix}
8 & 5 & 9\\
6 & 1 & 2\\
4 & 3 & 7
\end{bmatrix}$$</span></p>
<ol start="3">
<li>Now we can produce <span class="math-container">$M$</span> with the shifts which would look like the following:</li>
</ol>
<p><span class="math-container">$$M = \begin{bmatrix}
8 & 5 & 9 & 4 & 3 & 7 & 6 & 1 & 2\\
6 & 1 & 2 & 8 & 5 & 9 & 4 & 3 & 7\\
4 & 3 & 7 & 6 & 1 & 2 & 8 & 5 & 9\\
7 & 8 & 5 & 2 & 4 & 3 & 9 & 6 & 1\\
9 & 6 & 1 & 7 & 8 & 5 & 2 & 4 & 3\\
2 & 4 & 3 & 9 & 6 & 1 & 7 & 8 & 5\\
3 & 7 & 8 & 1 & 2 & 4 & 5 & 9 & 6\\
5 & 9 & 6 & 3 & 7 & 8 & 1 & 2 & 4\\
1 & 2 & 4 & 5 & 9 & 6 & 3 & 7 & 8\\
\end{bmatrix}$$</span></p>
<ol start="4">
<li>Now map this onto the present entries in a difficult puzzle like <a href="https://gizmodo.com/can-you-solve-the-10-hardest-logic-puzzles-ever-created-1064112665" rel="nofollow noreferrer">this one</a>. We get the final grid:</li>
</ol>
<p><span class="math-container">$$M = \begin{bmatrix}
8 & & & & & & & & \\
& & 2 & 8 & & & & & \\
& 3 & & & 1 & & 8 & & \\
& 8 & & & & 3 & & & \\
& & & & 8 & 5 & 2 & & \\
& & & 9 & & & & 8 & \\
& & 8 & & & & & 9 & 6\\
& & 6 & 3 & & & & 2 & \\
& 2 & & & & & 3 & & \\
\end{bmatrix}$$</span></p>
<hr>
<h1>Semi-Solver</h1>
<p>If we assume that a sudoku puzzle was generated with this procedure we can now create a "semi"-solver. I say "semi" because we need the <span class="math-container">$3 \times 3$</span> grid <span class="math-container">$M_{2,2}$</span> already solved for us. Let's assume we have this. As an example I will assume we are provided:</p>
<p><span class="math-container">$$\begin{bmatrix}
5 & 9 & 6\\
1 & 2 & 4\\
3 & 7 & 8
\end{bmatrix}$$</span></p>
<p>Now we will flatten it into: <span class="math-container">$[5,9,6,1,2,4,3,7,8]$</span> and permute as follows:</p>
<pre><code>[8, 5, 9, 6, 1, 2, 4, 3, 7]-----list 1
[7, 8, 5, 9, 6, 1, 2, 4, 3]-----list 2
[3, 7, 8, 5, 9, 6, 1, 2, 4]-----list 3
[4, 3, 7, 8, 5, 9, 6, 1, 2]-----list 4
[2, 4, 3, 7, 8, 5, 9, 6, 1]-----list 5
[1, 2, 4, 3, 7, 8, 5, 9, 6]-----list 6
[6, 1, 2, 4, 3, 7, 8, 5, 9]-----list 7
[9, 6, 1, 2, 4, 3, 7, 8, 5]-----list 8
[5, 9, 6, 1, 2, 4, 3, 7, 8]-----list 9
</code></pre>
<p>Now for each list, we will turn them into a <span class="math-container">$3 \times 3$</span> grid using the same mapping in step 2 above. For example list 1 would get mapped to</p>
<p><span class="math-container">$$\begin{bmatrix}
8 & 5 & 9 \\
6 & 1 & 2 \\
4 & 3 & 7
\end{bmatrix}$$</span></p>
<p>Now we position these in the game board the same way we did as step 3 above. For example our layout would be as follows:</p>
<pre><code>**list1** **list4** **list7**
**list2** **list5** **list8**
**list3** **list6** **list9**
</code></pre>
<p>In the prior example this would give us the correct solution: </p>
<p><span class="math-container">$$M = \begin{bmatrix}
8 & 5 & 9 & 4 & 3 & 7 & 6 & 1 & 2\\
6 & 1 & 2 & 8 & 5 & 9 & 4 & 3 & 7\\
4 & 3 & 7 & 6 & 1 & 2 & 8 & 5 & 9\\
7 & 8 & 5 & 2 & 4 & 3 & 9 & 6 & 1\\
9 & 6 & 1 & 7 & 8 & 5 & 2 & 4 & 3\\
2 & 4 & 3 & 9 & 6 & 1 & 7 & 8 & 5\\
3 & 7 & 8 & 1 & 2 & 4 & 5 & 9 & 6\\
5 & 9 & 6 & 3 & 7 & 8 & 1 & 2 & 4\\
1 & 2 & 4 & 5 & 9 & 6 & 3 & 7 & 8\\
\end{bmatrix}$$</span></p>
<p>Thus list 9 (our input) will always give you the correct solution in quadratic time. </p>
<h1>Question</h1>
<p>Will this algorithm always solve the given puzzle if we assume the puzzle input was created with these constraints?</p>
|
<p>Let me start with a correctness check: will this method always generate grids that obey the sudoku rules?<br>
In fact yes. Your shift operator is simply a circulant matrix, scattering the boxes. A circulant matrix always obeys the row and column rules but not the box rule; your permutation (the scattering scheme) makes the result obey all the rules, so it produces a subset of the possible grids, let us call it <span class="math-container">$G$</span>.</p>
<p>Now, mapping the <span class="math-container">$G_i$</span> entries onto some mask (here the given sudoku) is bijective, since you promise to give puzzles generated that way, and those are deterministic.<br>
There are only <span class="math-container">$9!$</span> grids generated this way, and for any puzzle built from one of them the method will always work. Please note that there are 6,670,903,752,021,072,936,960 valid grids in total, as <a href="http://www.afjarvis.staff.shef.ac.uk/sudoku/" rel="nofollow noreferrer">computed by Bertram Felgenhauer and Frazer Jarvis</a>.</p>
<p>Your mapping allows you to recover the solution if you are given 9 unique numbers from the grid.</p>
<p>Any other puzzle provided as input will simply fail.</p>
| 764
|
text generation
|
Efficient n-choose-k random sampling
|
https://cs.stackexchange.com/questions/104930/efficient-n-choose-k-random-sampling
|
<p>Is there an efficient method of sampling an n-choose-k combination at random (with uniform probability, for example)?</p>
<p>I have read <a href="https://cs.stackexchange.com/questions/79555">this question</a> but it asks for generations of all combinations, not combinations at random.</p>
<p>In general I'm aware of rejection sampling; however, it's very inefficient.</p>
<p>I also came across <a href="https://cs.stackexchange.com/questions/87631">reservoir sampling</a>, but that appears to be primarily geared towards very large or unknown n. I'm more interested in large but finite n (an n-sized array itself will fit in memory, but the state space of all n-choose-k combinations might not).</p>
<p>Is there any survey/review on this topic? Does Knuth cover random n-choose-k sampling in his TAOCP texts?</p>
<p>Thanks in advance.</p>
<p><strong>Edit</strong>: To be a bit more specific, a 5-choose-3 space over the string 'ABCDE' would look like this:</p>
<p>['ABC', 'ABD', 'ABE', 'ACD', 'ACE', 'ADE', 'BCD', 'BCE', 'BDE', 'CDE']</p>
<p>(Note: this is combination without replacement). And I want to be able to sample from this space with uniform distribution, using a general algorithm (one that works with arbitrary n and k).</p>
|
<p>Here is the simplest algorithm, which is efficient when <span class="math-container">$k$</span> is small relative to <span class="math-container">$n$</span>.</p>
<hr>
<p>Input: two positive integers <span class="math-container">$n$</span> and <span class="math-container">$k$</span> with <span class="math-container">$k\le n$</span><br>
Output: a random permutation of <span class="math-container">$k$</span> integers from <span class="math-container">$1,2,\cdots,n$</span><br>
Algorithm:</p>
<ol>
<li>Let <code>arr</code> be an array of size <span class="math-container">$n$</span> and a default value that is not <code>True</code>.</li>
<li>Let <code>out</code> be an empty array.</li>
<li>Let <code>i</code> be a random integer in <span class="math-container">$[0,n)$</span>. If <code>arr[i]</code> is not <code>True</code>, append <code>i+1</code> to <code>out</code> and set <code>arr[i]</code> to <code>True</code>.</li>
<li>Go back to 3 unless we have selected <span class="math-container">$k$</span> elements.</li>
<li>return <code>out</code>.</li>
</ol>
<hr>
<p>If <span class="math-container">$k\le n/2$</span>, then the algorithm above runs in <span class="math-container">$O(k)$</span> average time. For example, if <span class="math-container">$n=1000$</span> and <span class="math-container">$k=50$</span>, it will use the random number generator fewer than 55 times on average. This is much better than reservoir sampling, which will use the random number generator about 1000 times.</p>
<hr>
<p>How to speed up the algorithm if <span class="math-container">$k$</span> is not much smaller than <span class="math-container">$n$</span>?</p>
<p>We will let each element in <code>arr</code> be a pair <code>(i, False)</code>. After we have selected <span class="math-container">$n/4$</span> elements, we will compactify <code>arr</code> by removing the elements whose second entries have been changed to <code>True</code>. We will set <span class="math-container">$n$</span> to <span class="math-container">$n-n/4$</span> and <span class="math-container">$k$</span> to <span class="math-container">$k-n/4$</span>. Repeat the algorithm.</p>
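A hedged Python sketch of the basic algorithm (steps 1-5 above, without the compaction refinement):

```python
import random

def sample_k_of_n(n, k, rng=random):
    # returns a random permutation of k distinct integers from 1..n
    taken = [False] * n          # step 1: the marker array
    out = []                     # step 2
    while len(out) < k:          # steps 3-4: retry until k are selected
        i = rng.randrange(n)     # random integer in [0, n)
        if not taken[i]:
            taken[i] = True
            out.append(i + 1)
    return out                   # step 5
```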
| 765
|
T5 model
|
T5 model custom vocabulary
|
https://stackoverflow.com/questions/62519413/t5-model-custom-vocabulary
|
<p>Is there a way to choose my custom vocabulary in T5-model while fine-tuning for a text summarization task?</p>
<p>I tried using a sentencepiece model to create my custom tokenizer, but the model predicted some tokens that were not present in my tokenizer, and hence the tokenizer treats them as unknown tokens.</p>
|
<p>It is okay to add a few tokens, but you cannot use a totally different vocabulary while fine-tuning! The pre-trained weights are trained with the pre-trained vocabulary :) If you change the vocabulary, the trained weights become meaningless and invalid! If you want to use another vocabulary you have to train from scratch! To add tokens to the vocabulary you can, for example, do:</p>
<pre><code>from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained(model_name)
tokenizer.add_tokens(['new', 'codekali', 'blabla'])
model = BertModel.from_pretrained(model_name, return_dict=False)
model.resize_token_embeddings(len(tokenizer))
</code></pre>
<p>The last line is important because you need to tell the model that the number of tokens has changed.</p>
| 0
|
T5 model
|
Using the T5 model with huggingface's mask-fill pipeline
|
https://stackoverflow.com/questions/61408753/using-the-t5-model-with-huggingfaces-mask-fill-pipeline
|
<p>Does anyone know if it is possible to use the T5 model with Hugging Face's mask-fill pipeline? The below is how you can do it using the default model, but I can't seem to figure out how to do it using the T5 model specifically. </p>
<pre><code>from transformers import pipeline
nlp_fill = pipeline('fill-mask')
nlp_fill('Hugging Face is a French company based in ' + nlp_fill.tokenizer.mask_token)
</code></pre>
<p>Trying this for example raises the error "TypeError: must be str, not NoneType" because nlp_fill.tokenizer.mask_token is None.</p>
<pre><code>nlp_fill = pipeline('fill-mask',model="t5-base", tokenizer="t5-base")
nlp_fill('Hugging Face is a French company based in ' + nlp_fill.tokenizer.mask_token)
</code></pre>
| 1
|
|
T5 model
|
KeyError: 'target_text' while training a t5 model using simple transformers
|
https://stackoverflow.com/questions/72880981/keyerror-target-text-while-training-a-t5-model-using-simple-transformers
|
<p>I am training a T5 model using Simple Transformers, and it is raising <code>KeyError: 'target_text'</code>. What could be the probable cause of this, and how can I solve it? Here is my code:</p>
<pre><code> !pip install SimpleTransformers
import logging
import pandas as pd
df=pd.read_csv('/content/Vastu - Sheet1 (4).csv',sep=',')
df
df["prefix"]="ask_question"
from sklearn.model_selection import train_test_split
train_df, eval_df = train_test_split(df, test_size=0.05)
train_df.keys()
from simpletransformers.t5 import T5Model ,T5Args
model_args=T5Args()
model_args.num_train_epochs = 3
model = T5Model(
model_type='t5',
model_name="t5-base",
args=model_args,
use_cuda=False,
)
model.train_model(train_df)
</code></pre>
| 2
|
|
T5 model
|
Error while converting google flan T5 model to onnx
|
https://stackoverflow.com/questions/78483209/error-while-converting-google-flan-t5-model-to-onnx
|
<p>I am looking to convert the flan-T5 model downloaded from Hugging Face into ONNX format and run inference with it.</p>
<p>My input data is the <strong>symptoms of a disease</strong> and the expected output is the <strong>disease name</strong>.</p>
<pre><code>from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
import onnx
# Set the device to GPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Load the model and tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xl").to(device)
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")
# Export the model to ONNX format
onnx_path = "flan-t5-xl.onnx"
dummy_input = tokenizer("What's the disease name in this text: Example text", return_tensors="pt", padding=True).to(device)
dummy_input_ids = dummy_input["input_ids"]
dummy_attention_mask = dummy_input["attention_mask"]
dummy_decoder_input_ids = tokenizer("<pad>", return_tensors="pt").input_ids.to(device)
with torch.no_grad():
torch.onnx.export(
model,
(dummy_input_ids, dummy_attention_mask, dummy_decoder_input_ids),
onnx_path,
opset_version=11,
input_names=["input_ids", "attention_mask", "decoder_input_ids"],
output_names=["output"],
dynamic_axes={
"input_ids": {0: "batch_size"},
"attention_mask": {0: "batch_size"},
"decoder_input_ids": {0: "batch_size"},
"output": {0: "batch_size", 1: "sequence_length"},
},
)
print(f"Model saved to {onnx_path}")
# Inference using the ONNX model on GPU
import onnxruntime
onnx_model = onnxruntime.InferenceSession(onnx_path, providers=["CUDAExecutionProvider"]
)
</code></pre>
<blockquote>
<p>InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from flan-t5-xl.onnx failed:This is an invalid model. Type Error: Type 'tensor(int64)' of input parameter (/decoder/block.0/layer.0/SelfAttention/Sub_output_0) of operator (Min) in node (/decoder/block.0/layer.0/SelfAttention/Min) is invalid.</p>
</blockquote>
<pre><code>input_text = input("Enter Disease/Symptom Detail: ")
inputs = tokenizer(input_text, return_tensors="pt", padding=True).to(device)
input_ids = inputs["input_ids"]
attention_mask = inputs["attention_mask"]
decoder_input_ids = tokenizer("<pad>", return_tensors="pt").input_ids.to(device)

onnx_inputs = {
    "input_ids": input_ids.cpu().numpy(),
    "attention_mask": attention_mask.cpu().numpy(),
    "decoder_input_ids": decoder_input_ids.cpu().numpy(),
}

onnx_output = onnx_model.run(None, onnx_inputs)[0]
decoded_output = tokenizer.decode(onnx_output[0], skip_special_tokens=True)
print('-' * 100)
print(f"Name of Disease based on Entered Text: {decoded_output}")
</code></pre>
|
<p>Use <a href="https://huggingface.co/datasets/bakks/flan-t5-onnx" rel="nofollow noreferrer">https://huggingface.co/datasets/bakks/flan-t5-onnx</a> instead.</p>
<p>And to convert the <code>google/flan-t5</code>, see <a href="https://huggingface.co/datasets/bakks/flan-t5-onnx/blob/main/exportt5.py" rel="nofollow noreferrer">https://huggingface.co/datasets/bakks/flan-t5-onnx/blob/main/exportt5.py</a></p>
<pre><code>from pathlib import Path
import transformers as t
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSeq2SeqLM

# print out the version of the transformers library
print("transformers version:", t.__version__)

models = [
    # "google/flan-t5-small",
    # "google/flan-t5-base",
    # "google/flan-t5-large",
    "google/flan-t5-xl",
    "google/flan-t5-xxl",
]

for model_id in models:
    model_name = model_id.split("/")[1]
    onnx_path = Path("onnx/" + model_name)

    # load vanilla transformers and convert to onnx
    model = ORTModelForSeq2SeqLM.from_pretrained(model_id, from_transformers=True)
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    # save onnx checkpoint and tokenizer
    model.save_pretrained(onnx_path)
    tokenizer.save_pretrained(onnx_path)
</code></pre>
<p>Then try again:</p>
<pre><code>import onnxruntime

onnx_model = onnxruntime.InferenceSession(
    onnx_path, providers=["CUDAExecutionProvider"]
)
</code></pre>
| 3
|
T5 model
|
Output logits from T5 model for text generation purposes
|
https://stackoverflow.com/questions/73314467/output-logits-from-t5-model-for-text-generation-purposes
|
<p>I am using the T5 model found on Hugging Face for text summarization. How can I output the logits of the T5 model directly given a text input for generation purposes (not training)?</p>
<p>I want to generate the outputs token by token so that I can calculate the entropy of each output token, respectively. It does not seem like the .generate() method will work for this.</p>
<p>I effectively want to create my own generate function but I need to obtain the logits of the model to be able to do this.</p>
|
<p>You can use the forward function to get your logits, and apply argmax as such:</p>
<pre class="lang-py prettyprint-override"><code>from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch.nn.functional as F

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

input_ids = tokenizer("test here",
                      padding="longest",
                      max_length=128,
                      truncation=True,
                      return_tensors="pt"
                      )

logits = model(**input_ids).logits
preds = F.softmax(logits, dim=-1).argmax(dim=-1)
y = tokenizer.batch_decode(sequences=preds, skip_special_tokens=True)
</code></pre>
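<p>As a side note, softmax is strictly monotonic, so taking <code>argmax</code> over the raw logits yields the same token ids as taking it after <code>softmax</code>; the softmax call above is only needed if you want actual probabilities (e.g. for entropy). A minimal dependency-free sketch of that fact, with toy logits and helper names made up for illustration:</p>

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def argmax(xs):
    return max(range(len(xs)), key=lambda i: xs[i])

logits = [2.0, -1.0, 0.5, 1.9]
# argmax over probabilities equals argmax over raw logits
assert argmax(softmax(logits)) == argmax(logits)
```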
<p>You may check the original source here:
<a href="https://stackoverflow.com/questions/72177055/forward-outputs-on-multiple-sequences-is-wrong?noredirect=1#comment127536983_72177055">Forward outputs on multiple sequences is wrong</a></p>
| 4
|
T5 model
|
Does Huggingface's T5 Model Vocabulary include English-only version?
|
https://stackoverflow.com/questions/61880247/does-huggingfaces-t5-model-vocabulary-include-english-only-version
|
<p>Does anyone know if HuggingFace's T5 model (small) comes with mono-language vocabulary? The T5 paper by Google indicates that their vocabulary is trained on English and 3 other languages. Is there a version of this vocabulary that contains English only vocabulary? </p>
|
<p>When looking at the publicly available <a href="https://s3.amazonaws.com/models.huggingface.co/bert/t5-small-config.json" rel="nofollow noreferrer">model card definition</a>, the HuggingFace T5-small also seems to contain the necessary translation tasks, which makes it a multi-lingual model. Note that the summarization task should still be English-only, so depending on your task, this might not matter too much.</p>
| 5
|
T5 model
|
Hugging Face T5 model that is not pre-trained and training it
|
https://stackoverflow.com/questions/74267006/hugging-face-t5-model-that-is-not-pre-trained-and-training-it
|
<p>I want to use the Hugging Face T5 model to do summarization but I want to train the model with my own dataset.</p>
<p>How can I get the T5 model such that it has not been trained yet? And what steps do I need to take to train it?</p>
<p>Currently I am looking at this tutorial:
<a href="https://huggingface.co/docs/transformers/tasks/summarization" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/tasks/summarization</a></p>
<p>I also found this code which allows me to get a T5 model without weights but how can I train it with my own data?
<a href="https://stackoverflow.com/questions/73700165/how-to-use-architecture-of-t5-without-pretrained-model-hugging-face">How to use architecture of T5 without pretrained model (Hugging face)</a></p>
| 6
|
|
T5 model
|
Using the encoder part only from T5 model
|
https://stackoverflow.com/questions/71788825/using-the-encoder-part-only-from-t5-model
|
<p>I want to build a classification model that needs only the encoder part of language models. I have tried Bert, Roberta, xlnet, and so far I have been successful.</p>
<p>I now want to test the encoder part only from T5, so far, I found encT5 <a href="https://github.com/monologg/EncT5" rel="noreferrer">https://github.com/monologg/EncT5</a></p>
<p>And T5EncoderModel from HuggingFace.</p>
<p>Can anyone help me understand if T5EncoderModel is what I am looking for or not?</p>
<p>It says in the description: The bare T5 Model transformer outputting encoder’s raw hidden-states without any specific head on top.</p>
<p>This is slightly confusing to me, especially that encT5 mentioned that they implemented the encoder part only because it didn't exist in HuggingFace which is what makes me more doubtful here.</p>
<p>Please note that I am a beginner in deep learning, so please go easy on me; I understand that my questions can be naive to most of you.</p>
<p>Thank you</p>
|
<p>Load T5 encoder checkpoint only:</p>
<pre><code>from transformers import T5EncoderModel
T5EncoderModel._keys_to_ignore_on_load_unexpected = ["decoder.*"]
auto_model = T5EncoderModel.from_pretrained("t5-base")
</code></pre>
<p>Note that T5 doesn't have a CLS token, so you should use another strategy (mean pooling, etc.) to get a sentence representation for your classification task.</p>
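<p>To illustrate the mean-pooling strategy mentioned above: average only the non-padding token vectors, using the attention mask to exclude padding positions. A minimal pure-Python sketch with toy numbers (not the real tensor code, just the idea):</p>

```python
def mean_pool(hidden_states, attention_mask):
    """Average token vectors, ignoring positions where the mask is 0.

    hidden_states: list of token vectors (each a list of floats)
    attention_mask: list of 0/1 ints, one per token
    """
    dim = len(hidden_states[0])
    summed = [0.0] * dim
    count = sum(attention_mask)
    for vec, m in zip(hidden_states, attention_mask):
        if m:
            for j in range(dim):
                summed[j] += vec[j]
    return [s / count for s in summed]

# Two real tokens + one padding token: padding must not affect the mean.
states = [[1.0, 2.0], [3.0, 4.0], [99.0, 99.0]]
mask = [1, 1, 0]
print(mean_pool(states, mask))  # [2.0, 3.0]
```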
| 7
|
T5 model
|
How to use AllenNLP interpret on finetuned t5 model
|
https://stackoverflow.com/questions/69527860/how-to-use-allennlp-interpret-on-finetuned-t5-model
|
<p>I have trained a T5 model on a specific dataset for the purpose of keyword extraction. I wish to use AllenNLP Interpret to get various saliency mappings for the inputs given to my model. Where do I make changes so that I can use the package?</p>
|
<p>The AllenNLP guide has a chapter on interpreting models: <a href="https://guide.allennlp.org/interpret" rel="nofollow noreferrer">https://guide.allennlp.org/interpret</a></p>
<p>Also, for custom models, here's an example: <a href="https://stackoverflow.com/questions/65806905/how-to-use-allen-nlp-interpret-on-custom-models/65951248#65951248">How to use Allen NLP interpret on custom models</a></p>
| 8
|
T5 model
|
T5 model generates short output
|
https://stackoverflow.com/questions/74981011/t5-model-generates-short-output
|
<p>I have fine-tuned the T5-base model (from hugging face) on a new task where each input and target are sentences of 256 words.
The loss is converging to low values however when I use the <code>generate</code> method the output is always too short.
I tried giving minimal and maximal length values to the method but it doesn't seem to be enough. I suspect the issue is related to the fact that the sentence length before tokenization is 256 and after tokenization, it is not constant (padding is used during training to ensure all inputs are of the same size).
Here is my generate method:</p>
<pre class="lang-py prettyprint-override"><code>model = transformers.T5ForConditionalGeneration.from_pretrained('t5-base')
tokenizer = T5Tokenizer.from_pretrained('t5-base')

generated_ids = model.generate(
    input_ids=ids,
    attention_mask=attn_mask,
    max_length=1024,
    min_length=256,
    num_beams=2,
    early_stopping=False,
    repetition_penalty=10.0
)

preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids][0]
preds = preds.replace("<pad>", "").replace("</s>", "").strip().replace("  ", " ")
target = [tokenizer.decode(t, skip_special_tokens=True, clean_up_tokenization_spaces=True) for t in reference][0]
target = target.replace("<pad>", "").replace("</s>", "").strip().replace("  ", " ")
</code></pre>
<p>The inputs are created using</p>
<pre class="lang-py prettyprint-override"><code>tokens = tokenizer([f"task: {text}"], return_tensors="pt", max_length=1024, padding='max_length')
inputs_ids = tokens.input_ids.squeeze().to(dtype=torch.long)
attention_mask = tokens.attention_mask.squeeze().to(dtype=torch.long)
labels = tokenizer([target_text], return_tensors="pt", max_length=1024, padding='max_length')
label_ids = labels.input_ids.squeeze().to(dtype=torch.long)
label_attention = labels.attention_mask.squeeze().to(dtype=torch.long)
</code></pre>
|
<p>To whom it may concern, I found out the issue was with the <code>max_length</code> argument of the generation method. It limits the maximal number of tokens <strong>including</strong> the input tokens. In my case it was required to set <code>max_new_tokens=1024</code> instead of the argument provided in the question.</p>
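<p>In other words, with <code>max_length</code> the budget for newly generated tokens shrinks as the prompt grows, while <code>max_new_tokens</code> is independent of the prompt. A hypothetical helper (not part of the transformers API) that captures the bookkeeping the answer describes:</p>

```python
def new_token_budget(prompt_len, max_length=None, max_new_tokens=None):
    """How many tokens generate() may append under each setting."""
    if max_new_tokens is not None:
        return max_new_tokens  # counts only freshly generated tokens
    return max(0, max_length - prompt_len)  # max_length includes the prompt

# A 1024-token prompt with max_length=1024 leaves no room to generate...
assert new_token_budget(1024, max_length=1024) == 0
# ...while max_new_tokens=1024 always allows 1024 fresh tokens.
assert new_token_budget(1024, max_new_tokens=1024) == 1024
```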
| 9
|
T5 model
|
How to properly finetune t5 model
|
https://stackoverflow.com/questions/71607360/how-to-properly-finetune-t5-model
|
<p>I'm finetuning a t5-base model following <a href="https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/translation.ipynb#scrollTo=ZwZDgY-0DbrD" rel="nofollow noreferrer">this notebook</a>.
However, the loss of both validation set and training set decreases very slowly. I changed the learning_rate to a larger number, but it did not help. Eventually, the bleu score on the validation set was low (around 13.7), and the translation quality was low as well.</p>
<pre><code>***** Running Evaluation *****
Num examples = 1000
Batch size = 32
{'eval_loss': 1.06500244140625, 'eval_bleu': 13.7229, 'eval_gen_len': 17.564, 'eval_runtime': 16.7915, 'eval_samples_per_second': 59.554, 'eval_steps_per_second': 1.906, 'epoch': 5.0}
</code></pre>
<p>If I use the "Helsinki-NLP/opus-mt-en-ro" model, the loss decreases properly, and at the end, the finetuned model works pretty well.</p>
<p>How to fine-tune t5-base properly? Did I miss something?</p>
|
<p>I think the metrics shown in the tutorial are for the already trained EN>RO opus-mt model which was then fine-tuned. I don't see the before and after comparison of the metrics for it, so it is hard to tell how much of a difference that fine-tuning really made.</p>
<p>You generally shouldn't expect the same results from fine-tuning T5 which is not a (pure) machine translation model. More important is the difference in metrics before and after the fine-tuning.</p>
<p>Two things I could imagine having gone wrong with your training:</p>
<ol>
<li>Did you add the proper T5 prefix to the input sequences (<code>"translate English to Romanian: "</code>) for both your training and your evaluation? If you did not you might have been training a new task from scratch and not use the bit of pre-training the model did on MT to Romanian (and German and perhaps some other ones). You can see how that affects the model behavior for example in this inference demo: <a href="https://huggingface.co/t5-base?text=translate%20English%20to%20Romanian%3A%20How%20are%20you%3F" rel="nofollow noreferrer">Language used during pretraining</a> and <a href="https://huggingface.co/t5-base?text=translate+English+to+Spanish%3A+How+are+you%3F" rel="nofollow noreferrer">Language not used during pretraining</a>.</li>
<li>If you chose a relatively small model like <code>t5-base</code> but you stuck with the <code>num_train_epochs=1</code> in the tutorial, your train epoch number is probably a lot too low to make a noticeable difference. Try increasing the epochs for as long as you get significant performance boosts from it; in the example this is probably the case for at least the first 5 to 10 epochs.</li>
</ol>
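<p>To make point 1 concrete, the same task prefix has to be prepended to every source sentence, for both the training and the evaluation sets, before tokenization. A trivial sketch (plain strings, hypothetical helper name):</p>

```python
PREFIX = "translate English to Romanian: "

def add_prefix(texts):
    # The same T5 task prefix is applied to train and eval inputs alike.
    return [PREFIX + t for t in texts]

batch = add_prefix(["I am hungry.", "Where is the station?"])
print(batch[0])  # translate English to Romanian: I am hungry.
```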
<p>I actually did something very similar to what you are doing before for EN>DE (German). I fine-tuned both <code>opus-mt-en-de</code> and <code>t5-base</code> on a custom dataset of 30.000 samples for 10 epochs. <code>opus-mt-en-de</code> BLEU increased from 0.256 to 0.388 and <code>t5-base</code> from 0.166 to 0.340, just to give you an idea of what to expect. Romanian/the dataset you use might be more of a challenge for the model and result in different scores though.</p>
| 10
|
T5 model
|
Huggingface GPT2 and T5 model APIs for sentence classification?
|
https://stackoverflow.com/questions/62561471/huggingface-gpt2-and-t5-model-apis-for-sentence-classification
|
<p>I've successfully used the <a href="https://huggingface.co/transformers/model_doc/bert.html" rel="nofollow noreferrer">Huggingface Transformers BERT model</a> to do sentence classification using the <a href="https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification" rel="nofollow noreferrer">BERTForSequenceClassification</a> class and API. I've used it for both 1-sentence sentiment analysis and 2-sentence NLI.</p>
<p>I can see that other models have analogous classes, e.g. <a href="https://huggingface.co/transformers/model_doc/xlnet.html#xlnetforsequenceclassification" rel="nofollow noreferrer">XLNetForSequenceClassification</a> and <a href="https://huggingface.co/transformers/model_doc/roberta.html#robertaforsequenceclassification" rel="nofollow noreferrer">RobertaForSequenceClassification</a>. This type of sentence classification usually involves placing a classifier layer on top of a dense vector representing the entirety of the sentence.</p>
<p>Now I'm trying to use the <a href="https://huggingface.co/transformers/model_doc/gpt2.html" rel="nofollow noreferrer">GPT2</a> and <a href="https://huggingface.co/transformers/model_doc/t5.html" rel="nofollow noreferrer">T5</a> models. However, when I look at the available classes and API for each one, there is no equivalent "ForSequenceClassification" class. For example, for GPT2 there are <a href="https://huggingface.co/transformers/model_doc/gpt2.html#gpt2model" rel="nofollow noreferrer">GPT2Model</a>, <a href="https://huggingface.co/transformers/model_doc/gpt2.html#gpt2lmheadmodel" rel="nofollow noreferrer">GPT2LMHeadModel</a>, and <a href="https://huggingface.co/transformers/model_doc/gpt2.html#gpt2doubleheadsmodel" rel="nofollow noreferrer">GPT2DoubleHeadsModel</a> classes. Perhaps I'm not familiar enough with the research for GPT2 and T5, but I'm certain that both models are capable of sentence classification.</p>
<p>So my questions are:</p>
<ol>
<li><p>What Huggingface classes for GPT2 and T5 should I use for 1-sentence classification?</p>
</li>
<li><p>What classes should I use for 2-sentence (sentence pair) classification (like natural language inference)?</p>
</li>
</ol>
<p>Thank you for any help.</p>
|
<p>You need to use the GPT2Model class to generate the sentence embeddings of the text. Once you have the embeddings, feed them to a linear NN and a softmax function to obtain the logits. Below is a component for text classification using GPT2 that I'm working on (still a work in progress, so I'm open to suggestions); it follows the logic I just described:</p>
<pre><code>from torch_model_base import TorchModelBase
import torch
import torch.nn as nn
import torch.utils.data
from transformers import GPT2Tokenizer, GPT2Model
import random
from spacy.util import minibatch, compounding
import numpy as np
from sklearn.base import TransformerMixin, BaseEstimator
import pandas as pd
from typing import List, Tuple


def mean_across_all_tokens(hidden_states):
    return torch.mean(hidden_states[-1], dim=1)


def sum_all_tokens(hidden_states):
    return torch.sum(hidden_states[-1], dim=1)


def concat_all_tokens(hidden_states):
    batch_size, max_tokens, emb_dim = hidden_states[-1].shape
    return torch.reshape(hidden_states[-1], (batch_size, max_tokens * emb_dim))


class GPT2SequenceClassifierModel(nn.Module):
    def __init__(
            self,
            hidden_size: int,
            num_classes: int,
            gpt_model_name: str,
            max_seq_length: int = 280,
            embedding_func=mean_across_all_tokens,
            combine_sentence_tokens=True
    ):
        super(GPT2SequenceClassifierModel, self).__init__()
        self.hidden_size = hidden_size
        self.fc1 = nn.Linear(hidden_size, num_classes)
        self.model = GPT2Model.from_pretrained(
            gpt_model_name,
            output_hidden_states=True
        )
        self.tokenizer = GPT2Tokenizer.from_pretrained(gpt_model_name)
        self.combine_sentence_tokens = combine_sentence_tokens
        self.embedding_func = embedding_func
        self.model.eval()
        self.max_length = max_seq_length

    def _tokenize(self, text_list: List[str]) -> Tuple[torch.tensor, torch.tensor]:
        # Tokenize the text with the provided tokenizer
        # self.tokenizer.pad_token = self.tokenizer.eos_token
        self.tokenizer.add_special_tokens({'pad_token': '[PAD]'})
        self.tokenizer.add_special_tokens({'cls_token': '[CLS]'})
        self.model.resize_token_embeddings(len(self.tokenizer))
        input_ids = self.tokenizer.batch_encode_plus(text_list,
                                                     add_special_tokens=True,
                                                     max_length=self.max_length,
                                                     pad_to_max_length=True
                                                     )["input_ids"]
        return torch.LongTensor(input_ids)

    def _tokenize_and_predict(self, text_list: List[str]) -> torch.tensor:
        input_ids_tensor = self._tokenize(text_list)
        out = self.model(input_ids=input_ids_tensor)
        hidden_states = out[2]
        if self.combine_sentence_tokens:
            return self.embedding_func(hidden_states)
        else:
            return hidden_states[-1]

    def forward(self, text_list: List[str]):
        """
        :param input_ids: (torch.LongTensor of shape (batch_size, input_ids_length))
        :return: logits for class
        """
        if isinstance(text_list, pd.Series):
            text_list = text_list.tolist()
        with torch.no_grad():
            # fine tuning GPT2 model is too expensive, so won't do it
            gpt_out = self._tokenize_and_predict(text_list)
        batch_size = len(text_list)
        assert gpt_out.shape == (batch_size, self.hidden_size)
        prediction_vector = self.fc1(gpt_out)  # (batch_size, num_classes)
        logits = torch.softmax(prediction_vector, dim=1)
        return logits


class GPT2Classifier(TorchModelBase):
    """GPT2 + NN head for classification problems.

    The network will work for any kind of classification task.

    Parameters
    ----------
    embed_dim: dimension of byte-pair/token embeddings generated by the model;
        check the model card (n_embd prop), since each model is compatible with only one no. of dimensions
    max_seq_length: max tokens in a sequence (n_positions param in the hugging face model config);
        if a sequence is shorter it will get padded
    """
    def __init__(self,
                 model_name="distilgpt2",
                 embed_dim=768,
                 max_seq_length=1024,
                 **kwargs
                 ):
        self.model_name = model_name
        self.embed_dim = embed_dim
        self.max_seq_length = max_seq_length
        self.model = None  # call fit() to set this
        self.tokenizer = None  # call fit() to set this
        self.classes = None  # call fit() to set this
        super(GPT2Classifier, self).__init__(**kwargs)
        self.params += ['model_name']

    def fit(self, X, y):
        """Standard `fit` method.

        Parameters
        ----------
        X : np.array
        y : array-like

        Returns
        -------
        self
        """
        self.classes = list(set(y))
        self.model = GPT2SequenceClassifierModel(
            hidden_size=self.embed_dim,
            num_classes=len(self.classes),
            gpt_model_name=self.model_name,
            max_seq_length=self.max_seq_length
        )
        self.opt = self.optimizer(
            self.model.parameters()
        )
        self.model.train()
        loss = nn.CrossEntropyLoss()
        print("Training... max iters: ", self.max_iter)
        for epoch in range(self.max_iter):
            print("epoch no: ", epoch)
            zipped_data = list(zip(X, y))
            random.shuffle(zipped_data)
            batches = minibatch(zipped_data, size=self.batch_size)
            for batch in batches:
                X_batch, y_batch = zip(*batch)
                batch_preds = self.model(X_batch)
                err = loss(batch_preds, torch.LongTensor(y_batch))
                # Backprop:
                self.opt.zero_grad()
                err.backward()
                self.opt.step()
        return self

    def predict_proba(self, X):
        """Predicted probabilities for the examples in `X`.

        Parameters
        ----------
        X : np.array

        Returns
        -------
        np.array with shape (len(X), self.n_classes_)
        """
        self.model.eval()
        with torch.no_grad():
            preds = self.model(X)
            preds = preds.numpy()
            return preds

    def predict(self, X):
        """Predicted labels for the examples in `X`. These are converted
        from the integers that PyTorch needs back to their original
        values in `self.classes_`.

        Parameters
        ----------
        X : np.array

        Returns
        -------
        list of length len(X)
        """
        probs = self.predict_proba(X)
        return [self.classes[i] for i in probs.argmax(axis=1)]
</code></pre>
| 11
|
T5 model
|
Making prediction from encoder and decoder of T5 model without using generate method
|
https://stackoverflow.com/questions/71049114/making-prediction-from-encoder-and-decoder-of-t5-model-without-using-generate-me
|
<p>I was working on the optimization of the T5 model I separated the model into encoder and decoder and converted them to ONNX using Nvidia TensorRT repo <a href="https://github.com/NVIDIA/TensorRT/tree/main/demo/HuggingFace" rel="nofollow noreferrer">https://github.com/NVIDIA/TensorRT/tree/main/demo/HuggingFace</a> but I am unable to make an inference. The model, I used is a QA model based on T5 and its prediction is done using generate method. Hence is there any way by which we can generate using T5 without using generate method?.</p>
| 12
|
|
T5 model
|
How to freeze parts of T5 transformer model
|
https://stackoverflow.com/questions/71048521/how-to-freeze-parts-of-t5-transformer-model
|
<p>I know that T5 has K, Q and V vectors in each layer. It also has a feedforward network. I would like to freeze K, Q and V vectors and only train the feedforward layers on each layer of T5. I use Pytorch library. The model could be a wrapper for huggingface T5 model or a modified version of it. I know how to freeze all parameters using the following code:</p>
<pre class="lang-py prettyprint-override"><code>tokenizer = AutoTokenizer.from_pretrained(underlying_model_name)
model = T5ForConditionalGeneration.from_pretrained(underlying_model_name)

for p in model.parameters():
    p.requires_grad = False  # freezing
</code></pre>
<p>Could you please guide me how can I do this?</p>
<p>This <a href="https://github.com/microsoft/LoRA" rel="nofollow noreferrer">github project</a> probably could be helpful but it's for Roberta and GPT, could I adapt it for T5?</p>
|
<p>I've adapted a solution based on <a href="https://discuss.huggingface.co/t/how-to-freeze-some-layers-of-bertmodel/917" rel="nofollow noreferrer">this discussion</a> from the Huggingface forums.
Basically, you have to specify the names of the modules/pytorch layers that you want to freeze.</p>
<p>In your particular case of T5, I started by looking at the model summary:</p>
<pre class="lang-py prettyprint-override"><code>from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")
print(model)
</code></pre>
<p>This gives the following (abbreviated output):</p>
<pre><code>T5ForConditionalGeneration(
  (shared): Embedding(32128, 512)
  (encoder): T5Stack(
    (embed_tokens): Embedding(32128, 512)
    (block): ModuleList(
      (0): T5Block(
        (layer): ModuleList(
          (0): T5LayerSelfAttention(
            (SelfAttention): T5Attention(
              (q): Linear(in_features=512, out_features=512, bias=False)
              (k): Linear(in_features=512, out_features=512, bias=False)
              (v): Linear(in_features=512, out_features=512, bias=False)
              (o): Linear(in_features=512, out_features=512, bias=False)
              (relative_attention_bias): Embedding(32, 8)
            )
            (layer_norm): T5LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (1): T5LayerFF(
            (DenseReluDense): T5DenseReluDense(
              (wi): Linear(in_features=512, out_features=2048, bias=False)
              (wo): Linear(in_features=2048, out_features=512, bias=False)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (layer_norm): T5LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
[...] # abbreviated output
</code></pre>
<p>with this, we can then generate a list of modules that we want to freeze. In particular, I decided to freeze the entire <code>T5LayerSelfAttention</code> block for the encoder (and, additionally, the <code>T5LayerCrossAttention</code> for the decoder):</p>
<pre class="lang-py prettyprint-override"><code># All the self-attention modules in the encoder
modules_to_freeze = [model.encoder.block[i].layer[0] for i in range(len(model.encoder.block))]
# And the decoder modules, which have both a SelfAttention (layer[0])
modules_to_freeze.extend([model.decoder.block[i].layer[0] for i in range(len(model.decoder.block))])
# and a CrossAttention (layer[1]) block
modules_to_freeze.extend([model.decoder.block[i].layer[1] for i in range(len(model.decoder.block))])
</code></pre>
<p>And then simply freeze all the parameters in the respective modules:</p>
<pre class="lang-py prettyprint-override"><code>for module in modules_to_freeze:
    for param in module.parameters():
        param.requires_grad = False  # Actual freezing operation
</code></pre>
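<p>A related detail: after freezing, an optimizer is usually built over only the parameters that still require gradients. A dependency-free sketch of that filtering, using a stand-in <code>Param</code> class instead of real torch tensors (the names are made up for illustration):</p>

```python
class Param:
    """Stand-in for a torch parameter: only carries requires_grad."""
    def __init__(self, name, requires_grad=True):
        self.name = name
        self.requires_grad = requires_grad

# q/k/v frozen, feedforward weights left trainable
params = [Param("q", False), Param("k", False), Param("v", False),
          Param("ff.wi"), Param("ff.wo")]

# Mirror of: [p for p in model.parameters() if p.requires_grad]
trainable = [p.name for p in params if p.requires_grad]
print(trainable)  # ['ff.wi', 'ff.wo']
```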
<p>You can verify that these are actually frozen in your model by running the following:</p>
<pre class="lang-py prettyprint-override"><code>for param in model.parameters():
    print(param.requires_grad)
</code></pre>
<p>which should print quite a few <code>False</code> as well. If you really only want to freeze K, Q and V, you can adapt the above process to just sub-select the modules you want.</p>
| 13
|
T5 model
|
HuggingFace - Why does the T5 model shorten sentences?
|
https://stackoverflow.com/questions/72882799/huggingface-why-does-the-t5-model-shorten-sentences
|
<p>I wanted to train the model for spell correction. I trained two models: allegro/plt5-base with Polish sentences and google/t5-v1_1-base with English sentences. Unfortunately, I don't know for what reason, but both models shorten the sentences.
Example:</p>
<pre><code>phrases = ['The name of the man who was kild was Jack Robbinson he has black hair brown eyes blue Jacket and blue Jeans.']
encoded = tokenizer(phrases, return_tensors="pt", padding=True, max_length=512, truncation=True)
print(encoded)
# {'input_ids': tensor([[ 37, 564, 13, 8, 388, 113, 47, 3, 157, 173,
# 26, 47, 4496, 5376, 4517, 739, 3, 88, 65, 1001,
# 1268, 4216, 2053, 1692, 24412, 11, 1692, 3966, 7, 5,
# 1]], device='cuda:0'), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
# 1, 1, 1, 1, 1, 1, 1]], device='cuda:0')}
encoded.to('cuda')
translated = model.generate(**encoded)
print(translated)
# tensor([[ 0, 37, 564, 13, 8, 388, 113, 47, 2170, 47, 4496, 5376,
# 4517, 739, 3, 88, 65, 1001, 1268, 4216]], device='cuda:0')
tokenizer.batch_decode(translated, skip_special_tokens=True)
#['The name of the man who was born was Jack Robbinson he has black hair brown']
</code></pre>
<p>And something like this happens in almost every longer sentence. I tried to check if the model has any maximum sentence length set based on the documentation: <a href="https://huggingface.co/transformers/v3.1.0/model_doc/t5.html" rel="nofollow noreferrer">https://huggingface.co/transformers/v3.1.0/model_doc/t5.html</a>. But the config of this model has no such field:
<code>n_positions – The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). n_positions can also be accessed via the property max_position_embeddings.</code>
This is the entire config of the model:</p>
<pre><code>T5Config {
  "_name_or_path": "final_model_t5_800_000",
  "architectures": [
    "T5ForConditionalGeneration"
  ],
  "d_ff": 2048,
  "d_kv": 64,
  "d_model": 768,
  "decoder_start_token_id": 0,
  "dropout_rate": 0.1,
  "eos_token_id": 1,
  "feed_forward_proj": "gated-gelu",
  "initializer_factor": 1.0,
  "is_encoder_decoder": true,
  "layer_norm_epsilon": 1e-06,
  "model_type": "t5",
  "num_decoder_layers": 12,
  "num_heads": 12,
  "num_layers": 12,
  "output_past": true,
  "pad_token_id": 0,
  "relative_attention_max_distance": 128,
  "relative_attention_num_buckets": 32,
  "tie_word_embeddings": false,
  "torch_dtype": "float32",
  "transformers_version": "4.18.0",
  "use_cache": true,
  "vocab_size": 32128
}
</code></pre>
<p>What can be done to make the model return whole sentences?</p>
<h4>Update</h4>
<p>I looked in the old documentation earlier. But in the new one I don't see a field in the config at all about the maximum sentence length. <a href="https://huggingface.co/docs/transformers/main/en/model_doc/t5#transformers.T5Config" rel="nofollow noreferrer">new documentation</a></p>
|
<p>I have already managed to solve the problem. When generating the tokens with the model, the max_length parameter had to be added, as below:</p>
<pre><code>translated = self._model.generate(**encoded, max_length=1024)
</code></pre>
<p>As a result, the model was no longer truncating sentences.</p>
| 14
|
T5 model
|
Question Answering with pre-trained model T5
|
https://stackoverflow.com/questions/71861922/question-answering-with-pre-trained-model-t5
|
<p>I want to use the pre-trained T5 model <a href="https://huggingface.co/docs/transformers/model_doc/t5" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/model_doc/t5</a> on the task of Question Answering on the <a href="https://huggingface.co/datasets/boolq" rel="nofollow noreferrer">https://huggingface.co/datasets/boolq</a> knowing that my inputs will be the passage and the question and the output is the boolean true or false that is the answer for the question.</p>
<p>I have seen some people tuning the model to this specific task. But, I want to know if there is a way to do it with pre-trained model to get some outputs and then compare them with the model after tuning.</p>
<p>Thanks!</p>
|
<p>Wasn't the T5 model also trained on BoolQ which would make this difficult and kind of fishy to test/evaluate because the later test data would not really be unseen data for the model? You can see it listed in the <a href="https://huggingface.co/t5-base" rel="nofollow noreferrer">model card on huggingface</a> as well as Google's <a href="https://arxiv.org/abs/1905.10044" rel="nofollow noreferrer">original paper</a>.</p>
<p>What I do find strange is that giving the pretrained T5-base a question from the dataset <a href="https://huggingface.co/t5-base?text=question%3A%20is%20there%20a%20now%20you%20see%20me%203%20coming%20out%0A%09%09context%3A%20Now%20You%20See%20Me%20is%20a%20series%20of%20heist%20thriller%20film%20written%20by%20%0AEd%20Solomon%2C%20Boaz%20Yakin%2C%20and%20Edward%20Ricourt.%20They%20focus%20on%20the%20actions%20of%0A%20a%20team%20of%20illusionists%20named%20%22The%20Four%20Horsemen%22%20who%20pull%20off%20near%20%0Aimpossible%20heists.%20The%20series%20features%20an%20ensemble%20cast%20including%20Jesse%20%0AEisenberg%2C%20Mark%20Ruffalo%2C%20Woody%20Harrelson%2C%20Isla%20Fisher%2C%20Dave%20Franco%2C%20%0AMichael%20Caine%2C%20Lizzy%20Caplan%2C%20and%20Morgan%20Freeman.%20The%20first%20film%20was%20%0Areleased%20in%202013%2C%20while%20the%20second%20was%20released%20in%202016%2C%20and%20a%20third%20%0Afilm%20is%20currently%20in%20development%20and%20set%20to%20be%20released%20in%202019.%20The%20%0Aseries%20has%20received%20mixed%20reviews%20from%20critics%20and%20audiences%20and%20grossed%0A%20nearly%20%24700%20million%20worldwide." rel="nofollow noreferrer">does not yield the expected answer or answer format</a>. 
There is a fine-tuned version of t5 for BoolQ which gives <a href="https://huggingface.co/mrm8488/t5-small-finetuned-boolq?text=question%3A%20is%20there%20a%20now%20you%20see%20me%203%20coming%20out%0A%09%09context%3A%20Now%20You%20See%20Me%20is%20a%20series%20of%20heist%20thriller%20film%20written%20by%20%0AEd%20Solomon%2C%20Boaz%20Yakin%2C%20and%20Edward%20Ricourt.%20They%20focus%20on%20the%20actions%20of%0A%20a%20team%20of%20illusionists%20named%20%22The%20Four%20Horsemen%22%20who%20pull%20off%20near%20%0Aimpossible%20heists.%20The%20series%20features%20an%20ensemble%20cast%20including%20Jesse%20%0AEisenberg%2C%20Mark%20Ruffalo%2C%20Woody%20Harrelson%2C%20Isla%20Fisher%2C%20Dave%20Franco%2C%20%0AMichael%20Caine%2C%20Lizzy%20Caplan%2C%20and%20Morgan%20Freeman.%20The%20first%20film%20was%20%0Areleased%20in%202013%2C%20while%20the%20second%20was%20released%20in%202016%2C%20and%20a%20third%20%0Afilm%20is%20currently%20in%20development%20and%20set%20to%20be%20released%20in%202019.%20The%20%0Aseries%20has%20received%20mixed%20reviews%20from%20critics%20and%20audiences%20and%20grossed%0A%20nearly%20%24700%20million%20worldwide." rel="nofollow noreferrer">a more acceptable answer</a>. 
Same problem with the pretrained model for <a href="https://huggingface.co/t5-base?text=question%3A%20What%20does%20increased%20oxygen%20concentrations%20in%20the%20patient%E2%80%99s%20lungs%20displace%3F%20context%3A%20Hyperbaric%20%28high-pressure%29%20medicine%20uses%20special%20oxygen%20chambers%20to%20increase%20the%20partial%20pressure%20of%20O%202%20around%20the%20patient%20and%2C%20when%20needed%2C%20the%20medical%20staff.%20Carbon%20monoxide%20poisoning%2C%20gas%20gangrene%2C%20and%20decompression%20sickness%20%28the%20%E2%80%99bends%E2%80%99%29%20are%20sometimes%20treated%20using%20these%20devices.%20Increased%20O%202%20concentration%20in%20the%20lungs%20helps%20to%20displace%20carbon%20monoxide%20from%20the%20heme%20group%20of%20hemoglobin.%20Oxygen%20gas%20is%20poisonous%20to%20the%20anaerobic%20bacteria%20that%20cause%20gas%20gangrene%2C%20so%20increasing%20its%20partial%20pressure%20helps%20kill%20them.%20Decompression%20sickness%20occurs%20in%20divers%20who%20decompress%20too%20quickly%20after%20a%20dive%2C%20resulting%20in%20bubbles%20of%20inert%20gas%2C%20mostly%20nitrogen%20and%20helium%2C%20forming%20in%20their%20blood.%20Increasing%20the%20pressure%20of%20O%202%20as%20soon%20as%20possible%20is%20part%20of%20the%20treatment." rel="nofollow noreferrer">Question answering in the SQuAD format</a> even when using the exact example and format from the paper.</p>
<p>This leads me to think that, unlike for some other tasks, the question-answering fine-tuning is not actually included in the released version of the model, or at least did not have enough of an effect for the model to remember how the task works. In that case, fine-tuning on it (again/more) would make sense.</p>
| 15
|
T5 model
|
What does the vocabulary of a pre-trained / fine-tuned T5 model look like?
|
https://stackoverflow.com/questions/77248165/what-does-the-vocabulary-of-a-pre-trained-fine-tuned-t5-model-look-like
|
<p>My question is regarding the pre-trained T5 models found on Huggingface. In either case of taking the fully-trained model, or after fine-tuning it, is there an API function for directly downloading the vocabulary?</p>
<p>More specifically, the default <code>vocab_size</code> for T5 is 32128 (<a href="https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Config.vocab_size" rel="nofollow noreferrer">from the documentation</a>). Does that mean that after the model is trained, its decoder can generate up to 32128 unique words?</p>
<p>As an aside, I have noticed that capitalization does sometimes appear in my fine-tuned T5, does that mean the 32128 vocabulary could also be comprised of capitalized variants of words, e.g., is there one vocab index for "hello" and another index for "Hello"?</p>
|
<ul>
<li><p>The T5 default vocabulary consists of 32,128 <strong>subword</strong> tokens (using the SentencePiece tokenizer), not whole words. Because subword tokens compose, the model can generate far more distinct surface words than the 32,128 vocabulary entries.</p>
</li>
<li><p>"hello" and "Hello" are treated as different tokens because T5's tokenizer is <strong>case-sensitive</strong>.</p>
</li>
</ul>
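<p>As an illustration of both points, here is a toy greedy longest-match segmenter. The mini-vocabulary and helper name are hypothetical and far cruder than T5's actual SentencePiece model, but they show how a fixed set of case-sensitive subword pieces composes into many more surface words:</p>

```python
def greedy_subword_tokenize(word, vocab):
    """Split `word` into subword pieces by greedy longest-prefix match."""
    pieces = []
    while word:
        for end in range(len(word), 0, -1):
            if word[:end] in vocab:
                pieces.append(word[:end])
                word = word[end:]
                break
        else:  # no known prefix: fall back to an unknown token
            pieces.append("<unk>")
            break
    return pieces

vocab = {"hello", "Hel", "lo", "un", "break", "able"}
print(greedy_subword_tokenize("hello", vocab))        # ['hello']
print(greedy_subword_tokenize("Hello", vocab))        # ['Hel', 'lo']  (case-sensitive)
print(greedy_subword_tokenize("unbreakable", vocab))  # ['un', 'break', 'able']
```

<p>Six pieces here already cover four distinct surface words, which is why 32,128 subword tokens are enough for open-ended generation.</p>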
| 16
|
T5 model
|
How to use output from T5 model to replace masked tokens in input sequence
|
https://stackoverflow.com/questions/75977316/how-to-use-output-from-t5-model-to-replace-masked-tokens-in-input-sequence
|
<p>I'm working with the T5 model from the Hugging Face Transformers library and I have an input sequence with masked tokens that I want to replace with the output generated by the model. Here's the <a href="https://huggingface.co/docs/transformers/model_doc/t5#inference" rel="nofollow noreferrer">code</a>.</p>
<pre><code>from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
input_data = "The <extra_id_0> walks in <extra_id_1> park"
input_ids = tokenizer(input_data, return_tensors="pt").input_ids
sequence_ids = model.generate(input_ids)
output_sequences = tokenizer.batch_decode(sequence_ids)
output_sequences
</code></pre>
<p>This code produces the following output:</p>
<pre><code>['<pad><extra_id_0> park offers<extra_id_1> the<extra_id_2> park.</s>']
</code></pre>
<p>What I want to do is replace the masked tokens <code><extra_id_0></code> and <code><extra_id_1></code> in the input sequence with the corresponding output tokens from the model, so that the final output is:</p>
<pre><code>The park offers walks in the park.
</code></pre>
<p>I'm hoping someone can help me with the code to achieve this.</p>
<p>Notice that this is the correspondence:</p>
<pre><code>mask in input_data -> answer in output_sequences
<extra_id_0> -> <extra_id_0> park offers (so we extract 'park offers' only)
<extra_id_1> -> <extra_id_1> the (so we extract 'the' only)
</code></pre>
|
<p>The T5 model treats tokens which begin with <extra_id as mask (sentinel) tokens. As written in the <a href="https://huggingface.co/docs/transformers/model_doc/t5#training" rel="noreferrer">documentation</a>:</p>
<p>"Each sentinel token represents a unique mask token for this sentence and should start with <extra_id_0>, <extra_id_1>, … up to <extra_id_99>"</p>
<p>In the output, you can consider the text between <extra_id_0> and <extra_id_1> as the prediction for mask 0, the text between <extra_id_1> and <extra_id_2> as the prediction for mask 1, and so on.</p>
<p>To extract this from your generated output, you can use the following code snippet. It takes the number of masks as input and returns a list of strings, where each element is the text predicted for the corresponding mask.</p>
<pre><code>def extract_text(text, num_masks=1):
    list_of_text = []
    for i in range(num_masks):
        prev_id = '<extra_id_' + str(i) + '>'
        curr_id = '<extra_id_' + str(i+1) + '>'
        st_token_index = text.index(prev_id)
        end_token_index = text.index(curr_id)
        # skip past the sentinel itself; use len(prev_id) rather than a
        # hardcoded 12, which would break once the sentinel index reaches 10
        list_of_text.append(text[st_token_index + len(prev_id):end_token_index].strip())
    return list_of_text
</code></pre>
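<p>A self-contained sketch of the splicing step itself (it assumes the decoded string keeps its sentinel tokens, as in the question; the helper name is made up):</p>

```python
import re

def fill_masks(template, generated):
    """Splice the span generated for each <extra_id_N> back into the template."""
    # split the generated string on the sentinel tokens;
    # the piece after <extra_id_N> is the prediction for mask N
    parts = re.split(r'<extra_id_\d+>', generated)
    spans = [p.strip() for p in parts[1:]]  # parts[0] is the leading '<pad>'
    for i, span in enumerate(spans):
        template = template.replace(f'<extra_id_{i}>', span, 1)
    return template

generated = '<pad><extra_id_0> park offers<extra_id_1> the<extra_id_2> park.</s>'
print(fill_masks("The <extra_id_0> walks in <extra_id_1> park", generated))
# The park offers walks in the park
```

<p>In real use you would also strip special tokens such as <code></s></code> from the final span before splicing it in.</p>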
<p>Also, you should note that T5 is not really the best choice for the masked language modelling task, as discussed <a href="https://github.com/huggingface/transformers/issues/3985" rel="noreferrer">here</a>. Models like BERT are specifically trained for these types of tasks and can be used directly with the fill-mask pipeline from huggingface:</p>
<pre><code>from transformers import pipeline
nlp_fill = pipeline('fill-mask')
</code></pre>
| 17
|
T5 model
|
Poor rouge metric on CNN DailyMail dataset for pretrained T5 model
|
https://stackoverflow.com/questions/76115668/poor-rouge-metric-on-cnn-dailymail-dataset-for-pretrained-t5-model
|
<p>I am trying to fine-tune a pre-trained T5 model on CNN/DailyMail dataset with the following code:</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from datasets import load_dataset
from transformers import DefaultDataCollator
from transformers import TrainingArguments, Trainer
from transformers import T5Tokenizer, T5ForConditionalGeneration
import os
import evaluate
tokenizer = T5Tokenizer.from_pretrained("t5-small")
rouge = evaluate.load('rouge')
def process_data_to_model_inputs(batch):
encoder_max_length = 512
decoder_max_length = 128
# tokenize the inputs and labels
inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=encoder_max_length)
outputs = tokenizer(batch["highlights"], padding="max_length", truncation=True, max_length=decoder_max_length)
batch["input_ids"] = inputs.input_ids
batch["attention_mask"] = inputs.attention_mask
batch["decoder_input_ids"] = outputs.input_ids
batch["decoder_attention_mask"] = outputs.attention_mask
batch["labels"] = outputs.input_ids.copy()
batch["labels"] = [[-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"]]
return batch
def setup_distributed_environment():
dist.init_process_group(backend='nccl')
torch.manual_seed(42)
def generate_summary(batch, model):
inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=512, return_tensors="pt")
inputs = inputs.to(model.device) # Ensure that tensors are on the same device as the model
summary_ids = model.generate(inputs.input_ids, num_beams=4, max_length=128, early_stopping=True)
batch["predicted_highlights"] = tokenizer.batch_decode(summary_ids, skip_special_tokens=True)
return batch
def train():
setup_distributed_environment()
cnndm = load_dataset("cnn_dailymail", "3.0.0")
tokenized_cnndm = cnndm.map(
process_data_to_model_inputs,
batched=True,
remove_columns=cnndm["train"].column_names
)
model = T5ForConditionalGeneration.from_pretrained("t5-small")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
local_rank = int(os.environ["LOCAL_RANK"])
global_rank = int(os.environ["RANK"])
model = nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
training_args = TrainingArguments(
output_dir="./updated_squad_fine_tuned_model",
evaluation_strategy="epoch",
learning_rate=5.6e-05,
lr_scheduler_type="linear",
warmup_ratio=0.1,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=2,
weight_decay=0.01,
local_rank=local_rank,
fp16=True,
remove_unused_columns=False
)
data_collator = DefaultDataCollator()
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_cnndm["train"].select(range(50000)),
eval_dataset=tokenized_cnndm["validation"].select(range(10000)),
tokenizer=tokenizer,
data_collator=data_collator,
)
trainer.train()
if local_rank == 0:
model.module.save_pretrained("fine_tuned_squad_model")
tokenizer.save_pretrained("fine_tuned_squad_model")
results = cnndm["test"].select(range(5000)).map(lambda batch: generate_summary(batch, model.module), batched=True, remove_columns=["article"], batch_size=16)
# Compute the metric using the generated summaries and the reference summaries
rouge_score = rouge.compute(predictions=results["predicted_highlights"], references=results["highlights"])
print(rouge_score)
def main():
torch.cuda.empty_cache()
train()
if __name__ == '__main__':
main()
</code></pre>
<p>I am not running it on the entire dataset, but taking the first 50k training examples and 10k validation examples. After training, I'm using the first 5k test examples for computing the rouge metric.</p>
<p>I'm using the <code>t5-small</code> variation from huggingface transformers library. I'm also using a distributed setup, running the program in 4 nodes with the following command:</p>
<pre><code>torchrun --nproc_per_node=gpu --nnodes=4 --node_rank=0 --rdzv_id=456 --rdzv_backend=c10d --rdzv_endpoint=129.82.44.119:30375 cnn_hf_test.py
</code></pre>
<p>After training, I'm getting the following output:</p>
<pre><code>{'loss': 2.1367, 'learning_rate': 4.258706467661692e-05, 'epoch': 0.64}
{'eval_runtime': 8.023, 'eval_samples_per_second': 1246.419, 'eval_steps_per_second': 19.569, 'epoch': 1.0}
{'loss': 0.0305, 'learning_rate': 2.2686567164179102e-05, 'epoch': 1.28}
{'loss': 0.0172, 'learning_rate': 2.7860696517412936e-06, 'epoch': 1.92}
{'eval_runtime': 8.0265, 'eval_samples_per_second': 1245.871, 'eval_steps_per_second': 19.56, 'epoch': 2.0}
{'train_runtime': 5110.103, 'train_samples_per_second': 19.569, 'train_steps_per_second': 0.306, 'train_loss': 0.6989707885800726, 'epoch': 2.0}
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1564/1564 [1:25:08<00:00, 3.27s/it]
{'rouge1': 0.008768024824095142, 'rouge2': 0.000294696538416436, 'rougeL': 0.008527464153847374, 'rougeLsum': 0.00875863140146953}
WARNING:torch.distributed.elastic.rendezvous.dynamic_rendezvous:The node 'jupiter.cs.colostate.edu_805773_0' has failed to send a keep-alive heartbeat to the rendezvous '456' due to an error of type RendezvousTimeoutError.
</code></pre>
<p>From my understanding, this ROUGE score is very poor; it should be at least <code>0.2</code> for ROUGE-1, but I'm getting <code>0.008</code>.</p>
<p>My cluster setup does not allow me to load a larger model like <code>t5-base</code> or <code>t5-large</code>.</p>
<p>Could you provide me with some suggestions to improve the rouge score metric? Or is this performance expected for this setup and model? Any insight is much appreciated.</p>
| 18
|
|
T5 model
|
perform peft with lora on flan-t5 model causing no executable batch size error
|
https://stackoverflow.com/questions/77334292/perform-peft-with-lora-on-flan-t5-model-causing-no-executable-batch-size-error
|
<p>I'm trying to perform PEFT with LoRA. I'm using the Google flan-T5 base model, with the Python code below, on an NVIDIA GPU with 8 GB of RAM on Ubuntu Server 18.04 LTS. In the code I load a public dataset from huggingface, load the pre-trained flan-T5 model, and set up the PEFT LoRA configuration.</p>
<p>I then add the LoRA adapter and layers to the original LLM. I define a trainer instance, but when I try to train the PEFT adapter and save the model, I get the error below that "no executable batch size found."</p>
<p>Can anyone see what the issue might be and can you suggest how to solve it?</p>
<p>Code:</p>
<pre><code># import modules
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig, TrainingArguments, Trainer
import torch
import time
import evaluate
import pandas as pd
import numpy as np
# load dataset and LLM
huggingface_dataset_name = "knkarthick/dialogsum"
dataset = load_dataset(huggingface_dataset_name)
# load pre-trained FLAN-T5 model
model_name='google/flan-t5-base'
original_model = AutoModelForSeq2SeqLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# set up peft LORA model
from peft import LoraConfig, get_peft_model, TaskType
lora_config = LoraConfig(
r=32, # Rank
lora_alpha=32,
target_modules=["q", "v"],
lora_dropout=0.05,
bias="none",
task_type=TaskType.SEQ_2_SEQ_LM # FLAN-T5
)
# add LoRA adapter layers/parameters to the original LLM to be trained
peft_model = get_peft_model(original_model,
lora_config)
print(print_number_of_trainable_model_parameters(peft_model))
# define training arguments and create Trainer instance
output_dir = f'./peft-dialogue-summary-training-{str(int(time.time()))}'
peft_training_args = TrainingArguments(
output_dir=output_dir,
auto_find_batch_size=True,
learning_rate=1e-3, # Higher learning rate than full fine-tuning.
num_train_epochs=1,
logging_steps=1,
max_steps=1
)
peft_trainer = Trainer(
model=peft_model,
args=peft_training_args,
train_dataset=tokenized_datasets["train"],
)
# train PEFT adapter and save the model
peft_trainer.train()
peft_model_path="./peft-dialogue-summary-checkpoint-local"
peft_trainer.model.save_pretrained(peft_model_path)
tokenizer.save_pretrained(peft_model_path)
</code></pre>
<h1>Error:</h1>
<pre><code>---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[16], line 1
----> 1 peft_trainer.train()
3 peft_model_path="./peft-dialogue-summary-checkpoint-local"
5 peft_trainer.model.save_pretrained(peft_model_path)
File ~/anaconda3/envs/new_llm/lib/python3.10/site-packages/transformers/trainer.py:1664, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1659 self.model_wrapped = self.model
1661 inner_training_loop = find_executable_batch_size(
1662 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1663 )
-> 1664 return inner_training_loop(
1665 args=args,
1666 resume_from_checkpoint=resume_from_checkpoint,
1667 trial=trial,
1668 ignore_keys_for_eval=ignore_keys_for_eval,
1669 )
File ~/anaconda3/envs/new_llm/lib/python3.10/site-packages/accelerate/utils/memory.py:134, in find_executable_batch_size.<locals>.decorator(*args, **kwargs)
132 while True:
133 if batch_size == 0:
--> 134 raise RuntimeError("No executable batch size found, reached zero.")
135 try:
136 return function(batch_size, *args, **kwargs)
RuntimeError: No executable batch size found, reached zero.
</code></pre>
<p>Update:</p>
<p>I restarted my kernel and the error went away; I'm not sure why. Perhaps a previous model I had run was taking up too much memory.</p>
|
<p>Try removing <code>auto_find_batch_size=True</code> from <code>TrainingArguments</code> and setting the batch size yourself.</p>
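<p>For example (a sketch of that suggestion reusing the question's own variables; the batch size of 4 is an arbitrary starting point to tune for an 8 GB GPU, not a value from the library):</p>

```python
peft_training_args = TrainingArguments(
    output_dir=output_dir,
    # auto_find_batch_size=True removed; start from an explicit size instead
    per_device_train_batch_size=4,  # halve this if you still run out of memory
    learning_rate=1e-3,
    num_train_epochs=1,
    logging_steps=1,
    max_steps=1,
)
```

<p>With an explicit size, an out-of-memory failure surfaces as a clear CUDA OOM error instead of the batch-size search silently reaching zero.</p>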
| 19
|
T5 model
|
How to use architecture of T5 without pretrained model (Hugging face)
|
https://stackoverflow.com/questions/73700165/how-to-use-architecture-of-t5-without-pretrained-model-hugging-face
|
<p>I would like to study the effect of pre-training, so I want to test the T5 model with and without pre-trained weights. Using pre-trained weights is straightforward, but I cannot figure out how to use the architecture of T5 from Hugging Face without the weights. I am using Hugging Face with PyTorch, but am open to different solutions.</p>
|
<p><a href="https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model</a></p>
<p>"Initializing with a config file does not load the weights associated with the model, only the configuration."</p>
<p>for without weights create a T5Model with config file</p>
<pre><code>from transformers import AutoConfig
from transformers import T5Tokenizer, T5Model
model_name = "t5-small"
config = AutoConfig.from_pretrained(model_name)
tokenizer = T5Tokenizer.from_pretrained(model_name)
# with pre-trained weights:
model = T5Model.from_pretrained(model_name)
# same architecture, randomly initialized weights:
model_raw = T5Model(config)
</code></pre>
| 20
|
T5 model
|
Clarification on T5 Model Pre-training Objective and Denoising Process
|
https://stackoverflow.com/questions/78252488/clarification-on-t5-model-pre-training-objective-and-denoising-process
|
<p>I am currently developing a T5 model (encoder-decoder architecture) from scratch for educational purposes. While working on this project, I've encountered some confusion regarding the pre-training objective, specifically the <em>denoising objective</em>. I would like to clarify my understanding and have some questions about the process.</p>
<p>Given the sentence:</p>
<blockquote>
<p>Thank you for inviting me to your party last week.</p>
</blockquote>
<p>Based on my understanding, during the pre-training phase with a denoising objective, the model works as follows:</p>
<ul>
<li><strong>Encoder input</strong>: <code>Thank you <X> me to your party <Y> week</code></li>
<li><strong>Decoder input</strong>: <code><X> for inviting <Y> last</code></li>
<li><strong>Decoder labels (true labels)</strong>: <code>for inviting <Y> last <Z></code></li>
</ul>
<p>Here are my questions:</p>
<ol>
<li>Is my interpretation of how the encoder input, decoder input, and decoder labels are constructed correct?</li>
<li>In this setup, the model is expected to predict sentinel tokens (e.g., <code><X></code>, <code><Y></code>). Could this potentially introduce confusion for the model, for example, it may take the idea that it is possible for the word "last" to come after the token ? Or does the model naturally learn to interpret these situations correctly?</li>
</ol>
<hr />
<p><strong>Accordingly to the paper:</strong></p>
<p><a href="https://i.sstatic.net/QYUco.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QYUco.png" alt="denoising objective" /></a></p>
<blockquote>
<p>we process the sentence <code>Thank you for inviting me to your party last week.</code> The words <code>for</code>, <code>inviting</code> and <code>last</code> are randomly chosen for corruption. Each consecutive span of corrupted tokens is replaced by a sentinel token (shown as <code><X></code> and <code><Y></code>) that is unique over the example. Since <code>for</code> and <code>inviting</code> occur consecutively, they are replaced by a single sentinel <code><X></code>. The output sequence then consists of the dropped-out spans, delimited by the sentinel tokens used to replace them in the input plus a final sentinel token <code><Z></code>.</p>
</blockquote>
|
<p>I think the interpretation is spot on, and that with large datasets the model will learn that the sentinels indicate the missing spans.</p>
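<p>To make that concrete, here is a toy sketch of the span-corruption construction from the paper's figure (educational only: it works on whitespace tokens rather than SentencePiece pieces, and the helper name is made up):</p>

```python
def corrupt(tokens, masked_indices):
    """Build the encoder input and the target for T5's denoising objective."""
    encoder_input, target = [], []
    sentinel = 0
    prev_masked = False
    for i, tok in enumerate(tokens):
        if i in masked_indices:
            if not prev_masked:            # a new corrupted span gets a new sentinel
                sid = f"<extra_id_{sentinel}>"
                encoder_input.append(sid)
                target.append(sid)
                sentinel += 1
            target.append(tok)             # the dropped-out token goes to the target
            prev_masked = True
        else:
            encoder_input.append(tok)
            prev_masked = False
    target.append(f"<extra_id_{sentinel}>")  # final sentinel (the <Z> in the paper)
    return " ".join(encoder_input), " ".join(target)

tokens = "Thank you for inviting me to your party last week .".split()
enc, tgt = corrupt(tokens, {2, 3, 8})      # corrupt "for", "inviting", "last"
print(enc)  # Thank you <extra_id_0> me to your party <extra_id_1> week .
print(tgt)  # <extra_id_0> for inviting <extra_id_1> last <extra_id_2>
```

<p>The decoder input is then this target shifted one position to the right, starting from the pad/start token, which is exactly how the decoder input and labels in the question line up.</p>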
| 21
|
T5 model
|
while exporting T5 model to onnx using fastT5 getting "RuntimeError:output with shape [5, 8, 1, 2] doesn't match the broadcast shape [5, 8, 2, 2]"
|
https://stackoverflow.com/questions/66693724/while-exporting-t5-model-to-onnx-using-fastt5-getting-runtimeerroroutput-with
|
<p>I'm trying to convert a T5 model to ONNX using the <a href="https://github.com/Ki6an/fastT5" rel="nofollow noreferrer">fastT5</a> library, but I get an error while running the following code:</p>
<pre><code>from fastT5 import export_and_get_onnx_model
from transformers import AutoTokenizer
model_name = 't5-small'
model = export_and_get_onnx_model(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
t_input = "translate English to French: The universe is a dark forest."
token = tokenizer(t_input, return_tensors='pt')
tokens = model.generate(input_ids=token['input_ids'],
attention_mask=token['attention_mask'],
num_beams=2)
output = tokenizer.decode(tokens.squeeze(), skip_special_tokens=True)
print(output)
</code></pre>
<p>the error:</p>
<pre><code>/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py:244: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if causal_mask.shape[1] < attention_mask.shape[1]:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-16-80094b7c4f6f> in <module>()
7 input_names=decoder_input_names,
8 output_names=decoder_output_names,
----> 9 dynamic_axes=dyn_axis_params,
10 )
24 frames
/usr/local/lib/python3.7/dist-packages/transformers/models/t5/modeling_t5.py in forward(self, hidden_states, mask, key_value_states, position_bias, past_key_value, layer_head_mask, query_length, use_cache, output_attentions)
497 position_bias = position_bias + mask # (batch_size, n_heads, seq_length, key_length)
498
--> 499 scores += position_bias
500 attn_weights = F.softmax(scores.float(), dim=-1).type_as(
501 scores
RuntimeError: output with shape [5, 8, 1, 2] doesn't match the broadcast shape [5, 8, 2, 2]
</code></pre>
<p>Can someone please help me solve the issue?
<br/>
Thank you.</p>
|
<p>I've checked the repository, it looks like a known issue as reported here : <a href="https://github.com/Ki6an/fastT5/issues/1" rel="nofollow noreferrer">https://github.com/Ki6an/fastT5/issues/1</a></p>
<p>Developer of the library has posted a solution and created a notebook file here: <a href="https://colab.research.google.com/drive/1HuH1Ui3pCBS22hW4djIOyUBP5UW93705?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1HuH1Ui3pCBS22hW4djIOyUBP5UW93705?usp=sharing</a></p>
<p>Solution is to modify modeling_t5.py file, at line 494 :</p>
<pre><code># Add this at line 426:
int_seq_length = int(seq_length)

# Replace this at line 494:
position_bias = position_bias[:, :, -seq_length:, :]
# with this updated version:
position_bias = position_bias[:, :, -int_seq_length:, :]
</code></pre>
<p>If you don't want to modify the file yourself, you will need to wait until <a href="https://github.com/huggingface/transformers/pull/10651" rel="nofollow noreferrer">this pull request</a> to be merged into Transformers library.</p>
| 22
|
T5 model
|
Generating partial string as output after fine-tuning T5 model
|
https://stackoverflow.com/questions/79619113/generating-partial-string-as-output-after-fine-tuning-t5-model
|
<p>I'm using a fine-tuned T5 model to perform spell checks on my dataset of reviews. However, I'm facing an issue where the model, when performing spell checks, does not return the entire string as output, or sometimes repeats phrases of the given review. This does not happen for many reviews, but there are some. During fine-tuning, the <strong>training and validation losses were 0.0003 and 0.0002 respectively</strong>. I have attached my code and 2 reviews as well for your reference.</p>
<pre><code>class ReviewDataset(Dataset):
def __init__(self, texts):
self.inputs = ["fix: " + text for text in texts]
def __len__(self):
return len(self.inputs)
def __getitem__(self, idx):
return self.inputs[idx]
def collate_fn(batch):
encodings = tokenizer(
batch,
padding=True,
truncation=True,
max_length=128,
return_tensors="pt"
)
return encodings
</code></pre>
<p>The for loop which performs the task is:</p>
<pre><code>all_predictions = []
with torch.no_grad():
for batch in dataloader:
input_ids = batch["input_ids"].to(device)
attention_mask = batch["attention_mask"].to(device)
outputs = model.generate(input_ids=input_ids, attention_mask=attention_mask, max_length=64)
decoded = tokenizer.batch_decode(outputs, skip_special_tokens=True)
all_predictions.extend(decoded)```
</code></pre>
<p>The reviews are below:
1)As per the global review I purchased this product. Also I used this product for 5 times from my 1st purchase of bottle.. Suddenly the product color changes into golden and shimmery.. While I was spray the product on my face it was exhausted in a shimmering liquid form. It was a real shock to me.. It's a big problem.....</p>
<p><em>As per the global review I purchased this product. Also I used this product for 5 times from my first purchase of bottle..Suddenly the product color changes into golden and shimmery.. While I was spraying the product on my face it was exhausted in a shimmering liquid form. It was a</em></p>
<p>2)I am literally not so happy with this product I am disappointed with this product</p>
<p><em>I am literally not so happy with this product I am disappointed with this product I am disappointed with this product.</em></p>
|
<p>According to your two problems:</p>
<ol>
<li><p><strong>performing spell checks does not give entire string as an output</strong></p>
</li>
<li><p><strong>sometimes repeats the phrases of the given review</strong></p>
</li>
</ol>
<p>I think you can adjust two arguments ( max_length, no_repeat_ngram_size ) for model.generate() to improve two problems:</p>
<ol>
<li><p><strong>Enlarge max_length size to solve problem-1.</strong></p>
</li>
<li><p><strong>Add no_repeat_ngram_size argument to reduce problem-2 error.</strong></p>
</li>
</ol>
<p>reference:</p>
<ol>
<li><p><a href="https://huggingface.co/docs/transformers/main/main_classes/text_generation#transformers.GenerationConfig.max_length" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/main/main_classes/text_generation#transformers.GenerationConfig.max_length</a></p>
</li>
<li><p><a href="https://huggingface.co/docs/transformers/main/main_classes/text_generation#transformers.GenerationConfig.no_repeat_ngram_size" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/main/main_classes/text_generation#transformers.GenerationConfig.no_repeat_ngram_size</a></p>
</li>
</ol>
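<p>As a sketch of the second point: <code>no_repeat_ngram_size=n</code> forbids the decoder from ever emitting the same word <em>n</em>-gram twice. The plain-Python check below (whitespace tokenization, hypothetical helper name) shows the kind of degenerate repetition from review 2 that this argument rules out:</p>

```python
def has_repeated_ngram(text, n):
    """Return True if any word n-gram occurs more than once in `text`.
    This is the degeneration that generate(no_repeat_ngram_size=n) blocks."""
    words = text.split()
    seen = set()
    for i in range(len(words) - n + 1):
        gram = tuple(words[i:i + n])
        if gram in seen:
            return True
        seen.add(gram)
    return False

bad = ("I am literally not so happy with this product I am disappointed "
       "with this product I am disappointed with this product.")
print(has_repeated_ngram(bad, 4))  # True: "I am disappointed with" occurs twice
```

<p>For the first problem, raising <code>max_length</code> in <code>model.generate()</code> (e.g. from 64 to something that covers your longest review after tokenization) is the direct fix, since generation is simply cut off at that limit.</p>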
| 23
|
T5 model
|
Abstractive Text summarization using T5 pre-trained model
|
https://stackoverflow.com/questions/67345611/abstractive-text-summarization-using-t5-pre-trained-model
|
<p>Hello, I'm using the pretrained T5 model for abstractive summarization. How can I evaluate the accuracy of the summary output? In short, what percentage accuracy does my model achieve?</p>
|
<p>You could use the ROUGE metric, a standard metric for automatic summarization evaluation.</p>
<p><a href="https://pypi.org/project/rouge-metric/" rel="nofollow noreferrer">https://pypi.org/project/rouge-metric/</a></p>
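<p>For intuition, ROUGE-1 is just unigram overlap between the generated summary and a reference summary. A minimal sketch (assuming whitespace tokenization and a single reference; use a maintained package such as the one linked above for real evaluation, since it also covers ROUGE-2/ROUGE-L and stemming):</p>

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """Minimal ROUGE-1 F1 score: unigram overlap between two strings."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the cat sat on the mat", "the cat is on the mat"))  # ~0.833
```

<p>The score is a 0-to-1 overlap measure rather than a percent accuracy, but it is the conventional way to quantify summarization quality against reference summaries.</p>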
| 24
|
T5 model
|
TypeError: 'DataLoader' object is not subscriptable T5 model
|
https://stackoverflow.com/questions/78525690/typeerror-dataloader-object-is-not-subscriptable-t5-model
|
<p>I'm trying to train my model using the following commands:</p>
<pre><code>import numpy as np
import pandas as pd
import os
import pandas as pd
import tensorflow as tf
from transformers import AutoTokenizer, T5ForConditionalGeneration, T5Config
# IMPORT REQUIRED DATASET
path = "/content/train_set.csv"
path_val = "/content/dev_set.csv"
path_test = "/content/test_ur.csv"
ds_train = pd.read_csv(path)
ds_val = pd.read_csv(path_val)
ds_test = pd.read_csv(path_test)
ds_train
# Set the model and tokenizer
model_name = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
# Preprocess the data
inputs = []
targets = []
def preprocess_function(examples):
inputs = examples[1]
targets = examples[0]
model_inputs = tokenizer(inputs, max_length=128, padding="max_length", truncation=True)
labels = tokenizer(targets, max_length=128, padding="max_length", truncation=True)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
ds_train = ds_train.apply(preprocess_function)#, batched=True)
ds_val = ds_val.apply(preprocess_function) #, batched=True)
#ds_test = ds_test.apply(preprocess_function)#, batched=True)
# Create a dataset for training
dataset_train = tf.data.Dataset.from_tensor_slices(dict(ds_train))
dataset_val = tf.data.Dataset.from_tensor_slices(dict(ds_val))
#dataset_test = tf.data.Dataset.from_tensor_slices(dict(ds_test))
# Shuffle and batch the dataset
batch_size = 16
dataset_train = dataset_train.shuffle(100).batch(batch_size)
dataset_val = dataset_val.batch(batch_size)
#dataset_test = dataset_test.batch(batch_size)
from torch.utils.data import DataLoader
# Load the training and validation datasets
dataset_train = DataLoader(dataset_train, batch_size=16)
dataset_val = DataLoader(dataset_val, batch_size=16)
# Evaluate the model
def evaluate(model, dataset):
total_loss = 0
for batch in dataset:
outputs = model(batch['input_ids'], attention_mask=batch['attention_mask'], labels=batch['labels'])
loss_value = outputs.loss
total_loss += loss_value
return total_loss / len(dataset)
# Compile the model
model.compile()#optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4))
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir="./log_results",
num_train_epochs=3,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
warmup_steps=500,
logging_steps=100,
evaluation_strategy="epoch",
eval_steps=400,
save_steps=1e6,
gradient_accumulation_steps=2,
weight_decay=0.01,
)
from transformers import Trainer, default_data_collator
# Create a Trainer instance
trainer = Trainer(
model=model,
args=training_args,
train_dataset=dataset_train,
eval_dataset=dataset_val,
compute_metrics=lambda pred: {"accuracy": tf.reduce_mean(tf.cast(tf.equal(pred.label_ids, pred.predictions), tf.float32))},
data_collator=default_data_collator,
)
# Train the model
trainer.train()
</code></pre>
<p>However I get the following error:</p>
<pre><code>TypeError: 'DataLoader' object is not subscriptable
</code></pre>
| 25
|
|
T5 model
|
Determining the probability of a sequence generated by T5 model by HuggingFace
|
https://stackoverflow.com/questions/75028507/determining-the-probability-of-a-sequence-generated-by-t5-model-by-huggingface
|
<p>I am using T5-Large by HuggingFace for inference. Given a premise and a hypothesis, I need to determine whether they are related or not. So, if I feed a string <code>"mnli premise: This game will NOT open unless you agree to them sharing your information to advertisers. hypothesis: Personal data disclosure is discussed."</code> the model is supposed to return either <code>entailment</code>, <code>neutral</code>, or <code>contradiction</code>.</p>
<p>Though I am able to determine the result, I am unable to determine the probability of the sequence generated. For instance, consider the model will generate <code>entailment</code> for the example given above. I also want to know what is the probability of <code>entailment</code>. So far, I have been using the following code,</p>
<pre class="lang-py prettyprint-override"><code>from transformers import T5Tokenizer, T5ForConditionalGeneration
def is_entailment(premise, hypothesis):
entailment_premise = premise
entailment_hypothesis = hypothesis
token_output = tokenizer("mnli premise: " + entailment_premise + " hypothesis: " + entailment_hypothesis,
return_tensors="pt", return_length=True)
input_ids = token_output.input_ids
output = model.generate(input_ids, output_scores=True, return_dict_in_generate=True, max_new_tokens=15)
entailment_ids = output["sequences"]
entailment = tokenizer.decode(entailment_ids[0], skip_special_tokens=True)
return entailment
tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small', return_dict=True)
premise = "This game will NOT open unless you agree to them sharing your information to advertisers."
hypothesis = "Personal data disclosure is discussed."
print(is_entailment(premise, hypothesis))
</code></pre>
<p>I have tried using the scores we get as output, but not sure how to calculate the probability from them. Same goes for the last hidden states that can be fetched as the output from the <code>generate()</code>. I saw in another <a href="https://stackoverflow.com/questions/70299442/how-to-get-a-probability-distribution-over-tokens-in-a-huggingface-model">question</a> on Stack Overflow that suggested using a softmax function on the last hidden states but I am unsure how to do it.</p>
<p>How can I calculate the probability of the sequence being generated? That is, if I get <code>entailment</code> for a pair of hypothesis and premise, what would be the <code>P(entailment)</code>?</p>
|
<p>What you get as the scores are the output token distributions before the softmax, the so-called logits. You can get the probabilities of the generated tokens by normalizing the logits and taking the respective token ids, which you can get from the <code>sequences</code> field of what the <code>generate</code> method returns.</p>
<p>These are, however, not the probabilities you are looking for because T5 segments your output words into smaller units (e.g., "entailment" gets segmented to <code>['▁', 'en', 'tail', 'ment']</code> using the <code>t5-small</code> tokenizer). This is even trickier because different answers get split into a different number of tokens. You can get an approximate score by averaging the token probabilities (this is typically used during beam search). Such scores do not sum up to one.</p>
<p>If you want a normalized score, the only way is to feed all three possible answers to the decoder, get their scores, and normalize them to sum to one.</p>
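<p>A minimal sketch of that normalization step in plain Python, without running the model (the toy logits and function names below are illustrative, not part of the transformers API): given the per-step vocabulary logits that <code>generate(..., output_scores=True)</code> returns for a candidate answer, sum the per-token log-probabilities into a sequence score, then softmax across the three candidates.</p>

```python
import math

def log_softmax(logits):
    # numerically stable log-softmax over one vocabulary distribution
    m = max(logits)
    lse = m + math.log(sum(math.exp(l - m) for l in logits))
    return [l - lse for l in logits]

def sequence_log_prob(step_logits, token_ids):
    # sum of per-token log-probabilities along one generated sequence
    return sum(log_softmax(step)[tok] for step, tok in zip(step_logits, token_ids))

def normalize_over_answers(seq_log_probs):
    # renormalize the candidate scores so they sum to one
    m = max(seq_log_probs)
    exps = [math.exp(lp - m) for lp in seq_log_probs]
    total = sum(exps)
    return [e / total for e in exps]

# toy example: three candidate answers, one decoding step each, 4-token vocabulary
candidates = {
    "entailment":    ([[2.0, 0.1, 0.1, 0.1]], [0]),
    "neutral":       ([[0.1, 1.0, 0.1, 0.1]], [1]),
    "contradiction": ([[0.1, 0.1, 0.5, 0.1]], [2]),
}
scores = [sequence_log_prob(logits, ids) for logits, ids in candidates.values()]
probs = normalize_over_answers(scores)  # sums to one across the three answers
```

<p>With the real model, <code>step_logits</code> would come from <code>output.scores</code> and <code>token_ids</code> from <code>output.sequences</code>, computed once for each of the three possible answers.</p>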
| 26
|
T5 model
|
Modifying T5 for sequence labelling
|
https://stackoverflow.com/questions/69800263/modifying-t5-for-sequence-labelling
|
<p>I am trying to modify the T5 model for a sequence labelling task (to do NER).
I create my model class by taking the last hidden states of the T5 model and adding a linear layer with 3 out-features (for simple IOB tags).
Here is my model class:</p>
<pre><code>class Seq2SeqTokenCLS(nn.Module):
def __init__(self):
super(Seq2SeqTokenCLS, self).__init__()
self.num_labels = 3
self.base_model = T5ForConditionalGeneration.from_pretrained('t5-small')
# average of n last hidden layers
self.layers = 3
# change beam search or greedy search here
# Suggested parameters from the T5 paper: num_beams = 4 and length penalty alpha = 0.6
self.base_model.config.num_beams = 1 # <-- change to 1 for greedy decoding
self.base_model.config.length_penalty = 0.6 # <-- comment this out for greedy decoding
self.dropout = nn.Dropout(0.5)
self.dense = nn.Linear(in_features=512 * self.layers, out_features=self.num_labels)
def forward(self, input_ids, attn_mask, labels):
hidden_states = self.base_model(
input_ids,
attention_mask=attn_mask,
output_hidden_states=True
)
hidden_states = torch.cat([hidden_states['decoder_hidden_states'][-(n+1)] for n in range(self.layers)], dim=2)
logits = self.dense(self.dropout(hidden_states))
loss = None
loss_fct = nn.CrossEntropyLoss(weight=class_weights)
# Only keep active parts of the loss
if attn_mask is not None:
active_loss = attn_mask.view(-1) == 1
active_logits = logits.view(-1, self.num_labels)
active_labels = torch.where(
active_loss, labels.view(-1), torch.tensor(loss_fct.ignore_index).type_as(labels)
)
loss = loss_fct(active_logits, active_labels)
else:
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
return {'logits':logits,
'loss':loss}
</code></pre>
<p>However, I am confused about how should do inference in this approach. Should I use the <code>.generate</code> function as when T5 has a standard LM head? If that is the case, then I don't know how to inherit the function into my new model class...</p>
<p>Or can I use a normal evaluation loop?
<em>E.g. something like this?:</em></p>
<pre><code>predictions = []
all_labels = []
with torch.no_grad():
for batch in tqdm(test_loader):
input_ids = batch['input_ids'].to(device)
attention_mask = batch['attention_mask'].to(device)
labels = batch['labels'].to(device)
outputs = model(input_ids=input_ids,
attn_mask=attention_mask
)
for sample, lab in zip(outputs['logits'],labels):
preds = torch.argmax(sample, dim=1)
predictions.append(preds)
all_labels.append(lab)
</code></pre>
<p>I would still like to experiment with beam search...</p>
| 27
|
|
T5 model
|
AWS Sagemaker T5 or huggingface Model training issue
|
https://stackoverflow.com/questions/75055543/aws-sagemaker-t5-or-huggingface-model-training-issue
|
<p>I am trying to train a T5 conditional generation model in SageMaker. It runs fine when I pass the arguments directly in a notebook, but it is not learning anything when I pass an estimator and a train.py script. I followed the documentation provided by Hugging Face as well as AWS, but I am still facing an issue: it says training is completed and saves the model within 663 seconds, whatever the size of the dataset. Kindly give suggestions for this.</p>
|
<p>Check Amazon CloudWatch logs to be able to tell what took place during training (train.py stdout/stderr). This <a href="https://github.com/aws-samples/amazon-sagemaker-training-jobs-benchmarks/blob/main/utilities/download_sagemaker_job_logs.py" rel="nofollow noreferrer">utility</a> can help with downloading logs to your local machine/notebook.</p>
| 28
|
T5 model
|
Problem with custom metric for custom T5 model
|
https://stackoverflow.com/questions/76199989/problem-with-custom-metric-for-custom-t5-model
|
<p>I have created a custom dataset and trained on it a custom <code>T5ForConditionalGeneration</code> model that predicts solutions to quadratic equations like this:</p>
<p>Input: <code>"4*x^2 + 4*x + 1"</code>
Output: <code>D = 4 ^ 2 - 4 * 4 * 1 4 * 1 4 * 1 4 * 1 4 * 1 4</code></p>
<p>I need to get accuracy for this model but I get only loss when I use <code>Trainer</code> so I used a custom metric function (I didn't write it but took it from a similar project):</p>
<pre><code>def compute_metrics4token(eval_pred):
batch_size = 4
predictions, labels = eval_pred
decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
# Replace -100 in the labels as we can't decode them.
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
# Rouge expects a newline after each sentence
decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]
decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]
answer_accuracy = []
token_accuracy = []
num_correct, num_total = 0, 0
num_answer = 0
number_eq = 0
for p, l in zip(decoded_preds, decoded_labels):
text_pred = p.split(' ')
text_labels = l.split(' ')
m = min(len(text_pred), len(text_labels))
if np.array_equal(text_pred, text_labels):
num_answer += 1
for i, j in zip(text_pred, text_labels):
if i == j:
num_correct += 1
num_total += len(text_labels)
number_eq += 1
token_accuracy = num_correct / num_total
answer_accuracy = num_answer / number_eq
result = {'token_acc': token_accuracy, 'answer_acc': answer_accuracy}
result = {key: value for key, value in result.items()}
for key, value in result.items():
wandb.log({key: value})
return {k: round(v, 4) for k, v in result.items()}
</code></pre>
<p>The problem is that it doesn't work, and I don't really understand why or what I can do to get accuracy for my model.
I get this error when I use the function:</p>
<pre><code>args = Seq2SeqTrainingArguments(
output_dir='./',
num_train_epochs=10,
overwrite_output_dir = True,
evaluation_strategy = 'steps',
learning_rate = 1e-4,
logging_steps = 100,
eval_steps = 100,
save_steps = 100,
load_best_model_at_end = True,
push_to_hub=True,
weight_decay = 0.01,
per_device_train_batch_size=8,
per_device_eval_batch_size=4
)
trainer = Seq2SeqTrainer(model=model, train_dataset=train_dataset, eval_dataset=eval_dataset, args=args,
data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics4token)
</code></pre>
<pre><code><ipython-input-48-ff7980f6dd66> in compute_metrics4token(eval_pred)
4 # predictions = np.argmax(logits[0])
5 # print(predictions)
----> 6 decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
7 # Replace -100 in the labels as we can't decode them.
8 labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py in batch_decode(self, sequences, skip_special_tokens, clean_up_tokenization_spaces, **kwargs)
3444 `List[str]`: The list of decoded sentences.
3445 """
-> 3446 return [
3447 self.decode(
3448 seq,
/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py in <listcomp>(.0)
3445 """
3446 return [
-> 3447 self.decode(
3448 seq,
3449 skip_special_tokens=skip_special_tokens,
/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py in decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, **kwargs)
3484 token_ids = to_py_obj(token_ids)
3485
-> 3486 return self._decode(
3487 token_ids=token_ids,
3488 skip_special_tokens=skip_special_tokens,
/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_fast.py in _decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, **kwargs)
547 if isinstance(token_ids, int):
548 token_ids = [token_ids]
--> 549 text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
550
551 clean_up_tokenization_spaces = (
TypeError: argument 'ids': 'list' object cannot be interpreted as an integer
</code></pre>
<p>When I print out <code>predictions</code> I get a tuple:</p>
<pre><code>(array([[[-32.777344, -34.593437, -36.065685, ..., -34.78577 ,
-34.77546 , -34.061115],
[-58.633934, -32.23472 , -31.735909, ..., -40.335655,
-40.28701 , -37.208904],
[-56.650974, -33.564095, -34.409576, ..., -36.94467 ,
-43.246735, -37.469246],
...,
[-56.62741 , -24.561722, -34.11228 , ..., -35.34798 ,
-42.287125, -38.889412],
[-56.632545, -24.470266, -34.0792 , ..., -35.313175,
-42.235626, -38.891712],
[-56.687027, -24.391508, -34.12526 , ..., -35.30828 ,
-42.204193, -38.88395 ]],
[[-29.79866 , -32.22621 , -32.689865, ..., -32.106445,
-31.46681 , -31.706667],
[-62.101192, -33.327423, -30.900173, ..., -38.046883,
-42.26345 , -38.97748 ],
[-54.726807, -29.13115 , -30.294558, ..., -28.370876,
-41.23722 , -37.91609 ],
...,
[-57.279373, -23.954525, -34.066246, ..., -35.047447,
-41.599922, -38.489853],
[-57.31298 , -23.879845, -34.0837 , ..., -35.03614 ,
-41.557755, -38.530064],
[-57.39132 , -23.831306, -34.120094, ..., -35.039547,
-41.525337, -38.55728 ]],
[[-29.858566, -32.452713, -34.05892 , ..., -33.93065 ,
-32.109177, -32.874695],
[-61.375793, -33.656853, -32.95248 , ..., -42.28087 ,
-42.637173, -39.21142 ],
[-58.43721 , -32.496166, -36.44046 , ..., -39.33864 ,
-42.139664, -38.695328],
...,
[-59.654663, -24.117435, -34.266438, ..., -35.734142,
-40.55384 , -38.467537],
[-38.54418 , -18.533113, -29.775307, ..., -26.856483,
-33.07976 , -29.934727],
[-27.716005, -14.610603, -23.752686, ..., -21.140053,
-26.855148, -24.429493]],
...,
[[-33.252697, -34.72487 , -36.395184, ..., -36.87368 ,
-35.207897, -34.468285],
[-59.911736, -32.730076, -32.622803, ..., -43.382267,
-42.25615 , -38.35135 ],
[-54.982887, -31.847572, -32.773827, ..., -38.500675,
-43.97969 , -37.41088 ],
...,
[-56.896988, -23.213766, -34.04734 , ..., -35.88832 ,
-42.176086, -38.953568],
[-56.994152, -23.141619, -34.054848, ..., -35.875816,
-42.176453, -38.97729 ],
[-57.076714, -23.05831 , -34.048904, ..., -35.888298,
-42.165287, -39.020435]],
[[-30.070187, -32.049232, -34.63928 , ..., -35.02118 ,
-32.14465 , -32.891876],
[-61.720093, -32.994057, -32.988144, ..., -42.054638,
-42.18583 , -38.990112],
[-57.74364 , -31.431454, -35.969643, ..., -38.593002,
-42.276768, -38.895355],
...,
[-58.677704, -23.567434, -35.6751 , ..., -36.018696,
-40.343582, -38.681267],
[-58.682228, -23.563087, -35.668964, ..., -36.019753,
-40.336178, -38.67661 ],
[-58.718002, -23.609531, -35.67758 , ..., -36.001644,
-40.366055, -38.67864 ]],
[[-30.320919, -33.430378, -34.84311 , ..., -37.259563,
-32.59662 , -33.03912 ],
[-61.275875, -34.824192, -34.07767 , ..., -44.637024,
-41.718002, -38.974827],
[-54.49349 , -30.689342, -35.539658, ..., -39.984665,
-39.87059 , -37.038437],
...,
[-58.939384, -23.831846, -34.525368, ..., -35.930893,
-40.29633 , -37.637936],
[-58.95117 , -23.824234, -34.520042, ..., -35.931396,
-40.297188, -37.636852],
[-58.966076, -23.795956, -34.519627, ..., -35.901787,
-40.261116, -37.612514]]], dtype=float32), array([[[-1.43104442e-03, -2.98473001e-01, 9.49775204e-02, ...,
-1.77978892e-02, 1.79805323e-01, 1.33578405e-01],
[-2.35560730e-01, 1.53045550e-01, 5.15255742e-02, ...,
-1.57466665e-01, 3.49459350e-01, 7.28092641e-02],
[ 1.60562042e-02, -1.40354022e-01, 5.29232398e-02, ...,
-2.38162443e-01, -7.72500336e-02, 6.80136457e-02],
...,
[ 7.33550191e-02, -3.35853845e-01, 2.25579832e-03, ...,
-1.93636306e-02, 1.08121082e-01, 5.24416938e-02],
[ 8.32231194e-02, -3.11688155e-01, -2.13681534e-02, ...,
3.23344418e-03, 1.08062990e-01, 7.20862746e-02],
[ 9.58326831e-02, -3.00361574e-01, -3.02627794e-02, ...,
3.01265554e-03, 1.20107472e-01, 9.56629887e-02]],
[[-1.16950013e-01, -3.43173921e-01, 1.87818244e-01, ...,
-2.71256089e-01, 7.42092952e-02, 5.77520356e-02],
[-1.62564963e-01, -3.87467295e-01, 1.71134964e-01, ...,
-7.83916116e-02, -3.65173034e-02, 2.08234787e-01],
[-3.71523261e-01, -8.74521434e-02, 1.39187068e-01, ...,
-3.08779895e-01, 3.88156146e-01, 9.99216512e-02],
...,
[ 2.14628279e-02, -3.35561454e-01, -3.76663893e-03, ...,
-1.29795140e-02, 1.44181430e-01, 1.15508482e-01],
[ 3.47745977e-02, -3.30934107e-01, 1.10013550e-02, ...,
-1.84394475e-02, 1.52143195e-01, 1.38157398e-01],
[ 3.02720107e-02, -3.37626845e-01, 1.35379741e-02, ...,
-3.80427912e-02, 1.50906458e-01, 1.38765752e-01]],
[[-6.50129542e-02, -2.63762653e-01, 2.16862872e-01, ...,
-1.66922837e-01, 1.09285273e-01, -6.40013069e-02],
[-5.23199737e-01, -2.32228413e-01, 1.44963071e-01, ...,
-1.41557693e-01, 1.90811172e-01, -2.22496167e-01],
[-2.24985227e-01, -3.69372189e-01, 7.32450858e-02, ...,
6.57786876e-02, 9.70033705e-02, 7.83021152e-02],
...,
[-1.93579309e-03, -3.92921537e-01, -1.28203649e-02, ...,
-8.74079913e-02, 1.13596492e-01, 9.25250202e-02],
[ 4.55581211e-03, -3.65802884e-01, -2.60831695e-02, ...,
-4.12549600e-02, 1.17429778e-01, 1.05997331e-01],
[ 2.46201381e-02, -3.47863257e-01, -4.48134281e-02, ...,
-2.53352951e-02, 1.16753690e-01, 1.36296600e-01]],
...,
[[-6.47678748e-02, -3.45555365e-01, 7.19114989e-02, ...,
-9.16809738e-02, 2.15520635e-01, 1.01671875e-01],
[-7.61077851e-02, -1.51827012e-03, 9.52102616e-02, ...,
-1.39335945e-01, 1.05894208e-01, 3.23191588e-03],
[-3.24888170e-01, -2.17741728e-03, 5.32661797e-03, ...,
-2.78430730e-01, 3.59415114e-01, 1.19439401e-01],
...,
[ 6.89201057e-02, -3.63149673e-01, 7.96841756e-02, ...,
-3.25191446e-04, 1.26513481e-01, 1.36511743e-01],
[ 8.16355348e-02, -3.54205281e-01, 7.69739375e-02, ...,
-2.90949806e-03, 1.31863236e-01, 1.56503588e-01],
[ 8.36645439e-02, -3.38536322e-01, 8.00612345e-02, ...,
-9.39210225e-03, 1.29102767e-01, 1.64855778e-01]],
[[-1.63163885e-01, -3.34902078e-01, 1.11728966e-01, ...,
-1.10363133e-01, 1.19786285e-01, -9.18702483e-02],
[-3.36889774e-01, -3.34888607e-01, 1.30680993e-01, ...,
1.22191897e-03, 1.45059675e-01, -1.27688542e-01],
[-5.92090450e-02, -2.07585752e-01, 2.05589265e-01, ...,
-6.80094585e-02, 2.11224273e-01, 3.92790437e-01],
...,
[ 4.86238785e-02, -4.19503808e-01, -3.39424387e-02, ...,
-1.76134892e-02, 1.00283481e-01, 1.38210282e-01],
[ 5.81516996e-02, -4.04477298e-01, -4.19086292e-02, ...,
-1.02474755e-02, 1.06062084e-01, 1.59754634e-01],
[ 6.70261905e-02, -3.86263877e-01, -4.19785343e-02, ...,
9.05385148e-03, 1.01594023e-01, 1.69663757e-01]],
[[-1.22184128e-01, -3.67584258e-01, 3.60302597e-01, ...,
-4.39502299e-02, 1.33717149e-01, 1.53699834e-02],
[-3.37780178e-01, -4.05100137e-01, 2.02614054e-01, ...,
-5.41410968e-02, 1.55447468e-01, -9.28792357e-02],
[ 1.81227952e-01, -2.29236633e-01, 2.40814224e-01, ...,
1.39913429e-02, 7.61386827e-02, 3.62152725e-01],
...,
[ 1.47830993e-02, -4.26465064e-01, -1.54972840e-02, ...,
3.74358669e-02, 1.52016997e-01, 1.53155088e-01],
[ 3.46656404e-02, -4.00052220e-01, -3.53843644e-02, ...,
2.64652576e-02, 1.62517026e-01, 1.66649833e-01],
[ 4.50411513e-02, -3.61773074e-01, -5.50217964e-02, ...,
3.68298292e-02, 1.67936400e-01, 1.76781893e-01]]],
dtype=float32))
</code></pre>
<p>I thought that maybe I need to take argmax from these values but then I still get errors.</p>
<p>If something is unclear I would be happy to provide additional information. Thanks for any help.</p>
<p>EDIT:</p>
<p>I am adding an example of an item in the dataset:</p>
<pre><code>dataset['test'][0:5]
{'text': ['3*x^2 + 9*x + 6 = 0',
'59*x^2 + -59*x + 14 = 0',
'-10*x^2 + 0*x + 0 = 0',
'3*x^2 + 63*x + 330 = 0',
'1*x^2 + -25*x + 156 = 0'],
'label': ['D = 9^2 - 4 * 3 * 6 = 9; x1 = (-9 + (9)**0.5) // (2 * 3)
= -1.0; x2 = (-9 - (9)**0.5) // (2 * 3) = -2.0',
'D = -59^2 - 4 * 59 * 14 = 177; x1 = (59 + (177)**0.5) // (2 * 59)
= 0.0; x2 = (59 - (177)**0.5) // (2 * 59) = 0.0',
'D = 0^2 - 4 * -10 * 0 = 0; x = 0^2 // (2 * -10) = 0',
'D = 63^2 - 4 * 3 * 330 = 9; x1 = (-63 + (9)**0.5) // (2 * 3) =
-10.0; x2 = (-63 - (9)**0.5) // (2 * 3) = -11.0',
'D = -25^2 - 4 * 1 * 156 = 1; x1 = (25 + (1)**0.5) // (2 * 1) =
13.0; x2 = (25 - (1)**0.5) // (2 * 1) = 12.0'],
'__index_level_0__': [10803, 14170, 25757, 73733, 25059]}
</code></pre>
|
<p>It seems like the task you're trying to achieve is some sort of "translation" task, so the most appropriate model class to use is <code>AutoModelForSeq2SeqLM</code>.</p>
<p>And since the output is a free-form sequence, it might be more appropriate to use</p>
<ul>
<li>BLEU / ChrF or newer neural-based metrics for translation</li>
<li>ROUGE for summarization</li>
</ul>
<p>You can take a look at various translation-related metrics on <a href="https://www.kaggle.com/code/alvations/huggingface-evaluate-for-mt-evaluations" rel="nofollow noreferrer">https://www.kaggle.com/code/alvations/huggingface-evaluate-for-mt-evaluations</a></p>
<hr />
<h1>Treating it as a normal Machine Translation task</h1>
<p>To read the data, you'll have to make sure that the model's forward function</p>
<ul>
<li>sees the data point as <code>{"text": [0, 1, 2, ... ], "labels": [0, 9, 8, ...]}</code> in your <code>datasets.Dataset</code> object</li>
<li>uses the collator to do the batching, e.g. <code>DataCollatorForSeq2Seq</code></li>
</ul>
<p>And here's a working snippet of how the code (in parts) can be ran: <a href="https://www.kaggle.com/alvations/how-to-train-a-t5-seq2seq-model-using-custom-data" rel="nofollow noreferrer">https://www.kaggle.com/alvations/how-to-train-a-t5-seq2seq-model-using-custom-data</a></p>
<h3>Data processing part.</h3>
<pre class="lang-py prettyprint-override"><code>from datasets import Dataset
import evaluate
from transformers import AutoModelForSeq2SeqLM, Trainer, AutoTokenizer, DataCollatorForSeq2Seq
math_data = {'text': ['3*x^2 + 9*x + 6 = 0',
'59*x^2 + -59*x + 14 = 0',
'-10*x^2 + 0*x + 0 = 0',
'3*x^2 + 63*x + 330 = 0',
'1*x^2 + -25*x + 156 = 0'],
'target': ['D = 9^2 - 4 * 3 * 6 = 9; x1 = (-9 + (9)**0.5) // (2 * 3) = -1.0; x2 = (-9 - (9)**0.5) // (2 * 3) = -2.0',
'D = -59^2 - 4 * 59 * 14 = 177; x1 = (59 + (177)**0.5) // (2 * 59) = 0.0; x2 = (59 - (177)**0.5) // (2 * 59) = 0.0',
'D = 0^2 - 4 * -10 * 0 = 0; x = 0^2 // (2 * -10) = 0',
'D = 63^2 - 4 * 3 * 330 = 9; x1 = (-63 + (9)**0.5) // (2 * 3) = -10.0; x2 = (-63 - (9)**0.5) // (2 * 3) = -11.0',
'D = -25^2 - 4 * 1 * 156 = 1; x1 = (25 + (1)**0.5) // (2 * 1) = 13.0; x2 = (25 - (1)**0.5) // (2 * 1) = 12.0']}
math_data_eval = {'text': ["10 + 9x(x+3y) - 3x^3"], "target": ["10 + 9x^2 + 27xy - 3x^3"]}
ds_train = Dataset.from_dict(math_data)
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
tokenizer = AutoTokenizer.from_pretrained("t5-small")
data_collator = DataCollatorForSeq2Seq(tokenizer)
ds_train = ds_train.map(lambda x: tokenizer(x["text"], truncation=True, padding="max_length", max_length=512)
)
ds_train = ds_train.map(lambda y:
{"labels": tokenizer(y["target"], truncation=True, padding="max_length", max_length=512)['input_ids']}
)
ds_eval = Dataset.from_dict(math_data_eval)
ds_eval = ds_eval.map(lambda x: tokenizer(x["text"],
truncation=True, padding="max_length", max_length=512))
ds_eval = ds_eval.map(lambda y:
{"labels": tokenizer(y["target"], truncation=True, padding="max_length", max_length=512)['input_ids']}
)
</code></pre>
<h3>Metric definition part.</h3>
<pre><code>import numpy as np
metric = evaluate.load("sacrebleu")
def postprocess_text(preds, labels):
preds = [pred.strip() for pred in preds]
labels = [[label.strip()] for label in labels]
return preds, labels
def compute_metrics(eval_preds):
preds, labels = eval_preds
if isinstance(preds, tuple):
preds = preds[0]
# Replace -100s used for padding as we can't decode them
preds = np.where(preds != -100, preds, tokenizer.pad_token_id)
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
# Some simple post-processing
decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
result = metric.compute(predictions=decoded_preds, references=decoded_labels)
result = {"bleu": result["score"]}
prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
result["gen_len"] = np.mean(prediction_lens)
result = {k: round(v, 4) for k, v in result.items()}
return result
</code></pre>
<h3>Trainer setup part.</h3>
<pre><code>from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments
# set training arguments - these params are not really tuned, feel free to change
training_args = Seq2SeqTrainingArguments(
output_dir="./",
evaluation_strategy="steps",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
predict_with_generate=True,
logging_steps=2, # set to 1000 for full training
save_steps=16, # set to 500 for full training
eval_steps=4, # set to 8000 for full training
warmup_steps=1, # set to 2000 for full training
max_steps=16, # delete for full training
# overwrite_output_dir=True,
save_total_limit=1,
#fp16=True,
)
# instantiate trainer
trainer = Seq2SeqTrainer(
model=model,
tokenizer=tokenizer,
args=training_args,
train_dataset=ds_train.with_format("torch"),
eval_dataset=ds_eval.with_format("torch"),
data_collator=data_collator,
compute_metrics=compute_metrics
)
trainer.train()
</code></pre>
<hr />
<h1>That works and good. But why is the output of the model still so bad?</h1>
<ul>
<li>Most probably you need to tune some hyperparameters: the batch size, more data, different learning rates, or an increased number of <code>max_steps</code></li>
<li>It can also be that your vocab is pretrained for natural language but your data isn't, in that case, I'll suggest to try modifying the tokenizer before training, e.g. <a href="https://stackoverflow.com/questions/76198051/how-to-add-new-tokens-to-an-existing-huggingface-tokenizer">How to add new tokens to an existing Huggingface tokenizer?</a></li>
</ul>
| 29
|
T5 model
|
Google's flan-t5 models are not loading on HuggingFaceHub through Langchain
|
https://stackoverflow.com/questions/76209003/googles-flan-t5-models-are-not-loading-on-huggingfacehub-through-langchain
|
<p>I am trying to replicate the example code provided on Langchain website (<a href="https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_hub.html" rel="nofollow noreferrer">link here</a>) but I am getting the following error whether I run it on Google colab or locally:</p>
<p><strong>HfHubHTTPError: 504 Server Error: Gateway Time-out for url: <a href="https://huggingface.co/api/models/google/flan-t5-xl" rel="nofollow noreferrer">https://huggingface.co/api/models/google/flan-t5-xl</a></strong></p>
<p>The full code from the website is as follows:</p>
<pre><code>!pip install huggingface_hub > /dev/null
# get a token: https://huggingface.co/docs/api-inference/quicktour#get-your-api-token
from getpass import getpass
HUGGINGFACEHUB_API_TOKEN = getpass()
import os
os.environ["HUGGINGFACEHUB_API_TOKEN"] = HUGGINGFACEHUB_API_TOKEN
from langchain import HuggingFaceHub
repo_id = "google/flan-t5-xl" # See https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads for some other options
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
</code></pre>
<p>the last line is where I am getting the error</p>
<pre><code>llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
</code></pre>
<p>I tried it on Google Colab as well as local machine and it throws the same error. I tried hitting the URL <a href="https://huggingface.co/api/models/google/flan-t5-xl" rel="nofollow noreferrer">https://huggingface.co/api/models/google/flan-t5-xl</a> through a browser and got the same error.</p>
|
<p>Use the following model: "google/flan-t5-xxl"</p>
| 30
|
T5 model
|
How to get reproducible results of T5 transformer model
|
https://stackoverflow.com/questions/64839614/how-to-get-reproducible-results-of-t5-transformer-model
|
<p>I'm trying to get reproducible results of T5 transformer model:</p>
<pre><code>import torch
from transformers import T5ForConditionalGeneration,T5Tokenizer
def set_seed(seed):
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed_all(seed)
set_seed(42)
t5model = T5ForConditionalGeneration.from_pretrained('ramsrigouthamg/t5_paraphraser')
tokenizer = T5Tokenizer.from_pretrained('t5-base')
device = torch.device("cpu")
print ("device ",device)
t5model = t5model.to(device)
max_len = 256
text = "paraphrase: " + txt + " </s>"
encoding = tokenizer.encode_plus(text,pad_to_max_length=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device)
beam_outputs = t5model.generate(
input_ids=input_ids, attention_mask=attention_masks,
do_sample=True,
max_length=max_len,
top_k=50,
top_p=0.98,
early_stopping=True,
num_return_sequences=10,
)
</code></pre>
<p>Though I set a seed number, <code>t5model.generate</code> gives me different results each time I run it.</p>
<p>What is the right way to set the seed number, in order to get the same results of <code>t5model.generate</code> after multiple executions?</p>
|
<p>You need to reload the state_dict of your model to produce the same output every time.</p>
<p>What happens here, is that the T5 model initialization is calling the pytorch random number generator. That means, every time you run the following code, you will get the same output:</p>
<pre class="lang-py prettyprint-override"><code>set_seed(42)
t5model = T5ForConditionalGeneration.from_pretrained('ramsrigouthamg/t5_paraphraser')
t5model = t5model.to(device)
beam_outputs = []
for x in range(3):
beam_outputs.append(t5model.generate(
input_ids=input_ids, attention_mask=attention_masks,
do_sample=True,
max_length=max_len,
top_k=50,
top_p=0.98,
early_stopping=True,
num_return_sequences=5,
))
tokenizer.batch_decode([y for x in beam_outputs for y in x])
</code></pre>
<p>Setting the seed of the random number generator doesn't mean that it will generate the same output every time you call it; it means that the sequence of generated numbers is initialized from the same seed (check this <a href="https://discuss.pytorch.org/t/does-pytorch-change-its-internal-seed-during-training/46505/4" rel="nofollow noreferrer">link</a> for further information):</p>
<pre class="lang-py prettyprint-override"><code>torch.manual_seed(42)
print(torch.randn(2))
print(torch.randn(2))
print(torch.randn(2))
torch.manual_seed(42)
print(torch.randn(2))
print(torch.randn(2))
print(torch.randn(2))
</code></pre>
<p>Output:</p>
<pre><code>tensor([0.3367, 0.1288])
tensor([0.2345, 0.2303])
tensor([-1.1229, -0.1863])
tensor([0.3367, 0.1288])
tensor([0.2345, 0.2303])
tensor([-1.1229, -0.1863])
</code></pre>
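<p>The same seed-versus-stream behaviour can be demonstrated with the standard library's <code>random</code> module (an analogy to torch's generator, not torch itself): re-seeding restarts the stream, which is why <code>set_seed</code>, and the model initialization that consumes random numbers, must be re-run before each <code>generate</code> call to reproduce its output.</p>

```python
import random

def sample_three(seed):
    # a fresh generator seeded identically always yields the same draws
    rng = random.Random(seed)
    return [rng.random() for _ in range(3)]

first = sample_three(42)
second = sample_three(42)  # re-seeding restarts the sequence: first == second

rng = random.Random(42)
a = [rng.random() for _ in range(3)]
b = [rng.random() for _ in range(3)]  # same generator, later in the stream: a != b
```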
| 31
|
T5 model
|
Why is unnormalized input added to output in Huggingface T5 model?
|
https://stackoverflow.com/questions/76760152/why-is-unnormalized-input-added-to-output-in-huggingface-t5-model
|
<p>In the T5 Hugging Face code (see for instance <a href="https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py#L341C38-L341C38" rel="nofollow noreferrer">this</a>), it seems that the input is "never normalized", in the following sense: each component outputs <code>input + component_fct(norm(input))</code>. So the initial network input keeps being added to more and more tensors, which are the results of applying each subcomponent to its normalized input.</p>
<p>Intuitively, I feel it would make more sense to have <code>norm(input) + component_fct(norm(input))</code>, so that we add things of the same magnitude.</p>
<p>Is there a reason for doing it the way it is currently done?</p>
|
<p>T5 uses residual connections (skip connections), where the input to a layer or group of layers is added to the output of that layer. This is done to avoid the vanishing gradient problem, where the gradients of the loss function become very small as they are backpropagated through the layers of the network, making the network difficult to train effectively.</p>
<p>This method, where the original, unmodified input is combined with the output, is characteristic of the pre-LayerNorm variant of the Transformer model, which T5 employs. Layer normalization (LayerNorm) is executed before the self-attention and feed-forward sub-layers, unlike the original Transformer model, where it is applied afterwards. Consequently, the output of these sub-layers is combined with the original, unnormalized input.</p>
<p>The goal of models like T5 isn't necessarily to maintain the same scale or magnitude throughout the network, but to optimize the learning process and final performance.</p>
<p>This design choice has been found to improve the model's performance; you can see the authors discuss their design decisions in the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer", and the T5 model code in the 🤗 Transformers library reflects these choices.</p>
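<p>The wiring difference can be sketched in a few lines of plain Python (a simplification: T5 actually uses RMSNorm without mean subtraction, and the sublayer below is a stand-in for attention or the feed-forward block):</p>

```python
import math

def layer_norm(x, eps=1e-6):
    # standard LayerNorm over one vector; T5 really uses RMSNorm,
    # but the residual wiring is the point here
    mean = sum(x) / len(x)
    var = sum((xi - mean) ** 2 for xi in x) / len(x)
    return [(xi - mean) / math.sqrt(var + eps) for xi in x]

def sublayer(x):
    # stand-in for self-attention / feed-forward: just a scaling
    return [2.0 * xi for xi in x]

def pre_ln_block(x):
    # T5 / pre-LayerNorm: only the sublayer input is normalized;
    # the raw, unnormalized residual is added back
    return [xi + yi for xi, yi in zip(x, sublayer(layer_norm(x)))]

def post_ln_block(x):
    # original Transformer / post-LayerNorm: normalize after the residual add
    return layer_norm([xi + yi for xi, yi in zip(x, sublayer(x))])
```

<p>In the pre-LN block the identity path from input to output is never passed through a normalization, which is exactly why gradients flow well through deep stacks of such blocks.</p>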
<ul>
<li><a href="https://paperswithcode.com/method/t5" rel="nofollow noreferrer">Papers with Code about t5</a></li>
<li><a href="https://www.analyticsvidhya.com/blog/2021/08/all-you-need-to-know-about-skip-connections/" rel="nofollow noreferrer">Good description of skip connections</a></li>
<li><a href="https://arxiv.org/pdf/2102.12895.pdf" rel="nofollow noreferrer">Evolving attention with residual convolutions</a></li>
<li><a href="https://arxiv.org/abs/1910.10683" rel="nofollow noreferrer">Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer</a></li>
</ul>
| 32
|
T5 model
|
How can we optimize and quantize fine-tune model of t5-grammar-correction, like pszemraj/t5-v1_1-base-ft-jflAUG, pszemraj/grammar-synthesis-base,
|
https://stackoverflow.com/questions/73163013/how-can-we-optimize-and-quantize-fine-tune-model-of-t5-grammar-correction-like
|
<p>When I use the fastt5 library to convert T5-base fine-tuned models like "<strong>pszemraj/t5-v1_1-base-ft-jflAUG</strong>" and "<strong>pszemraj/grammar-synthesis-base</strong>", it <a href="https://i.sstatic.net/YFXQW.png" rel="nofollow noreferrer">throws</a> a <strong>tuple index out of range</strong> error in the file "/usr/local/lib/python3.7/dist-packages/transformers/utils/generic.py". The second error is that "encoder = model.encoder" is not defined. I am working in a Colab notebook. How can we solve these issues and convert these T5-base grammar fine-tuned models, such as <strong>pszemraj/t5-v1_1-base-ft-jflAUG</strong>, to ONNX and quantize them?</p>
| 33
|