Numerical Methods and Statistics
Suggested Reading

x = 2
print(x)
2

print(x * 4)
8

x = x + 2
print(x)
4

import math
x = math.cos(43)
print(x)
0.5551133015206257

a = 2
a = a * b
b = 4
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-5-de5ed0aa3cf0> in <module>
      1 a = 2
----> 2 a = a * b
      3 b = 4
NameError: name 'b' is not defined

x = 2
x * 2
print(x)
2

x = 4
print('This is the simple way of printing x', x)
This is the simple way of printing x 4

y = 2.3
print('This is how we print x', x, 'and y', y, '. Notice spaces are inserted for us')
This is how we print x 4 and y 2.3 . Notice spaces are inserted for us

print('What if I want to print y ({}) without spaces?'.format(y))
What if I want to print y (2.3) without spaces?

z = 0.489349842432423829328933 * 10**32
print('{}'.format(z))
4.893498424324239e+31

print('What if I want to print x twice, like this {0:} {0:}, and y once, {1:}?'.format(x,y))
What if I want to print x twice, like this 4 4, and y once, 2.3?
print('With a fixed precision (number of digits): {:.4}'.format(z))
With a fixed precision (number of digits): 4.893e+31

print('Make it take up exactly 10 spaces: {:10.4}'.format(z))
print('Make it take up exactly 10 spaces: {:10.4}'.format(math.pi))
print('Make it take up exactly 10 spaces: {:10.4}'.format(z*math.pi))
Make it take up exactly 10 spaces: 4.893e+31
Make it take up exactly 10 spaces: 3.142
Make it take up exactly 10 spaces: 1.537e+32

print('You can demand to show without scientific notation too: {:f}'.format(z))
You can demand to show without scientific notation too: 48934984243242387076283208040448.000000

print('or demand to use scientific notation: {:e}'.format(math.pi))
or demand to use scientific notation: 3.141593e+00

print('There are many more formatting tags, such as binary: {:b} and percentage: {:%}'.format(43, 0.23))
There are many more formatting tags, such as binary: 101011 and percentage: 23.000000%

%%bash
pip install fixedint
Requirement already satisfied: fixedint in /home/whitead/miniconda3/lib/python3.7/site-packages (0.1.5)

from fixedint import *

def bprint(x):
    print('{0:08b} ({0:})'.format(x))

i = UInt8(8)
bprint(i)
00001000 (8)

i = UInt8(24)
bprint(i)
00011000 (24)

i = UInt8(2**8 - 1)
bprint(i)
11111111 (255)

i = UInt8(2**8)
bprint(i)
00000000 (0)

This effect, where adding 1 too many to a number makes it "roll over" to the beginning, is called integer overflow. It is a common bug in programming: a number becomes too large and rolls back around. Integer overflow can even produce negative numbers if we are using signed numbers.

i = Int8(2**7 - 1)
bprint(i)
bprint(i + 1)
01111111 (127)
-10000000 (-128)

Notice here that our maximum is half of what it was previously, because one of our 8 bits is used for the sign, so only 7 bits are available for the magnitude. The key takeaway from integer overflow is that if our integers do not have enough bits, adding large numbers can suddenly give us negative numbers or zero.
Real-world examples of integer overflow include view counts becoming 0, data usage showing as negative on a cellphone, the Y2K bug, and a step counter resetting to 0 steps.
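The roll-over behavior above can be reproduced without the fixedint package by wrapping plain Python integers into a fixed number of bits (a minimal sketch; uint8 and int8 are helper names made up for this illustration, not fixedint's API):

```python
def uint8(x):
    # Unsigned 8-bit: keep only the low 8 bits (range 0..255)
    return x % 256

def int8(x):
    # Signed 8-bit two's complement: range -128..127
    x = x % 256
    return x - 256 if x >= 128 else x

print(uint8(255 + 1))  # rolls over to 0
print(int8(127 + 1))   # rolls over to -128
```

The modulo by 256 plays the role of the hardware discarding any bits beyond the eighth; the signed case then reinterprets values of 128 and above as negative.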
https://nbviewer.jupyter.org/github/whitead/numerical_stats/blob/master/unit_3/lectures/lecture_1.ipynb
CC-MAIN-2020-40
refinedweb
576
73.68
CodePlex Project Hosting for Open Source Software

I had 1.3.9 source running nicely without issues. I did a clean download (from) into a new folder of 1.3.10 and get the following error:

c:\Users\John\Downloads\Orchard.Source.1.3.10\src\Orchard.Web\Modules\Contrib.Taxonomies\Projections\TermsFilter.cs(46): error CS0246: The type or namespace name 'IHqlExpressionFactory' could not be found (are you missing a using directive or an assembly reference?)

First, it's interesting to note that the Orchard.sln does not reference this Contrib.Taxonomies module in the solution file, so someone trying to place it into their own source control will miss it. Next, notice that the references in this project must be fixed. Finally, even after fixing these references, all the references to Hql... objects are missing. I can't be the only one getting this issue. Can someone point me to the solution? Should I fall back to 1.3.9?

For now, just remove that project from the solution by deleting the whole Contrib.Taxonomies folder. Also delete Orchard.Fields.

bertrandleroy wrote: For now, just remove that project from the solution by deleting the whole Contrib.Taxonomies folder. Also delete Orchard.Fields.

Thanks. FYI for others that run into the same issue I have: I also had to delete Orchard.Fields. Its name made me a little bit more worried, but after deleting both of those, it's up and running. Thanks!

Also remove projections and contrib.cache. I've uploaded an updated source file without those and without the extra hg folders, and we'll investigate how those made it into the package. Thanks for the heads up.
http://orchard.codeplex.com/discussions/284111
CC-MAIN-2017-26
refinedweb
316
69.89
Hillary Clinton has easily set a fundraising record, pulling in $26 million between January and March. The Clinton campaign would not publicly say how much of the money it plans to save for the general election. A number of pundits have predicted this will be the most expensive election in American history.

By Tanmack, April 2, 2007 at 9:54 pm Link to this comment (Unregistered commenter)
You are so right, Ernest. Clinton slashed anti-poverty programs knowing full well there weren’t jobs for those people, abandoning millions of children to homelessness and utter hardship. Aping the right to get elected, he moved the party to the center, losing progressives, workers, etc., who went Green and voted for Nader. Her support of the war was tactical, not true belief—Israeli aimed. The donor list will bear that out. After Bush, people won’t stand for the same type of politician. Hillary will fizzle.

By GW=MCHammered, April 2, 2007 at 8:19 pm Link to this comment (Unregistered commenter)
Referring to Correction: Federal Minimum Wage in 1938 was 25-cents per hour. At 5.662% annual climb, it would be $12.32 today… my bad! But the point remains, campaigning for dollars does not serve the people.

By Louise, April 2, 2007 at 6:15 pm Link to this comment (Unregistered commenter)
Pete: Afternoon folks. This is Pete, and Repete here at ‘DC Ups and Downs’ bringing you the first race in a long series of races.
Repete: Yep, things are really heating up here in the run for the ‘08 White House Cup’, but it’s still anybody’s race!
Pete: I don’t know, Repete, the odds favor Clinton.
Repete: Folks, they are moving to the starting gate! And they’re off! Clinton takes the lead. Moving up in the first turn is Dodd. Clinton moves ahead. Edwards coming up fast behind Dodd. With Richardson close behind. Bringing up the rear is Biden. Folks, we have a Horse Race!
Pete: Actually, this is the first workout.
Repete: Right! It just looks like a horse race!
Pete: Missing from the track are Obama and McCain.
Repete: Actually, we’ve been notified there’ll be lots more entries at the starting gate for the qualifying run. (Well actually, more like the second workout.)
Pete: Hold your bets folks. We just received word the ‘Dark Horse’ Kucinich was left off the list of entrants.
Repete: Nobody seems to know much about Kucinich, but advance odds put him as a favorite with the crowds in the stands!
Pete: We’ll be paying attention to how well this Dark Horse places at the end of the final qualifying run.
Repete: Well I’m a gambling man, so I’ll put my money on Kucinich. Coming from behind with ten to one odds promises a terrific payout!
Pete: From the Stable, these latest qualifying numbers.
Clinton $36 million
Edwards $14 million
Richardson $11 million
Dodd $16.5 million
Biden $4 million
Repete: That’s $81.5 million, Pete. Is that the biggest purse ever?
Pete: Uh no, actually that’s the ‘bid for the run’ fees ... so far.
Repete: Wow, almost sounds like the ‘08 White House Cup’ is on the auction block!
Pete: Yeh, weird huh?
Repete: Yeh, weird huh?

By Christopher Robin, April 2, 2007 at 1:26 pm Link to this comment (Unregistered commenter)
Have you ever noticed the most money often goes to the most bland candidates? With perhaps the exception of Reagan, who wasn’t bland, but had the financial agenda?

By Lord B, April 2, 2007 at 12:41 pm Link to this comment (Unregistered commenter)
Money buys democracy. And the media gauge a candidate’s prospects for winning by how much money they raise.
If money is the only rationale for evaluating a candidate’s chances for winning the Presidency, then our country continues its slide into anything BUT a democracy. Hillary, you can go to hell.

By Lee, April 2, 2007 at 11:33 am Link to this comment (Unregistered commenter)
Not such a bad deal, we get two for the price of one. Talk to the money! We are doomed to Mediocrity and business as usual.

By GW=MCHammered, April 2, 2007 at 10:08 am Link to this comment (Unregistered commenter)
In 1924, the Calvin Coolidge campaign cost $4 million. In 2004, the George W. Bush campaign spent $367 million. This implies a 5.662% annual increase in presidential campaign spending. To point out the absurdity of this increase, were the $1 per hour federal minimum wage established in 1938 to rise at this same rate, the federal minimum wage today would be $49.28 per hour. Interestingly though, since 1974 the average home prices in the Pacific Northwest climbed at about this same rate, near 6%, from $30k to $200k. Gasoline prices have climbed from 45-cents to about $3 per gallon, the same rate. College tuition, many cars and more all rocketed in equivalent proportion - health care costs climbed even faster. All wages float above the minimum wage. But just to keep up with these rising prices, the federal minimum wage should have grown from $2.10 per hour in 1974 to about $15 per hour today. Instead, that’s close to the average manufacturing wage. According to the U.S. Department of Labor, changes in the federal minimum wage from 1938 to 1968 jumped by a factor of 6.4. Meaning the 1976 minimum wage of $2.30 again should have been nearly $15 last year. During the forty years from 1938 to 1978 the federal minimum wage climbed by a factor of 10.6 from 25-cents to $2.65. Meaning the late 1966 wage of $1.40, late last year, should have paid almost $15 per hour. Yes, campaign spending is ludicrous. But oppressed wages are the real evil here.
With worker productivity up and two-thirds of the US economy being consumer-driven, imagine the bustling America if workers were paid even their historic worth; Social Security and Medicare could be made more secure from the additional tax revenue too. Instead, the upper-crust 300,000 citizens now earn nearly as much as the bottom 150-million, according to the March 29, 2007, NYT article ‘Income Gap Is Widening, Data Shows.’ “In Lincoln’s day, America was wrapped in whiteness. Today, it is wrapped in upper-crust entitlement. It’s time to kill the greed and fill the need.”

By felicity, April 2, 2007 at 10:00 am Link to this comment (Unregistered commenter)
I echo those here who want to know the sources of Hillary’s campaign contributions. It’s predicted that a nominee for the presidency will spend $500 million on his/her campaign. If I had an outstanding debt of $500 million dollars, I’d have to devote every waking - maybe sleeping - moment to paying off my creditors. Can we assume that if the president, whoever it is, is a human being and lives on this planet he/she would have to do the same? Likely, so the four or eight years the individual holds the office will be devoted to paying off his/her debts. What’s wrong with this picture? Everything, if we expect a president to work for our interests and, more importantly, the interests of the nation.

By david simpson, April 2, 2007 at 9:16 am Link to this comment (Unregistered commenter)
Hillary Clinton is the same old wine in a slightly new bottle. Money & power for the sake of power - bad combination.

By George S Semsel, April 2, 2007 at 8:56 am Link to this comment (Unregistered commenter)
The huge sums of money already being raised by one seen as just another loser Democrat tell us that what passes for elections in the USofA is little more than a peculiar form of auction in which only the rich can participate, but in which the highest bidder doesn’t necessarily win.
By Firebrand, April 2, 2007 at 5:25 am Link to this comment (Unregistered commenter)
Time to form a third party.

By Peter RV, April 2, 2007 at 4:39 am Link to this comment (Unregistered commenter)
Hillary Clinton is, who are we kidding?, the perfect candidate of Jewish America, so there is little doubt that she will be the Democratic candidate for Presidency. The machine is repeating relentlessly the mantra of her intelligence (largely unproved unless confused with her ambitions), which in real money means she will pursue this bloody war in the Middle East until AIPAC is fully satisfied that Israel will rule there supreme. If our Nation collapses in the process, so be it. Where this is going to lead the U.S. doesn’t seem to be her concern. Well folks, if we get Hillary it serves us damn right.

By John Lowell, April 2, 2007 at 1:36 am Link to this comment (Unregistered commenter)
It’s become more and more clear: Clinton is likely to win the Democratic nomination and, possibly, the presidency. The Obama and Edwards candidacies serve only to work against one another but not against Clinton’s. If she is to be stopped, one or the other of them will have to drop out, which is very unlikely. Of the three or four principal candidates in the race, none could be more contemptible than Clinton, although all are most unqualifiedly contemptible. I have not voted since 1992, so awful have I considered the choices, but I’d think about coming out of retirement should this paramecium appear to have a leg up on things. John Lowell

By Druthers, April 2, 2007 at 1:09 am Link to this comment (Unregistered commenter)
The really interesting thing to know would be who the money came from. How much from AIPAC to each candidate? How much from the arms industry? Much of it will go to the media for the usual smiles and promises, then a big chunk to the “consultants.” Hillary is the war candidate, so she gets the biggest tidbit, unless Obama gets ahead in the polls. Just follow the money trail.
By Tanmack, April 1, 2007 at 11:07 pm Link to this comment (Unregistered commenter)
Who is giving all this money, and why? And how is she using it? To buy support, as she did in South Carolina, employing an African-American state senator for $10,000 a month to do PR work—like trying to get Obama uninvited from a speech before the South Carolina Black Political Caucus. For shame.

By Ernest Canning, April 1, 2007 at 9:35 pm Link to this comment (Unregistered commenter)
Re Comment #61640 by Margaret Currey. Before you coronate Mrs. Clinton, adding a crown of superlatives—e.g., “very smart woman,” “make a good president,” presumptive nominee—take a moment to actually “think” about where this woman stands on issues of importance to the vast majority of Americans. In the midst of a global class war, ask yourself why it is that all that money is flowing to the spouse of a former President who joined with Reagan and Bush in betraying the middle and working classes as NAFTA and the WTO opened the door to the flight of America’s manufacturing base as it departed in search of cheap foreign labor, leaving what remained of American labor to be Wal-Mart-ized. Could it be that, when it comes to the great class divide, Hillary stands with the rich and the powerful? Could it be that the subsidies she is proposing for health care insurers as part of her so-called “universal health care” plan are intended to benefit insurance company CEOs—that this is the real reason she will not back the Conyers-Kucinich single-payer plan that would eliminate the insurance company middle-man profits that account for 1/3 of the cost of health care? Tell me something, Margaret, what good does Hillary’s intellect do for any of us if she is prepared to betray us?

By Jonas South, April 1, 2007 at 6:23 pm Link to this comment (Unregistered commenter)
Hillary has the Midas touch.
Public records show that she invested $1000 with a man who had business pending before her husband, then the governor of Arkansas, and the guy promptly produced a $90,000 profit. If you ever achieved such a rate of return (9000%) on your own investments, or if you are naïve enough to think it possible, then go jump from the fat into the fire, and vote for this ethically challenged lady.

By Skruff, April 1, 2007 at 4:35 pm Link to this comment (Unregistered commenter)
Why wouldn’t she raise lots of bongo-bucks? She’s the hidden candidate of the monied interests. We are in the midst of yet another major shift in politics… While the Republicans spend money which won’t be made until the next century, the D’s are surreptitiously becoming the party of “fiscal conservatives.” Solomon Brothers (the one Wall Street firm that backed the first Bill Clinton run) predicts that if Hill-the-shill is elected she will cut fat, flesh, and bone from the federal budget in an attempt to “balance” it as her hubby allegedly did…. But where will this self-proclaimed “hawk’s” axe fall…. Bet it’s not at defense!!

By Margaret Currey, April 1, 2007 at 4:20 pm Link to this comment (Unregistered commenter)
Hillary will run and be nominated; this is according to Charlie Rangel, who says she is smart and would make a good president. Now the rest of America needs to know this is a very smart woman. Those who say she was for the war and then against the war: it is o.k. War has a funny habit of not being predictable, and to be for war when the country is winning is one thing, but after a while, when there is no clear victory, then it is time to get out. Hopefully the Iraqi people will be able to lead their own country; no other country can do it for them, because then the country will become a colony.
Margaret from Vancouver, Washington

Posted on Jan 31, 2015
A Progressive Journal of News and Opinion | Publisher Zuade Kaufman | Editor Robert Scheer
http://www.truthdig.com/eartotheground/item/clinton_rakes_it_in
CC-MAIN-2015-06
refinedweb
2,467
71.44
Hi, I'm a rookie at this. I've been working on this for 4 hours and I don't seem to spot the problems in this salary program. It's supposed to compute salaries for 4 types of workers. It is working just fine until the following line:

while ( paycode = getchar() .........................

Can you help me out? Thanks a lot.

#include <stdio.h>

int main()
{
    float weeklysale, salary1, salary2, salary3, salary4, rate;
    int id, paycode, hours, itemsproduced;

    printf( "\nEnter employee's ID number (-1 to end): ");
    scanf( "%d", &id );

    if ( id > 99 ) {
        printf ("Wrong ID Number. Please enter a new one (2 digits max.): ");
        scanf( "%d", &id );

    while ( id != -1 ) {
        printf( "\nSelect a Paycode:\n1 is for managers\n2 is for commission workers\n" );
        printf( "3 is for hourly workers\n4 is for pieceworkers\n\n" );
        printf( "Enter chosen Paycode: ");
        scanf( "%d", &paycode );

        while ( paycode = getchar() ) {
            switch ( paycode ) {
                case '1': paycode = 1; break;
                case '2': paycode = 2; break;
                case '3': paycode = 3; break;
                case '4': paycode = 4; break;
                case '\n':;
            }
https://cboard.cprogramming.com/c-programming/10565-salary-program-heeeeeeeelp.html
CC-MAIN-2017-26
refinedweb
168
80.82
Python is a universal programming language — this means it can express every computation, just like Java and Scheme. The syntax of Python is somewhat similar to C++ or Java. Python has become very popular because of its simplicity, elegance, extensive libraries, and integration into popular web servers. In addition, because Python is interpreted and dynamically typed, it is much quicker to write simple programs in Python than in a compiled, statically-typed language like Java.

Python is already installed on the ITC and CS lab machines. It is to your advantage to do the assignments in Small Hall and ask questions of your fellow students and the ACs. You may also find it helpful, though, to have Python installed on your own machine. To install Python on your own machine:

Nonterminal ::= Replacement

This means whenever Nonterminal appears, it can be replaced by Replacement. There can be several rules with the same nonterminal on the left hand side. For example, the rules:

State ::= Virginia
State ::= Maryland

would mean that the expression State State could be any of four possible strings: Virginia Virginia, Virginia Maryland, Maryland Virginia, or Maryland Maryland. This form of description is powerful, since the rules can be recursively defined. For example,

StateList ::=
StateList ::= State StateList

means whenever StateList appears we can either replace it with empty, or replace it with a State followed by a StateList. This describes a list of zero or more State elements.

PythonCode ::= Statements

In Python, unlike C++ or Java, the whitespace (such as new lines) has meaning. A statement cannot be split across multiple lines, and only one statement may appear on a single line. Indentation within a line also matters. Instead of using braces to provide code structure, Python uses indentation to group statements into blocks.
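The recursive StateList rules can be explored mechanically. Here is a small sketch (my own illustration, not part of the guide) that enumerates every StateList expansion up to a bounded length:

```python
def state_lists(max_len, states=("Virginia", "Maryland")):
    """Enumerate every StateList of length <= max_len, following
    StateList ::= (empty) and StateList ::= State StateList."""
    results = [[]]          # the empty expansion
    frontier = [[]]
    for _ in range(max_len):
        # extend each list in the frontier by one State
        frontier = [lst + [s] for lst in frontier for s in states]
        results.extend(frontier)
    return results

# 1 empty + 2 single states + 4 two-state strings = 7 expansions
print(len(state_lists(2)))  # 7
```

The four two-state expansions it produces are exactly the four strings listed above for State State.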
Statements ::= Statement <newline> Statements
Statements ::=

Python has many different kinds of statements. The three you will use most are assignments, applications, and function definitions:

Statement ::= AssignmentStatement
Statement ::= ApplicationStatement
Statement ::= FunctionDefinition

Also note that comments (text that is there for notes and clarification but not to be executed) are denoted by # (everything after the # until the end of the line is a comment).

AssignmentStatement ::= Variable = Expression

To evaluate an AssignmentStatement, Python evaluates the Expression and places the value in the place named Variable.

four = 4
two = four / 2
bigger = (four * four) < 17
print "four: %d two: %d bigger: %d" % (four, two, bigger)
four: 4 two: 2 bigger: 1

Python considers the empty string, 0, and the empty list to be false, and (almost) everything else to be true. The print procedure takes a string as input followed by a % and a list of values (in parentheses). The values are matched to the % codes in the string. Different % codes are used for printing different types of values. Here, %d is used to print an integer. Essentially, it provides an easy way to convert other types of inputs to strings. (Some other string formatting codes are: %s - string or any object, %c - character, %i - integer, %f - floating point decimal). The codes can be combined together. For example:

"%d -- %s -- %f" % (2, "hello", 3.14)

will output: 2 -- hello -- 3.14

In Python, we apply a function to operands by following the name of the function with a comma-separated list of arguments surrounded by parentheses (just as in Java):

ApplicationStatement ::= Name ( Arguments )
Arguments ::=
Arguments ::= MoreArguments
MoreArguments ::= Argument , MoreArguments
MoreArguments ::= Argument
Argument ::= Expression

Python functions may return one or more results. When multiple results are returned, the call site can bind them to different variables listed in parentheses.
Python only has a few primitive functions, but a very extensive standard library.

FunctionDefinition ::= def Name ( Parameters ): <newline> Statements
Parameters ::=
Parameters ::= MoreParameters
MoreParameters ::= Parameter , MoreParameters
MoreParameters ::= Parameter
Parameter ::= Variable

For example,

def square (x):
    return x * x

def quadratic (a, b, c, x):
    return a * square (x) + b * x + c

print quadratic (2,3,7,4)
51

Note the indentation in that code segment. It is extremely important to preserve the indentation, as it is what indicates where the function body ends. The return statement is similar to the return in Java, except multiple values can be returned:

ReturnStatement ::= return ExpressionList
ExpressionList ::= Expression
ExpressionList ::= Expression , ExpressionList

It is used to return a value from a procedure. When execution encounters a return statement, the listed expressions are evaluated and their values are returned to the caller.

Python also provides several statements similar to control structures in Java. Three useful ones are if, while, and for, which are described below.

Statement ::= if Expression: <newline> Statements

Python also supports alternative clauses but uses else to distinguish them:

Statement ::= if (Expression) : #if test
    Statements1
elif :  #else if in Python
    Statements2
else :
    Statements3

Statement ::= while Expression: <newline> Statements

The indentation of the statements determines the statements that are in the loop body. Here is an example that will print out the first 10 Fibonacci numbers:

i = 1;
a = 1;
b = 1;
while (i <= 10):
    print "Fibonacci %s = %s" % (i,b);
    next = a + b;
    a = b;
    b = next;
    i = i + 1;
print "Done."

Statement ::= for i in range(n): <newline> Statements

The indentation of the statements determines the statements that are looped. For the value of n, either an integer can be used or another function, such as the length of a string (which we obtain in Python using len(Expression)).
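The Fibonacci example above can also be written with the for statement just described (a sketch in Python 3 syntax, where print is a function and tuple assignment replaces the temporary variable):

```python
# Print the first 10 Fibonacci numbers using a for loop
a, b = 1, 1
for i in range(1, 11):
    print("Fibonacci %s = %s" % (i, b))
    a, b = b, a + b   # advance the pair without a temporary 'next'
print("Done.")
```

range(1, 11) produces the integers 1 through 10, so the loop body runs exactly ten times without the explicit counter bookkeeping of the while version.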
You can define a string by putting it in single quotes. Also, you can use a for loop to iterate over the items in a collection object (see the section on Lists). Python has latent (invisible) types that are checked dynamically (we will cover what this means later in the class). The four types you will find most useful are numbers, strings, lists, and dictionaries.

Python does not do exact arithmetic. Instead of using fractions, everything is treated as either an integer or a floating point number. Here are some examples:

four = 4
pi = 3.14159
nothalf = 1/2 #(evaluates to 0)
half = 1.0/2.0 #(evaluates to .5)

What happened when we defined nothalf?

lst = [10, 20, 30]
print lst[0]
10
print lst[2]
30

Indices in Python are much more powerful than that, however. We can "slice" lists to only look at a subset of the list data.

lst = [10, 20, 40, "string", 302.234]
print lst[0:2]
[10, 20]
print lst[:3]
[10, 20, 40]
print lst[1:]
[20, 40, 'string', 302.23399999999998]

(Note that decimal numbers are not exact in Python! We put 302.234 in the list, but the printed value is 302.23399999999998.)

yellow[key] = 0

Python provides classes as a way to package procedures and state, similar to Java classes. For example, here is the Timer class from Problem Set 1:

import time # this imports the time library module

class Timer:
    def __init__ (self):
        self.running = 0
        self.startTime = 0
        self.endTime = 0

    def start (self):
        self.running = 1
        self.startTime = time.clock()

    def stop (self):
        self.endTime = time.clock ()
        self.running = 0

    # pre: The timer must not be running.
    # post: Returns the elapsed time (between the start and stop events)
    # in seconds.
    def elapsed (self):
        assert (not self.running)
        return self.endTime - self.startTime

To create an instance of a class, we use the class name (this invokes the special __init__ method):

import Timer
timer = Timer.Timer ()

The syntax for calling methods is similar to Java:

timer.start()

will invoke the start method on the Timer object we created.
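A self-contained version of the same class mechanics can be sketched as follows (in Python 3 syntax; time.clock was removed in Python 3.8, so time.perf_counter stands in for it here):

```python
import time

class Timer:
    def __init__(self):           # runs when Timer() is called
        self.running = False
        self.start_time = 0.0
        self.end_time = 0.0

    def start(self):
        self.running = True
        self.start_time = time.perf_counter()

    def stop(self):
        self.end_time = time.perf_counter()
        self.running = False

    def elapsed(self):
        # pre: the timer must not be running
        assert not self.running
        return self.end_time - self.start_time

timer = Timer()   # creating an instance invokes __init__
timer.start()     # method-call syntax, as in Java
timer.stop()
print(timer.elapsed() >= 0)  # True
```

Because perf_counter is monotonic, elapsed() is never negative for a started-then-stopped timer.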
http://www.cs.virginia.edu/~evans/cs216/guides/python.html
crawl-001
refinedweb
1,261
54.83
.46 to v2.5.47 ============================================ <rgooch@atnf.csiro.au> Removed DEVFS_FL_AUTO_OWNER flag <rgooch@atnf.csiro.au> util.c: Documentation fix base.c: Switched lingering structure field initialiser to ISO C Added locking when updating FCB flags <anton@samba.org> vmlinux.lds init.text -> text.init etc changes and other random cleanups <Matt_Domsch@dell.com> megaraid: remove mega_{reorder,swap}_hosts Patch posted to l-k by Mike Anderson <andmike@us.ibm.com> on 21-Oct-2002. <daisy@teetime.dynamic.austin.ibm.com> SCTP - Fix bug #547270. Retain the order of the retransmission. <jgrimm@touki.austin.ibm.com> sctp: header update for new error cause: (13) Protocol Violation <nivedita@w-nivedita.beaverton.ibm.com> sctp: Added checks for tcp-style sockets to sctp_peeloff() and AUTOCLOSE options. <sridhar@dyn9-47-18-140.beaverton.ibm.com> sctp: User initiated ABORT support. (ardelle.fan) <nivedita@w-nivedita.beaverton.ibm.com> sctp: Added SCTP SNMP MIB infrastructure. <Matt_Domsch@dell.com> megaraid: s/pcibios_read_config/pci_read_config <jgrimm@touki.austin.ibm.com> sctp: Always respond to ECNE sender. (jgrimm) Handle lost CWR case, by always sending CWR whether we've actually lowered our cwnd vars or not. Otherwise, the peer will keep sending ECNEs forever. <anton@samba.org> ppc64: boot Makefile fixes and remove LVM1 ioctl translation code <anton@samba.org> ppc64: fix cond_syscall so it works instead of oopses <Matt_Domsch@dell.com> megaraid: cleanups so it builds again <sridhar@dyn9-47-18-140.beaverton.ibm.com> [SCTP]: Initial souce address selection support. <dougg@torque.net> [PATCH] sbp2 (ieee1394) for lk2.5.44-bk3 This firewire mass storage driver is broken by the biosparam changes. Small hack in patch below so it will compile in 2.5.44 (was set up anticipating 2.5.45 but would not have compiled). 
Doug Gilbert <dougg@torque.net> Changes: - add SYNCHRONIZE_CACHE command support - clean up module removal noise - add some more parameters for driverfs merges work from Patrick Mansfield. There are now options to set: - max_luns (default 2) - scsi_level (default 3) Now if multiple scsi_debug pseudo devices are selected they will get these tuples (assuming "2" is the next available host number): 2 0 0 0 2 0 0 1 2 0 1 0 2 0 1 1 2 0 2 0 ... 2 0 7 1 3 0 0 0 3 0 0 1 etc <anton@samba.org> ppc64: Add POLLREMOVE <sridhar@dyn9-47-18-140.beaverton.ibm.com> [SCTP]: use dst_pmtu() to get the pmtu. <davem@nuts.ninka.net> [IPV4]: Report zero route advmss properly. <greg@kroah.com> [PATCH] USB: scanner fixes due to changes to USB structures. <baldrick@wanadoo.fr> [PATCH] USB: fix typo <dhollis@davehollis.com> [PATCH] 2.5.45 drivers/net/irda/irda-usb.c Compile Fix Fixes an apparent typo in irda-usb.c that prevented it from compiling. <greg@kroah.com> [PATCH] USB: audio fix up for missed debug code. <ddstreet@ieee.org> [PATCH] [patch] set interrupt interval in usbfs This patch sets up the URB interval correctly when using interrupts via usbfs. This is finally possible since the automagic resubmission is gone. <Matt_Domsch@dell.com> megaraid: avoid 64/32 division when calculating BIOS CHS translation . <jmorris@intercode.com.au> [CRYPTO]: Cleanups based upon feedback from jgarzik. - make crypto_cipher_flags() return u32 (this means it will return the actual flags reliably, instead of being just a boolean op). - simplify error path in crypto_init_flags(). <jmorris@intercode.com.au> [CRYPTO]: Add crypto_alg_available interface. <bart.de.schuymer@pandora.be> net/ipv4/netfilter/ipt_physdev.c: Bug fix in matching. <davem@nuts.ninka.net> [SPARC64]: Add device mapper translations. <hch@lst.de> [SPARC]: Cleanup scsi driver registration. <davem@nuts.ninka.net> [NET]: Some missed cases of dst_pmtu conversion. <bart.de.schuymer@pandora.be> [BRIDGE]: Update br-netfilter for dst_pmtu changes. 
<willy@debian.org>
    [NET]: Cleanup wan/packet ioctls.
<abslucio@terra.com.br>
    [NET]: Port 2.4.x pktgen to 2.5.x
<rmk@arm.linux.org.uk>
    [PATCH] PCI hotplug comment fixes
    Fix comments about /sbin/hotplug; pci_insert_device does not call /sbin/hotplug.
<scottm@somanetworks.com>
    [PATCH] 2.5.45 CompactPCI driver patch 1/4
    This is patch 1 of 4 of my CompactPCI hotplug core and drivers, consisting of the required core PCI changes. The various arch file changes change pcibios_fixup_pbus_ranges from __init to __devinit, so that pci_setup_bridge can be safely exported from drivers/pci/setup-bus.c.
<greg@kroah.com>
    PCI: move EXPORT_SYMBOL for the pbus functions to the setup-bus.c file. This fixes a linking error if setup-bus.c isn't compiled into the kernel.
<scottm@somanetworks.com>
    [PATCH] 2.5.45 CompactPCI driver patch 2/4
    This is patch 2 of 4 of my CompactPCI hotplug core and drivers, consisting of the CompactPCI hotplug driver core. It is basically a glue layer on top of the PCI hotplug core that exposes an API roughly similar in concept to the API implemented by MontaVista from the PICMG 2.12 specification, minus all the Win32isms and cruft.
<scottm@somanetworks.com>
    [PATCH] 2.5.45 CompactPCI driver patch 3/4
    This is patch 3 of 4 of my CompactPCI hotplug core and drivers, consisting of the Ziatech ZT5550 hotplug driver. The hardware banging code in this driver started its life in the PICMG 2.12 driver code that MontaVista released at the end of 2001.
<scottm@somanetworks.com>
    [PATCH] 2.5.45 CompactPCI driver patch 4/4
    This is patch 4 of 4 of my CompactPCI hotplug core and drivers, consisting of the generic port I/O cPCI hotplug driver. Let me know if the kernel parameter parsing code that's #ifndef MODULE is objectionable. I spent quite a while today testing it; it seems reasonably robust. Without it, this driver would only be usable as a module, which I've not figured out how to do with the new kernel configuration stuff.
<greg@kroah.com>
    [PATCH] PCI Hotplug: removed a compiler warning about an unused variable in the cpcihp_generic driver.
<greg@kroah.com>
    [PATCH] PCI Hotplug: fix compiler warning.
<jung-ik.lee@intel.com>
    [PATCH] 2.5.45 PCI Fixups for PCI HotPlug
    The following patch changes function scopes only but fixes a kernel dump on Hot-Add of PCI bridge cards.
<t-kouchi@mvf.biglobe.ne.jp>
    [PATCH] ACPI PCI hotplug updates
    These are updates of the acpiphp driver for 2.5.
    - change debug flag from 'acpiphp_debug' to 'debug' for insmod
    - whitespace cleanup
    - message cleanup
<dougg@torque.net>
    Attached is an addition to the patches on this driver that I've been posting recently. This one adds:
    - slave_attach() and slave_detach()
    - code clean up looking for a problem **
    - more debug code allowing the scanning cmd sequence to be seen in the log (when opts=1)
    ** after several (never the first) sequences of modprobe/rmmod on scsi_debug there is either:
    - an oops during modprobe when driverfs tries to create a directory
    - or a WARN_ON() at drivers/base/bus.c:277 during rmmod [examples attached]
    I'm not sure whether the problem is in scsi_debug, the scsi mid level or in the driverfs code. Grepping indicates that not many people currently utilize per driver parameters with driverfs (i.e. driver_create_file() and driver_remove_file()).
<anton@samba.org>
    ppc64: initramfs fixes
<anton@samba.org>
    ppc64: updates for 2.5.45
<anton@samba.org>
    ppc64: numa updates
<davem@nuts.ninka.net>
    [SPARC]: Add POLLREMOVE.
<davem@nuts.ninka.net>
    [SPARC]: Add sys_remap_file_pages syscalls.
<davem@nuts.ninka.net>
    [NET]: Add NET_PKTGEN.
<jmorris@intercode.com.au>
    [CRYPTO]: Rework HMAC interface.
<kuznet@ms2.inr.ac.ru>
    [NET]: IPSEC updates.
    - Add ESP transformer.
    - Add AF_KEY socket layer.
    - Rework xfrm structures for user interfaces
    - Add CONFIG_IP_{AH,ESP}.
<davem@nuts.ninka.net>
    [SPARC]: Fix typo in ESP changes.
<davem@nuts.ninka.net>
    [SPARC]: Fix typos in QLOGICPTI changes.
<Andries.Brouwer@cwi.nl>
    [TCP] Do not update rcv_nxt until ts_recent is updated.
<willy@debian.org>
    [kbuild]: Use include_config instead of include-config.
<davem@nuts.ninka.net>
    [CRYPTO]: Include kernel.h in crypto.h
<davem@nuts.ninka.net>
    [NET]: Fix xfrm policy locking.
<davem@nuts.ninka.net>
    [SPARC64]: Translate SO_{SND,RCV}TIMEO socket options.
<rmk@flint.arm.linux.org.uk>
    [MTD] Fix mtdblock.c build error
    Move spin_unlock_irq() down one line.
<rmk@flint.arm.linux.org.uk>
    [ARM] Clean up sa1100 hardware specific headers.
    Remove implementation specific header files from arch-sa1100/hardware.h
    Move SA1101_[pv]2[vp] into SA-1101.h.
<rmk@flint.arm.linux.org.uk>
    [SERIAL] Fix up ARM serial drivers
    This cset makes ARM serial drivers build.
<rmk@flint.arm.linux.org.uk>
    [ARM] Fix typo in arch/arm/mm/Makefile
    The typo prevented ARM926 cpu enabled builds from succeeding.
<hch@lst.de>
    [PATCH] get rid of ->init in osst
    Since osst is the last driver still implementing ->init, and Willem said he's going to do a major update including a resync with st anyway, I think it's okay to put this hack in for now. Instead of ->init being directly called from the midlayer, osst_attach now calls it at the beginning - it has internal protection so that the initialization will be performed only once anyway.
<hch@lst.de>
    [PATCH] proper scsi_devicelist handling
    Factor the code that calls methods of every device template on a scsi_device out into three helper functions in scsi.c, make scsi_devicelist static to it and add a r/w semaphore to protect it. Make scsi_host_list and scsi_host_hn_list static to hosts.c and remove the never used scsi_host_tmpl_list (we only add to it and remove from it but never traverse it).
<davem@nuts.ninka.net>
    [SPARC64]: Handle kernel integer divide by zero properly.
<hch@lst.de>
    [PATCH] get rid of global arrays in sr
    Similar cleanup to the recent sd patch: allocate the scsi_cd struct in sd_attach instead of needing the global array and sd_init.
    Tested with a DVD reader/CD writer combination and ide-scsi.
<zaitcev@redhat.com>
    [SPARC]: Update makefiles for current kbuild.
<zaitcev@redhat.com>
    [SPARC]: Streamlined probing for Zilog.
<zaitcev@redhat.com>
    [SPARC]: Cleanups and bug fixes.
    - vac property enumeration pollutes namespace
    - LEON sparc needs extra nop in task switch
    - Comment out debugging printk.
<rth@dorothy.sfbay.redhat.com>
    Zero UNIQUE on exec.
<jgarzik@redhat.com>
    Alan snuck an ugly bandaid into the de2104x net driver... add a #warning so it's not forgotten.
<kai@tp1.ruhr-uni-bochum.de>
    kbuild: initramfs updates
    Use ld to link the cpio archive into the image; the build was broken before because it required a recent version of objcopy. Plus assorted cleanups:
    o Don't include arch/$(ARCH)/Makefile, export the needed arch-specific flags instead.
    o Name the generated section consistently .init.ramfs everywhere.
<erik@aarg.net>
    [PATCH] USB: added support for Palm Tungsten T devices to visor driver
<rth@dorothy.sfbay.redhat.com>
    Merge bits from entry-rewrite tree:
    * Whitespace cleanups in entry.S
    * Pass in pt_regs by reference, not by fake value, to some entry points.
    * Don't export wrusp.
<rth@dorothy.sfbay.redhat.com>
    Fix single denorm -> double conversion.
    Patch from George France <france@handhelds.org>.
<anton@samba.org>
    ppc64: updates from Dave Engebretsen in 2.4
<rth@dorothy.sfbay.redhat.com>
    More merging from entry-rewrite tree:
    Implement sys_sethae, osf_getpriority, sys_getxuid, sys_getxgid, sys_getxpid, sys_pipe, alpha_ni_syscall directly in assembly.
    Bounce alpha_create_module, sys_ptrace through an assembly stub to pass pt_regs by reference.
<rusty@rustcorp.com.au>
    [PATCH] Initializer conversions for drivers/block
    The old form of designated initializers is obsolete: we need to replace them with the ISO C forms before 2.6. Gcc has always supported both forms anyway.
<anton@samba.org>
    ppc64: rework ppc64 hashtable management
<anton@samba.org>
    ppc64: defconfig update
<hch@lst.de>
    [PATCH] get rid of sg_init
    Next step of my ->init removal series. sg does a few too many weird things with its global array, so I prefer to leave it to Doug to get rid of that (if he wants to), but this patch at least gets rid of sg_init.
    Move the register_chrdev to init_sg - open properly checks whether the device exists, so this doesn't cause any harm. Remove the initial allocation of the device array - the resizing code in sg_attach will properly take care of it when called the first time.
    Tested with a DVD reader/CD writer combination and ide-scsi.
<davem@nuts.ninka.net>
    [AF_KEY]: Convert to/from IPSEC_PROTO_ANY.
<davem@nuts.ninka.net>
    [NET]: XFRM policy bug fixes.
    - Fix dst metric memcpy length.
    - Iterator for walking skb sec_path goes in wrong direction.
<kuznet@ms2.inr.ac.ru>
    [IPSEC]: Bug fixes and updates.
    - Implement IP_IPSEC_POLICY setsockopt
    - Rework input policy checks to use it
    - dst->child destruction is repaired
    - Fix tunnel mode IP header building.
<davem@nuts.ninka.net>
    [SUNZILOG]: uart_event --> uart_write_wakeup.
<davem@nuts.ninka.net>
    [SPARC64]: Add initramfs sections.
<davem@nuts.ninka.net>
    [SPARC]: Add initramfs bits.
<davem@nuts.ninka.net>
    [SCTP]: Convert to xfrm_policy_check.
<davem@nuts.ninka.net>
    [TCP_IPV6]: Remove unused label discard_and_relse.
<davem@nuts.ninka.net>
    [IPSEC]: Export xfrm_policy_list.
<kai@tp1.ruhr-uni-bochum.de>
    kbuild: Fix up initramfs, adapt arch/alpha
    Grrh, don't do last minute changes without retesting.
    Adapt arch/alpha as well; other archs need to
    o add LDFLAGS_BLOB to arch/$(ARCH)/Makefile
    o add .init.ramfs to arch/$(ARCH)/vmlinux.lds.S
    See arch/i386/{Makefile,vmlinux.lds.S} for guidance ;)
<alan@lxorguk.ukuu.org.uk>
    [PATCH] binfmt flat uses zlib and clean up dependency rules (first "default" rule takes precedence)
<alan@lxorguk.ukuu.org.uk>
    [PATCH] final eata polish
<alan@lxorguk.ukuu.org.uk>
    [PATCH] 2.5.46 - aha1740 update
<alan@lxorguk.ukuu.org.uk>
    [PATCH] first pass eata-pio updates
<alan@lxorguk.ukuu.org.uk>
    [PATCH] fd_mcs finish up I hope
<alan@lxorguk.ukuu.org.uk>
    [PATCH] silly typo fix
<alan@lxorguk.ukuu.org.uk>
    [PATCH] fix 5380 prototype for biosparam
<alan@lxorguk.ukuu.org.uk>
    [PATCH] bring ibmmca into line
<alan@lxorguk.ukuu.org.uk>
    [PATCH] in2000 new_eh and locking fixes
<alan@lxorguk.ukuu.org.uk>
    [PATCH] tidy the 53c406, kill off old header
<alan@lxorguk.ukuu.org.uk>
    [PATCH] NCR5380 fix the locking fix fix
<alan@lxorguk.ukuu.org.uk>
    [PATCH] kill old reset stuff in nsp - it supports new_eh anyway
<alan@lxorguk.ukuu.org.uk>
    [PATCH] fix qlogicfas pcmcia build
<alan@lxorguk.ukuu.org.uk>
    [PATCH] u14f/34f build fix
<alan@lxorguk.ukuu.org.uk>
    [PATCH] printk levels for wd7000
<alan@lxorguk.ukuu.org.uk>
    [PATCH] first pass over ultrastor.c (still used for u24f)
<alan@lxorguk.ukuu.org.uk>
    [PATCH] NOMMU update for fs/locks.c
    Since we don't have mandatory mmap lock files we can lose this chunk
<alan@lxorguk.ukuu.org.uk>
    [PATCH] update the stat ifdef rule for v850
<alan@lxorguk.ukuu.org.uk>
    [PATCH] handle buggy PIT, also do delays spec requires
    This is used by the following Cyrix patch to handle buggy or spec tight PIT stuff
<alan@lxorguk.ukuu.org.uk>
    [PATCH] use the PIT bug workarounds rather than killing TSC
<alan@lxorguk.ukuu.org.uk>
    [PATCH] add pit_latch to headers to avoid warnings
<cloos@lugabout.jhcloos.org>
    sbp2.h: Update sbp2scsi_biosparam() declaration to match sbp2.c
    sbp2.c: s/capacy/capacity/
<davej@codemonkey.org.uk>
    [PATCH] Use better compiler
    flags for Cyrix 3. From 2.4
<davej@codemonkey.org.uk>
    [PATCH] revamped machine check exception support.
    - Split out from bluesmoke.c into per-vendor files (Me) (If we were that way inclined, we could even make the per-vendor bits CONFIG_ options, but that's probably overkill)
    - Fixes Kconfig markup. (Roman Zippel)
    - P4 can use non-fatal background checker too. (Venkatesh Pallipadi)
    - Don't clear MCA status info in case of non-recoverable errors; if the OS has failed to log it, the BIOS can still have a look at that info. (Venkatesh)
    - We can init bank 0 on P4 (Zwane Mwaikambo)
    - Compile away to nothing if CONFIG_X86_MCE=n
    - Various other cleaning (Me)
<rmk@flint.arm.linux.org.uk>
    [ARM] Make ARM SCSI drivers build
    2.5.46 appears to require drivers/scsi/scsi.h to be included before drivers/scsi/hosts.h. Make this happen in the Acorn SCSI drivers.
<scott.feldman@intel.com>
    e100 net driver: remove driver-isolated flag/lock. Other locks already cover the areas in question, and additionally this lock was held in areas where it should not have been, triggering error messages in 2.5.x.
<hch@sgi.com>
    [PATCH] fix intermezzo compile failure
    Intermezzo has some strange, broken code trying to deal with extended attributes and ACLs. Fortunately the xattr code is hidden under a config option that's never set, but the ACL code is enabled by CONFIG_POSIX_ACL, which is set by ext2/ext3 and jfs now. Change it to #if 0 to get intermezzo compiling again.
<yokota@netlab.is.tsukuba.ac.jp>
    [PATCH] NinjaSCSI-3R driver patch updated.
    NinjaSCSI-3R PCMCIA SCSI host adapter driver updated for the latest kernel tree.
<rmk@flint.arm.linux.org.uk>
    [ARM] Fixes for 2.5.46
    - Add LDFLAGS_BLOB definitions
    - Tweak kernel_thread for better code
    - Fix vmlinux-armv.lds.in to prevent ld complaining about the architecture private flags.
    (I'm not certain that the last item isn't a hole in some bug fix in ld - this fix appears to work with every binutils I've found thus far.
    However, if this suspected bug gets fixed, we're going to have to rethink how we combine binary objects into ELF objects.)
<akpm@digeo.com>
    [PATCH] `event' removal: core kernel
    Patch from Manfred Spraul
    f_version and i_version are used by filesystems to check whether the f_pos position can be reused across readdir calls without validation. Right now f_version and i_version are modified by
        f_version = ++event;
        i_version = ++event;
        if (f_version != i_version)
                goto revalidate
    and event is a global, exported variable. But that's not needed;
        f_version = 0;
        i_version++;
        if (f_version != i_version)
                goto revalidate
    works too, without the ugly 'event' variable.
    I got an ok from viro, and I had notified the fs maintainers; no complaints either.
    - block_dev.c, block_llseek updates f_version to '++event'. grep showed that no device driver uses f_version; this is dead code copied from the default llseek implementation.
    - the llseek implementations and get_empty_filp set f_version to '++event'. This is not dead code, but filp->f_version = 0 achieves the same effect: f_version is used by the readdir() implementation of several filesystems to skip the revalidation of f_pos at the beginning of a readdir call. If llseek was not called and the filesystem did not change since the last readdir call, then the value in f_pos can be trusted. The implementation (for example in ext2) is
        inode->i_version = ++event;
    in all operations that change a directory, and at the beginning of file_operation->readdir():
        if (inode->i_version != filp->f_version)
                revalidate();
        filp->f_version = inode->i_version;
    There are other users of f_version, but none of them use the default llseek implementation (e.g. fs/pipe.c)
<akpm@digeo.com>
    [PATCH] `event' removal: ext2
    Patch from Manfred Spraul
    Use a local counter instead of the global 'event' variable for the readdir() optimization. Depends on patch-event-II
    Background: The only user of i_version and f_version in ext2 is ext2_readdir().
    As an optimization, ext2 performs the validation of the start position for readdir() only if filp->f_version != inode->i_version. If there was no llseek and no directory change since the last readdir() call, then f_pos can be trusted.
    f_version is set to 0 in get_empty_filp and during llseek. Right now, i_version is set to ++event during ext2_read_inode and commit_chunk, i.e. at inode creation and when a directory is changed. Initializing i_version to 1, and updating with i_version++, achieves the same effect without the need for a global variable. Global uniqueness is not required; there are no other uses of [if]_version in ext2.
    Change relative to the patch you have right now: i_version is initialized to 1 instead of 0. For ext2 it doesn't matter [there is always a valid 'len' value at the beginning of a directory data block], but it's cleaner.
<akpm@digeo.com>
    [PATCH] `event' removal: other filesystems
    Patch from Manfred Spraul
    Several filesystems compare f_version and i_version to validate directory positions in readdir(): the directory position is revalidated if i_version is not equal to f_version. Operations that could invalidate the cached position set i_version or f_version to '++event'; event is a global variable. Global uniqueness is not needed: 'i_version++' and 'f_version=0' are sufficient to guarantee that the next readdir() will revalidate the directory position, and that avoids the need for an ugly global variable.
    The attached patch converts all filesystems except ext2, which was converted with a separate patch.
<akpm@digeo.com>
    [PATCH] `event' removal: kill it
    Final act, from Manfred: The attached patch removes 'event' entirely from the kernel: it's not used anymore. All event users [vfat dentry revalidation; ext2/3 inode generation; readdir() file position revalidation in several filesystems] were converted to local counters.
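The counter scheme described in the `event'-removal entries above can be modeled in a few lines. This is a standalone sketch - the struct and function names mirror the kernel fields but are hypothetical, not kernel code:

```c
#include <stdbool.h>

/* Miniature model of the f_version/i_version revalidation scheme:
 * per-inode counter, per-file cached copy, no global 'event'. */
struct mini_inode { unsigned long i_version; };
struct mini_file  { unsigned long f_version; struct mini_inode *inode; };

/* Any operation that changes the directory bumps its private counter. */
void mini_dir_changed(struct mini_inode *inode)
{
    inode->i_version++;
}

/* llseek forces the next readdir to revalidate f_pos. */
void mini_llseek(struct mini_file *filp)
{
    filp->f_version = 0;
}

/* readdir prologue: returns true when f_pos must be revalidated,
 * and caches the current directory version on the file. */
bool mini_readdir_revalidates(struct mini_file *filp)
{
    if (filp->f_version != filp->inode->i_version) {
        filp->f_version = filp->inode->i_version;
        return true;
    }
    return false;   /* nothing changed: f_pos can be trusted */
}
```

Back-to-back readdir calls with no intervening llseek or directory change skip revalidation, exactly the optimization the patch preserves without the global counter.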
<akpm@digeo.com>
    [PATCH] fix mod_timer() race
    If two CPUs run mod_timer against the same not-pending timer then they have no locking relationship. They can both see the timer as not-pending and they both add the timer to their cpu-local list. The CPU which gets there second corrupts the first CPU's lists. This was causing Dave Hansen's 8-way to oops after a couple of minutes of specweb testing.
    I believe that to fix this we need locking which is associated with the timer itself. The easy fix is hashed spinlocking based on the timer's address. The hard fix is a lock inside the timer itself. It is hard because init_timer() becomes compulsory, to initialise that spinlock. An unknown number of code paths in the kernel just wipe the timer to all-zeroes and start using it.
    I chose the hard way - it is cleaner and more idiomatic. The patch also adds a "magic number" to the timer so we can detect when a timer was not correctly initialised. A warning and stack backtrace are generated and the timer is fixed up. After 16 such warnings the warning mechanism shuts itself up until a reboot.
    It took six patches to my kernel to stop the warnings from coming out. The uninitialised timers are extremely easy to find and fix. But it will take some time to weed them all out. Maybe we should go for the hashed locking...
    Note that the new timer->lock means that we can clean up some awkward "oh we raced, let's try again" code in timer.c. But to do that we'd also need to take timer->lock in the commonly-called del_timer(), so I left it as-is. The lock is not needed in add_timer() because concurrent add_timer()/add_timer() and concurrent add_timer()/mod_timer() are illegal.
<akpm@digeo.com>
    [PATCH] timers: initialisers
    Add some infrastructure for statically initialising timers; use that in workqueues.
<akpm@digeo.com>
    [PATCH] timers: scsi
    The patches which I needed to avoid the warnings with my build.
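The shape of the mod_timer() fix described above - a lock and a magic number carried in the timer itself - can be sketched in a single file. This is an illustration, not the 2.5 implementation; the names are hypothetical and a pthread mutex stands in for the kernel's per-timer spinlock:

```c
#include <pthread.h>
#include <stdbool.h>

#define MINI_TIMER_MAGIC 0x4b87ad6eUL

struct mini_timer {
    pthread_mutex_t lock;    /* stands in for the per-timer spinlock */
    unsigned long   magic;   /* detects timers that skipped init */
    bool            pending;
    unsigned long   expires;
};

/* init becomes compulsory: it is what makes the lock usable. */
void mini_init_timer(struct mini_timer *t)
{
    pthread_mutex_init(&t->lock, NULL);
    t->magic = MINI_TIMER_MAGIC;
    t->pending = false;
    t->expires = 0;
}

/* The magic number lets the kernel warn about a wiped-to-zeroes timer. */
bool mini_timer_initialised(const struct mini_timer *t)
{
    return t->magic == MINI_TIMER_MAGIC;
}

/* Two CPUs calling this on the same not-pending timer now serialise on
 * t->lock, so only one can observe !pending and enqueue it. Returns
 * true if the timer was newly added, false if it was only re-armed. */
bool mini_mod_timer(struct mini_timer *t, unsigned long expires)
{
    bool was_pending;

    pthread_mutex_lock(&t->lock);
    was_pending = t->pending;
    t->pending = true;       /* enqueue on the (notional) cpu-local list */
    t->expires = expires;
    pthread_mutex_unlock(&t->lock);
    return !was_pending;
}
```

The check-then-add sequence is atomic under the lock, which is precisely the property the unlocked not-pending path lacked.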
<davem@nuts.ninka.net>
    [SPARC64]: Define LDFLAGS_BLOB
<akpm@digeo.com>
    [PATCH] timers: drivers/*
    Results of a quick pass through everything under drivers/. We're mostly OK in there. I will have missed some.
<akpm@digeo.com>
    [PATCH] timers: input, networking
    More timer micropatches.
<anton@samba.org>
    [PATCH] fix slab allocator for non zero boot cpu
    The slab allocator doesn't initialise ->array for all cpus. This means we fail to boot on a machine with boot cpu != 0. I was testing current 2.5 BK. Luckily Rusty was at hand to explain the ins and outs of initialisers to me.
<mdharm-usb@one-eyed-alien.net>
    [PATCH] USB storage: move init of residue to a central place
    This patch moves the initialization of the SCSI residue field to just a couple of places, instead of all over the map. It's code consolidation.
<mdharm-usb@one-eyed-alien.net>
    [PATCH] USB storage: fix result code checks
    This patch fixes up some result-code tests that were missed in previous patches.
<mdharm-usb@one-eyed-alien.net>
    [PATCH] USB storage: check for abort at higher levels
    This patch adds tests for an aborted command to higher-level functions. This allows faster exit from a couple of paths and will allow code consolidation in the lower-level transport functions.
<randy.dunlap@verizon.net>
    [PATCH] usb-midi requires SOUND
    usb-midi requires SOUND; otherwise, when built in-kernel but soundcore is modular, usb-midi can't resolve some sound interfaces.
<akpm@digeo.com>
    [PATCH] use timer initialiser in workqueues
    Teach DECLARE_WORK about __TIMER_INITIALIZER, so all statically initialised workqueues have valid timers. eg: drivers/char/random.c:batch_work.
<ahaas@airmail.net>
    [PATCH] C99 designated initializers for fs/ext2
    This converts the new ACL bits in fs/ext2 to use C99 designated initializers.
<ahaas@airmail.net>
    [PATCH] C99 designated initializers for fs/ext3
    This fixes the new ACL bits in fs/ext3 to use C99 designated initializers.
<davem@nuts.ninka.net>
    [IPSEC/CRYPTO]: Allocate work buffers instead of using kstack.
<davem@nuts.ninka.net>
    [NET]: Copy msg_namelen back to user in recv{from,msg} even if it is zero.
<hch@lst.de>
    [PATCH] allow registering individual HBAs
    Yes, this is the patch every maintainer of a modern HBA has waited for these last years </shameless plug>.
    With all my recent changes there's no more reason to call scsi_register_host except for the initialization it performs for every host found in scsi_register. But a driver can just as well do that at the end of its per-HBA detection routine (i.e. in ->probe for a modern PCI driver), so export that code as scsi_add_host to drivers. Do the same for the release path (scsi_remove_host). Such a new-style driver needs neither ->detect nor ->release and is in theory hotplug-capable (well, once all the races in the scsi midlayer are fixed..)
<davem@nuts.ninka.net>
    [IPSEC]: RAWv4 makes inverted policy check.
<hch@lst.de>
    [PATCH] scsi device template cleanups
    Now that .init isn't implemented anymore we can get rid of it and do some more cleanup in the scsi device template:
    * remove .blk - unused since 2.5.46
    * remove .dev_noticed - the only midlayer user is gone together with .init; the remaining instance is now driver-private
    * remove .nr_dev and .dev_max - they're purely driver internal, and at least in sd and sr they'll be completely gone very soon.
<jejb@mulgrave.(none)>
    split sg.c changes out of Christoph Hellwig's template changes
<akpm@digeo.com>
    [PATCH] initialise timers in sound/
    The result of a timer audit in sound/*
<manfred@colorfullife.com>
    [PATCH] `i_version' initialization fix
    Ahm. No. It must be
        i_version = 1
    Otherwise there is a trivial bug:
        mkdir("dir");
        <force the directory out of dcache>
        dir = open("dir");
        lseek(dir, 1, SEEK_SET);
        readdir();
    lseek sets f_version to 0, and readdir() trusts f_pos, because i_version is 0, too. This applies to all filesystems. The ext2 patch already sets i_version to 1.
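The i_version bug Manfred describes above reduces to a one-line comparison. The sketch below is hypothetical illustration, not kernel code: a fresh inode carries the chosen initial i_version, lseek has zeroed the file's f_version, and readdir skips revalidation whenever the two are equal:

```c
#include <stdbool.h>

/* Returns true when readdir would trust f_pos on a freshly-read inode
 * after an lseek - which is wrong unless the position was validated. */
bool mini_trusts_fpos(unsigned long i_version_init)
{
    unsigned long i_version = i_version_init;  /* fresh inode, no dir changes yet */
    unsigned long f_version = 0;               /* lseek() zeroed it */

    return f_version == i_version;             /* equal => skip revalidation */
}
```

With i_version initialised to 0, the zeroed f_version matches and the bogus f_pos from lseek is trusted; initialising to 1 keeps the counters unequal, so the first readdir revalidates as intended.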
<rmk@flint.arm.linux.org.uk>
    [SERIAL] serial bits from -ac (from Alan Cox)
    This adds support for 68328, 68360, MCF and NB85E serial drivers.
<torvalds@penguin.transmeta.com>
    Avoid gcc warning, and clean up current text address handling (it's "current_text_addr()", not the home-brew gcc label magic)
<akpm@digeo.com>
    [PATCH] initialize timers under arch/
    This completes the kernel-wide audit.
<rmk@flint.arm.linux.org.uk>
    [MTD] mtdblock devices are called mtdblock%d not mtd%d
<jgarzik@redhat.com>
    Remove performance barrier in i810_rng char driver.
    In order to conserve CPU, the read(2) syscall would schedule_timeout unconditionally. This also crippled speed, and was a bad design decision. This cset merges the updated read(2) logic of the sister driver amd768_rng from Alan, which schedules only when it needs to.
    On my test system, by one microbenchmark, read(2) output jumped from 0.08 kbit/s to "what Intel expects" of 20 kbit/s.
    End users may notice a significant decrease in idle time after this change (and a correspondingly large increase in /dev/hwrng user speed), if /dev/hwrng is used to its maximum capacity.
<dhinds@sonic.net>
    [PATCH] small attribution fixes
    This cleans up some obsolete email addresses.
<dhinds@sonic.net>
    [PATCH] PCMCIA network driver update
    This brings several PCMCIA network drivers into sync with 2.4 and the pcmcia-cs package. The axnet_cs driver gets a major cleanup.
<dhinds@sonic.net>
    [PATCH] more PCMCIA fixes for 2.5
    include/pcmcia/ciscode.h
    o added product ID's for a few more cards
    drivers/net/pcmcia/fmvj18x_cs.c
    o Added MODULE_DESCRIPTION
    o Added support for RATOC cards
    o Added support for Nextcom NC5310B cards
    o Added support for SSi 78Q8370 chipset
    o Added support for TDK GN3410 multifunction cards
    o Better errno for failed module initialization
    o Cleaned up whitespace
    drivers/net/pcmcia/smc91c92_cs.c
    o Added full duplex support for smc91c100 based cards
    o Better errno for failed module initialization
    o Synced up naming of stuff to match pcmcia-cs version
    o Cleaned up whitespace
    drivers/pcmcia/cardbus.c drivers/pcmcia/cistpl.c drivers/pcmcia/cs_internal.c
    o Fixed card identification bug triggered by invoking certain PCMCIA tools when cardmgr is not running.
<dhinds@sonic.net>
    [PATCH] PCMCIA updates for 2.5, #4
    drivers/ide/legacy/ide-cs.c:
    o Added MODULE_{AUTHOR,DESCRIPTION}, fixed MODULE_LICENSE
    o Added support for (Panasonic) KME KXLC005 cards
    o Better errno for failed module initialization
    drivers/parport/parport_cs.c
    o Fixed it so it actually works
    o Removed cruft for old kernels
    o Better errno for failed module initialization
<mingo@elte.hu>
    [PATCH] thread-aware coredumps, 2.5.43-C3
    This is the second iteration of thread-aware coredumps. Changes:
    - Ulrich Drepper has reviewed the data structures and checked actual coredumps via readelf - everything looks fine and according to the spec.
    - a serious bug has been fixed in the thread-state dumping code - it was still based on the 2.4 assumption that the task struct points to the kernel stack - it's task->thread_info in 2.5. This bug caused bogus register info to be filled in for threads.
    - fill_psinfo() is now called with the thread group leader, for the coredump to get 'process' state.
    - initialize the elf_thread_status structure with zeroes.
    The IA64 ELF bits are not included yet, to reduce complexity of the patch.
    The patch has been tested on x86 UP and SMP.
<torvalds@home.transmeta.com>
    The crypto auto-load should only be enabled if crypto is enabled.
<trond.myklebust@fys.uio.no>
    [PATCH] Make ->readpages palatable to NFS
    The following patch makes the ->readpages() address_space_operation take a struct file argument just like ->readpage().
<trond.myklebust@fys.uio.no>
    [PATCH] Convert NFS client to use ->readpages()
    - Add the library function read_cache_pages(), which is used in a similar fashion to the single page 'read_cache_page()'. It hides the details of the LRU cache etc. from a filesystem that wants to populate an address space with a list of pages.
    - Fix NFS so that readahead uses the ->readpages() interface. Means that we can immediately schedule an RPC call in order to complete the I/O, rather than relying on somebody later triggering it by calling lock_page() (and hence sync_page()). The sync_page() method is race-prone, since the waiting page may try to call it before we've finished initializing the 'struct nfs_page'.
    - Clear out nfs_sync_page(), the nfs_inode->read list, and friends. When the I/O completion gets scheduled in ->readpage(), ->readpages(), they have no reason to exist.
<akpm@digeo.com>
    [PATCH] init timers under fs/
    There's only the one, in XFS.
<torvalds@home.transmeta.com>
    Fix floppy timer initialization
<rmk@flint.arm.linux.org.uk>
    [ARM] Fix Acorn RISCPC mouse input driver
    - Always pass a dev_id when allocating a shared interrupt
    - Correct Y axis direction
    - Correct order of mouse buttons
    - Correct polarity of mouse buttons
    - Rename CONFIG_MOUSE_ACORN to CONFIG_MOUSE_RISCPC
<rmk@flint.arm.linux.org.uk>
    [ARM] Make rpckbd.c compile
    - Add missing interrupt.h include
    - Bring up to date wrt serio code
    - CONFIG_SERIO_ACORN should be CONFIG_SERIO_RPCKBD
<rmk@flint.arm.linux.org.uk>
    [ARM] Make ambakmi.c compile
    - Remove obsolete include
<rmk@flint.arm.linux.org.uk>
    [ARM] Update RISC PC and Neponset default configurations
<rmk@flint.arm.linux.org.uk>
    [GEN] Update credits + maintainers files for ARM people.
<patmans@us.ibm.com>
    [PATCH] fix 2.5 scsi queue depth setting
    This patch fixes queue depth setting of scsi devices. This is done by pairing shost->slave_attach() calls with scsi_build_commandblocks calls in the new scsi_slave_attach. This is a patch against linux-scsi.bkbits.net/scsi-for-linus-2.5 after applying the last posted hch version of the "Eliminate scsi_host_tmpl_list" patch; it still applies with offset to the current scsi-for-linus-2.5.
    It also:
    Will properly call shost->slave_attach after a scsi_unregister_device() followed by a scsi_register_device() - as could happen if you were able to rmmod all upper level drivers and then insmod any of them back (only possible when not booted on scsi).
    Checks for scsi_build_commandblocks() allocation failures.
    Sets queue depth even if shost->slave_attach() does not call scsi_adjust_queue_depth.
    Removes the use of revoke (no drivers are setting it; it was only called via the proc scsi remove-single-device interface).
    There are at least two problems with sysfs and scsi (one in sysfs, one in scsi, I'll try and post more soon ...) so I could not completely test rmmod of an adapter or upper level driver without leading to an oops or shutdown hang.
    hosts.c              |    5 --
    hosts.h              |    6 --
    osst.c               |    9 ++-
    scsi.c               |  118 +++++++++++++++++++++++++++++++--------------------
    scsi.h               |    2
    scsi_mid_low_api.txt |   24 ----------
    scsi_scan.c          |    9 ---
    sd.c                 |   10 +++-
    sg.c                 |   10 ++--
    sr.c                 |    7 ++-
    st.c                 |   11 +++-
    11 files changed, 106 insertions(+), 105 deletions(-)
    ===== drivers/scsi/hosts.c 1.23 vs edited =====
<edward_peng@dlink.com.tw>
    sundance net driver updates:
    - fix crash while unloading driver
    - fix previous fixes to apply only to specific chips
    - new tx scheme, improves high-traffic stability, not racy
<nathans@sgi.com>
    [XFS] Fix an oversight in the mount option parsing code which would result in a kernel panic on certain option strings.
    SGI Modid: 2.5.x-xfs:slinx:130571a
<nathans@sgi.com>
    [XFS] Fix the handling of the realtime device on the mount path - this was broken a few weeks ago with the rework of the target device pointer between the xfs_mount and pb_target structures.
    SGI Modid: 2.5.x-xfs:slinx:130572a
<nathans@sgi.com>
    [XFS] Minor header reorg to get xfs_lrw.h back into line with the other linux headers. Allows us not to repeat the xfs_stratcb declaration in several places. Also rename linvfs_set_inode_ops to xfs_set_inodeops since it's an auxiliary routine, not a linvfs method.
    SGI Modid: 2.5.x-xfs:slinx:130573a
<nathans@sgi.com>
    [XFS] Fix compile error from non-DMAPI enabled builds.
    SGI Modid: 2.5.x-xfs:slinx:130575a
<nathans@sgi.com>
    [XFS] Fix xfs_da_node_split handling of dir/attr buffers for filesystems built with a directory block size larger than the filesystem (and hence attr) blocksize. This does not affect filesystems built with default mkfs.xfs parameters, and only hits when a large number of attributes are set on an inode.
    SGI Modid: 2.5.x-xfs:slinx:130577a
<hch@sgi.com>
    [XFS] fix jiffies (lbolt) compare
    SGI Modid: 2.5.x-xfs:slinx:130589a
<hch@sgi.com>
    [XFS] remove nopkg() alias for ENOSYS
    SGI Modid: 2.5.x-xfs:slinx:130598a
<hch@lab343.munich.sgi.com>
    [XFS] Move a couple of routines with knowledge of pagebuf targets, block devices, and struct inodes down in with the rest of the Linux-specific code.
    SGI Modid: 2.5.x-xfs:slinx:130824a
<nathans@sgi.com>
    [XFS] The revalidate routine is now a local, static inline elsewhere, so no longer needs to be declared globally here.
    SGI Modid: 2.5.x-xfs:slinx:130827a
<sandeen@sgi.com>
    [XFS] Avoid creating attrs for acls which can be stored in the standard permission bits, and remove existing attrs if acls are reduced to standard permissions.
    SGI Modid: 2.5.x-xfs:slinx:130256a
<hch@sgi.com>
    [XFS] fix NULL pointer dereference in pagebuf
    SGI Modid: 2.5.x-xfs:slinx:130709a
<sandeen@sgi.com>
    [XFS] pagebuf flags cleanup
    SGI Modid: 2.5.x-xfs:slinx:130823a
<sandeen@sgi.com>
    [XFS] Fix root exec access checks on files with acls
    SGI Modid: 2.5.x-xfs:slinx:130837a
<hch@sgi.com>
    [XFS] remove inode reference cache
    SGI Modid: 2.5.x-xfs:slinx:131130a
<sandeen@sgi.com>
    [XFS] Remove tabs from printk's
    SGI Modid: 2.5.x-xfs:slinx:131185a
<hch@sgi.com>
    [XFS] fix kNFSD operation
    SGI Modid: 2.5.x-xfs:slinx:131214a
<nathans@sgi.com>
    [XFS] Fix a couple of issues on the error path when dealing with external devices (log/realtime). path_init was missing the LOOKUP_POSITIVE flag, so it would fail to tell us if the file doesn't exist; there was a spot where we were returning the wrong signedness for the code; and when mount is failing, we can call into xfs_blkdev_put with a NULL pointer depending on which devices were initialised and which weren't.
    SGI Modid: 2.5.x-xfs:slinx:131469a
<nathans@sgi.com>
    [XFS] Fix compile error with XFS_BIG_FILESYSTEMS set.
SGI Modid: 2.5.x-xfs:slinx:131618a

<sandeen@sgi.com> [XFS] Prevent a couple transactions from happening on ro mounts
SGI Modid: 2.5.x-xfs:slinx:131187a

<lord@sgi.com> [XFS] Contributed fix from ASANO Masahiro <masano@tnes.nec.co.jp>. In calculating the layout of a log record for a buffer, the linux code deals with buffers which are not contiguous in memory - this only applies to an inode buffer. This adds one more fragmentation case to the code, and a line was missing from this. The end result would be the logging of too much data if this was not the last component of the buffer. The code was definitely wrong, but I think the chances of hitting this were pretty slim, and the resulting error would only matter if there was a crash shortly afterward.
SGI Modid: 2.5.x-xfs:slinx:131221a

<hch@sgi.com> [XFS] more dead code removal
SGI Modid: 2.5.x-xfs:slinx:131386a

<tytso@snap.thunk.org> Fix illegal sleep in mbcache.
This patch from Andreas Gruenbacher fixes an illegal sleep trace.

<cattelan@sgi.com> [XFS] Fix fsx corruption.
SGI Modid: 2.5.x-xfs:slinx:131438a

<lord@sgi.com> [XFS] fix loop termination logic in xfs_sync
SGI Modid: 2.5.x-xfs:slinx:131490a

<cattelan@sgi.com> [XFS] narrow down comment
SGI Modid: 2.5.x-xfs:slinx:131504a

<lord@sgi.com> [XFS] break out the allocator specific parts of the xfs I/O path into a separate file, xfs_iomap.c, out of xfs_lrw.c. Remove some parts of the code which were not doing anything for us. This is step one in some major reorgs of this code.
SGI Modid: 2.5.x-xfs:slinx:131524a

<sandeen@sgi.com> [XFS] Be more careful about quota state changes on ro-devices. We can't allow quota state changes on a read-only device; this would kick off a failing transaction & shut down the fs. Previously the test was quota/no quota, but we need to disallow any change wrt user and/or group quota state.
SGI Modid: 2.5.x-xfs:slinx:131554a

<sandeen@sgi.com> [XFS] Remove a couple other readonly device change remnants
SGI Modid: 2.5.x-xfs:slinx:131565a

<tytso@snap.thunk.org> Add '.' and '..' entries to be returned by readdir of htree directories
This patch from Chris Li adds '.' and '..' to the rbtree so that they are properly returned by readdir.

<lord@sgi.com> [XFS] remove VPURGE
SGI Modid: 2.5.x-xfs:slinx:131630a

<lord@sgi.com> [XFS] remove excess vn_remove from the unmount path
SGI Modid: 2.5.x-xfs:slinx:131939a

<lord@sgi.com> [XFS] Add XFS_POSIX_ACL to control ACL compilation in xfs
SGI Modid: 2.5.x-xfs:slinx:132045a

<hch@sgi.com> [XFS] Don't require ACL helpers for XFS
SGI Modid: 2.5.x-xfs:slinx:132176a

<hch@sgi.com> [XFS] Fix up some Kconfig merging issues

<porter@cox.net> PPC32: Add new arch/ppc/syslib/ directory for "system library" code.
This changeset moves all "system library" code to this directory. System library code includes all common libraries of routines (PIC, system controller, host bridge, kernel feature enablement are all examples of things that belong here). The existing arch/ppc/kernel/ directory keeps all "core" CPU support. Cache handling, basic cpu startup, tlb manipulation, and core kernel code all belong here. The arch/ppc/platforms/ directory now contains only platform family specific files. For SoC processors this includes the OCP glue-code that defines an SoC family.

<patmans@us.ibm.com> Re: [PATCH] fix 2.5 scsi queue depth setting
On Wed, Nov 06, 2002 at 01:50:00PM -0500, J.E.J. Bottomley wrote:
> I'm OK with that, since the drivers can be audited as they're moved over
> to slave attach. It also works for drivers that use older hardware (like
> the 53c700) which don't call adjust_queue_depth from slave attach, but
> slightly later on when they've really verified the device will accept
> tags. In this case, I don't want the mid layer to call adjust_queue_depth
> for me even if I leave slave_attach with only one command allocated.
OK, here it is again, as discussed, plus it calls scsi_release_commandblocks on slave_attach failure.

<tytso@snap.thunk.org> Check for failed kmalloc() in ext3_htree_store_dirent()
This patch checks for a failed kmalloc() in ext3_htree_store_dirent(), and passes the error up to its caller, ext3_htree_fill_tree().

<hch@sgi.com> [XFS] Fix compilation with ACLs enabled
SGI Modid: 2.5.x-xfs:slinx:132214a

<hch@sgi.com> export find_trylock_page for XFS

<dledford@aladin.rdu.redhat.com> aic7xxx_old: multiple updates and fixes, driver ported to scsi mid-layer new error handling scheme

<rmk@flint.arm.linux.org.uk> [MTD] Avoid bad pointer dereferences in mtd partition cmd line parsing
In response to RMK's message to ipaq@handhelds.org (), checking the return value from memparse() before using the output pointers when parsing mtd partition specs. Patch from Dave Neuer.

<rmk@flint.arm.linux.org.uk> [ARM] Actually update Neponset default configuration.
A previous cset claimed that it updated the default configuration. It lies. This cset does though. (why does bk allow deltas to be created for files with no changes?)

<kuznet@ms2.inr.ac.ru> [IPSEC]: Semantic fixes with help from Maxim Giryaev.
- BSD apps want sin_zero cleared in sys_getname.
- Fix protocol setting in flow descriptor of RAW sockets
- Wildcard protocol is represented differently in policy than for association.
- Missing update of key manager sequence in xfrm_state entries.

<akpm@digeo.com> [NET]: Timer init fixes.

<davem@nuts.ninka.net> [SPARC64]: Include asm/uaccess.h in asm/elf.h

<uzi@uzix.org> [SPARC64]: 0x22/0x10 is Ultra-I/spitfire.

<jmorris@intercode.com.au> [CRYPTO]: Add SHA256 plus bug fixes.
- Bugfix in sha1 copyright
- Add support for SHA256, test vectors and HMAC test vectors
- Remove obsolete atomic messages.

<davem@nuts.ninka.net> [CRYPTO]: Add in crypto/sha256.c

<jmorris@intercode.com.au> [CRYPTO]: Add blowfish algorithm.

<kuznet@ms2.inr.ac.ru> [IPSEC]: Few changes to keep racoon ISAKMP daemon happy.
<edward_peng@dlink.com.tw> dl2k net driver update from vendor:
* ethtool support
* changed default media to auto-negotiation
* fix disconnect bug
* fix RMON statistics overflow
* always use io mapping to access eeprom

<tytso@snap.thunk.org> Fix ext3 htree rename bug.
This fixes an ext3 htree bug pointed out by Christopher Li; if adding the new name to the directory causes a split, this can cause the directory entry containing the old name to move to another block, and then the removal of the old name will fail.

<tytso@snap.thunk.org> Fix meta_bg compatibility with e2fsprogs 1.30.
The algorithm for finding the block group descriptor blocks for the future on-line resizable ext2/3 format change got out of sync with what was actually shipped in e2fsprogs 1.30. (And what is in e2fsprogs 1.30 is better, since it avoids free block fragmentation at the beginning of the block group.) This change is safe, since no one is actually using the new meta_bg block group layout just yet.

<tytso@snap.thunk.org> Fix and simplify port of Orlov allocator to ext3.
My ext3 port of the Orlov allocator was buggy; the block group descriptor counts for free inodes and directories were getting doubly decremented / incremented. This patch fixes this, as well as simplifying the code.

<trond.myklebust@fys.uio.no> [PATCH] Fix typo in nfs_readpages.
Make sure we drain the entire list of pages that failed to get added to the mapping.

<neilb@cse.unsw.edu.au> [PATCH] md: Misc little raid fixes
Roughly in order of patch:
1/ xor.h is never needed in md.c
2/ set sb_loaded when we 'sync' mddev to rdev as well as when we load sb into rdev from disk.
2/ due to lifetime changes, active count can be 2 when we stop array
3/ due to lifetime changes, we need to explicitly clear the ->pers when we stop an array
4/ autostart can only work for 0.90.0 superblocks. None others would be silly enough to store device numbers for all devices in the superblock...
5/ we had lost the setting of 'sb' when auto-starting an array.
6/ Code currently calls export_rdev(start_rdev) when IS_ERR(start_rdev), which causes an oops.
7/ /proc/mdstat contents error: code does not take into account that delayed resyncs can wait with curr_resync = 1 or 2.
8/ There is a premature "return NOTIFY_DONE", that possibly was in for debugging once...

<neilb@cse.unsw.edu.au> [PATCH] md: Fix assorted raid1 problems.
From Angus Sawyer <angus.sawyer@dsl.pipex.com>:
1. Null pointer dereference in end_sync_read
r1_bio->read_disk is not initialised correctly in sync_request. This is used in end_sync_read to reference the structure conf->mirror[read_disk].rdev, which with one disk missing is NULL.
2. Null pointer dereference in mempool_free()
This is a race between close_sync() conf->r1_bufpool = NULL and put_buf() mempool_free().

    bio completion -> resume_device -> put_buf -> mempool_free(r1_bufpool)
                          [ wakeup ]
    close_sync()   -> r1_bufpool = NULL;

The patch attached reorders the mempool_free before the barrier is released and merges resume_device() into put_buf(), (they are only used together). Otherwise I have kept the locking and wakeups identical to the existing code. (maybe this could be streamlined)
3. BUG() at close_sync() if (waitqueue_active(&conf->wait_resume)).
This occurs with and without the patch for (2). I think this is a false BUG(). From what I understand of the device barrier code, there is nothing wrong with make_request() waiting on wait_resume when this test is made. Therefore I have removed it (the wait_idle test is still correct).
4. raid1 tries to start a resync if there is only one working drive, which is pretty pointless, and noisy. We notice that special case and avoid the resync.

<neilb@cse.unsw.edu.au> [PATCH] md: Fix bug in raid5
When analysing a stripe in handle_stripe we set bits R5_Wantread or R5_Wantwrite to indicate if a read or write is needed.
We don't actually schedule the IO immediately as this is done under a spinlock (sh->lock) and generic_make_request can block. Instead we check these bits after the lock has been lifted and then schedule the IO. But once the lock has been lifted we aren't safe against multiple access, and it is possible that the IO will never be scheduled, or be scheduled twice. So, we use test_and_clear to check and potentially schedule the IO. This wasn't a problem in 2.4 because the equivalent information was stored on the stack instead of in the stripe. We also make sure bi_io_vec[0] has correct values as a previous call to generic_make_request may have changed them.

<neilb@cse.unsw.edu.au> [PATCH] md: Fix another two bugs in raid5
A partial block write over a block on a failed device would need to pre-read that block, which means pre-read all blocks in the stripe and generate that block. But the generate-block code never checked for this possibility, so it wouldn't happen.

<neilb@cse.unsw.edu.au> [PATCH] kNFSd: Use ->sendpage to send nfsd (and lockd) replies.
From Hirokazu Takahashi <taka@valinux.co.jp>
As all rpc server replies are now in well defined pages, we can use ->sendpage to send these replies, and so make use of zero-copy transmit on network cards that support it.

<neilb@cse.unsw.edu.au> [PATCH] kNFSd: Support zero-copy read for NFSD
From Hirokazu Takahashi <taka@valinux.co.jp>
This patch changes read and readdir in nfsd.
read: If the file supports readpage, we use it to collect pages out of the page cache and to attach them directly to the outgoing nfs reply. This reduces the number of copies by one, and if the filesystem/device driver didn't copy the data, and if the network card can support not copying the data, then you get zero-copy reads.
readdir: A separate page is used for storing the readdir response so that a full PAGE_SIZE bytes of reply can be supported.
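The raid5 fix a few entries above hinges on checking and clearing the R5_Wantread/R5_Wantwrite bits in a single atomic step once the spinlock is gone. A user-space sketch of that idiom, with C11 atomics standing in for the kernel's test_and_clear_bit (the helper names and the counter are invented for illustration):

```c
#include <assert.h>
#include <stdatomic.h>

#define R5_Wantread  0  /* bit positions; names borrowed from the changelog */
#define R5_Wantwrite 1

/* Atomically test a bit and clear it; returns the old value. Of several
 * racing callers, exactly one can observe 1 for a given bit. */
static int test_and_clear_bit_sketch(int nr, atomic_uint *flags)
{
    unsigned int mask = 1u << nr;
    unsigned int old = atomic_fetch_and(flags, ~mask);
    return (old & mask) != 0;
}

static int schedule_count = 0;

static void maybe_schedule_io(atomic_uint *flags, int bit)
{
    /* Checking and clearing in one atomic step means the IO is
     * scheduled exactly once -- never zero times, never twice. */
    if (test_and_clear_bit_sketch(bit, flags))
        schedule_count++;
}
```

Calling `maybe_schedule_io` twice for the same bit schedules the IO exactly once, which is precisely the property the patch needs after the lock is dropped.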
<neilb@cse.unsw.edu.au> [PATCH] kNFSd: Make sure final xdr_buf.len is correct on server reply
rq_res->len was not always updated properly. It is only needed in the sendto routine, so we calculate it just before that is called, and don't bother updating it anywhere else.

<neilb@cse.unsw.edu.au> [PATCH] kNFSd: Convert readlink to use a separate page for returning symlink contents.
This allows NFSv3 to manage 4096byte symlinks. Also remove now-unused svcbuf_reserver function. This was used to reserve space in output buffer for 'data', but now this is stored in separate page.

<neilb@cse.unsw.edu.au> [PATCH] kNFSd: Make sure svc_process releases response even on error.
If a rpc operation indicates that response should be dropped (e.g. kmalloc failure) we must still call pc_release to release anything it may have allocated.

<davidel@xmailserver.org> [PATCH] epoll bits 0.34
- Some constant adjusted
- Better hash initialization
- Correct timeout setup
- Added __KERNEL__ bypass to avoid userspace inclusion problems
- Cleaned up locking
- Function poll_init_wait() now calls poll_init_wait_ex()
- Event return fix ( Jay Vosburgh )
- Use <linux/hash.h> for the hash

<paulus@samba.org> [PATCH] Update macserial driver
This updates the macserial driver in 2.5 so it compiles and works. The main changes are to use schedule_work instead of task queues and BHs. The patch also removes the wait_key method. I know we need to change macserial to use the new serial infrastructure. I'm posting this patch in case it is useful to anyone trying to compile up a kernel for a powermac at the moment.

<paulus@samba.org> [PATCH] Update powermac IDE driver
This updates the powermac IDE driver in 2.5 so it uses the 2.5 kernel interfaces and types rather than the 2.4 ones. It also makes it use blk_rq_map_sg rather than its own code to set up scatter/gather lists in pmac_ide_build_sglist, and makes it use ide_lock instead of io_request_lock.
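Several entries in this digest track the in-progress epoll work; the interface that eventually stabilized is the familiar epoll_create/epoll_ctl/epoll_wait triple. A minimal Linux-only sketch using the modern syscalls (`wait_readable` is an invented helper, not part of the patches above):

```c
#include <assert.h>
#include <sys/epoll.h>
#include <unistd.h>

/* Wait up to timeout_ms for fd to become readable; returns 1 if ready,
 * 0 on timeout. One-shot use of an epoll instance for clarity -- a real
 * event loop would keep the epoll fd around and register many fds. */
static int wait_readable(int fd, int timeout_ms)
{
    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);

    struct epoll_event out;
    int n = epoll_wait(epfd, &out, 1, timeout_ms);
    close(epfd);
    return n == 1 && (out.events & EPOLLIN) != 0;
}
```

With an empty pipe the call times out; after a write to the other end it reports readiness immediately.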
<paulus@samba.org> [PATCH] Fix typo in sl82c105.c driver
This fixes a minor typo in sl82c105.c which stops it from compiling.

<rjweryk@uwo.ca> [PATCH] Make VT8653 work with AGP
This makes VT8653 (VIA Apollo Pro266T) work with AGP. I had someone test it and verify it works.

<neilb@cse.unsw.edu.au> [PATCH] Support latest NVRAM card from micromemory.
Just a new PCI ID (and get twice the MegaHz :-).

<paulus@samba.org> PPC32: Make flush_icache_page a no-op, do the flush in update_mmu_cache.

<davem@nuts.ninka.net> [CRYPTO]: Make sha256.c more palatable to GCC's optimizers.

<jmorris@intercode.com.au> [CRYPTO]: minor updates
- Fixed min keysize bug for Blowfish (it is 32, not 64).
- Documentation updates.

<paulus@samba.org> PPC32: define MAP_POPULATE, MAP_NONBLOCK, POLLREMOVE

<paulus@samba.org> PPC32: add new syscalls: lookup_dcookie, epoll_*, remap_file_pages

<bart.de.schuyer@pandora.be> [BRIDGE]: Fix help docs.

<paulus@samba.org> PPC32: make the idle loop able to be platform-specific.

<trini@kernel.crashing.org> PPC32: Define default settings for advanced config options.
This simplifies the C code by removing some #ifdefs.

<paulus@samba.org> PPC32: Fix up the arch-specific export list.
We need __div64_32 exported, and flush_icache_page is now a noop so it shouldn't be exported.

<trini@kernel.crashing.org> PPC32: Remove more #ifdefs now that the config defines suitable defaults for the advanced kernel config options.

<paulus@samba.org> PPC32: More sensible arrangement of the sections in vmlinux.lds.S.
This moves the sections which are read-only (e.g. exception table, kallsyms data) to go before the read/write data section, and the feature fixup section into the init data area. It also adds the initramfs section.

<paulus@samba.org> PPC32: Improved support for PReP platforms, forward-ported from 2.4.

<paulus@samba.org> PPC32: Remove powermac SCSI boot disk discovery code.
This didn't compile since sd_find_target is gone, and is to move into userspace anyway.
<paulus@samba.org> PPC32: Remove AFLAGS for arch/ppc/mm/hashtable.o, not needed now.

<paulus@samba.org> PPC32: Define CLONE_UNTRACED for assembler code, fix a too-long branch

<paulus@samba.org> PPC32: Fixes for the Makefiles under arch/ppc/boot.
With these changes the boot wrapper successfully builds, although this may not be the absolute best way to do things.

<paulus@samba.org> PPC32: Increase max kernel size in boot wrapper, fix compile warnings

<anton@samba.org> ppc64: small fixes for updates in BK

<anton@samba.org> ppc64: defconfig update

<dhowells@redhat.com> [PATCH] add missing __exit specifications
This adds some missing __exit specifications which lead to a failure to link the AFS code directly into the kernel.

<willy@debian.org> [PATCH] C99 initialisers
C99 initialiser conversion; some from Rusty, some from me.

<willy@debian.org> [PATCH] initramfs support
Support initramfs on parisc

<willy@debian.org> [PATCH] misc updates
- CREDITS & MAINTAINERS updates
- changes for the new kstat/dkstat struct
- Kconfig updates
- L_TARGET isn't obsolete yet
- fix the sys_truncate/truncate64 issue properly this time
- add MAP_POPULATE & MAP_NONBLOCK definitions

<willy@debian.org> [PATCH] generic prefetch support in xor.h
Add prefetching support to asm-generic/xor.h. This gives a healthy speedup on both PA-RISC & IA64.

<willy@debian.org> [PATCH] support non-rt signals >32
On PA-RISC, SIGRTMIN is 36, so a 32-bit data type is not enough. We conditionalise it so other arches don't pay the cost.

<jgarzik@redhat.com> Merge DaveM's cleanup of Broadcom's GPL'd 4401 net driver

<paulus@samba.org> The patch below contains some minor updates to the bmac and mace ethernet drivers used on powermacs. The bmac.c change is just to remove some compile warnings. The mace.c change is to move an inline function definition to before the point where it is used.

<fscked@netvisao.pt> Convert 3c505 net driver to use spinlocks instead of cli/sti

<mzyngier@freesurf.fr> More znet net driver updates.
Driver now survives plug/unplug of cable.

<jgarzik@redhat.com> Use dev_kfree_skb_any not dev_kfree_skb in tg3 net driver function tg3_free_rings. Spotted by DaveM.

<jgarzik@redhat.com> Properly terminate b44 net driver's PCI id table (caught by Arjan @ Red Hat)

<jt@hpl.hp.com> IrDA update 1/3:
<Following patch from Martin Diehl>
o [CRITICAL] Do all serial driver config change in process context
o [CORRECT] Safe registration of dongle drivers
o [FEATURE] Rework infrastructure of SIR drivers
o [CORRECT] Port irtty driver to new SIR infrastructure
o [CORRECT] Port esi/actisys/tekram driver to new SIR infrastructure
<Note : there is still some more work to do around SIR drivers, such as porting other drivers to the new infrastructure, but this is functional and tested, and old irtty is broken>

<jgarzik@redhat.com> IrDA update 2/3:
(Adrian Bunk)
* C99 initializers
* fix public symbol name conflict
(me)
* further clean up namespace on donauboe driver in module_init/exit area

<jt@hpl.hp.com> IrDA update 3/3:
<Thanks to Martin Diehl>
o [CORRECT] Handle non-linear and shared skbs
o [CORRECT] Tell kernel we can handle multithreaded receive
<Of course, this has been tested extensively on SMP>

<zippel@linux-m68k.org> [PATCH] remove old config tools
This deletes the old config tools and moves Michael's maintainer entry for them to CREDITS, and I added myself for KCONFIG instead.

<zippel@linux-m68k.org> [PATCH] various kconfig updates
Various small kconfig updates to fix all the reported little problems and the single menu mode for menuconfig by Petr Baudis <pasky@ucw.cz>.

<zippel@linux-m68k.org> [PATCH] kconfig documentation update
This removes the old documentation, adds the new one and fixes all references to it.

<cel@citi.umich.edu> [PATCH] allow nfsroot to mount with TCP
nfsroot needs to pass the network protocol (UDP/TCP) into the mount functions in order to support mounting root partitions via NFS over TCP.
<cel@citi.umich.edu> [PATCH] too many setattr calls from VFS layer
New code in the 2.5 VFS layer invokes notify_change to clear the suid and sgid bits for every write request. notify_change needs to optimize out calls to ->setattr that don't do anything, because for many network file systems, an on-the-wire SETATTR request is generated for every ->setattr call, resulting in unnecessary latency for NFS writes.

<cel@citi.umich.edu> [PATCH] bug in NFSv2 end-of-file read handling
NFSv2 doesn't pass connectathon 2002, at least on some Linux kernels. Trond deemed the following modification necessary in all kernels to address the problem.

<ahaas@airmail.net> [PATCH] C99 designated initializers for fs/hugetlbfs/inode.c

<rohit.seth@intel.com> [PATCH] Broken Hugetlbpage support in 2.5.46
The hugetlb page support in 2.5.46 is broken (I don't know if this is the first kernel version with the problem, or whether prior revs also have it). Basically the free side of hugepages was really not freeing the physical resources (for the cases when the pages were allocated using the system call interface). Attached is the patch that should resolve it. (It doesn't break the hugetlbfs support either.)

<willy@debian.org> [PATCH] CONFIG_STACK_GROWSUP
Change ARCH_STACK_GROWSUP to CONFIG_STACK_GROWSUP as requested.

<anton@samba.org> ppc64: initramfs update

<hch@infradead.org> [PATCH] read(v)/write(v) fixes
Clean up the vfs_readv/writev() interface and avoid code duplication. Make kNFSd use the cleaned-up interfaces, and disable the code that accesses the low-level readpage() function of the exported filesystem (not allowed - many filesystems need extra setup, which is why we have a separate ->sendpage() routine for that).

<anton@samba.org> ppc64: merge some ioctl32.c changes from sparc64

<kuznet@ms2.inr.ac.ru> [IPSEC] More work.
1. Expiration of SAs. Some missing updates of counters. Question: very strange, rfc defines use_time as time of the first use of SA. But kame setkey refers to this as lastuse.
2.
Bug fixes for tunnel mode and forwarding.
3. Fix bugs in per-socket policy: policy entries do not leak but are destroyed when a socket is closed, and are cloned on children of listening sockets.
4. Implemented use policy: i.e. use ipsec if a SA is available, ignore if it is not.
5. Added sysctl to disable in/out policy on some devices. It is set on loopback by default.
6. Remove resolved reference from template. It is not used, but pollutes code.
7. Added all the SASTATEs, now they make sense.

<anton@samba.org> ppc64: fix misc_register usage from Michael Still

<bhards@bigpond.net.au> [SCTP]: Remove duplicate include.

<bhards@bigpond.net.au> [NETFILTER]: Remove duplicate include.

<rth@dorothy.sfbay.redhat.com> Fix merge error in do_entArith: don't send SIGFPE on successful emulation. From Ivan.

<kuznet@ms2.inr.ac.ru> [IPSEC]: Fix lockup in xfrm4_dst_check.

<torvalds@home.transmeta.com> From Rick Lindsley <ricklind@us.ibm.com>: missing return value in sysfs partition code.

<ahaas@airmail.net> [PATCH] C99 designated initializer for kernel/cpufreq.c

<ahaas@airmail.net> [PATCH] C99 designated initializers for fs/affs

<ahaas@airmail.net> [PATCH] C99 designated initializers for fs/umsdos

<ahaas@airmail.net> [PATCH] C99 designated initializers for fs/fat

<ahaas@airmail.net> [PATCH] C99 designated initializer for include/linux/cpufreq.h

<ahaas@airmail.net> [PATCH] C99 designated initializers for drivers/char

<davidm@napali.hpl.hp.com> [PATCH] let binfmt_misc optionally preserve argv[1]
This makes it possible for binfmt_misc to optionally preserve the contents of argv[1]. This is needed for building accurate simulators which are invoked via binfmt_misc. I had brought up this patch a while ago (see URL below) and there was no negative feedback (OK, there was no feedback at all... ;-). The patch is trivial and the new behavior is triggered only if the letter "P" (for "preserve") is appended to the binfmt_misc registration string, so it should be completely safe.
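The string of "C99 designated initializer" patches above all make the same mechanical change; a stand-alone illustration (the `ops` struct is a made-up stand-in for kernel structures like file_operations):

```c
#include <assert.h>

struct ops {
    int (*open)(void);
    int (*release)(void);
    int flags;
};

static int my_open(void)    { return 1; }
static int my_release(void) { return 2; }

/* Old positional style -- fragile if the struct layout ever changes:
 *   static struct ops o = { my_open, my_release, 0 };
 * C99 designated initializers name each member explicitly; member
 * order no longer matters, and unnamed members are implicitly zeroed. */
static struct ops o = {
    .release = my_release,
    .open    = my_open,
};
```

The same struct definition keeps working even if fields are reordered or new fields are inserted, which is exactly why the kernel converted its operation tables wholesale.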
<rjweryk@uwo.ca> [PATCH] Fix ALSA emu10k1 bass control
This trivial patch fixes a mixer problem with the emu10k1 driver in ALSA. In sound/pci/emu10k1/emufx.c, the line
static const u32 bass_table[41][5] = {
only has 40 lines defined, instead of 41. This results in no sound output when the bass control is set at 100% (but works fine at 98%). I added the missing line, which is present in the OSS emu10k1 driver.

<davidel@xmailserver.org> [PATCH] The epoll saga continues ...
Proper wakeup code in ep_insert and ep_modify

<hch@lst.de> [PATCH] page zero is not mapped on m68knommu

<hch@lst.de> [PATCH] ksize of uClinux
find out the effective size of a kmalloc()ed object, needed by uClinux but also useful for the "normal" ports, thus not ifdef'ed.

<hch@lst.de> [PATCH] exec.c uClinux bits
Stub out put_dirty_page and setup_arg_pages for !CONFIG_MMU and add free_arg_pages that frees all arg pages (noop for CONFIG_MMU)

<trond.myklebust@fys.uio.no> [PATCH] Add nfs_writepages & backing_dev...
The following patch adds a simple ->writepages method that interprets the extra information passed down in Andrew's writeback_control structure, and translates it into nfs-speak. It also adds a backing_dev_info structure that scales the readahead in terms of the rsize. Maximum readahead is still 128k if you use 32k rsize, but it is scaled down to 4k if you use 1k rsize.

<trond.myklebust@fys.uio.no> [PATCH] Make nfs_find_request() scale
nfs_find_request() needs to be called every time we schedule a write on the page cache. Currently it is implemented as a linked list which needs to be traversed completely in the case where we don't already have a pending write request on the page in question. The following patch adopts the new radix tree, as is already used in the page cache. Performance change is more or less negligible with the current hard limit of 256 outstanding write requests per mount.
However, when I remove this limit, the old nfs_find_request() actually results in a 50% reduction in speed on my benchmark test (iozone with 4 threads each writing a 512Mb file on a 512Mb Linux client against a Solaris server on 100Mbit switched net). With this patch, the result for the same benchmark is a 50% increase in speed.

<trond.myklebust@fys.uio.no> [PATCH] add an NFS memory pool
Ensure that we can still flush out a minimum number of read, write, and commit requests if memory gets low.

<maalanen@ra.abo.fi> [PATCH] block/loop.c kfree error
Label in wrong place.

<axboe@suse.de> [PATCH] ide-cd patchlet
o Correct printk() format, from Marcelo Roberto Jimenez
o Check for NULL address in cdrom_newpc_intr() and bail

<axboe@suse.de> [PATCH] soft and hard barriers
Right now we have one type of barrier in the block layer, and that is used mainly for making sure that the io scheduler doesn't reorder requests when we don't want it to. We also need a flag to tell the io scheduler and low level queue that this is a barrier. So basically two needs:
o software barrier, prevents the io scheduler from reordering
o hardware barrier, driver must prevent drive from reordering
So this patch gets rid of REQ_BARRIER and instead adds REQ_SOFTBARRIER and REQ_HARDBARRIER.

<axboe@suse.de> [PATCH] make 16 the default fifo_batch count
Lets just make the default fifo_batch count 16. I see a slight slope in throughput, but the various interactiveness improvements are worth it, imho. Plus this gets Andrew off my back, he's been lobbying for this for a while.

<thchou@ali.com.tw> [PATCH] Update pci id for ALi chipset series

<sfbest@us.ibm.com> [PATCH] add missing jfs_acl.h

<axboe@suse.de> [PATCH] enable ide to use bios timings
This is the 2nd version Torben did, basically the same as the one from yesterday but with symbolic tune defines instead of more magic numbers.
I think the feature is good to have, and it would even allow good ide performance for an unsupported chipset as long as the bios sets the timings right. From Torben Mathiasen.

<samel@mail.cz> [PATCH] fix documentation in include/asm-i386/bitops.h
When I was searching for the prototype for set_bit() I found IMHO wrong doc entries in include/asm-i386/bitops.h.

<linux@hazard.jcu.cz> [PATCH] fix do_timer.h compiler warning
warning: implicit declaration of function `smp_local_timer_interrupt'

<bunk@fs.tum.de> [PATCH] Labeled elements are not a GNU extension
Labeled elements are not a GNU extension but part of C99.

<ahaas@neosoft.com> [PATCH] designated initializer patches for fs/devfs

<james_mcmechan@hotmail.com> [PATCH] added include needed to compile centaur.c for 2.5.46-bk1

<bunk@fs.tum.de> [PATCH] generic_fillattr() duplicate line. (fwd)
The duplicate line was introduced by Al's "[PATCH] (1/5) beginning of getattr series." patch and is still present in 2.5.45.

<rusty@rustcorp.com.au> [PATCH] vmalloc.h needs pgprot_t
Again, uncovered in PPC compile.

<dave@qix.net> [PATCH] 2.4 drivers/char/random.c fix sample shellscripts
This fixes the sample shellscripts given in the comments of drivers/char/random.c. The scripts save and restore random seeds for /dev/random across reboots.

<peter@chubb.wattle.id.au> [PATCH] Fix name of discarded section in modules.h
Changeset willy@debian.org|ChangeSet|20021016154637|46581 in linux 2.5 changed the name of .exit.text to .text.exit. Unfortunately, one change got missed. Fix.

<mikal@stillhq.com> [PATCH] [Trivial Patch] journal_documentation-001
This corrects DocBook formatting errors in the journalling API documentation which stopped a "make psdocs" from working.

<pavel@ucw.cz> [PATCH] Clean up nbd.c
I've never seen any of those errors, so I guess it's okay to convert them to BUG_ONs. It makes code look better.
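The nfs_find_request() conversion above trades a full linked-list walk for an indexed radix-tree lookup. A toy two-level index (a drastic simplification of the kernel's radix tree, with invented names) shows the shape of the win: a lookup costs two pointer hops regardless of population, where a list miss costs a walk over every pending request:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy "radix" index: two 6-bit levels cover indices 0..4095.
 * The real kernel radix tree generalizes this to arbitrary height.
 * (Leaf arrays are never freed here -- fine for a sketch.) */
#define SLOT_BITS 6
#define SLOTS     (1 << SLOT_BITS)

struct toy_tree {
    void *level1[SLOTS];   /* each slot points at a leaf array */
};

static void toy_insert(struct toy_tree *t, unsigned idx, void *item)
{
    void **leaf = (void **)t->level1[idx >> SLOT_BITS];
    if (!leaf) {
        leaf = calloc(SLOTS, sizeof(void *));
        t->level1[idx >> SLOT_BITS] = leaf;
    }
    leaf[idx & (SLOTS - 1)] = item;
}

static void *toy_lookup(struct toy_tree *t, unsigned idx)
{
    /* Descend one level per 6-bit chunk of the index; a miss is
     * detected without visiting any other stored entry. */
    void **leaf = (void **)t->level1[idx >> SLOT_BITS];
    return leaf ? leaf[idx & (SLOTS - 1)] : NULL;
}
```

The benchmark numbers quoted above make sense in this light: with the 256-request cap the list is short and the difference is noise, but once the cap is lifted the linear walk dominates.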
<vberon@mecano.gme.usherb.ca> [PATCH] Small fix for Documentation/Changes (2.5)

<tim@physik3.uni-rostock.de> [PATCH] move _STK_LIM to <linux/resource.h>
I don't see any connection between the stack limit and scheduling. So I think _STK_LIM is better defined in <linux/resource.h> than in <linux/sched.h>. The only place STK_LIM is used is in <asm/resource.h>, which only gets included by <linux/resource.h>, so no change in #includes is necessary.

<linux@brodo.de> [PATCH] cpufreq: correct initialization on Intel Coppermines
cpufreq: Intel Coppermines -- the saga continues. The detection process for speedstep-enabled Pentium III Coppermines is considered proprietary by Intel. The attempt to detect this capability using MSRs failed. So, users need to pass the option "speedstep_coppermine=1" to the kernel (boot option or parameter) if they own a SpeedStep capable PIII Coppermine processor. Tualatins work as before.

<rddunlap@osdl.org> [PATCH] Fix sscanf("-1", "%d", &i)

<hch@lst.de> [PATCH] mpage.c is missing an include
Most arches seem to pull in mm.h implicitly, but at least m68knommu needs it explicitly.

<hch@lst.de> [PATCH] uClinux pgprot bits

<hch@lst.de> [PATCH] add a description to flat.h

<torvalds@home.transmeta.com> Avoid compiler warning. [un]likely() wants a boolean, not a pointer expression.

<cel@citi.umich.edu> [PATCH] remove unused NFS and RPC headers

<cel@citi.umich.edu> [PATCH] remove unused cl_flags field
The RPC clnt struct has a cl_flags field with one bit defined (in an NFS header, no less). No one ever sets the flag, so remove flag, field, and test in NFSv2 XDR routines that check for the flag.

<cel@citi.umich.edu> [PATCH] remove unused NFS cruft
remove some definitions and declarations that are no longer used.

<cel@citi.umich.edu> [PATCH] remove unused RPC cruft
smaller patch that removes unused RPC cruft.
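The sscanf("-1", "%d", &i) fix above is easy to state as a contract: per C, %d must consume an optional sign, so "-1" is one successful conversion yielding -1, not a matching failure. A check from user space:

```c
#include <assert.h>
#include <stdio.h>

/* Thin wrapper for clarity: returns the number of successful
 * conversions, exactly as sscanf itself reports. */
static int parse_int(const char *s, int *out)
{
    return sscanf(s, "%d", out);
}
```

Both a negative and a positive literal should report one conversion and leave the parsed value in the output argument.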
<cel@citi.umich.edu> [PATCH] minor TCP connect cleanup
TCP connect semantics now assume the rpciod is already running, so there is no longer a need to bump the rpciod semaphore when connecting or closing an RPC over TCP transport socket.

<cel@citi.umich.edu> [PATCH] use C99 static struct initializers
fix up the last remaining static struct initializers in the RPC client and portmapper.

<cel@citi.umich.edu> [PATCH] fix jiffies wrap in new RPC RTO estimator
the new RPC RTO estimator has some jiffies wrap problems.

<cel@citi.umich.edu> [PATCH] RTO estimator cleanup patch
clean up RPC client's RTO estimator.

<paulus@samba.org> [PATCH] Update ADB drivers in 2.5
This updates the ADB driver and the three low-level ADB bus adaptor drivers used on powermacs. The files affected are all in drivers/macintosh; they are adb.c, macio-adb.c, via-cuda.c and via-pmu.c. The main changes in this patch are:
- Remove the use of global cli/sti and replace them with local cli/sti, spinlocks and semaphores as appropriate.
- Use DECLARE_WORK/schedule_work instead of tq_struct/schedule_task.
- Improvements to the PMU interrupt handling and sleep/wakeup code.

<paulus@samba.org> [PATCH] remove obsolete powermac drivers
This removes two drivers from drivers/macintosh. First, the patch removes the old macintosh ADB keyboard driver and the macintosh keymap file, which are no longer used now that we use the input layer and the adbhid.c driver. Secondly, it removes the drivers/macintosh/rtc.c driver, which was only ever used on PPC, and which is obsolete now that we use the drivers/char/genrtc.c driver instead.

<pavel@ucw.cz> [PATCH] Typo in ide
Tiny cleanups in IDE...

<trond.myklebust@fys.uio.no> [PATCH] slabify the sunrpc layer
In order to better cope with low memory conditions, add slabs for struct rpc_task and 'small' RPC buffers of <= 2k. Protect these using mempools.
The only case where we appear to use buffers of > 2k is when symlinking, and is due to the fact that the path can be up to 4k in length. For the moment, we just use kmalloc(), but it may be worth it some time in the near future to convert nfs_symlink() to use pages.

<trond.myklebust@fys.uio.no>
[PATCH] Lift the 256 outstanding NFS read/write request limit.

Given the previous set of patches that integrate NFS with the VM + pdflush memory control, and add mechanisms to cope with low memory conditions, the time is now ripe to rip out the 256 outstanding request limit, as well as the associated LRU list in the superblock, and the nfs_flushd daemon. The following patch offers a 30% speed increase on my test setup with 512MB of core memory (iozone using 4 threads each writing a 512MB file over 100Mbit to a Solaris server). Setting mem=64m, I still see a 2-3% speed increase.

<torvalds@home.transmeta.com>
Bit find operations return past the end on failure.

<zwane@holomorphy.com>
[PATCH] do_nmi needs irq_enter/irq_exit lovin...

Use new "nmi_enter/exit()" which acts the same as the regular irq entries (increases the preempt count appropriately), but doesn't try to start processing softirqs on nmi exit (it just decreases the count).

<dhinds@sonic.net>
[PATCH] drivers/parport/parport_cs.c compilation problem

Sorry, one small goof...

<taka@valinux.co.jp>
[PATCH] enhance ->sendfile(), allowing kNFSd to use it

I enhanced the sendfile method so that we could pass a proper actor to it (which exposes the full power of the internal implementation). Now knfsd calls the sendfile vector rather than depending on a readpage() that hasn't been set up fully.
<zippel@linux-m68k.org>
[PATCH] kconfig update

- fix loading of another configuration
- accept longer strings in configuration
- move conf_filename to mconf.c (it's the only user)
- fix off-by-one error during string scanning

<manfred@colorfullife.com>
[PATCH] remove lock_kernel from fifo_open

Sufficient locking for fifo_open is provided by the inode semaphore.

<hch@lst.de>
[PATCH] switch over loop.c to ->sendfile

last direct call into fs code is gone

<trond.myklebust@fys.uio.no>
[PATCH] NFS coherency fix

DOH!!! Somebody clone me a replacement brain: I must have burnt another fuse. It turns out the new readpages was evading our read/write serialization. This broke things like 'ld' over NFS, which rewrites chunks of files it has already written.

<perex@suse.cz>
ALSA update - small patches

- added kmalloc_nocheck and vmalloc_nocheck macros
- PCM - the page callback returns 'struct page *'
- fixed delay function (moved put_user call outside spinlock)
- OSS PCM emulation - fixed read() lock when stream was terminated and no data is available
- EMU8000 - added 'can schedule' condition to snd_emu8000_write_wait()
- AC'97 - added ALC650 support
- ALI5451 - removed double free

<perex@suse.cz>
ALSA update

- Moved initialization of card->id to card_register() function. The new default id is composed from the shortname given by the driver.
- ES18xx - Fixed power management defines
- VIA82xx - The SG table is built inside hw_params (outside spinlock - memory allocation).
<perex@suse.cz>
ALSA update

- CS46xx driver
  - DSP is started after initializing AC97 codecs
  - rewrite SPDIF output stuff
  - variable period size support on playback and capture
  - DAC volume mechanism rewrite
  - IEC958 input volume mechanism rewrite
  - added "AC3 Mode Switch" in mixer
  - code cleanups
- ENS1371 driver
  - added definitions for the ES1373 chip
  - added code to control IEC958 (S/PDIF) channel status register

<perex@suse.cz>
ALSA update

- CS4231 - added sparc support to merge sparc/cs4231.c code
- ICE1712 - added support for AK4529, added support for Midiman M-Audio Delta410
- USB driver - fixed against newer USB API but allow compilation under 2.4

<akpm@digeo.com>
[PATCH] misc fixes

- Revert the 3c59x.c compile warning fixes. The return type of inl() was reverted back to the correct 32 bits.
- Fix an uninitialised timer in ext3 (JBD debug mode only) - run setup_ro_after() during initialisation.
- Fix ifdef/endif imbalance in JFS

<akpm@digeo.com>
[PATCH] Fix readv/writev return value

A patch from Janet Morgan <janetmor@us.ibm.com>

If you feed an iovec with a bad address not at the zeroth segment into readv or writev, it returns the wrong value.

iovec 1: base is 8050b20 len is 64
iovec 2: base is ffffffff len is 64
iovec 3: base is 8050ba0 len is 64

The writev should return 64 bytes but is returning 128. This is because we've added the new segment's length into `count' before running access_ok(). The patch changes it to fix that up on the slow path, if access_ok() fails.

<akpm@digeo.com>
[PATCH] SMP iowait stats

Patch from William Lee Irwin III <wli@holomorphy.com>

Idle time accounting is disturbed by the iowait statistics, for several reasons:

(1) iowait time is not subdivided among cpus.

The only way the distinction between idle time subtracted from cpus (in order to be accounted as iowait) can be made is by summing counters for a total and dividing the individual tick counters by the proportions.
Any tick type resolution which is not properly per-cpu breaks this, meaning that cpus which are entirely idle, when any iowait is present on the system, will have all idle ticks accounted to iowait instead of true idle time.

(2) kstat_read_proc() misreports iowait time

The idle tick counter is passed twice to the sprintf(), once in the idle tick position, and once in the iowait tick position.

(3) performance enhancement

The O(1) scheduler was very carefully constructed to perform accesses only to localized cachelines whenever possible. The global counter violates one of its core design principles, and the localization of "most" accesses is in greater harmony with its overall design and provides (at the very least) a qualitative performance improvement wrt. cache.

The method of correcting this is simple: embed an atomic iowait counter in the runqueues, find the runqueue being manipulated in io_schedule(), increment its atomic counter prior to schedule(), and decrement it after returning from schedule(), which is guaranteed to be the same one, as the counter incremented is tracked as a variable local to the procedure. Then simply sum to obtain a global iowait statistic. (Atomicity is required as the post-wait decrement may occur on a different cpu from the one owning the counter.)

io_schedule() and io_schedule_timeout() are moved to sched.c as they must access the runqueues, which are private to sched.c, and nr_iowait() is created in order to export the sum of all runqueues' nr_iowait().

<akpm@digeo.com>
[PATCH] hugetlb: fix zap_hugetlb_resources()

Patch from William Lee Irwin III <wli@holomorphy.com>

This patch eliminates zap_hugetlb_resources, along with its usages. This actually fixes bugs, as zap_hugetlb_resources was itself buggy.

<akpm@digeo.com>
[PATCH] hugetlb: remove unlink_vma()

Patch from William Lee Irwin III <wli@holomorphy.com>

This patch removes the unused function unlink_vma().
<akpm@digeo.com>
[PATCH] hugetlb: internalize hugetlb init

Patch from William Lee Irwin III <wli@holomorphy.com>

This patch internalizes hugetlb initialization, implementing a command-line option in the process.

<akpm@digeo.com>
[PATCH] hugetlb: remove sysctl.c intrusion

Patch from William Lee Irwin III <wli@holomorphy.com>

This patch removes hugetlb's intrusion into kernel/sysctl.c

<akpm@digeo.com>
[PATCH] hugetlb: remove /proc/ intrusion

Patch from William Lee Irwin III <wli@holomorphy.com>

This patch removes hugetlb's intrusion into /proc/

<akpm@digeo.com>
[PATCH] hugetlb: make private functions static

Patch from William Lee Irwin III <wli@holomorphy.com>

This patch makes various private structures and procedures static.

<akpm@digeo.com>
[PATCH] Fix math underflow in disk accounting

Patch from Lev Makhlis <mlev@despammed.com>

The disk accounting will overflow after 4,000,000 seconds. Extend that by a factor of 1000.

<akpm@digeo.com>
[PATCH] buffer_head refcounting fixes and cleanup

There was some strange code in the __getblk()/__find_get_block()/__bread() area which was performing multiple bh_lru_install() calls as well as multiple touch_buffer() calls. Fix all that up. We only need to run bh_lru_install() and touch_buffer() in __find_get_block(). Because if the block wasn't found, __getblk() will create it and will re-run __find_get_block().

Also document a few things and make a couple of internal symbols static to buffer.c

Also, don't run __find_get_block() from within unmap_underlying_metadata(). We hardly expect to find that block inside the LRU. And we hardly expect to use it as metadata in the near future so there's no point in letting it evict another buffer if we found it. So just go straight into the pagecache lookup for unmap_underlying_metadata().

<akpm@digeo.com>
[PATCH] fix page alloc/free accounting

We're currently incrementing /proc/vmstat:pgalloc in front of the per-cpu page queues, and incrementing /proc/vmstat:pgfree behind the per-cpu queues.
So they get out of whack. Change it so that we increment the counters each time someone requests a page. ie: they're both in front of the queues.

Also, remove a duplicated prep_new_page() call and as a consequence, drop the whole additional list walk in rmqueue_bulk().

<akpm@digeo.com>
[PATCH] remove duplicated disk statistics

This patch will break some userspace monitoring apps in the name of having sane disk statistics in 2.6.x.

Patch from Rick Lindsley <ricklind@us.ibm.com>

In 2.5.46, there are now disk statistics being collected twice: once for gendisk/hd_struct, and once for dkstat. They are collecting the same thing. This patch removes dkstat, which also had the disadvantage of being limited by DK_MAX_MAJOR and DK_MAX_DISK. (Those #defines are removed too.)

In addition, this patch removes disk statistics from /proc/stat since they are now available via sysfs and there seems to have been a general preference in previous discussions to "clean up" /proc/stat. Too many disks being reported in /proc/stat also caused buffer overflows when trying to print out the data.

The code in led.c from the parisc architecture has not apparently been recompiled under recent versions of 2.5, since it references kstat.dk_drive which doesn't exist in later versions. Accordingly, I've added an #ifdef 0 and a comment to that code so that it may at least compile, albeit without one feature -- a step up from its state now. If it is preferable to keep the broken code in, that patch may easily be excised from below.

<torvalds@home.transmeta.com>
Linux v2.5.47

Linux is a registered trademark of Linus Torvalds
http://lwn.net/Articles/15123/
Binary search is one of the most basic algorithms I know. Given a sorted list of comparable items and a target item being sought, binary search looks at the middle of the list, and compares it to the target. If the target is larger, we repeat on the larger half of the list, and vice versa. With each comparison the binary search algorithm cuts the search space in half. The result is a guarantee of no more than $\log_2 n$ comparisons, for a total runtime of $O(\log n)$. Neat, efficient, useful.

There’s always another angle. What if we tried to do binary search on a graph? Most graph search algorithms, like breadth- or depth-first search, take linear time, and they were invented by some pretty smart cookies. So if binary search on a graph is going to make any sense, it’ll have to use more information beyond what a normal search algorithm has access to.

For binary search on a list, it’s the fact that the list is sorted, and we can compare against the sought item to guide our search. But really, the key piece of information isn’t related to the comparability of the items. It’s that we can eliminate half of the search space at every step. The “compare against the target” step can be thought of as a black box that replies to queries of the form, “Is this the thing I’m looking for?” with responses of the form, “Yes,” or, “No, but look over here instead.” As long as the answers to your queries are sufficiently helpful, meaning they allow you to cut out large portions of your search space at each step, then you probably have a good algorithm on your hands.

Indeed, there’s a natural model for graphs, defined in a 2015 paper of Emamjomeh-Zadeh, Kempe, and Singhal that goes as follows. You’re given as input an undirected, weighted graph $G = (V, E)$, with weights $w_e$ for $e \in E$. You can see the entire graph, and you may ask questions of the form, “Is vertex $v$ the target?” Responses will be one of two things:

- Yes (you win!)
- No, but $e$ is an edge out of $v$ on a shortest path from $v$ to the true target.
Your goal is to find the target vertex with the minimum number of queries. Obviously this only works if $G$ is connected, but slight variations of everything in this post work for disconnected graphs. (The same is not true in general for directed graphs.)

When the graph is a line, this “reduces” to binary search in the sense that the same basic idea of binary search works: start in the middle of the graph, and the edge you get in response to a query will tell you in which half of the graph to continue. And if we make this example only slightly more complicated, the generalization should become obvious: here, we again start at the “center vertex,” and the response to our query will eliminate one of the two halves. But then how should we pick the next vertex, now that we no longer have a linear order to rely on? It should be clear: choose the “center vertex” of whichever half we end up in. This choice can be formalized into a rule that works even when there’s not such obvious symmetry, and it turns out to always be the right choice.

Definition: A median of a weighted graph $G$ with respect to a subset of vertices $S$ is a vertex $v$ (not necessarily in $S$) which minimizes the sum of distances to vertices in $S$. More formally, it minimizes $\Phi_S(v) = \sum_{w \in S} d(v, w)$, where $d(v, w)$ is the sum of the edge weights along a shortest path from $v$ to $w$.

And so generalizing binary search to this query-model on a graph results in the following algorithm, which whittles down the search space by querying the median at every step.

Algorithm: Binary search on graphs. Input is a graph $G = (V, E)$.

- Start with a set of candidates $S = V$.
- While we haven’t found the target and $|S| > 1$:
  - Query the median $v$ of $S$, and stop if you’ve found the target.
  - Otherwise, let $e$ be the response edge, and compute the set of all vertices $w$ for which $e$ is on a shortest path from $v$ to $w$. Call this set $T$.
  - Replace $S$ with $S \cap T$.
- Output the only remaining vertex in $S$.

Indeed, as we’ll see momentarily, a Python implementation is about as simple.
The meat of the work is in computing the median and the set $T$, both of which are slight variants of Dijkstra’s algorithm for computing shortest paths.

The theorem, which is straightforward and well written by Emamjomeh-Zadeh et al. (only about a half page on page 5), is that this algorithm requires only $O(\log n)$ queries, just like binary search.

Before we dive into an implementation, there’s a catch. Even though we are guaranteed only $O(\log n)$ many queries, because of our Dijkstra’s algorithm implementation, we’re definitely not going to get a logarithmic time algorithm. So in what situation would this be useful?

Here’s where we use the “theory” trick of making up a fanciful problem and only later finding applications for it (which, honestly, has been quite successful in computer science). In this scenario we’re treating the query mechanism as a black box. It’s natural to imagine that the queries are expensive, and a resource we want to optimize for. As an example the authors bring up in a followup paper, the graph might be the set of clusterings of a dataset, and the query involves a human looking at the data and responding that a cluster should be split, or that two clusters should be joined. Of course, for clustering the underlying graph is too large to process, so the median-finding algorithm needs to be implicit. But the essential point is clear: sometimes the query is the most expensive part of the algorithm.

Alright, now let’s implement it! The complete code is on Github as always.

Always be implementing

We start with a slight variation of Dijkstra’s algorithm. Here we’re given as input a single “starting” vertex, and we produce as output a list of all shortest paths from the start to all possible destination vertices. We start with a bare-bones graph data structure.
import heapq
import math
from collections import defaultdict
from collections import namedtuple

Edge = namedtuple('Edge', ('source', 'target', 'weight'))

class Graph:
    # A bare-bones implementation of a weighted, undirected graph
    def __init__(self, vertices, edges=tuple()):
        self.vertices = vertices
        self.incident_edges = defaultdict(list)

        for edge in edges:
            self.add_edge(
                edge[0],
                edge[1],
                1 if len(edge) == 2 else edge[2]  # optional weight
            )

    def add_edge(self, u, v, weight=1):
        self.incident_edges[u].append(Edge(u, v, weight))
        self.incident_edges[v].append(Edge(v, u, weight))

    def edge(self, u, v):
        return [e for e in self.incident_edges[u] if e.target == v][0]

And then, since most of the work in Dijkstra’s algorithm is tracking information that you build up as you search the graph, we define the “output” data structure, a dictionary of edge weights paired with back-pointers for the discovered shortest paths.

class DijkstraOutput:
    def __init__(self, graph, start):
        self.start = start
        self.graph = graph

        # the smallest distance from the start to the destination v
        self.distance_from_start = {v: math.inf for v in graph.vertices}
        self.distance_from_start[start] = 0

        # a list of predecessor edges for each destination
        # to track a list of possibly many shortest paths
        self.predecessor_edges = {v: [] for v in graph.vertices}

    def found_shorter_path(self, vertex, edge, new_distance):
        # update the solution with a newly found shorter path
        if new_distance < self.distance_from_start[vertex]:
            self.distance_from_start[vertex] = new_distance
            self.predecessor_edges[vertex] = [edge]
        else:
            # tie for multiple shortest paths
            self.predecessor_edges[vertex].append(edge)

    def path_to_destination_contains_edge(self, destination, edge):
        predecessors = self.predecessor_edges[destination]
        if edge in predecessors:
            return True
        return any(self.path_to_destination_contains_edge(e.source, edge)
                   for e in predecessors)

    def sum_of_distances(self, subset=None):
        subset = subset or self.graph.vertices
        return sum(self.distance_from_start[x] for x in subset)
The actual Dijkstra algorithm then just does a “breadth-first” (priority-queue-guided) search through $G$, updating the metadata as it finds shorter paths.

def single_source_shortest_paths(graph, start):
    '''
    Compute the shortest paths and distances from the start vertex to all
    possible destination vertices. Return an instance of DijkstraOutput.
    '''
    output = DijkstraOutput(graph, start)
    visit_queue = [(0, start)]

    while len(visit_queue) > 0:
        priority, current = heapq.heappop(visit_queue)

        for incident_edge in graph.incident_edges[current]:
            v = incident_edge.target
            weight = incident_edge.weight
            distance_from_current = output.distance_from_start[current] + weight

            if distance_from_current <= output.distance_from_start[v]:
                output.found_shorter_path(v, incident_edge, distance_from_current)
                heapq.heappush(visit_queue, (distance_from_current, v))

    return output

Finally, we implement the median-finding and $T$-computing subroutines:

def possible_targets(graph, start, edge):
    '''
    Given an undirected graph G = (V,E), an input vertex v in V, and an edge e
    incident to v, compute the set of vertices w such that e is on a shortest
    path from v to w.
    '''
    dijkstra_output = single_source_shortest_paths(graph, start)
    return set(v for v in graph.vertices
               if dijkstra_output.path_to_destination_contains_edge(v, edge))

def find_median(graph, vertices):
    '''
    Compute as output a vertex in the input graph which minimizes the sum of
    distances to the input set of vertices
    '''
    best_dijkstra_run = min(
        (single_source_shortest_paths(graph, v) for v in graph.vertices),
        key=lambda run: run.sum_of_distances(vertices)
    )
    return best_dijkstra_run.start

And then the core algorithm:

QueryResult = namedtuple('QueryResult', ('found_target', 'feedback_edge'))

def binary_search(graph, query):
    '''
    Find a target node in a graph, with queries of the form "Is x the target?"
    and responses either "You found the target!"
    or "Here is an edge on a shortest path to the target."
    '''
    candidate_nodes = set(x for x in graph.vertices)  # copy

    while len(candidate_nodes) > 1:
        median = find_median(graph, candidate_nodes)
        query_result = query(median)

        if query_result.found_target:
            return median
        else:
            edge = query_result.feedback_edge
            legal_targets = possible_targets(graph, median, edge)
            candidate_nodes = candidate_nodes.intersection(legal_targets)

    return candidate_nodes.pop()

Here’s an example of running it on the example graph we used earlier in the post:

'''
Graph looks like this tree, with uniform weights:

    a - b - c - d - e
            |
            f - g - h - i - j - k
                        |
                        l - m
'''
G = Graph(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm'],
          [
              ('a', 'b'),
              ('b', 'c'),
              ('c', 'd'),
              ('d', 'e'),
              ('c', 'f'),
              ('f', 'g'),
              ('g', 'h'),
              ('h', 'i'),
              ('i', 'j'),
              ('j', 'k'),
              ('i', 'l'),
              ('l', 'm'),
          ])

def simple_query(v):
    ans = input("is '%s' the target? [y/N] " % v)
    if ans and ans.lower()[0] == 'y':
        return QueryResult(True, None)
    else:
        print("Please input a vertex on the shortest path between"
              " '%s' and the target. The graph is: " % v)
        for w in G.incident_edges:
            print("%s: %s" % (w, G.incident_edges[w]))

        target = None
        while target not in G.vertices:
            target = input("Input neighboring vertex of '%s': " % v)

        return QueryResult(False, G.edge(v, target))

output = binary_search(G, simple_query)
print("Found target: %s" % output)

The query function just prints out a reminder of the graph and asks the user to answer the query with a yes/no and a relevant edge if the answer is no. An example run:

is 'g' the target? [y/N] n
Please input a vertex on the shortest path between 'g' and the target.
The graph is:
e: [Edge(source='e', target='d', weight=1)]
i: [Edge(source='i', target='h', weight=1), Edge(source='i', target='j', weight=1), Edge(source='i', target='l', weight=1)]
g: [Edge(source='g', target='f', weight=1), Edge(source='g', target='h', weight=1)]
l: [Edge(source='l', target='i', weight=1), Edge(source='l', target='m', weight=1)]
k: [Edge(source='k', target='j', weight=1)]
j: [Edge(source='j', target='i', weight=1), Edge(source='j', target='k', weight=1)]
c: [Edge(source='c', target='b', weight=1), Edge(source='c', target='d', weight=1), Edge(source='c', target='f', weight=1)]
f: [Edge(source='f', target='c', weight=1), Edge(source='f', target='g', weight=1)]
m: [Edge(source='m', target='l', weight=1)]
d: [Edge(source='d', target='c', weight=1), Edge(source='d', target='e', weight=1)]
h: [Edge(source='h', target='g', weight=1), Edge(source='h', target='i', weight=1)]
b: [Edge(source='b', target='a', weight=1), Edge(source='b', target='c', weight=1)]
a: [Edge(source='a', target='b', weight=1)]
Input neighboring vertex of 'g': f
is 'c' the target? [y/N] n
Please input a vertex on the shortest path between 'c' and the target. The graph is: [...]
Input neighboring vertex of 'c': d
is 'd' the target? [y/N] n
Please input a vertex on the shortest path between 'd' and the target. The graph is: [...]
Input neighboring vertex of 'd': e
Found target: e

A likely story

The binary search we implemented in this post is pretty minimal. In fact, the more interesting part of the work of Emamjomeh-Zadeh et al. is the part where the response to the query can be wrong with some unknown probability.

In this case, there can be many shortest paths that are valid responses to a query, in addition to all the invalid responses. In particular, this rules out the strategy of asking the same query multiple times and taking the majority response.
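The interactive run above needs a human oracle, which makes automated testing awkward. The sketch below is my own self-contained miniature (names like `bfs_distances` and `graph_binary_search` are mine, not from the post's repository): it redoes the search on an unweighted graph, with a simulated responder that always points one step along a shortest path toward a hidden target.

```python
from collections import deque

def bfs_distances(adj, source):
    # unweighted shortest-path distances from source to every vertex
    dist = {source: 0}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    return dist

def median(adj, candidates):
    # vertex minimizing the sum of distances to the candidate set
    return min(adj, key=lambda v: sum(bfs_distances(adj, v)[c] for c in candidates))

def oracle(adj, probe, target):
    # None means "found it"; otherwise a neighbor of probe on a shortest path
    if probe == target:
        return None
    d = bfs_distances(adj, target)
    return min(adj[probe], key=lambda w: d[w])  # steps closer to the target

def graph_binary_search(adj, target):
    candidates = set(adj)
    queries = 0
    while True:
        probe = median(adj, candidates)
        queries += 1
        hint = oracle(adj, probe, target)
        if hint is None:
            return probe, queries
        # keep only vertices whose shortest path from probe goes through hint
        dp = bfs_distances(adj, probe)
        dh = bfs_distances(adj, hint)
        candidates = {v for v in candidates if dh[v] + 1 == dp[v]}

# path graph 0 - 1 - ... - 14, hidden target 11
n = 15
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
found, queries = graph_binary_search(adj, 11)
print(found, queries)  # 11 2
```

On the path graph this behaves exactly like classic binary search: the first probe is the middle vertex 7, the hint sends it to the median of {8, …, 14}, which is the target itself.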
If the error rate is 1/3, and there are two shortest paths to the target, you can get into a situation in which you see three responses equally often and can’t choose which one is the liar.

Instead, the technique Emamjomeh-Zadeh et al. use is based on the Multiplicative Weights Update Algorithm (it strikes again!). Each query gives a multiplicative increase (or decrease) on the set of nodes that are consistent targets under the assumption that the query response is correct. There are a few extra details and some postprocessing to avoid unlikely outcomes, but that’s the basic idea. Implementing it would be an excellent exercise for readers interested in diving deeper into a recent research paper (or to flex their math muscles).

But even deeper, this model of “query and get advice on how to improve” is a classic learning model first formally studied by Dana Angluin (my academic grand-advisor). In her model, one wants to design an algorithm to learn a classifier. The allowed queries are membership and equivalence queries. A membership query is essentially, “What’s the label of this element?” and an equivalence query has the form, “Is this the right classifier?” If the answer is no, a mislabeled example is provided.

This is different from the usual machine learning assumption, because the learning algorithm gets to construct an example it wants to get more information about, instead of simply relying on a randomly generated subset of data. The goal is to minimize the number of queries before the target hypothesis is learned exactly. And indeed, as we saw in this post, if you have a little extra time to analyze the problem space, you can craft queries that extract quite a lot of information.

Indeed, the model we presented here for binary search on graphs is the natural analogue of an equivalence query for a search problem: instead of a mislabeled counterexample, you get a nudge in the right direction toward the target. Pretty neat!
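The multiplicative weights idea from a few paragraphs up can be caricatured in code. The toy below is my own construction, not the algorithm from the paper: it does noisy binary search on a path graph by keeping a weight per candidate, probing the weighted median, and multiplying every candidate inconsistent with the (possibly erroneous) response by a constant β < 1. To keep the sketch short, "found it" replies are treated as truthful here, which the real algorithm cannot assume.

```python
import random

def noisy_search(n, target, error_rate=0.1, beta=0.5, rounds=300, seed=0):
    rng = random.Random(seed)
    weights = [1.0] * n  # one weight per candidate vertex on a path graph

    for _ in range(rounds):
        # probe the weighted median: smallest index covering half the mass
        total = sum(weights)
        acc, probe = 0.0, 0
        for i, w in enumerate(weights):
            acc += w
            if acc >= total / 2:
                probe = i
                break

        # noisy oracle: direction toward the target, flipped with prob. error_rate
        direction = 0 if probe == target else (1 if target > probe else -1)
        if direction != 0 and rng.random() < error_rate:
            direction = -direction

        # multiplicatively downweight candidates inconsistent with the response
        for i in range(n):
            consistent = (i == probe) if direction == 0 else \
                         (i > probe if direction > 0 else i < probe)
            if not consistent:
                weights[i] *= beta

    return max(range(n), key=lambda i: weights[i])

results = [noisy_search(64, 23, seed=s) for s in range(5)]
print(results)
```

The correct answer accumulates a β-penalty only on the rare wrong responses, while every other candidate is penalized on most rounds, so with enough rounds the target's weight dominates with high probability.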
There are a few directions we could take from here: (1) implement the Multiplicative Weights version of the algorithm, (2) apply this technique to a problem like ranking or clustering, or (3) cover theoretical learning models like membership and equivalence queries in more detail. What interests you? Until next time!
https://jeremykun.com/tag/dijkstra/
selects and menulists).

The constructor is where you perform subclass-specific setup:

  /* do subclass specific constructor things here */
}

Below is an example implementation that might be what a finished subclass looks like. The constructor ends by caching the link's href:

  this.href = this.element.href;
}

MozMillLink.isType = function(node) {
  if (node.localName.toLowerCase() == "a") {
    // if node is a link element
    return true;
  }
  return false;
}

MozMillLink.prototype.getLinkLocation = function() {
  return this.href;
}

While this implementation is very basic and not very useful, it demonstrates how to put it all together. You might use this class as shown below:

// This uses lazy loading; it is up to you to make sure you instantiate only links as Link elements
var link = new MozMillLink('ID', 'linkID');
controller.window.alert(link.getLinkLocation());
link.click(); // click is a property of the parent

// This uses explicit instantiation and is safer
var link = controller.window.document.getElementById('linkID');
if (MozMillLink.isType(link)) {
  link = new MozMillLink('ID', 'linkID', {'element': link});
}
controller.window.alert(link.getLinkLocation());
link.click();

Note that the explicit instantiation method is kind of tedious to write each time in your test. You should put your subclass in a shared module that performs this step automatically. The core Mozmill subclasses use the 'findElement' namespace for this. In the next section I talk about how to hook all this up into Mozmill so that it can be used by your tests.

Hooking your subclass

Components.utils.import("resource://mozmill/modules/mozelement.js");
this.subclasses.push("MySubclass");
/* MySubclass definition */
https://developer.mozilla.org/en-US/docs/Mozmill/Mozmill_Element_Object/Extending_the_MozMill_element_hierarchy$revision/88701
Queue example - a concurrent web spider

Tornado’s tornado.queues module (and the very similar Queue classes in asyncio) implements an asynchronous producer / consumer pattern for coroutines, analogous to the pattern implemented for threads by the Python standard library’s queue module.

A coroutine that yields Queue.get pauses until there is an item in the queue. If the queue has a maximum size set, a coroutine that yields Queue.put pauses until there is room for another item.

A Queue maintains a count of unfinished tasks, which begins at zero. put increments the count; task_done decrements it.

In the web-spider example here, the queue begins containing only base_url. When a worker fetches a page it parses the links and puts new ones in the queue, then calls task_done to decrement the counter once. Eventually, a worker fetches a page whose URLs have all been seen before, and there is also no work left in the queue. Thus that worker’s call to task_done decrements the counter to zero. The main coroutine, which is waiting for join, is unpaused and finishes.

#!/usr/bin/env python3

import asyncio
import time
from datetime import timedelta
from html.parser import HTMLParser
from urllib.parse import urljoin, urldefrag

from tornado import gen, httpclient, queues

base_url = ""
concurrency = 10


async def get_links_from_url(url):
    """Download the page at `url` and parse it for links.

    Returned links have had the fragment after `#` removed, and have been
    made absolute so, e.g. the URL 'gen.html#tornado.gen.coroutine' becomes
    ''.
""" response = await httpclient.AsyncHTTPClient().fetch(url) print("fetched %s" % url) html = response.body.decode(errors="ignore") return [urljoin(url, remove_fragment(new_url)) for new_url in get_links(html)] def remove_fragment(url): pure_url, frag = urldefrag(url) return pure_url def get_links(html): class URLSeeker(HTMLParser): def __init__(self): HTMLParser.__init__(self) self.urls = [] def handle_starttag(self, tag, attrs): href = dict(attrs).get("href") if href and tag == "a": self.urls.append(href) url_seeker = URLSeeker() url_seeker.feed(html) return url_seeker.urls async def main(): q = queues.Queue() start = time.time() fetching, fetched, dead = set(), set(), set() async def fetch_url(current_url): if current_url in fetching: return print("fetching %s" % current_url) fetching.add(current_url) urls = await get_links_from_url(current_url) fetched.add(current_url) for new_url in urls: # Only follow links beneath the base URL if new_url.startswith(base_url): await q.put(new_url) async def worker(): async for url in q: if url is None: return try: await fetch_url(url) except Exception as e: print("Exception: %s %s" % (e, url)) dead.add(url) finally: q.task_done() await q.put(base_url) # Start workers, then wait for the work queue to be empty. workers = gen.multi([worker() for _ in range(concurrency)]) await q.join(timeout=timedelta(seconds=300)) assert fetching == (fetched | dead) print("Done in %d seconds, fetched %s URLs." % (time.time() - start, len(fetched))) print("Unable to fetch %s URLS." % len(dead)) # Signal all the workers to exit. for _ in range(concurrency): await q.put(None) await workers if __name__ == "__main__": asyncio.run(main())
https://www.tornadoweb.org/en/latest/guide/queues.html
OOPs concepts in Python: Python Classes and Objects, Inheritance, Overloading, Overriding and Data hiding

In the previous tutorial we saw some of the Input/output operations that Python provides. We came to know how to use those functions to read data from the user or from external sources, and also how to write that data into external sources. Also, we learned how to divide a huge code into smaller methods using functions and how to call or access them.

Further Reading => Explicit Range of Free Python Training Tutorials

In this tutorial, we will discuss the advanced Python concepts called OOPs, the different types of OOPs concepts that are available in Python, and how and where to use them.

Watch the VIDEO Tutorials

Video #1: Class, Objects & Constructor in Python
Video #2: Concept of Inheritance in Python
Video #3: Overloading, Overriding & Data Hiding in Python

Classes and Objects

- Python is an object-oriented programming language where programming stresses more on objects.
- Almost everything in Python is an object.

Classes

A class in Python is a collection of objects; we can think of a class as a blueprint or sketch or prototype. It contains all the details of an object.

In a real-world example, Animal is a class, because we have different kinds of Animals in the world and all of these belong to a class called Animal.

Defining a class

In Python, we define a class using the keyword ‘class’.

Syntax:

class classname:
    #Collection of statements or functions or classes

Example:

class MyClass:
    a = 10
    b = 20

    def add(self):
        sum = self.a + self.b
        print(sum)

In the above example, we have declared a class called ‘MyClass’ and we have declared and defined some variables and a function respectively. To access the functions or variables present inside the class, we can use the class name or create an object of it. First, let’s see how to access them using the class name.
Example:

class MyClass:
    a = 10
    b = 20

# Accessing a variable present inside MyClass
print(MyClass.a)

Output:

10

Objects

An object is an instance of a class. It is used to access everything present inside the class.

Creating an Object

Syntax:

variablename = classname()

Example:

ob = MyClass()

This creates a new instance object named 'ob'. Using this object name we can access all the attributes present inside the class MyClass.

Example:

class MyClass:
    a = 10
    b = 20

    def add(self):
        sum = self.a + self.b
        print(sum)

# Creating an object of class MyClass
ob = MyClass()

# Accessing the function and variables present inside MyClass using the object
print(ob.a)
print(ob.b)
ob.add()

Output:

10
20
30

Constructor in Python

A constructor in Python is a special method which is used to initialize the members of a class at run-time when an object is created. In Python, we have some special built-in class methods which start with a double underscore (__) and have a special meaning. The name of the constructor is always __init__(). Every class has a constructor: even if you don't create one explicitly, a default constructor is provided by itself.

Example:

class MyClass:
    sum = 0

    def __init__(self, a, b):
        self.sum = a + b

    def printSum(self):
        print("Sum of a and b is: ", self.sum)

# Creating an object of class MyClass
ob = MyClass(12, 15)
ob.printSum()

Output:

Sum of a and b is: 27

If we observe the above example, we are not calling the __init__() method ourselves: it is called automatically when we create an object of the class, and it initializes the data members, if any. Always remember that a constructor never returns a value, hence it does not contain any return statements.

Inheritance

Inheritance is one of the most powerful concepts of OOP. A class acquiring the properties of another class is called inheritance.
The class which inherits the properties is called the child class/subclass, and the class from which the properties are inherited is called the parent class/base class.

Python provides three types of inheritance:

- Single Inheritance
- Multilevel Inheritance
- Multiple Inheritance

#1) Single Inheritance

In single inheritance, a class inherits the properties of one class only.

Example:

class Operations:
    a = 10
    b = 20

    def add(self):
        sum = self.a + self.b
        print("Sum of a and b is: ", sum)

class MyClass(Operations):
    c = 50
    d = 10

    def sub(self):
        sub = self.c - self.d
        print("Subtraction of c and d is: ", sub)

ob = MyClass()
ob.add()
ob.sub()

Output:

Sum of a and b is: 30
Subtraction of c and d is: 40

In the above example, we inherit the properties of the 'Operations' class into the class 'MyClass'. Hence, we can access all the methods and statements present in the 'Operations' class by using MyClass objects.

#2) Multilevel Inheritance

In multilevel inheritance, one or more classes act as a base class. That is, the second class inherits the properties of the first class, and the third class inherits the properties of the second class. So the second class acts as both a parent class and a child class.

Example:

class Addition:
    a = 10
    b = 20

    def add(self):
        sum = self.a + self.b
        print("Sum of a and b is: ", sum)

class Subtraction(Addition):
    def sub(self):
        sub = self.b - self.a
        print("Subtraction of a and b is: ", sub)

class Multiplication(Subtraction):
    def mul(self):
        multi = self.a * self.b
        print("Multiplication of a and b is: ", multi)

ob = Multiplication()
ob.add()
ob.sub()
ob.mul()

Output:

Sum of a and b is: 30
Subtraction of a and b is: 10
Multiplication of a and b is: 200

In the above example, the class 'Subtraction' inherits the properties of the class 'Addition', and the class 'Multiplication' inherits the properties of the class 'Subtraction'.
So the class 'Subtraction' acts as both a base class and a derived class.

#3) Multiple Inheritance

A class which inherits the properties of multiple classes uses Multiple Inheritance.

Example:

class Addition:
    a = 10
    b = 20

    def add(self):
        sum = self.a + self.b
        print("Sum of a and b is: ", sum)

class Subtraction():
    c = 50
    d = 10

    def sub(self):
        sub = self.c - self.d
        print("Subtraction of c and d is: ", sub)

class Multiplication(Addition, Subtraction):
    def mul(self):
        multi = self.a * self.c
        print("Multiplication of a and c is: ", multi)

ob = Multiplication()
ob.add()
ob.sub()
ob.mul()

Output:

Sum of a and b is: 30
Subtraction of c and d is: 10
Multiplication of a and c is: 500

Method Overloading in Python

Defining multiple methods with the same name but with different types or numbers of parameters is called method overloading.

Example:

def product(a, b):
    p = a * b
    print(p)

def product(a, b, c):
    p = a * b * c
    print(p)

# Gives an error saying one more argument is missing, because the name
# 'product' now refers to the second function
# product(2, 3)
product(2, 3, 5)

Output:

30

Method overloading is not supported in Python: in the above example we defined two functions with the same name 'product' but with a different number of parameters. In Python, the latest definition wins, so the function product(a, b) becomes unusable.

Method Overriding in Python

If a subclass method has the same name as a method declared in the superclass, it is called method overriding. To achieve method overriding we must use inheritance.

Example:

class A:
    def sayHi(self):
        print("I am in A")

class B(A):
    def sayHi(self):
        print("I am in B")

ob = B()
ob.sayHi()

Output:

I am in B

Data Hiding in Python

Data hiding means making data private so that it is not accessible to other class members. It can be accessed only in the class where it is declared.
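Although true overloading is unavailable, Python usually covers the same ground with default arguments. The following example is an addition to the article, not part of the original:

```python
def product(a, b, c=1):
    # The default value makes the third parameter optional, so a single
    # function serves both the two- and three-argument call patterns.
    return a * b * c

print(product(2, 3))     # 6
print(product(2, 3, 5))  # 30
```

Variable-length argument lists (*args) extend the same idea to any number of parameters.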
In Python, if we want to hide a variable, we write a double underscore (__) before the variable name.

Example:

class MyClass:
    __num = 10

    def add(self, a):
        sum = self.__num + a
        print(sum)

ob = MyClass()
ob.add(20)
print(ob.__num)
# The above statement gives an error because we are trying to access a
# private variable outside the class

Output:

30
Traceback (most recent call last):
  File "DataHiding.py", line 10, in <module>
    print(ob.__num)
AttributeError: MyClass instance has no attribute '__num'

Conclusion

A class is a blueprint or template which contains all the details of an object, where the object is an instance of a class.

- If we want to get the properties of another class into a class, this can be achieved by inheritance.
- Inheritance is of 3 types: single inheritance, multilevel inheritance, and multiple inheritance.
- Method overloading is not supported in Python.
- Method overriding is used to override the implementation of a function which is defined in a parent class.
- We can make data attributes private, or hide them, so that they are not accessible outside the class where they are defined.
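One footnote to the data-hiding section above: a double-underscore name is not truly private. Python merely mangles it to _ClassName__name, so it remains reachable under the mangled name. This is standard CPython behavior, added here for completeness rather than taken from the article:

```python
class MyClass:
    __num = 10  # stored on the class as _MyClass__num

ob = MyClass()
# ob.__num would raise AttributeError outside the class, but the
# mangled name is still accessible:
print(ob._MyClass__num)  # 10
```

So double underscores are a convention for avoiding accidental access, not an enforcement mechanism.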
https://www.softwaretestinghelp.com/python/python-oops-concepts/
pullmyefinger, posted September 16, 2013

Trying to get a global variable created for one program. I did something like:

Global $ie
$ie = 0

then checked for a situation with If:

If fldksjflsdkj Then
    $ie = 1
EndIf

Three programs later I want to see if $ie = 0 or $ie = 1 (program 1 is closed):

If $ie = 1 Then
    ; do this
EndIf

The condition in the first program was true during testing, so $ie should be 1, but it is ignored. I got compile errors (AutoIt MustDeclareVars is not set to 1) in the latter program, so I had to declare $ie as Global again. It wiped out the old value of $ie, as with other programming languages. How do I get the value of $ie to be saved across programs? I thought that's what global meant.
https://www.autoitscript.com/forum/topic/154654-this-should-not-be-difficult/
Feb 18, 2012 08:04 PM | JTGrime

Hi all, I'm messing around with Code First and MVC, creating a Database.SetInitializer class which overrides the base Seed method. My problem is that one of my model properties is a List<T>, so when I try to seed with a new type it cannot implicitly convert my type to System.Collections.Generic.List<type>.

public class CodeFirstDbInitializer : DropCreateDatabaseIfModelChanges<CodeFirstDb>
{
    protected override void Seed(CodeFirstDb context)
    {
        context.Person.Add(new Person
        {
            FirstName = "James",
            LastName = "Grime",
            Age = 27,
            Address = new Address
            {
                number = 41,
                street = "Faraway close",
                Town = "Farishton",
                City = "Manchester",
                County = "Lancashire",
                PostCode = "1i3s",
            }
        });

        base.Seed(context);
    }
}

I've tried writing this a few ways, such as Address.Add(new Person { x, y, z }), but nothing works. Any ideas? Cheers.

Participant, 1540 Points
Feb 18, 2012 08:31 PM | eric2820

Posting your base class might shed some light on this problem and allow more useful commentary.

Feb 18, 2012 08:36 PM | JTGrime

Sorry about that.

public class Person
{
    private int _id;
    public int ID { get { return _id; } }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
    public List<Address> Address { get; set; }
}

public class Address
{
    private int _id;
    public int ID { get { return _id; } }
    public int number { get; set; }
    public string street { get; set; }
    public string Town { get; set; }
    public string City { get; set; }
    public string County { get; set; }
    public string PostCode { get; set; }
    public List<Person> Tenants { get; set; }
}

I just wanted to see how it handles transact tables etc., so these are just quick-and-dirty classes. I have changed them since, to try testing when these properties are not lists.

P.S. How do I step out of code mode on here once I've pasted code in, to return to normal comment text?

Feb 18, 2012 08:48 PM | eric2820

Okay, so my assumptions about your returned type were a bit off.
I see that you have a List<Address> in your Person class. I'm not sure that there is any reasonable way for MVC to map to that type.

As for your P.S. question, the way to get out of code mode is to open a few more new lines before pasting code, so that you have non-code lines to continue typing in after the code.

Feb 18, 2012 08:54 PM | JTGrime

Thank you so much for your time, it's much appreciated.

Feb 21, 2012 09:53 PM | JTGrime

OK, figured this out a little later but didn't think to reply. When overriding and seeding a database with MVC/EF Code First (in fact, anywhere you employ anonymous-type initialisation on System.Collections.Generic collections) you need to initialise a new list as well:

Address = new List<Address>
{
    new Address { ... },
    new Address { ... },
    ...
}

So it's important to initialise the collection, because the class only specifies the type of the property and does not initialise it. I'm a noob. Thanks for your help all, and I hope this helps.

5 replies. Last post Feb 21, 2012 09:53 PM by JTGrime.
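The same pitfall exists outside C#: declaring that an attribute will hold a collection does not create the collection. As a hedged cross-language aside (illustrative only, not part of the thread), the fix described above looks like this in Python:

```python
class Person:
    def __init__(self, first_name):
        self.first_name = first_name
        # The collection must be initialized, not merely "declared" as a
        # type; otherwise appending to it would fail.
        self.addresses = []

p = Person("James")
p.addresses.append({"street": "Faraway close", "city": "Manchester"})
print(len(p.addresses))  # 1
```

In both languages the type annotation or property declaration describes what the attribute may hold; something still has to construct the empty list before it can be used.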
http://forums.asp.net/t/1771081.aspx?seeding+a+simple+model
On 10 May 2006 at 8:07, Geir Magnusson Jr <geir@pobox.com>. Well, IIRC we have at least one potential developer on the list who hasn't been able to do that. (Aside: what happened about making a source snapshot?) > [ SNIP ] > > > I don't think it's a good idea for one module referencing into another > > directly where it isn't a well-defined interface that we can > > manage/enforce appropriately. Would you agree? > > But I don't see a solution here, yet. Just balling things up into a > "HDK" doesn't seem to fix it. Well, we copy things into deploy/jre and thereby enable modular development of java code. Why not do the same for C header files? Admittedly we don't actually take modular development of the java code as far as I'm suggesting but I think we should. > > We might as well try to agree how to construct and manage this > > "interface". I think the copying/"hdk" idea is a good solution. > > I guess I don't grok how the copying solves the root problem of > coupling, because the cross-coupling due to C/C++ is still there. (Not sure how it's "due to C/C++" since it's arguably worse for the java case, we just don't notice because we already copy the jars and added them to the (boot)classpath to enable compilation in the modules.) > I certainly can see that the copying is useful - for a given platform > target, you can copy the stuff to the top, I guess. > > But I can also see having a "top" level per module, for which the target > platform is copied into from the bowels of the module... I was thinking of the copying as being a statement that these header files (out of all those that might be used within a module) form the "public" interface for this module (public w.r.t. other modules at least). You are correct that we could copy them to the top of each module, but I think copying them to one top location is more consistent with the way we copy the boot jars to one location. > That seems cleaner and keeps separate "namespaces". 
When we combine the boot jars into one location, I don't think there is any difficulty in understanding which jar belongs to which module. I don't see how this is much different so long as we are thoughtful about naming the header files. I don't think there are enough to worry about it yet, but we could split the include directory with per-module subdirectories if things start to get out of hand. Putting them at the top-level makes it easier to do the "balling up" to create a compile-against-hdk. But I also think it is more consistent. When doing the javac in a module, we don't reference dependent modules directly, we just point at the complete collection of boot jars. I also think this way makes it easier to have identical build mechanisms for both a check-out-everything build and a single-module build: you don't need to worry about whether the deploy/hdk tree was populated by simple copying or by untar'ing an hdk. > > I didn't imagine these issues would be this contentious or perhaps I'd > > have tried to separate these two (related) issues. > > That may be the problem here - I'm confusing the two issues? I still seem to be having trouble separating them, but I think that is because I'm in favour of the one-module-hdk style of working being possible, so that makes me lean towards a solution that supports this. Whereas you aren't constrained by this (which I think is a good thing since it makes for a more thorough discussion of the issues). > > > >>> This means we can then create a snapshot tar/zip of the common > >>> include directory which someone wishing to work on a single module > >>> could download to build against. This is not dissimilar to the > >>> current way in which you could build the java sources against a > >>> deploy/jre snapshot. > > > >> Why wouldn't someone wishing to work on a single module just checkout > >> all the code?
> > > > So someone working on prefs (which is approximately 2MB) would need to > > check out all the source for luni (the current largest module at ~36MB), > > awt, swing, sound, etc. ? > > Yes. I actually think they would if it builds. It appears to me that > the classlibrary is intercoupled - our modules are an unnatural (to the > status quo) segmentation we've placed on top in our quest for something > better. > > I guess I need to re-read and figure out how this would solve that > problem. It seems to just add to the number of moving parts. I still think it is quite consistent with how we have solved the problem with respect to java code. It just happens that with the java code the jre is a natural point for combining the artifacts and there isn't one for the "hdk". > > I usually have half a dozen workspaces where I'm trying things out (even > > more at the moment since I'm looking at the four contributions that are > > being voted on). This isn't too bad at the moment with each one being > > about 1/4 GB but it will get bigger over time and therefore less > > manageable. > > Aha... > > > >> I'm really wary about having non-SVN-sourced artifacts meant to be > >> used in building/development. > > > > Isn't that modularity? Why shouldn't we do it - our customers will be? > > Because our users will be using released things. Things that are > stable. A "*DK" by definition is stable, stable code, stable > interfaces, etc. (ok, slow moving, really, but you know what I mean...) I was more thinking of the customers that might take Harmony and replace modules. For instance, the makers of an SSL Accelerator Card might replace the security module with a specialised one. > > > >> Smells a bit like the first step towards the classic Windows > >> "dll-hell". > > > > Undoubtedly, with people working on separate modules, we will get build > > breaks. 
But we'll get them when, for example, we don't have sufficient > > unit test coverage within the module being worked on - we had an example > > of this not long ago IIRC. We'll also have breaks when people have made > > different assumptions about the meaning of the spec or the definition of > > the internal API. It is a good thing that we find these *bugs* within > > the project... being our own customers! > > I don't understand how a "hdk" helps find the bugs. I was thinking of incidents like this where we discovered (I think) that the coverage of the text tests was insufficient: If people work on individual modules this kind of thing might happen more often but I think it is useful to discover bugs like this early. (Having said that we can/should include the tests from all modules in the hdk - I don't think they belong in the jdk or jre - so that single module developers can still have the option of running more extensive tests.) > > I think this will actually help *avoid* some of the problems you are > > thinking about. > > > >>> For windows, the snapshot would also include the .lib files that are > >>> required to build for the run-time .dll files. > >>> > >>> What is this new snapshot? Well, Tim suggested > >> (Where?) > >> > >>> that it was a little like the jdk but for Harmony development. So > >>> perhaps it should be called an hdk, Harmony Development Kit? > >> I'm missing the point... why wouldn't a developer checkout head, build > >> that, and then get to work on whatever part of whatever module they > >> wanted? > > > > The classlib is getting pretty big. If we are serious about modularity, > > then we should try to support it right through from build, development, > > deployment and ultimately at runtime. > > I think that we're serious about modularity as a packaging option (our > primary one) and something we're going to push for in the ecosystem. > However, I'm afraid of letting the tail wag the dog here... 
I thought the intentions of the project with respect to modularity were a little broader than just packaging. > [SNIP] > > > > >> Is it that since modules reference other modules, working in one > >> module means you need the dependent modules around? > > > > Yes, exactly. We shouldn't require users to have the entire "source" > > for all the dependent modules around. > > Are you accidentally conflating "user" and "developer" here? They are > entirely different roles. Yes, I was. Sorry. And yes, I appreciate just how different they can be! ;-) > Also, I certainly can see the theoretical value of this, but we tried > similar things in Geronimo, and it was, well, hell, because of the > coupling between Geronimo and it's dependencies. eventually, we gave up > and had an 'uberbuild', which wasn't so bad - you'd checkout everything, > do the master build, and then go dive into your module, which is what I > think talked about doing here before. > > So I guess I'm just resistive due to painful experience on this. It's hard to comment without understanding the specifics. > > We should support modularity at the development level. (Like we do > > already with the stubs for the luni/security kernel classes. We > > don't require developers to have the VM around.) > > > > When I'm developing C code I reference the libc header files but I don't > > go poking around including random headers from the libc source. > > > > So if I'm working on sql, I don't see why I shouldn't develop using > > header files and other well-defined parts of the API for other modules > > rather than having to have all of the source code checked out. > > Because I think version problems are going to spiral out of control. > > > Of course, we should still support people working by checking out > > everything but we shouldn't require it. > > This has to be the case. 
And IMO, whether you did something like "svn > co; build top" which fills in the "top-level header directory" (which I > think might be better per module), builds all jars, test jars, and libs, > and puts everything in place so you can go wander off and work in a > module, or do a lightweight checkout, stuff the hdk in a specific place > in your classlib/trunk tree, apart for versions, the situation must be > *identical*, or you are going to run into all sorts of build and > debugging/discussion hell. Yes. I agree it must be identical. > >> A long time ago we talked about something like this, a pre-build step > >> that produces a "hdk"-like set of artifacts that a developer could > >> then use if they were focused down inside a module. > >> > >> Is this the same thing returning for discussion? > > > > Not since I've been actively following the list but I'll dig about in > > the archives later. > > > >> Couldn't we just use a standard snapshot? Take the latest nightly, > >> drop into some magical place in the tree, and then go into your module > >> and work from there? > > > > Well, I was suggesting snapshots that might be: > > > > 1) hdk (inc. jdk and jre) > > 2) jdk (inc. jre) > > 3) jre > > > > but I think (?) you are suggesting using the one I overlooked: > > > > 0) everything in classlib/trunk > > > > I think 0) is going to get pretty big (but we should still create it) > > and I think we should actively support using 1) for development too. > > > > Wasn't someone recently pleading with Eclipse to make smaller artifacts? > > ;-) > > That's a totally different problem. > > In the end, I don't care if we snapshot things like this to make it > easier, but I'm really worried about what this will become. > > Also, I think it would be better to be clear about the issues in here. 
> > So assuming I understand the issues, I'm not against this as long as the > world is indistinguishable if I do a svn co and a "make the world" > top-level build, or just checkout a module and drop in a hdk above it in > the tree. > > AS a matter of fact, I think that the hdk is simple a tar of the junk > created by a "make the world" build... I wouldn't have put it quite like that, but yes perhaps that is correct. I'm saying that deploy/hdk tree should be created/used by a "make the world" build in such a way that compiling a module would be the same as if you have just untar'd an hdk snapshot. This is possible/easy if no module directly references another except via the "junk" in the hdk (specifically the jre/lib/boot jars for java code). > Also, for you, with multiple workspaces, I would imagine that your life > would be better with this being resolvable via a "pointer" in the build > properties (which defaults to "." or -ish), so you can have both a full > tree around, as well as one or more hdk snapshots. > > The following is an example of having the full tree ('full_checkout') at > the same time as a hdk ('binary snapshot aka hdk') with three work areas > (just the prefs checked out from SVN, the RMI contribution from Intel > and the RMI contribution from ITC) > > /your_see_drive :) > /full_checkout/ > /deploy/ > artifacts > / whatever/ > /binary snapshot aka hdk/ > /deploy/ > artifacts > build.props (points to /binary snapshot aka hdk/) > /modules/ > /pref/ > /contribution_1_from_SVN/ > build.props (points to /full checkout/) > /modules/ > /RMI from Intel/ > /contribution_2_from_SVN/ > build.props (points to /full checkout/) > /modules/ > /RMI_from_ITC/ > > > That way, you can dork w/ the full_checkout, fix something, and then > your other work environments that are pointing at it get that fix w/o > any work. > > I hope my crude art makes sense. Yes. But I'm not sure why I'd need the full_checkout around if I had the hdk snapshot though. 
And I'm not sure I'd mind having several hdk's around; They'd be a more manageable than the code I currently copy around. Regards, Mark. --------------------------------------------------------------------- To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org For additional commands, e-mail: harmony-dev-help@incubator.apache.org
http://mail-archives.apache.org/mod_mbox/harmony-dev/200605.mbox/%3C200605112001.k4BK180v005346@d06av02.portsmouth.uk.ibm.com%3E
Hey BGE scripters, this is something that others might find useful if you want to edit scripts without restarting the game engine. I used this to tweak YoFrankie gliding. It works especially well if you can play your game with a joystick (gamepad in my case), since then you don't even need to switch the window focus from the editor.

You simply copy your script into an external Python file (say game_test.py), make sure there is a main() function that runs the full script, put it in the .blender/scripts directory, and replace the text in the game engine with:

import game_test
reload(game_test)  # reloads the module from the file
game_test.main()

When you save, the updates will be applied immediately. reload() will slow things down a little, reloading all the time, but it's well worth it for testing game logic: you get immediate feedback, script errors, dynamics, etc.

Not sure if this is a common thing to do, so thought I'd post.
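A portability side note (my addition, not the original poster's): the bare reload() builtin shown above is Python 2; in Python 3 the same hot-reload trick uses importlib.reload. Here is a minimal stand-alone sketch of the pattern outside Blender, simulating the on-disk edit with a temporary module:

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True  # avoid stale bytecode caches between reloads

# Simulate editing an external script (the role game_test.py plays above):
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "game_test.py").write_text("def main():\n    return 'old logic'\n")
sys.path.insert(0, str(tmp))

import game_test
print(game_test.main())  # old logic

# "Edit" the file on disk, then reload to pick up the change in place:
(tmp / "game_test.py").write_text("def main():\n    return 'new logic'\n")
importlib.reload(game_test)
print(game_test.main())  # new logic
```

importlib.reload re-executes the module's source in the existing module object, which is exactly why the game logic updates without restarting the engine.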
https://blenderartists.org/t/edit-a-script-as-you-play-python-trick/441273
Joys of Extension Methods (and Comparisons)

I'd like to preface this by saying that all methodologies have advantages and disadvantages. Some methodologies add context and readability, some add performance gains, etc.; it's a trade-off. To me, extension methods are a fantastic way to either a) add additional functionality very quickly to multiple types, such as the JSON example from a few days ago, or b) add functionality to classes that are sealed or already being consumed, like interfaces and BCL objects such as System.String. This is how LINQ was implemented in .NET 3.5. Now, when you add these extension methods to .NET 3.5's lambda expressions (I'm a HUGE fan of these... addictive), you can create extremely concise and readable code (in my opinion).

So, let's start off with a simple list of strings. 😀

List<string> myListOfStrings = new List<string>();
myListOfStrings.Add("Hello");
myListOfStrings.Add("World");
myListOfStrings.Add("Coding");
myListOfStrings.Add("C#");
myListOfStrings.Add("Control");

The trick is, we only want the ones that start with "C". In .NET 2.0, you had two ways to do this (well, more if you looped and compared each... but we won't think of those performance issues). Both used the Predicate delegate and allowed you to "inline" code, similar to anonymous methods in .NET 3.5 and JavaScript.

// ASP.NET 2.0 - Using the Predicate delegate inline.
foreach (string word in myListOfStrings.FindAll(
    delegate(string thisWord) { return thisWord.StartsWith("C"); }))
{
    Response.Write(word + "<br/>");
}

This method was my preferred way to do it if I simply needed a one-time filter or check. As you can see, the delegate receives the current list item as a parameter and you return a boolean value indicating whether or not to include it in the foreach iteration. You could also break the delegate logic into another method by constructing a container class.
public class StartsWithLetterChecker
{
    public string TextToFind { get; set; }

    public StartsWithLetterChecker() { }

    public bool Find(string word)
    {
        return word.StartsWith(this.TextToFind);
    }
}

You would then consume that class in your predicate parameter in FindAll().

// ASP.NET 2.0 - Using the Predicate delegate via a method.
StartsWithLetterChecker predicateMethod = new StartsWithLetterChecker();
predicateMethod.TextToFind = "C";
foreach (string word in myListOfStrings.FindAll(predicateMethod.Find))
{
    Response.Write(word + "<br/>");
}

This is very clean and very documented: you know exactly what's happening, where, and why. It is, however, more lines of code, but that'd be worth it if the logic is a) heavily used by several predicates, or b) changes often. You're also stuck in a box, creating a new class or new method for every permutation (starts with, ends with, contains, etc.).

With .NET 3.5, the code monkey has an array of extension methods available. These include:

- ToList()
- Where()
- Union()
- Contains()
- Concat()
- Cast()
- Count()

There are quite a few more; Object Browse into System.Core.Extension and take a look. These methods come from System.Core.dll and, similar to .NET 2.0, take a predicate value. The difference is that in .NET 3.5, we have lambda expressions to save us a lot of hassle.

// ASP.NET 3.5 - Using an extension method and lambda expression.
foreach (string word in myListOfStrings.Where(i => i.StartsWith("C")))
{
    Response.Write(word + "<br/>");
}

The "i" is simply a variable name and could be most anything (i for iteration, I suppose, or my bad habits). Now, to those who would complain that you're losing DRY by putting this logic here rather than encapsulating it in a method similar to the example above: well, my answer is that this sort of code shouldn't be (as in the example) at the presentation layer if it could be repetitive.
Extension methods, lambdas, and even anonymous methods in .NET 3.5 are not an excuse to get lazy and break OOP, but provide an easier means to accomplish the same tasks AND extend current code with additional functionality.
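As a cross-language aside (my comparison, not the original author's): the same starts-with-"C" filter is a one-liner in Python, where a comprehension or lambda plays the role of the predicate:

```python
my_list_of_strings = ["Hello", "World", "Coding", "C#", "Control"]

# Keep only the words that start with "C", as in the C# Where() example:
c_words = [w for w in my_list_of_strings if w.startswith("C")]
print(c_words)  # ['Coding', 'C#', 'Control']

# Equivalent, closer in shape to Where(i => i.StartsWith("C")):
also_c_words = list(filter(lambda w: w.startswith("C"), my_list_of_strings))
```

Both forms produce the same list; the comprehension is generally considered the more idiomatic Python.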
https://tiredblogger.wordpress.com/2007/10/10/joys-of-extension-methods-and-comparisons/
Can I use InterSystems iKnow without InterSystems DeepSee?

Can we use iKnow in Caché if we don't have the DeepSee tool to work with, e.g. for doing some sample programs? I don't have DeepSee with me. And is there any open-source download for DeepSee?

DeepSee has a Developer Tutorial in the documentation, which is very good. Some examples are available for it, for example the "HoleFoods" and "Patients" cubes in the SAMPLES namespace.

Hi, Kishan! Yes, you don't need DeepSee to develop solutions with iKnow. And you can also make solutions which combine iKnow and DeepSee.
https://community.intersystems.com/post/can-i-use-intersystems-iknow-without-intersystems-deepsee
Introduction

Now we will learn the basic structure of a C# program as well as its building blocks. In this article, we will cover only the basic structure of C#, with sample code plus some additional background information.

C# Hello World! Example

A simple C# program has some minimum elements:

- Imported namespaces (how to use libraries)
- A namespace declaration (how to create a new namespace in C#)
- A class (how to design a class in C#)
- Class methods (the default class consists of a single Main() method; you can add other methods)
- Class properties (these describe the state of the class and its objects)
- A Main method (the entry point of the C# program)
- Statements and expressions (you express your logic using statements and expressions)

Let's take a simple program that prints Hello World!. I am using Visual Studio 2013; you can use version 2005 or above.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace csharpprogram
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World");
            Console.ReadKey(false);
        }
    }
}

If you want to compile and execute this code, press the green triangle button in the VS IDE and see your result.

OK, now let's understand what is in this program:

- The first five lines ("using System;" etc.): the using keyword is an important keyword in C# and serves different purposes. A using block is used where you want to deallocate resources deterministically, and using directives are used to add library references to the code. A program generally has multiple using statements.
- The next line has the namespace declaration. A namespace is a collection of classes, enums, delegates, structures, etc.; it is not a replacement for assemblies. Here the csharpprogram namespace consists of a single Program class.
- The next line has the class declaration. The class Program contains data members and member functions; bundling them together like this is known as encapsulation, which is discussed later. Data members describe the properties of an object and member functions describe its behavior. For example, if we model a person as an object, then name, id, and address would be data members, while an action such as walking would be a member function.
- The next line defines the Main method, which is the entry point for all C# programs. The Main method consists of various statements. A C# program may contain more than one Main method, although only one of them can be the entry point.
- The next line calls the WriteLine() method, which writes the string output to the screen. It is a member of the Console class, which lives in the System namespace.
- Console.ReadKey() is overloaded to take a boolean argument: you can pass true or false. Passing false echoes the pressed key to the screen. This call makes the program wait for a key press, which prevents the console window from closing immediately when the program is launched from Visual Studio .NET.

Note: some basic points about C#:
- C# is a case-sensitive language.
- Statements and expressions end with a semicolon (;).
- A C# program always starts from the Main method.

How to compile and execute a C# program: check this video.
https://dotprogramming.blogspot.com/2014/05/csharp-program-structure.html
#include "lib/time/tvdiff.h"
#include "lib/cc/compat_compiler.h"
#include "lib/defs/time.h"
#include "lib/log/log.h"

Return the duration in seconds between time_t values t1 and t2 if t1 is numerically less than or equal to t2; otherwise, return TIME_MAX. This provides a safe way to compute the difference between two UNIX timestamps (t2 can be assumed by calling code to be later than t1) or two durations measured in seconds (t2 can be assumed to be longer than t1). Calling code is expected to check for the TIME_MAX return value and interpret it as an error condition.

Definition at line 181 of file tvdiff.c.

Referenced by predicted_ports_prediction_time_remaining().

Return the number of milliseconds elapsed between *start and *end. If the tv_usec difference is 500, rounds away from zero. Returns LONG_MAX on overflow and underflow.

Definition at line 102 of file tvdiff.c.

References TOR_USEC_PER_SEC.

Referenced by circuit_build_times_handle_completed_hop().

Return the number of microseconds elapsed between *start and *end. Returns LONG_MAX on overflow and underflow.

Definition at line 53 of file tvdiff.c.

References TOR_USEC_PER_SEC.
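The overflow-guard pattern described above (compare first, subtract only when it is safe, otherwise return a sentinel the caller must check) can be sketched in a few lines of Python. The names below, including the TIME_MAX stand-in value, are illustrative assumptions, not taken from the Tor sources:

```python
# Illustrative sketch of the time-diff contract described above; the
# sentinel value and function name are assumptions, not Tor's actual code.
TIME_MAX = 2**63 - 1  # stand-in for C's TIME_MAX

def safe_time_diff(t1, t2):
    """Return t2 - t1 in seconds if t1 <= t2, else the TIME_MAX sentinel."""
    if t1 > t2:
        return TIME_MAX  # callers must treat this as an error condition
    return t2 - t1
```

The point of the sentinel is that the subtraction is only ever performed when it cannot wrap, so the caller sees either a valid duration or an unmistakable error value.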
https://people.torproject.org/~nickm/tor-auto/doxygen/tvdiff_8c.html
C++ Program to check String Palindrome

Hello Everyone!

In this tutorial, we will learn how to check whether a String is a Palindrome or not, in the C++ programming language.

Condition for a String to be Palindrome:

A String is considered to be a Palindrome if it is the same as its reverse.

Steps to check for String Palindrome:

- Take the String to be checked for Palindrome as input.
- Initialize another array of characters of the same length to store the reverse of the string.
- Traverse the input string from its end to the beginning and keep storing each character in the newly created array of char.
- If the characters at each of the positions of the old char array are the same as in the new char array, then the string is a palindrome. Else it isn't.

Code:

#include <iostream>
#include <stdio.h>

//This header file is used to make use of the system defined String methods.
#include <string.h>

using namespace std;

int main()
{
    cout << "\n\nWelcome to Studytonight :-)\n\n\n";
    cout << " ===== Program to Determine whether String is Palindrome or not, in CPP ===== \n\n";

    //String Variable Declaration
    char s1[100], c = 'a';
    int n1, i = 0;

    cout << "\n\nEnter the String you want to check : ";
    cin >> s1;

    //Computing string length without using system defined method
    while (c != '\0')
    {
        c = s1[i++];
    }
    n1 = i - 1;

    char s2[n1 + 1];
    cout << "Length of the entered string is : " << n1 << "\n\n";

    i = 0;

    //Computing reverse of the String without using system defined method
    while (i != n1)
    {
        s2[i] = s1[n1 - i - 1];
        i++;
    }
    s2[n1] = '\0'; //terminate the reversed string

    cout << "Reverse of the entered string is : " << s2 << "\n\n\n";

    i = 0;

    //Logic to check for Palindrome
    while (i != n1)
    {
        if (s2[i] != s1[i])
            break;
        i++;
    }

    if (i != n1)
        cout << "The String \"" << s1 << "\"" << " is not a Palindrome.";
    else
        cout << "The String \"" << s1 << "\"" << " is a Palindrome.";

    cout << "\n\n";
    return 0;
}

Output:

We hope that this post helped you develop a better understanding of how to check whether a string is a palindrome or not in C++.
For any query, feel free to reach out to us via the comments section down below. Keep Learning : )
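As an aside (not part of the original C++ tutorial), the same reverse-and-compare idea collapses to a couple of lines in Python, since slicing with a step of -1 yields the reversed string:

```python
# Reverse-and-compare palindrome check; Python's s[::-1] plays the role
# of the hand-built reversed char array in the C++ version above.
def is_palindrome(s):
    return s == s[::-1]
```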
https://studytonight.com/cpp-programs/cpp-program-to-check-string-palindrome
Contents

It's been over two years since I wrote the first version of this tutorial. I decided to give it another run with some of the tools that have come about since then (particularly WebOb).

Sometimes Python is accused of having too many web frameworks. And it's true, there are a lot. That said, I think writing a framework is a useful exercise. It doesn't let you skip over too much without understanding it. It removes the magic. So even if you go on to use another existing framework (which I'd probably advise you do), you'll be able to understand it better if you've written something like it on your own.

This tutorial shows you how to create a web framework of your own, using WSGI and WebOb. No other libraries will be used. For the longer sections I will try to explain any tricky parts on a line-by-line basis following the example.

At its simplest WSGI is an interface between web servers and web applications. We'll explain the mechanics of WSGI below, but a higher level view is to say that WSGI lets code pass around web requests in a fairly formal way. That's the simplest summary, but there is more – WSGI lets you add annotation to the request, and adds some more metadata to the request.

WSGI more specifically is made up of an application and a server. The application is a function that receives the request and produces the response. The server is the thing that calls the application function. A very simple application looks like this:

>>> def application(environ, start_response):
...     start_response('200 OK', [('Content-Type', 'text/html')])
...     return ['Hello World!']

The environ argument is a dictionary with values like the environment in a CGI request. The header Host:, for instance, goes in environ['HTTP_HOST']. The path is in environ['SCRIPT_NAME'] (which is the path leading up to the application), and environ['PATH_INFO'] (the remaining path that the application should interpret).
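To make those environ keys concrete, here's a tiny throwaway WSGI app, called by hand with a minimal environ dictionary. The key names come from the WSGI spec; everything else (the function and variable names) is made up for this sketch and is not part of the tutorial's framework:

```python
# Illustrative only: a throwaway WSGI app that echoes the environ keys
# discussed above back to the client.
def show_environ(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    body = 'host=%s script_name=%s path_info=%s' % (
        environ.get('HTTP_HOST', ''),
        environ.get('SCRIPT_NAME', ''),
        environ.get('PATH_INFO', ''))
    return [body]

# Call it by hand with a minimal environ, collecting the status:
statuses = []
environ = {'HTTP_HOST': 'localhost:80', 'SCRIPT_NAME': '', 'PATH_INFO': '/test'}
result = show_environ(environ, lambda status, headers: statuses.append(status))
```

Calling the application directly like this, with a hand-built environ and a stub start_response, is exactly what a WSGI server does for you on every request.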
We won't focus much on the server, but we will use WebOb to handle the application. WebOb in a way has a simple server interface. To use it you create a new request with req = webob.Request.blank('http://localhost/test'), and then call the application with resp = req.get_response(app). For example:

>>> from webob import Request
>>> req = Request.blank('http://localhost/test')
>>> resp = req.get_response(application)
>>> print resp
200 OK
Content-Type: text/html
<BLANKLINE>
Hello World!

This is an easy way to test applications, and we'll use it to test the framework we're creating.

WebOb is a library to create a request and response object. It's centered around the WSGI model. Requests are wrappers around the environment. For example:

>>> req = Request.blank('http://localhost/test')
>>> req.environ['HTTP_HOST']
'localhost:80'
>>> req.host
'localhost:80'
>>> req.path_info
'/test'

Responses are objects that represent the... well, response. The status, headers, and body:

>>> from webob import Response
>>> resp = Response(body='Hello World!')
>>> resp.content_type = 'text/plain'
>>> print resp
200 OK
Content-Length: 12
content-type: text/plain; charset=UTF-8
<BLANKLINE>
Hello World!

Responses also happen to be WSGI applications. That means you can call resp(environ, start_response). Of course it's much less dynamic than a normal WSGI application.

These two pieces solve a lot of the more tedious parts of making a framework. They deal with parsing most HTTP headers, generating valid responses, and a number of unicode issues.

While we can test the application using WebOb, you might want to serve the application. Here's the basic recipe, using the Paste HTTP server:

if __name__ == '__main__':
    from paste import httpserver
    httpserver.serve(app, host='127.0.0.1', port=8080)

You could also use wsgiref from the standard library, but this is mostly appropriate for testing as it is single-threaded:

if __name__ == '__main__':
    from wsgiref.simple_server import make_server
    server = make_server('127.0.0.1', 8080, app)
    server.serve_forever()

Well, now we need to start work on our framework.
Here's the basic model we'll be creating: We'll use explicit routes using URI templates (minus the domains) to match paths. We'll add a little extension that you can use {name:regular expression}, where the named segment must then match that regular expression. The matches will include a "controller" variable, which will be a string like "module_name:function_name". For our examples we'll use a simple blog. So here's what a route would look like:

app = Router()
app.add_route('/', controller='controllers:index')
app.add_route('/{year:\d\d\d\d}/', controller='controllers:archive')
app.add_route('/{year:\d\d\d\d}/{month:\d\d}/', controller='controllers:archive')
app.add_route('/{year:\d\d\d\d}/{month:\d\d}/{slug}', controller='controllers:view')
app.add_route('/post', controller='controllers:post')

To do this we'll need a couple pieces: To do the matching, we'll compile those templates to regular expressions.

>>> import re
>>> var_regex = re.compile(r'''
...     \{              # The exact character "{"
...     (\w+)           # The variable name (restricted to a-z, 0-9, _)
...     (?::([^}]+))?   # The optional :regex part
...     \}              # The exact character "}"
...     ''', re.VERBOSE)
>>> def template_to_regex(template):
...     regex = ''
...     last_pos = 0
...     for match in var_regex.finditer(template):
...         regex += re.escape(template[last_pos:match.start()])
...         var_name = match.group(1)
...         expr = match.group(2) or '[^/]+'
...         expr = '(?P<%s>%s)' % (var_name, expr)
...         regex += expr
...         last_pos = match.end()
...     regex += re.escape(template[last_pos:])
...     regex = '^%s$' % regex
...     return regex

line 2: Here we create the regular expression. The re.VERBOSE flag makes the regular expression parser ignore whitespace and allow comments, so we can avoid some of the feel of line-noise. This matches any variables, i.e., {var:regex} (where :regex is optional).
Note that there are two groups we capture: match.group(1) will be the variable name, and match.group(2) will be the regular expression (or None when there is no regular expression). Note that (?:...)? means that the section is optional.

line 10: This variable will hold the regular expression that we are creating.

line 11: This contains the position of the end of the last match.

line 12: The finditer method yields all the matches.

line 13: We're getting all the non-{} text from after the last match, up to the beginning of this match. We call re.escape on that text, which escapes any characters that have special meaning. So .html will be escaped as \.html.

line 14: The first match is the variable name.

line 15: expr is the regular expression we'll match against, the optional second match. The default is [^/]+, which matches any non-empty, non-/ string. Which seems like a reasonable default to me.

line 16: Here we create the actual regular expression. (?P<name>...) is a grouped expression that is named. When you get a match, you can look at match.groupdict() and get the names and values.

line 17, 18: We add the expression on to the complete regular expression and save the last position.

line 19: We add remaining non-variable text to the regular expression.

line 20: And then we make the regular expression match the complete string (^ to force it to match from the start, $ to make sure it matches up to the end).

To test it we can try some translations. You could put these directly in the docstring of the template_to_regex function and use doctest to test that. But I'm using doctest to test this document, so I can't put a docstring doctest inside the doctest itself.
Anyway, here's what a test looks like:

>>> print template_to_regex('/a/static/path')
^\/a\/static\/path$
>>> print template_to_regex('/{year:\d\d\d\d}/{month:\d\d}/{slug}')
^\/(?P<year>\d\d\d\d)\/(?P<month>\d\d)\/(?P<slug>[^/]+)$

To load controllers we have to import the module, then get the function out of it. We'll use the __import__ builtin to import the module. The return value of __import__ isn't very useful, but it puts the module into sys.modules, a dictionary of all the loaded modules.

Also, some people don't know how exactly the string method split works. It takes two arguments – the first is the character to split on, and the second is the maximum number of splits to do. We want to split on just the first : character, so we'll use a maximum number of splits of 1.

>>> import sys
>>> def load_controller(string):
...     module_name, func_name = string.split(':', 1)
...     __import__(module_name)
...     module = sys.modules[module_name]
...     func = getattr(module, func_name)
...     return func

Now, the Router class. The class has the add_route method, and also a __call__ method. That __call__ method makes the Router object itself a WSGI application. So when a request comes in, it looks at PATH_INFO (also known as req.path_info) and hands off the request to the controller that matches that path.

>>> from webob import Request
>>> from webob import exc
>>> class Router(object):
...     def __init__(self):
...         self.routes = []
...
...     def add_route(self, template, controller, **vars):
...         if isinstance(controller, basestring):
...             controller = load_controller(controller)
...         self.routes.append((re.compile(template_to_regex(template)),
...                             controller,
...                             vars))
...
...     def __call__(self, environ, start_response):
...         req = Request(environ)
...         for regex, controller, vars in self.routes:
...             match = regex.match(req.path_info)
...             if match:
...                 req.urlvars = match.groupdict()
...                 req.urlvars.update(vars)
...                 return controller(environ, start_response)
...         return exc.HTTPNotFound()(environ, start_response)

line 5: We are going to keep the route options in an ordered list. Each item will be (regex, controller, vars): regex is the regular expression object to match against, controller is the controller to run, and vars are any extra (constant) variables.

line 8, 9: We will allow you to call add_route with a string (that will be imported) or a controller object. We test for a string here, and then import it if necessary.

line 13: Here we add a __call__ method. This is the method used when you call an object like a function. You should recognize this as the WSGI signature.

line 14: We create a request object. Note we'll only use this request object in this function; if the controller wants a request object it'll have to make one of its own.

line 16: We test the regular expression against req.path_info. This is the same as environ['PATH_INFO']. That's all the request path left to be processed.

line 18: We set req.urlvars to the dictionary of matches in the regular expression. This variable actually maps to environ['wsgiorg.routing_args']. Any attributes you set on a request will, in one way or another, map to the environment dictionary: the request holds no state of its own.

line 19: We also add in any explicit variables passed in through add_route().

line 20: Then we call the controller as a WSGI application itself. Any fancy framework stuff the controller wants to do, it'll have to do itself.

line 21: If nothing matches, we return a 404 Not Found response. webob.exc.HTTPNotFound() is a WSGI application that returns 404 responses. You could add a message too, like webob.exc.HTTPNotFound('No route matched').

Then, of course, we call the application. The router just passes the request on to the controller, so the controllers are themselves just WSGI applications. But we'll want to set up something to make those applications friendlier to write. To do that we'll write a decorator.
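As a quick aside, load_controller isn't tied to the blog example; any "module:function" string naming an importable module works. Here's a standalone check against standard-library modules (the function is repeated verbatim so the snippet runs on its own):

```python
import sys

# load_controller repeated from the tutorial so this snippet is self-contained.
def load_controller(string):
    module_name, func_name = string.split(':', 1)
    __import__(module_name)
    module = sys.modules[module_name]
    func = getattr(module, func_name)
    return func

# 'os.path:join' imports os.path and pulls its join function out of it.
join = load_controller('os.path:join')
sqrt = load_controller('math:sqrt')
```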
A decorator is a function that wraps another function. After decoration the function will be a WSGI application, but it will be decorating a function with a signature like controller_func(req, **urlvars). The controller function will return a response object (which, remember, is a WSGI application on its own).

>>> from webob import Request, Response
>>> from webob import exc
>>> def controller(func):
...     def replacement(environ, start_response):
...         req = Request(environ)
...         try:
...             resp = func(req, **req.urlvars)
...         except exc.HTTPException, e:
...             resp = e
...         if isinstance(resp, basestring):
...             resp = Response(body=resp)
...         return resp(environ, start_response)
...     return replacement

line 3: This is the typical signature for a decorator – it takes one function as an argument, and returns a wrapped function.

line 4: This is the replacement function we'll return. This is called a closure – this function will have access to func, and every time you decorate a new function there will be a new replacement function with its own value of func. As you can see, this is a WSGI application.

line 5: We create a request.

line 6: Here we catch any webob.exc.HTTPException exceptions. This is so you can do raise webob.exc.HTTPNotFound() in your function (or on Python 2.4, raise webob.exc.HTTPNotFound().exception). These exceptions are themselves WSGI applications.

line 7: We call the function with the request object, and any variables in req.urlvars. And we get back a response.

line 10: We'll allow the function to return a full response object, or just a string. If they return a string, we'll create a Response object with that (and with the standard 200 OK status, text/html content type, and utf8 charset/encoding).

line 12: We pass the request on to the response. Which also happens to be a WSGI application. WSGI applications are falling from the sky!

line 13: We return the function object itself, which will take the place of the function.
You use this controller like:

>>> @controller
... def index(req):
...     return 'This is the index'

Now we'll show a basic application. Just a hello world application for now. Note that this document is the module __main__.

>>> @controller
... def hello(req):
...     if req.method == 'POST':
...         return 'Hello %s!' % req.params['name']
...     elif req.method == 'GET':
...         return '''<form method="POST">
... Your name: <input type="text" name="name">
... <input type="submit">
... </form>'''
>>> hello_world = Router()
>>> hello_world.add_route('/', controller=hello)

Now let's test that application:

>>> req = Request.blank('/')
>>> resp = req.get_response(hello_world)

There's another pattern that might be interesting to try for a controller. Instead of a function, we can make a class with methods like get, post, etc. The urlvars will be used to instantiate the class. We could do this as a superclass, but the implementation will be more elegant as a wrapper, like the decorator is a wrapper. Python 3.0 will add class decorators which will work like this. We'll allow an extra action variable, which will define the method (actually {action}_{method}, where {method} is the request method). If no action is given, we'll use just the method (i.e., get, post, etc).

>>> def rest_controller(cls):
...     def replacement(environ, start_response):
...         req = Request(environ)
...         try:
...             instance = cls(req, **req.urlvars)
...             action = req.urlvars.get('action')
...             if action:
...                 action += '_' + req.method.lower()
...             else:
...                 action = req.method.lower()
...             try:
...                 method = getattr(instance, action)
...             except AttributeError:
...                 raise exc.HTTPNotFound("No action %s" % action)
...             resp = method()
...             if isinstance(resp, basestring):
...                 resp = Response(body=resp)
...         except exc.HTTPException, e:
...             resp = e
...         return resp(environ, start_response)
...     return replacement

line 1: Here we're kind of decorating a class. But really we'll just create a WSGI application wrapper.

line 2-4: The replacement WSGI application, also a closure.
And we create a request and catch exceptions, just like in the decorator.

line 5: We instantiate the class with both the request and req.urlvars to initialize it. The instance will only be used for one request. (Note that the instance then doesn't have to be thread safe.)

line 6: We get the action variable out, if there is one.

line 7, 8: If there was one, we'll use the method name {action}_{method}...

line 8, 9: ... otherwise we'll use just the method for the method name.

line 10-13: We'll get the method from the instance, or respond with a 404 error if there is no such method.

line 14: Call the method, get the response.

line 15, 16: If the response is just a string, create a full response object from it.

line 19: and then we forward the request...

line 20: ... and return the wrapper object we've created.

Here's the hello world:

>>> class Hello(object):
...     def __init__(self, req):
...         self.request = req
...     def get(self):
...         return '''<form method="POST">
... Your name: <input type="text" name="name">
... <input type="submit">
... </form>'''
...     def post(self):
...         return 'Hello %s!' % self.request.params['name']
>>> hello = rest_controller(Hello)

We'll run the same test as before:

>>> hello_world = Router()
>>> hello_world.add_route('/', controller=hello)
>>> req = Request.blank('/')
>>> resp = req.get_response(hello_world)

You can use hard-coded links in your HTML, but this can have problems. Relative links are hard to manage, and absolute links presume that your application lives at a particular location. WSGI gives a variable SCRIPT_NAME, which is the portion of the path that led up to this application. If you are writing a blog application, for instance, someone might want to install it at /blog/, and then SCRIPT_NAME would be "/blog". We should generate links with that in mind. The base URL using SCRIPT_NAME is req.application_url. So, if we have access to the request we can make a URL. But what if we don't have access?
We can use thread-local variables to make it easy for any function to get access to the current request. A "thread-local" variable is a variable whose value is tracked separately for each thread, so if there are multiple requests in different threads, their requests won't clobber each other.

The basic means of using a thread-local variable is threading.local(). This creates a blank object that can have thread-local attributes assigned to it. I find the best way to get at a thread-local value is with a function, as this makes it clear that you are fetching the object, as opposed to getting at some global object. Here's the basic structure for the local:

>>> import threading
>>> class Localized(object):
...     def __init__(self):
...         self.local = threading.local()
...     def register(self, object):
...         self.local.object = object
...     def unregister(self):
...         del self.local.object
...     def __call__(self):
...         try:
...             return self.local.object
...         except AttributeError:
...             raise TypeError("No object has been registered for this thread")
>>> get_request = Localized()

Now we need some middleware to register the request object. Middleware is something that wraps an application, possibly modifying the request on the way in or the way out. In a sense the Router object was middleware, though not exactly because it didn't wrap a single application. This registration middleware looks like:

>>> class RegisterRequest(object):
...     def __init__(self, app):
...         self.app = app
...     def __call__(self, environ, start_response):
...         req = Request(environ)
...         get_request.register(req)
...         try:
...             return self.app(environ, start_response)
...         finally:
...             get_request.unregister()

Now if we do:

>>> hello_world = RegisterRequest(hello_world)

then the request will be registered each time. Now, let's create a URL generation function:

>>> import urllib
>>> def url(*segments, **vars):
...     base_url = get_request().application_url
...     path = '/'.join(str(s) for s in segments)
...     if not path.startswith('/'):
...         path = '/' + path
...     if vars:
...         path += '?' + urllib.urlencode(vars)
...     return base_url + path

Now, to test:

>>> get_request.register(Request.blank(''))
>>> url('article', 1)
''
>>> url('search', q='some query')
''

Well, we don't really need to factor templating into our framework. After all, you return a string from your controller, and you can figure out on your own how to get a rendered string from a template. But we'll add a little helper, because I think it shows a clever trick.

We'll use Tempita for templating, mostly because it's very simplistic about how it does loading. The basic form is:

import tempita
template = tempita.HTMLTemplate.from_filename('some-file.html')

But we'll be implementing a function render(template_name, **vars) that will render the named template, treating it as a path relative to the location of the render() call. That's the trick. To do that we use sys._getframe, which is a way to look at information in the calling scope. Generally this is frowned upon, but I think this case is justifiable. We'll also let you pass an instantiated template in instead of a template name, which will be useful in places like a doctest where there aren't other files easily accessible.

>>> import os
>>> import tempita
>>> def render(template, **vars):
...     if isinstance(template, basestring):
...         caller_location = sys._getframe(1).f_globals['__file__']
...         filename = os.path.join(os.path.dirname(caller_location), template)
...         template = tempita.HTMLTemplate.from_filename(filename)
...     vars.setdefault('request', get_request())
...     return template.substitute(vars)

Well, that's a framework. Ta-da! Of course, this doesn't deal with some other stuff. In particular: But, for now, that's outside the scope of this document.
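As a footnote to the thread-local section above, here's a self-contained demonstration that each thread really does see only the object it registered. The Localized class is repeated verbatim; the worker function and the string "requests" are stand-ins for real Request objects, invented for this sketch:

```python
import threading

# Localized repeated from the tutorial so this snippet runs on its own.
class Localized(object):
    def __init__(self):
        self.local = threading.local()
    def register(self, object):
        self.local.object = object
    def unregister(self):
        del self.local.object
    def __call__(self):
        try:
            return self.local.object
        except AttributeError:
            raise TypeError("No object has been registered for this thread")

get_request = Localized()
seen = {}

def worker(name):
    get_request.register('request-for-%s' % name)  # stand-in for a Request
    seen[name] = get_request()  # each thread reads back its own object
    get_request.unregister()

threads = [threading.Thread(target=worker, args=(n,)) for n in ('a', 'b')]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After both threads finish, seen maps each thread's name to the object that thread registered, with no cross-talk between them.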
http://pythonpaste.org/webob/do-it-yourself.html
Splatterhouse
"Rick and the Terror Mask return!"

Splatterhouse was a series I was a little late to discover. I saw my first Splatterhouse game inside a video store, sitting in the Sega Genesis section. Splatterhouse 3, with its blood-splattered box art and intimidating characters, jumped out at me. On the box was Rick, in his monster form, holding an axe and preparing to butcher a large demon. I was enthralled with the box alone, and went to the counter to rent it and ran home as fast as I could. Upon turning on the console, I was greeted with an amazing intro sequence (for the Genesis it was pretty impressive) and found myself immersed in the world of Splatterhouse. Not long after that, I discovered the remaining games in the series, and even saw a Splatterhouse arcade machine at my old laundromat. Sadly, it would soon be replaced by X-Men vs. Street Fighter, and my interest in the series would be replaced as well.

Warp ahead to the year 2002: I rediscovered the series and never let go. I collected all of the games (aside from the arcade machine) and beat all of them, which is quite a feat considering how difficult those games were. I always found myself thinking the series was too short-lived, however, and always wanted more out of it. Splatterhouse 3 ended on a good enough note, but I still wanted more.

You can imagine my excitement when I found out later in my life that a new Splatterhouse game was being developed for the Xbox 360 by a small team called Bottlerocket. However, what I saw was far from what I expected. What little information was available at the time was not promising. The game as a whole didn't really seem very Splatterhouse, and the screenshots for the game weren't very good. It was hard to feel any excitement for the project, but regardless I kept watch over it like a protective parent.
Sometime in 2008, however, more information on the game was revealed, and later I would find out that Namco wrenched the game from Bottlerocket's hands, laid off most of the company, kept a few designers, and began working on the game anew. What I saw was looking much more promising, though still far from my expectations. As time went on, trailers began coming out and the game really took on a Splatterhouse look. I recall that the first time I felt any hype at all was when I saw the "Make a Wish" trailer. Suddenly I realized that this game was indeed a true Splatterhouse title and began waiting for the release. Release was delayed several times, but I continued to wait.

The day finally arrived when the game was released in stores, and I held the box in my hands. Greeted by blood-soaked box art, I eagerly tore the plastic off the box (which took me a good 10 minutes by the way, I hate that damn plastic) and slapped the disc into my 360 with all haste. I wasn't disappointed.

Taking an interesting spin on the classic "save the princess" plot, Rick's girlfriend Jennifer has been kidnapped by Dr. West, and Rick is lying in his own blood with a mask peering into his eyes. "She doesn't have to die." The mask, taunting Rick, insists that if Rick wears the mask, he can save Jennifer. What Rick doesn't realize, however, is that much more than just Jennifer is at stake. Putting on the mask, Rick transforms into a hulking giant, and the game begins.

But many mysteries remain unsolved. What is the mask? Why is it helping Rick? What happened to Rick in the mansion? These questions are answered as the game progresses, and there were many moments in the game where I found myself thinking "Ahhh so THAT'S what happened!". Splatterhouse doesn't rely too much on story. While the story that's there is good in my opinion, this ain't Metal Gear Solid or Final Fantasy. There are no deep lines, no philosophical discussions, no thin moral grey areas. It's just Rick, chasing Dr.
West, with a psychotic voice raging in his head, and that's all this game needs. There are times when the interaction between the mask and Rick is interesting, however, and other times when it's just downright funny to listen to the mask insult Rick or mock him.

During early development of this game, the graphics never looked terribly appealing. However, as time went on, the graphics became more and more polished and it looked like an entirely different game. In the final product, the graphics look quite good. The models in the game are very detailed and the overall mood of the game is kept alive with shadows being cast on the walls, blood splattering onto Rick, and fluid animations. There are some snags, however. The actual stages aren't quite as attractive. Many of the stages, especially the first few phases, don't look nearly as appealing as the models, and this takes away a bit from the graphics.

The stages are quite varied, though. You're not limited to just the mansion this time around. Upon taking a portal, Rick travels to various locations in different times and different places. While the graphics in the stages may not be quite as impressive as the models, they make up for it by casting a certain amount of atmosphere. Post-apocalyptic New York actually feels like post-apocalyptic New York and the Slaughterhouse actually feels like a slaughterhouse. My favorite would have to be the Slaughterhouse. That place is just downright scary. The whole time I was in that stage I found myself admiring all the small details they had put into it. I never felt like I was just walking down a hall of corridors and rooms, I felt like I was inside one of the most twisted places on Earth. Yet I enjoyed every moment of it! There's also a carnival stage full of evil clowns and a hall of mirrors, and a level where the mansion has become alive, complete with a living beating heart!

However, the blood may be what really catches your attention.
Blood splatters everywhere, and the "Splatterkills" are especially pleasing. Even the screen isn't safe from the blood splatterings. Despite what people have been saying, the blood on the screen DOES NOT OBSCURE ANYTHING! The blood is transparent, and Rick (as well as his enemies) all stand out very easily, even when the screen is getting totally soaked. As a very cool nod to the classics, however, the side-scrolling stages let you smack the enemy INTO THE SCREEN! I've always wanted to do that in the original Splatterhouse.

The ravings of the mask are voiced by voice actor legend Jim Cummings, most well known in my mind as the voice of Robotnik in the Sonic cartoons that aired on ABC. He lends his voice very well to the Terror Mask, helping to give it a very sarcastic and evil tone. He contributes greatly to many of the more amusing moments in the game, taking lines that would otherwise be plain and making them into something special. Though I have to admit I can't help but feel his role as the Terror Mask is raping many of the characters from my childhood! Other voice actors in the game do a reasonably good job, but I have to say I felt they were completely overshadowed by Jim. His voice definitely steals the show.

The music (which has become somewhat of a big topic of discussion) is played by bands such as Lamb of God and Mastodon. Should the screaming lyrics not be to your taste, do not fret! The majority of the story mode plays instrumental music, so it shouldn't be a concern. While many have admitted they detest the songs used in the game, I actually enjoyed some of them. They took a while to grow on me, but as time went on I found myself liking songs such as "Must Kill". Even if you don't like the songs, you have to admit the music does fit in very well with the overall atmosphere of the game.

The remaining sounds in the game are creepy and lend an overall twisted feel to the game.
Sound effects during battle are great as well. The sound of skulls being crushed, limbs being torn off, and the screams of my enemies all sounded wonderful, and really added to the experience. Some of the stages really creeped me out playing the game with headphones on. The Slaughterhouse in particular seems to really stand out in the sound department, with the sound of low moans and groans being heard nearby, most likely coming from the tortured creatures that are on the receiving end of the facility's horrors. A certain iconic Splatterhouse foe commands the factory as well, and you'll know who he is when you hear him coming! I also enjoy the sound cues the mask gives you. They've saved my hide many times. The mask will warn you of incoming attacks, give you advice, and in the Slaughterhouse (Did I mention that I love this level?) the mask will help you cross dangerous rooms by letting you know when it's time to cross over. The mask is indeed man's best friend. Splatterhouse just wouldn't be Splatterhouse without endless beat-em-up action though, now would it? This is where the game really shines, in my opinion. While you can explore the mansion to find hidden goodies such as pictures of Jennifer (I'll let you find out for yourself what kind of pictures they are) or spot various horror movie easter eggs, the bulk of the game consists of Rick smashing his foes into a pile of red goo. In this regard Splatterhouse definitely excels. Rick is a giant, and as a result he really has the power to obliterate his foes in a way players may not be used to. While many other games of this type rely on dodging, this game relies more on the player overpowering even large numbers of foes using an arsenal of attacks. At the start, Rick can only use light and heavy attacks, as well as a grapple attack. However, as you spill more blood, you can buy more moves, and this is when the gameplay really gets interesting.
Many of the moves in this game make Rick much more menacing than any of the demons he's fighting! One move allows Rick to grab his adversary, spin them around (damaging anything in the path), and then brutally slam the enemy into the ground, most likely killing it. Another move allows Rick to rip an enemy's head or arm off (WITHOUT it being a splatterkill, no less) and then use the body part to clobber everything. My personal favorite attack is an upgrade to the quick (light) attacks which allows Rick to lay into the enemy with a barrage of fast punches. The upgrades you can buy consist of health upgrades, combo extensions, entirely new moves, and even weapon upgrades. Rick isn't entirely unstoppable though. In fact, he can be considered downright fragile. Despite having such a large frame, Rick takes a lot of damage and can die quickly if overwhelmed. While he does learn attacks later in the game to handle mobs of enemies, his best defense is his ability to heal. If one of his limbs is torn off, he can force it to regrow or let it regrow naturally. If he takes too much damage, Rick can cause his bones to explode out of his body (he looks kind of like a sea urchin when he does this!) and blood from his enemies is drained into his body, allowing him to heal completely. And if Rick is REALLY pissed, he can transform into his new monster form (berserker mode) and reduce everything around him to bloody particles. There's nothing more satisfying than being on the verge of death, cornered by your enemies, only to completely turn the tables on them by transforming into a hulking vehicle of death and destruction. The overall combat experience in this game can be summed up as "satisfying". I love smashing the bad guys! RICK SMASH!! AEERGGHH!!!! Classic weapons like the 2x4 return (with a bit of a buff) and can be used as well, though they will break after being used for a while.
Strangely enough, I found body parts to be the most effective weapons because they don't break as easily and give you more blood. The shotgun and chainsaw are the best for harvesting blood, though, as you can imagine. Why does blood matter, you ask? Good question. Powerful attacks, upgrades, and healing are all done with blood. Spill blood, and the mask harvests it automatically, filling a Necro Meter underneath your life bar. Keeping this bar full is the key to survival in Splatterhouse. In this way, the game encourages you to be as violent as possible. Splatterkills and weapons spill the most blood. Because of this, I find Splatterhouse to be the perfect game for venting your anger. In a bad mood? Pop in Splatterhouse and vent all that anger on the monsters. When your enemy is surrounded by a glowing red aura, that's when the enemy is primed for a splatterkill. And that's when things get messy! Splatterkills are done with movements of the analog sticks or presses of buttons. They may take a few tries to learn, but that's what video games are about. Many may be irked by the fact that they can't instantly do the splatterkills and lose their patience, but just keep practicing. In about 20 minutes I was pulling off splatterkills left and right, laughing like a maniac. Some of the splatterkills, while very violent, obviously weren't very imaginative, but there is one particular kill that is my favorite, and it's performed on a monster known on the Internet as "the butt monster". Trust me, the name will make sense when you do a splatterkill on this thing. That looks uncomfortable! Fighting isn't mindless, however. You'll need to dodge attacks, learn enemy patterns, and really adjust to your current situation. Rick is strong, but don't forget he's outnumbered 10 to 1 for most of the game. The camera makes it easy to keep an eye on them all as well.
Simply rolling the right analog stick around swivels the camera around Rick, allowing you to keep the action in front of you. The camera makes it tough to keep an eye on anything behind you, however, which some may consider a flaw. Personally, I don't have a problem with it. Lots of games are like this, including Resident Evil 4. This doesn't mean it's not annoying when a demon jumps you from behind and cuts your arm off, but it's also not a big problem. In this type of game you're going to have to deal with getting hit from behind once in a while; there are A LOT of enemies on the screen coming after you. Maybe not nearly as many as something like Dead Rising, but it's not unusual for Rick to be facing 10 or more enemies at once. During berserker mode, the camera is at its worst, unfortunately. Rick grows to about twice his size, yet the camera doesn't zoom out at all. This is annoying, but somewhat made up for by the fact that Rick is invincible during this time and his attacks have a wide reach. But it still would have been nice if the camera would zoom out just a bit when you transform. Aside from exploration and fighting, there are also platforming and side scrolling segments, and for the most part these are fine. However, these areas are also where the game starts to fall apart. The side scrolling portions are excellent as far as I'm concerned, and serve as a great throwback to the classics. My beef, however, is with the 3D platforming segments. They're cheap, to put it bluntly. Platforming in 3D is done mostly by using the analog stick to highlight a glowing foothold and jumping to it. Most of the time this works fine, but some parts are cheap, such as a section where the platform you're standing on is falling and the mask tells you to jump. Your first instinct is going to be to jump as soon as the platform starts falling, but that's not what you're supposed to do. You're supposed to press the jump button while in midair.
Rick will then do a strange floating motion and make it to the other side. What the hell???? Boss fights make a return in Splatterhouse, but sadly there aren't many. There are only about three "real" boss fights, with the others being glorified gauntlets of enemies. The bosses that we do get are satisfying, but late in the game the bosses stop appearing, which is a letdown. My favorite boss fight, however, is in, you guessed it, THE SLAUGHTERHOUSE! I think it's actually called the Meat Factory, but who cares? This boss was epic, and was one of my favorite moments in the game. It also happens to be the only boss that you get to fight in a side scrolling manner, which is far too short-lived. I'd have loved to have seen more side scrolling boss fights. The game could have really used more bosses, and probably some more enemy variety. Too many enemies are grouped into "types", and a lot of them are just stronger variations of enemies you've already pummeled hundreds of times before. But to be fair, that kind of comes with the beat-em-up territory. Familiar enemies from the series make an appearance, but there really weren't enough of them for my liking. I would have liked to have seen more classic Splatterhouse foes, like those water monsters from the water stages in the original Splatterhouse. I loved smashing those things into the background! Most of the enemies are new, such as the Aegis, and I can live with that, I suppose. My favorite enemy thus far is the fat fetus thing you run into in one of the side scrolling levels. Not only is it a classic nod to the fetus monsters from the original Splatterhouse games, but it's also a totally twisted enemy design that really works well with the game. Too bad you only run into them once! Boreworms are back as well, but they're no longer the menace they used to be. They've been reduced to blood fodder. You find them in breakable objects now, and you stomp on them.
Damn, I remember when the boreworms used to be big enough to damage Rick! There's also a snake enemy, and I have to admit that while I like the design of the snake, it's a pretty unoriginal enemy concept. "Oh yeah, let's have a GIANT SNAKE!!!!" I like to pretend it's supposed to be the new giant boreworm, but it's obviously a snake. One complaint I have is the number of "spike" areas. These areas require you to pick an enemy up and throw them (or slam them) into spikes. These areas are fine most of the time, but sometimes the spikes are high up and you have to throw the enemy into them from a certain distance. It distracts from the heart of the game, which is the fighting, and it just slows the game down in my opinion. A few of these would have been fine, but it seems like you run into far too many of them, especially at the beginning of the game. Copying and pasting is also an issue, with many rooms you go into being obviously copied, and some of the mechanics (such as the spikes I mentioned) getting copied too often as well. There is a certain amount of glee to be had in impaling an enemy ass-first on a spike, punching a switch, and watching them get blasted into more spikes/incinerated/butchered, though. But surely the team could have come up with something better than "let's make another room where you kick something into a spike!!!!" There's also a "Door Guardian", which is actually a giant mouth. Once again, you have to throw enemies into it. Seriously, as if the spikes weren't enough? Finally, there's a giant eyeball which makes a few appearances in the game. The giant eye is probably my favorite because it doesn't appear that often and doesn't require throwing an enemy into it. Sweet relief! Regardless of any flaws, Splatterhouse is a solid game, and too many people just won't give it a chance.
The game offers plenty of replay value with difficulties, a new game plus, a survival arena (S-ranking those can be a challenge), collectibles, the unlockable classic trilogy (which plays just fine for me despite people claiming emulation issues), etc., but people can't get off of their Halo high long enough to try this game. Combine that with the fact that Namco refuses to acknowledge this game exists, along with the fact that games these days cost way too much ($60 is a little much), and you've got a game that people are going to dismiss as crap and ignore, or wait until the price drops. The people who made this game worked hard to mold it into what it is, and it's a shame that this game probably won't sell enough to warrant a sequel, so Splatterhouse fans, you'd better enjoy this gasp of air. The series probably won't make a comeback. It was nice to see one more game in the series, especially with all the nods and references this one made. But I have a feeling this is the end. The series didn't get the revival I was hoping for, and there are still many things I would have liked to have seen done with Splatterhouse. While I enjoyed this game, I can't help but feel that the experience will be incomplete without a sequel. The game ends on a cliffhanger, and it just makes me want to see what's going to happen next. And part of me knows I will probably never find out what happens next in the story. The fates of Rick, the Mask, and Jennifer will probably go unknown, though we can still speculate on what "could have" happened next. It sucks to end this review on a sad note, but the fact is a sequel is nearly impossible. Namco is having hard times right now, and the future of Splatterhouse looks pretty bleak. After finally getting all the achievements, getting 100%, beating the game three times, etc., I finally placed the game on my bookshelf and took a moment to reflect on the series.
I'm happy that the series got one last hurrah before it sank, but with all the needless hate this game gets, the bad reviews from popular websites, and the overall lack of public interest in the game, I feel like this wasn't the way I wanted the series to go out. I guess Splatterhouse is doomed to be a cult classic series that will never get the recognition I feel it deserves. On that note, I urge people to buy this game. Buy the DLC packs too. Do everything you can to help this series. I don't want to see it end again so soon. I want to see a sequel; I want to see what happens. More importantly, I want to see Splatterhouse (and the team behind it) become successful and become a mainstream product. It deserves better than this. Reviewer's Rating: 4.0 - Great Originally Posted: 12/02/10 Game Release: Splatterhouse (US, 11/23/10)
http://www.gamefaqs.com/xbox360/946605-splatterhouse/reviews/144356
Instead of thinking of recently released Visual Studio 2010 as just another integrated development environment, Microsoft is pitching it as a full-fledged platform for developing all things Windows -- desktop, Web, mobile, enterprise, and everything in between. But being "full-fledged" requires tools and features, and lots of them. Visual Studio 2010 has the features you'd expect of any IDE, like an editor and debugger. It also has visual design tools. But what makes it more than just another IDE is extensibility. Like previous versions, Visual Studio 2010 accommodates third-party and roll-your-own macros, plug-ins, add-ins, and packages. Being able to adapt tools to fit the development environment instead of the other way around is a powerful capability, particularly when it comes to programmer productivity. All this feature richness is great, but at some point it runs the risk of feature bloat, bogging down resources and performance in the process. Remember the Ada programming language? Oft described as having everything including the kitchen sink, Ada was ahead of its time with object-oriented and other capabilities. But its plethora of features was more than most developers could handle, and it never went mainstream. That's something the Visual Studio team should keep in mind. That said, to find out more about Visual Studio 2010's features, I asked top-flight Windows programmers what they like best about VS2010.

Code Contracts Guarantee

Visual Studio 2010's Code Contracts is a .NET 4.x feature with tremendous potential. Code Contracts are methods developers use to define the terms of service of a given class and its methods. They're based on Bertrand Meyer's Design by Contract concept, where each method indicates the conditions that must be verified for the code to execute, the conditions that will be verified after the execution completes, and the conditions that never change during the life of any class instance.
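Code Contracts themselves are a .NET 4 library, but the Design by Contract idea behind them -- check a precondition on entry and a postcondition on exit -- is easy to see in miniature. The sketch below is plain Python, not the Code Contracts API; the decorator names `requires` and `ensures` are made up for illustration:

```python
# Minimal sketch of Meyer's Design by Contract: one decorator checks
# a precondition before the call, another checks a postcondition on
# the return value. (Illustrative only; not the .NET Code Contracts API.)
def requires(pred, msg="precondition failed"):
    def deco(fn):
        def wrapper(*args, **kwargs):
            assert pred(*args, **kwargs), msg   # check inputs first
            return fn(*args, **kwargs)
        return wrapper
    return deco

def ensures(pred, msg="postcondition failed"):
    def deco(fn):
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            assert pred(result), msg            # then check the result
            return result
        return wrapper
    return deco

@requires(lambda x: x >= 0, "x must be non-negative")
@ensures(lambda r: r >= 0, "result must be non-negative")
def integer_sqrt(x):
    # Largest n with n * n <= x.
    n = 0
    while (n + 1) * (n + 1) <= x:
        n += 1
    return n
```

The real Static Checker verifies such conditions at build time rather than at runtime, but the division of labor is the same: the contract states what must hold, and a checker reports any call that violates it.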
Code Contracts let you strictly define the specs for each class and method and go a long way toward proving the semantic correctness of code. Once you define Code Contracts, you can use them as exception handlers in your code or just keep them as specifications. Visual Studio 2010's Static Checker analysis tool uses Code Contracts. It runs in the background for each build, checking whether each method gets and returns proper values based on the declared contract. The result? You get warned at compile time about code that compiles okay but will fail at runtime. --Dino Esposito, software consultant

Intellisense Makes Sense

We already know that Intellisense, which displays a list of every accessible element in a namespace, class, and object, can have a big impact on developer productivity. Improvements to it in Visual Studio 2010 promise to make developers even more productive. In previous Visual Studio versions, even after you entered characters to refine an Intellisense search list, you still had to search through the list for the grouping you were looking for. For instance, if you wanted to see all classes that contain the word "Exception" in the System namespace, you could refine the Intellisense list. But the only way to actually find the group of classes with "Exception" in the System namespace was to manually scroll through all the Intellisense items to find where your target items were grouped. With the improved Intellisense in Visual Studio 2010, if you want to find all the classes in the System namespace that contain "Exception," all you do is type "System.Exception," and the list is populated with every class in the System namespace that contains the word "Exception." Essentially, Intellisense now performs a strstr-style search instead of a strncmp-style search. It's a simple improvement that leads to great productivity gains.
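The strncmp-versus-strstr distinction is simple to demonstrate. The sketch below is not the Intellisense implementation -- just the two filtering styles applied to a small, made-up symbol list:

```python
# Two ways to filter a completion list (symbol names are hypothetical;
# this only illustrates the matching styles, not Intellisense itself).
symbols = [
    "System.ArgumentException",
    "System.Exception",
    "System.InvalidOperationException",
    "System.String",
]

def prefix_filter(items, query):
    # Old, strncmp-style: the query must match from the start of the name.
    return [s for s in items if s.startswith(query)]

def substring_filter(items, query):
    # New, strstr-style: the query may match anywhere in the name.
    return [s for s in items if query in s]
```

With the query "Exception", the prefix filter finds nothing in this list, while the substring filter returns every exception type -- which is exactly why the change feels like such a productivity gain.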
--Robby Powell, group product manager, GrapeCity

Do More With Multimonitors

Multiple monitor support in Visual Studio 2010 lets me use both my monitors and see code in full screens. The split screen feature that came out with Visual Studio 2008 was nice, but I could only work on one form at a time and on one monitor. Because I had to keep scrolling the different split screens, I didn't use that feature to its full extent. Multiple monitor support will let me view as much code as possible using both my monitors. And it's simple to use; I can take any embedded tab item in Visual Studio -- a designer, code, or markup -- tear it from the main window and drag it anywhere I want. --Jason Beres, product management director, Infragistics

Think Parallel

Visual Studio 2010's debugging windows -- Parallel Stacks and Parallel Tasks -- provide invaluable information about the way tasks and threads interact to run code in parallel. The Parallel Tasks window displays a list of all the tasks and their status -- scheduled, running, waiting, waiting-deadlocked, and so on -- providing a snapshot of what's going on with each one. You can order and group the information shown in the windows. You can double-click on a task name to access the code that's being run by it. The Parallel Tasks grid shows a column, Thread Assignment, that displays the ID for the thread shown in the Threads window, letting you see which managed thread is supporting a task's execution. The Parallel Stacks window displays a diagram with all the tasks or threads, their status, and their relationships. Switch from the Tasks view to the Threads view to see the whole map of what's going on with tasks and managed threads in your parallelized code. --Gaston Hillar, IT consultant

Silverlight Shines

The single most useful new feature is the design-time support and property window available for Silverlight applications.
While I use Expression Blend a lot and still prefer it for creating the initial layout of screens, the integrated Visual Studio 2010 visual designer for Silverlight definitely saves time. Silverlight 3 had no designer support within Visual Studio, so having a way to visually move controls around, change properties, and even hook up data bindings is welcome. I can open an XAML file and directly adjust properties of controls. Data bindings can be set directly through the data-binding window. If you're a Silverlight developer, you'll use the new visual designer a lot. It's not as powerful as Expression Blend and doesn't let you do things like edit control templates, but it does provide a more efficient development workflow that boosts productivity. --Dan Wahlin, founder, The Wahlin Group

Built-In UML

With UML design tools now built in, no longer do you need to depend on Visio, Rational Rose, or other UML design tools to create UML diagrams. Now you can do it from within Visual Studio. Visual Studio 2010 provides support for Use Case, Class, Sequence, Activity, Component, and Layer diagrams. You can use the built-in UML modeling tools for conceptual and logical analysis during the software development cycle. And there's a new project type called "Modeling Projects" in your "Add New Project" dialog to control and organize the UML diagrams. Moreover, you can save your UML diagrams as images or in XML Paper Specification (.xps) format. --Joydip Kanjilal, software consultant

F# Made Easier

When learning a new programming language, immediate feedback from an interactive console helps. That's certainly the case with the support provided for F#. Modern functional languages such as F# are becoming the paradigm of choice for working with multicore processors. Visual Studio 2010's F# Interactive window lets you run F# code interactively or execute F# scripts. It executes a read, evaluate, and print (REPL) loop cycle for the F# language.
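The REPL cycle itself is language-agnostic. As a rough illustration only (plain Python, not fsi.exe or F# Interactive), a single read-evaluate-print step might look like this:

```python
# Generic sketch of one read-evaluate-print step, the cycle an
# interactive console such as F# Interactive runs repeatedly.
# (Illustrative Python, not the fsi implementation.)
def repl_step(line, env):
    """Evaluate one line and return the text a console would echo."""
    try:
        value = eval(line, env)      # expressions produce a value...
    except SyntaxError:
        exec(line, env)              # ...statements just execute
        value = None
    return None if value is None else repr(value)

env = {}
repl_step("x = 21", env)   # a statement: binds x, echoes nothing
repl_step("x * 2", env)    # an expression: returns "42"
```

A real console wraps this step in a loop -- read a line, evaluate it, print the result, repeat -- with F# Interactive waiting for the ";;" terminator before each evaluation. The shared `env` is what gives later lines access to earlier definitions.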
Granted, you can run the fsi.exe console application, but the F# Interactive window is more convenient and instructional. To access this new window, select View, Other Windows, F# Interactive in the IDE's main menu. Once you enter two semicolons (;;) to terminate a line or several lines of code, F# Interactive attempts to compile the code. If successful, it executes the code and prints the signature of the types and values that it compiled. Of course, the interpreter will print error messages if something goes wrong. Code entered in the same session has access to previously entered constructs, a useful feature for testing code before creating an F# program. --Gaston Hillar, IT consultant

Postmortem Debugging

If you are like me, you spend a great deal of time doing "postmortem" debugging. What that means is that I don't debug a live process per se, but rather a snapshot of a problematic process. I spend almost all of my time debugging these problems using the Debugging Tools for Windows debuggers (i.e., ntsd, cdb, windbg), which are incredibly powerful debuggers with unprecedented commands. At times, though, it can get a little tedious, and I find myself wondering why I can't have the Visual Studio debugging experience while debugging some of these issues. Well, I wonder no longer. Visual Studio 2010 supports postmortem debugging (i.e., crash dump debugging) with all the bells and whistles that go along with the Visual Studio IDE experience. Switching between frames in a call stack? No problem. The call stack window is just a click away. Do you want source code access? No problem. Set up your source path and Visual Studio will automatically bring up the source code. --Mario Hewardt, author of "Advanced .NET Debugging"

More Accurate Multi-targeting

Multi-targeting is a feature that lets you develop applications that target various versions of the .NET Framework -- versions 2.0, 3.0, 3.5, and now 4.0. While multi-targeting was available with Visual Studio 2008, it had its warts.
For instance, the IDE would on occasion display information not available in the particular version of the .NET Framework your application was targeting. With Visual Studio 2010, you have in your toolbox only items that are specific to the particular version of .NET Framework that you're focusing on. So, if you are developing an application targeted at .NET Framework 2.0, you won't see your toolbox displaying types that belong to the later versions of the framework. Moreover, in Visual Studio 2010, if you change your project to retarget another version of the .NET Framework that is not available in the system you are working on, you will be prompted to choose the version you need; i.e., the target framework. --Joydip Kanjilal, software consultant
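For what it's worth, the retargeting described here ultimately comes down to a single MSBuild property in the project file. A minimal fragment from a VS2010-era .csproj might look like this (surrounding project content omitted; the version strings shown are the standard values):

```xml
<!-- Retargeting in a VS2010-era project file: the framework version
     is one MSBuild property inside a PropertyGroup. -->
<PropertyGroup>
  <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
  <!-- or v2.0, v3.0, v3.5 to target an older framework -->
</PropertyGroup>
```

Changing this value (or the Target Framework dropdown in the project's properties page, which edits it for you) is what triggers the filtered toolbox and reference list described above.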
http://www.drdobbs.com/architecture-and-design/an-ide-with-lots-to-like/224201360
https://www.vankuik.nl/?search=%22git%22
I'm building an application that needs to distribute a standard file server across a few sites over a WAN. Basically, each site needs to write a lot of misc files of varying size (some in the 100s MB range, but most small), and the application is written such that collisions aren't a problem. I'd like to have a system set up that meets the following qualifications: Basically, something like a central NFS share would meet most of the requirements, however it would not allow the locally written data to stay local. All data from remote sides of the WAN would be copied locally all the time. I have looked into Lustre, and have run some successful tests with it, however, it appears to distribute files fairly uniformly across the distributed storage. I have dug through the documentation and have not found anything that automatically will "prefer" local storage over remote storage. Even something that went with the lowest latency storage would be fine. It would work most of the time, which would meet this application's requirements. Some answers to some questions asked below: Shame about the Linux requirement. This is exactly what Windows DFS does. Since 2003 R2, it does it on a block-level basis, too. Some questions: How many "server" nodes are you thinking about having participate in this thing? What's the WAN connectivity topology like-- hub and spoke, full mesh? How reliable is it? Do you expect clients to fail over to a geographically non-local server in the event the local server fails? Windows DFS-R certainly would do what you're looking for, albeit for some potentially hefty licensing costs. You say that collisions aren't a problem and you don't need a distributed lock manager, so you could do this with userland tools like rsync or Unison and just export the resulting corpus of files with NFS to the local clients.
It's ugly, and you'd have to knock together some kind of system to generate a replication topology and actually run the userland tools, but it would certainly be cheap as far as licensing cost goes. Have you considered AFS? The Andrew File System (AFS) is a distributed networked file system which uses a set of trusted servers to present a homogeneous, location-transparent file name space to all the client workstations. As I understand it, most of the recent development has been behind the OpenAFS project. I can't pretend to be familiar enough with the project to know if the "preferred locality" feature is available, but otherwise it sounds like a good fit. Have you looked at OST pools in Lustre? It won't be automatic but with OST pools you can assign directories/files to specific OST/OSSes - basically policy based storage allocation, rather than the default round-robin/striping across OSTs. So you could set up a directory per site and assign that directory to the local OSTs for that site, which will direct all I/O to the local OSTs. It will still be a global namespace. There's a lot of work going into improving Lustre over WAN connections (local caching servers and things like that) but it's all still under heavy development AFAIK. Maybe NFS but with Cachefs on the application servers will accomplish part of your goal. As I understand it everything written will still go to the central server, but at least reads could end up being cached locally. This could potentially take a lot of delay off of reads depending on your usage patterns. Also, maybe UnionFS is worth looking into.
With this I think each location would be an NFS export, and then you could use UnionFS at each location to have that and all the other NFS mounts from the location appear as one filesystem. I don't have experience with this though. You could look into DRBD to replicate the disks. This is a Linux high-availability solution which just recently made it into the kernel. However, this has some limitations: If you want to keep it simple then have a look at rsync, which solves a lot of problems and can be scripted. Check out chironfs. Maybe it can do what you want, on a file system basis. Btsync is another solution that I've had good experience with. It uses the BitTorrent protocol to transfer the files, so the more servers you have, the faster it is at synchronizing new files. Unlike rsync-based solutions, it detects when you rename files/folders, and renames them on all the nodes instead of delete/copy. Your btsync clients can then share the folders on a local network. The only downside I found (compared to MS DFS) is that it will not detect a local file copy. Instead it will interpret it as a new file and upload it to all the peers. So far btsync seems to be the best synchronization solution, and it can be installed on Windows, Linux, Android, and ARM devices (e.g. NAS)
http://serverfault.com/questions/126037/geographically-distributed-file-system-with-preferred-locality
Duh, I need to type faster. Cairyn @Cairyn Posts made by Cairyn - RE: Dialog Freezes when checking for Message() - RE: Dialog Freezes when checking for Message() @bentraje Try the BFM_xxx messages for gadget interaction, not MSG_CHANGE. (and no KillEvents in your case) - RE: Dialog Freezes when checking for Message() Without explicitly testing: You are changing the slider (InitValues) while you are still checking the message, which probably generates a new message, which calls Message again, etc. The least you can do is NOT call InitValues if the slider is already at 0.0. I'm not sure anyway whether the code will do what you want, as any tiny movement of the slider will accumulate into a finger rotation (even while the user is still dragging the slider). I bet it would just start spinning out of control, or become unresponsive... - RE: Threading & job questions well, as for Q1 you did write MAXON_IMPLICITE with an E at the end in your code, so the compiler doesn't know that... - RE: Bend Deformer Using C++ For simplification, we'll assume that world, bend, and object coordinates are all the same. Now you can draw a circle with radius r and check where a point p is going. Let's just look at the point p = (L,0) - the farthest centered point of our object, or the center of the coordinate system of our object if you want to move a whole object. Then p'.x is going to be r * sin(alpha), and p'.z will be r - r * cos(alpha), as per the definitions of sine and cosine. And that's all there is! Okay, after you did the necessary coordinate transformations between the base coordinate systems and adapted the calculation for other points than p, but that's homework. Here I made a little scene where a Python tag does the calculation. A flattened cube is bent by a bend deformer; a torus primitive is moved accordingly to stay flush with the left edge of the cube.
Note that the calculation for the torus position and rotation is done solely through the Python tag, whose only input is the bend deformer's angle, so the Python tag is actually replicating the bend deformer functionality. 20200104_CenterOfBendDeformer.c4d

The main part is the code of the Python tag (I'm calculating with radians here, so don't wonder why there is no pi; also the bend deformer in my scene is pointing to the negative x, so we have additional minus signs):

import c4d
from math import sin, cos

def main():
    torus = op.GetObject()
    source = torus.GetPred().GetDown()
    strength = source[c4d.DEFORMOBJECT_STRENGTH]
    originx = -400.0
    originz = 0.0
    if strength == 0.0:
        targetx = originx
        targetz = originz
        roth = 0.0
    else:
        alpha = strength
        radius = originx / alpha
        targetx = radius * sin(alpha)
        targetz = -(radius - radius * cos(alpha))
        roth = -alpha
    torus[c4d.ID_BASEOBJECT_REL_POSITION, c4d.VECTOR_X] = targetx
    torus[c4d.ID_BASEOBJECT_REL_POSITION, c4d.VECTOR_Z] = targetz
    torus[c4d.ID_BASEOBJECT_REL_ROTATION, c4d.VECTOR_X] = roth

Obviously this is a very simplified sample that works only for this scene, but you can adapt the calculation for yours. - RE: Bend Deformer Using C++ Okay, calculate with me... The arc length L for an angle alpha on a circle with radius r is L = 2 * pi * r * alpha / 360 (assuming alpha is in degrees). alpha is known; it is the bend angle. The arc length is also known - it's the distance of the bending point to the center of the bend (assuming we want to keep that length constant). The radius is actually the wanted value. So we transform:

L * 360 = 2 * pi * r * alpha
r = (L * 360) / (2 * pi * alpha)

Obviously, this is numerically bad since with alpha approaching 0, r approaches infinity. But maybe we can solve this in the transformation and get rid of either alpha or r. The rotation of the target coordinate system (the object to place on the bend) is obviously alpha itself.
The position p of the target needs to be moved according to sin and cos of alpha on the circle with radius r. Remember that all of this needs to be expressed in the coordinate systems of either the object, the world, or the bend deformer, which makes the calculation perhaps a bit awkward. - RE: Bend Deformer Using C++ @zipit @mfersaoui not quite; the pivot of the rotation is also changing (only along one axis, but still...), and it's not a linear movement but some exponential one between 0 (approaching only) and infinity (when no bend). The infinity one may also be tricky numerically. I have an idea how to solve (my Google-fu seems weak on this) but haven't tried in praxis yet. - RE: Bend Deformer Using C++ @mfersaoui If you're generating the objects on the fly, you wouldn't need a Python tag but could place the objects in their "bent" placement already, so that would not be a problem. Of course, if you can't replicate the bending algorithm the question is moot... but maybe Google knows the fitting formula? - RE: Bend Deformer Using C++ If I understand the code correctly, you are only using it for generating the structure, and you rely on the Bend deformer completely to create the repositioning. What if you lose the bend deformer, and use a Python tag instead that creates the bend effect "manually" - based on not the polygons (which you don't deform in your screenshot anyway) but on the object matrix of the affected objects, which would then apply to the lights as well? Is that wavy motion in the screenshot all you want to generate, or is there more to it? - RE: Disabling a Protection Tag @blastframe good... although the question would be why it works for me without that. Then again it is entirely possible that some other plugin that I have installed has the side effect of updating all tags under certain circumstances...
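The arc-length geometry discussed in the bend-deformer posts above can be sanity-checked outside Cinema 4D with a few lines of plain Python. This is a hypothetical helper, not code from the thread; it works in radians directly, so the 360-degree factor from the derivation disappears:

```python
import math

def bend_target(arc_length, alpha_deg):
    """Where a point at distance arc_length from the bend origin ends up
    after bending by alpha_deg degrees; the arc length stays constant."""
    if alpha_deg == 0.0:
        return arc_length, 0.0          # no bend: the point stays on the axis
    alpha = math.radians(alpha_deg)
    radius = arc_length / alpha         # radians form of r = (L*360)/(2*pi*alpha)
    x = radius * math.sin(alpha)
    z = radius - radius * math.cos(alpha)
    return x, z

# A 90-degree bend wraps the length onto a quarter circle,
# so x == z == radius == 2 * arc_length / pi.
```

The zero-angle special case mirrors the `strength == 0.0` branch in the Python tag above, avoiding the division by zero that the "r approaches infinity" remark warns about.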
https://plugincafe.maxon.net/user/cairyn
MultiPointTouchArea and PinchArea not working in latest Beta2

Multi-touch events seem to no longer be working in Beta2 in QML, at least on Mac OS X 10.7.5. Running the same code against Beta1 works as expected. After a small amount of debugging, I have a feeling that not even the C++ for MultiPointTouchArea is receiving the events. Also, not sure if it is related, but I receive these warnings when I build the project (a brand new QtQuick 2 Application):

ld: warning: directory not found for option '-F/Users/kit/Qt5.0.0beta2/5.0.0-beta2/clang_64/qtdeclarative/lib'
ld: warning: directory not found for option '-F/Users/kit/Qt5.0.0beta2/5.0.0-beta2/clang_64/qtbase/lib'
ld: warning: directory not found for option '-F/Users/kit/Qt5.0.0beta2/5.0.0-beta2/clang_64/qtjsbackend/lib'

The code I tested with:
@
import QtQuick 2.0

Rectangle {
    width: 360
    height: 360

    MultiPointTouchArea {
        anchors.fill: parent
        maximumTouchPoints: 3
        touchPoints: [
            TouchPoint { id: p1 },
            TouchPoint { id: p2 },
            TouchPoint { id: p3 }
        ]

        Rectangle {
            width: 30; height: 30
            color: "red"
            opacity: p1.pressed ? 1 : 0.3
            x: p1.x; y: p1.y
        }
        Rectangle {
            width: 30; height: 30
            color: "blue"
            opacity: p2.pressed ? 1 : 0.3
            x: p2.x; y: p2.y
        }
        Rectangle {
            width: 30; height: 30
            color: "green"
            opacity: p3.pressed ? 1 : 0.3
            x: p3.x; y: p3.y
        }
    }
}
@
Update: This is still a problem in RC1.
https://forum.qt.io/topic/21848/multipointtoucharea-and-pincharea-not-working-in-latest-beta2
I am currently working on a school project where I'll be using motion capture and Unity. It's made for the elderly, to improve their cognitive and motoric functions. I want Unity to be able to record their movements into a CSV file to see how well they are doing. I want the x, y and z co-ordinates recorded against time in Excel. I'm using Perception Neuron for my motion capture, which has a total of 32 sensors. The 3D model in Unity has 32 different parts/limbs that move, including the fingers. I added a picture of it here: Now in this code I can't seem to get the co-ordinates of a limb (I'm just trying to get the co-ordinates of a single limb for now). I tried splitting the Vector3 into x, y, z co-ordinates to get separate co-ordinates, and also because I want the values to be floats, but it does not seem to work. Could anyone help me with this? Here's my code:

using UnityEngine;
using System.Collections.Generic;
using System;
using System.IO;

public class Test : MonoBehaviour
{
    float timer = 100.0f;
    public TestSO so;
    StreamWriter motionData;

    public void Start()
    {
        string fullFilename = @"C:\Users\Administrator\Desktop\CsvPlz.csv";
        motionData = new StreamWriter(fullFilename, true);
        InvokeRepeating("wolf", 0.0f, 0.5f);
    }

    void wolf()
    {
        timer += Time.deltaTime;
        string delimiter = ", ";
        var position : Vector3;
        var x: float = position[0];
        var y : float = position[1];
        var z : float = position[2];
        if (timer > 1.0f)
        {
            timer -= 1.0f;
            string dataText = ("{1}",x) + delimiter + ("{2}", y) + delimiter + ("{3}", z);
            motionData.WriteLine(dataText);
        }
        motionData.Close();
    }
}

Answer by Bunny83 · Jul 18, 2018 at 10:22 AM
Uhm, where do you actually reference your limbs? Currently you seem to create a local position variable which you never initialize and then read its values. This makes no sense. Apart from this your code would never compile. You're mixing C# with UnityScript. UnityScript is a deprecated language anyways.
Furthermore you have another logical error in your code. You open a file stream in Start but you're closing it inside your "wolf" callback. That means when wolf is called the second time the stream is closed and you can no longer write to the file. A second logic error is that you use InvokeRepeating with a repeat rate of 2 calls per second, but inside the callback you use Time.deltaTime to increase your timer. This makes no sense at all, as deltaTime only makes sense when used every frame. Finally this line makes no sense: string dataText = ("{1}",x) + delimiter + ("{2}", y) + delimiter + ("{3}", z); This looks like you wanted to use something like string.Format but as it's written now it just makes no sense and wouldn't compile. We don't know what your skeleton structure looks like or how you want to translate the hierarchical structure into a linear list. You have to provide more information on your setup and how the final data should look. Do your bones / limbs have any names? Do you only want the worldspace positions of each limb? I'm planning on getting the co-ordinates of one limb for now. When I achieve that, I'll use this code for the rest of the limbs. Ultimately this is what I want: I want all x, y, z co-ordinates of each limb plotted against time in an Excel sheet. This is what my Unity project looks like: There's a total of 59 body parts in this project. The line: string dataText = ("{1}",x) + delimiter + ("{2}", y) + delimiter + ("{3}", z); is what I'm having trouble with, because I don't know how to extract the co-ordinates from Unity and into the CSV file. I also changed both these parts: var position : Vector3; var x: float = position[0]; var y : float = position[1]; var z : float = position[2]; and replaced them with this: string dataText = (Robot_RightShoulder.transform.position.x) + delimiter + (Robot_RightShoulder.transform.position.y) + delimiter + (Robot_RightShoulder.transform.position.z); But it's still not working.
https://answers.unity.com/questions/1530684/store-positional-data-into-a-csv-file.html?sort=oldest
In Windows XP SP3, Python 2.6.3, PyOpenGL 3.0.1, installed from the .exe on SourceForge.

1. For glut32.dll the DLL lookup order seems to be:
i) %windows%\system32
ii) site-packages\OpenGL\DLLS
iii) the CWD (current working dir) is irrelevant

I'm unsure if this was intended, but it seems weird: in the worst case you have to touch both %windows%\system32 and site-packages to ensure the desired glut flavor is loaded (see 3., concrete case).

2. freeglut.dll is a replacement for glut32.dll that has some extra functions, like glutBitmapString. The glut32.dll provided by PyOpenGL-3.0.1.win32.exe doesn't have the extra functions, thus you can get a traceback like:

[snipped] OpenGL.error.NullFunctionError: Attempt to call an undefined function glutBitmapString, check for bool(glutBitmapString) before calling

Should the NullFunctionError handler check if the name belongs to the freeglut-exclusive functions and inform 'you need to use freeglut...'? Wouldn't it be better functionality to load a specific glut flavor, like:

import OpenGL
OpenGL.glut_desired = 'freeglut'
...

Then in site-packages\OpenGL\DLLS you can include both flavors, glut32.dll and freeglut.dll, and at run time you bind according to .glut_desired.

3.
concrete case: a pyweek10 entry, look at this thread if you want the code. I will reproduce my comment there:
"""
At first it crashed; made some adjustments and then succeeded in Windows XP, pyOpenGL 3.0.1 (the latest released on SourceForge). At the start it crashes with:

D:\tmp\_pyweek10_previa\teetering-towers-2\teetering-tower-2>run_game.py
Traceback (most recent call last):
  File "D:\tmp\_pyweek10_previa\teetering-towers-2\teetering-tower-2\run_game.py", line 268, in <module>
    tower.render()
  File "D:\tmp\_pyweek10_previa\teetering-towers-2\teetering-tower-2\run_game.py", line 116, in render
    glutBitmapString(GLUT_BITMAP_HELVETICA_18, "%.1f" % r)
  File "C:\Python26\lib\site-packages\OpenGL\platform\baseplatform.py", line 336, in __call__
    self.__name__, self.__name__,
OpenGL.error.NullFunctionError: Attempt to call an undefined function glutBitmapString, check for bool(glutBitmapString) before calling

Workaround: seems that glutBitmapString is not available in the standard glut32.dll shipped in the win binaries for pyOpenGL. Grepping the pyOpenGL sources for the func name suggested that freeglut was needed. Got freeglut (I chose the MSVC version) from
copied freeglut.dll to site-packages\OpenGL\DLLS, renamed to glut32.dll
in %windir%\system32, renamed glut32.dll to something else
Done, it runs (and the text shows)
"""
Stashing the freeglut.dll (renamed or not to glut32.dll) in the script directory (thus in the CWD) doesn't work. If you happen to have glut32.dll in %windows%\system32, it will hide the glut32.dll in site-packages\OpenGL\DLLS. So a Python app needs to modify system-wide and Python-wide settings in order to specify the desired glut variant (!). And if packaging with py2exe or similar, surely my expectation would be that the DLLs will come from the narrower environment: CWD, site-packages\OpenGL\DLLS, system32. Should at least pyOpenGL try to load the DLL first from the CWD?

-- claxo
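The guard the traceback recommends ("check for bool(glutBitmapString) before calling") can be sketched in a library-agnostic way. PyOpenGL's null-function wrappers evaluate falsy, which is what makes the bool() check work; the helper name call_glut and the fallback idea below are illustrative, not from the original mail:

```python
def call_glut(func, *args, fallback=None):
    # PyOpenGL exposes missing GLUT entry points as objects that evaluate
    # falsy, so truth-testing func tells us whether the loaded glut32.dll /
    # freeglut.dll actually provides the function.
    if func:
        return func(*args)
    if fallback is not None:
        return fallback(*args)
    raise RuntimeError(
        "GLUT entry point unavailable; a freeglut build of glut32.dll "
        "may be required (see the DLL lookup-order notes above)")
```

With PyOpenGL this might be used as call_glut(glutBitmapString, font, text, fallback=per_char_renderer), where per_char_renderer would fall back to the classic per-character glutBitmapCharacter API.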
http://sourceforge.net/p/pyopengl/mailman/pyopengl-users/?viewmonth=201004&viewday=1
Hi. I have a simple Groovy unit test which does some logging that I'd like to see (see code below). My problem is that I can see the output from println statements, but not when I call logger.error. As you can see, I have ensured that my logging threshold is set to at least error... But I see no output. I explored the options in the 'Edit Configurations' window (Logs tab), but nothing seemed relevant. I'm wondering if I need to programmatically add slf4j's equivalent of a ConsoleHandler, as was described in this post > [Note: I actually tried the code in that post and it does give logging output, but I'm hoping I don't have to programmatically set this up in each of my unit test classes; that would add a lot of clutter] Thanx for your help in advance! chris

-- CODE --

import org.slf4j.Logger
import org.slf4j.LoggerFactory

class CloudRegistryTest extends GroovyTestCase {

    private static Logger logger = LoggerFactory.getLogger(CloudRegistryTest.class.name)

    public void testFoo() {
        if (logger.isErrorEnabled()) {
            println "error on "
        } else {
            println "error off "
        }
        println "responsehi ho"           // This shows up in the console window
        logger.error ">> responsehi ho"   // This does not show up
    }
}

Hi. Oh, I forgot to mention: I imported a simple Grails app from existing sources (that's how I set up my project). Also, I am using IDEA version 10 (which I just downloaded today). Thanks
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206252679-Groovy-unit-test-with-slf4j-logging-does-not-show-log-statements-in-the-console-window-
I'm trying to create a table that counts all orders and groups them by status, going from SQL to LINQ, to use in a bar graph with Google Charts.

Table:
Orders  Status
8       Created
3       Delayed
4       Enroute

SQL:
SELECT Count(OrderID) as 'Orders', order_status
FROM [ORDER]
WHERE order_status = 'Created' OR order_status = 'Delayed' OR order_status = 'Enroute'
GROUP BY order_status

controller:
public ActionResult GetChart()
{
    var Orders = db.Order.Select(a => new { a.OrderID, a.order_status })
                         .GroupBy(a => a.order_status);
    return Json(Orders, JsonRequestBehavior.AllowGet);
}

This is not displaying the correct results, as the LINQ seems to be wrong. Can someone please point me in the right direction? I am relatively new to this. Thanks in advance.

This should work:

var result = db.Order.Where(x => x.order_status == "Created" || x.order_status == "Delayed" || x.order_status == "Enroute")
                     .GroupBy(x => x.order_status)
                     .Select(x => new { order_status = x.Key, Orders = x.Count() });

Or if you prefer query syntax (note the "into g" continuation, which the grouping needs):

var result = from o in db.Order
             where o.order_status == "Created" || o.order_status == "Delayed" || o.order_status == "Enroute"
             group o by o.order_status into g
             select new { orderStatus = g.Key, Counts = g.Count() };

I think you want to group by Status and count the total number of orders in each group (I built a simple console program to demonstrate).
I suppose the data is:

Orders  Status
8       Created
3       Delayed
4       Enroute
2       Created
1       Delayed

Order.cs

public class Order
{
    public Order(int orderId, string status)
    {
        OrderId = orderId;
        Status = status;
    }

    public int OrderId { get; set; }
    public string Status { get; set; }
}

Program.cs

class Program
{
    static void Main(string[] args)
    {
        // Data
        var orders = new List<Order>
        {
            new Order(8, "Created"),
            new Order(3, "Delayed"),
            new Order(4, "Enroute"),
            new Order(2, "Created"),
            new Order(1, "Delayed"),
        };

        // Query
        var query = orders
            .GroupBy(x => x.Status)
            .Select(x => new { Status = x.Key, Total = x.Count() });

        // Display
        foreach (var item in query)
        {
            Console.WriteLine(item.Status + ": " + item.Total);
        }
        Console.ReadLine();
    }
}

The one you need to focus on is query. After using GroupBy, you will have a list of groups. For each group, the Key is the criterion used for grouping (here, the Status). Then we call Count() to get the total number of elements in that group. So, from the program above, the output should be:

Created: 2
Delayed: 2
Enroute: 1
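The grouping logic in the answers above is not specific to C#. As a cross-check of the expected counts, the same GroupBy-plus-Count can be written in a few lines of Python (illustrative only, not part of the original answers):

```python
from collections import Counter

# Rows mirroring the sample data in the console-program answer above.
orders = [(8, "Created"), (3, "Delayed"), (4, "Enroute"),
          (2, "Created"), (1, "Delayed")]

# GroupBy(status) + Count() in one step: tally the rows per status.
totals = Counter(status for _, status in orders)
# totals["Created"] == 2, totals["Delayed"] == 2, totals["Enroute"] == 1
```

Counter does the grouping and counting in a single pass, which is exactly what the SQL GROUP BY with Count(OrderID) computes.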
http://www.dlxedu.com/askdetail/3/42af1bef6fbb78a8400e63e6b40ddad6.html
A content type is a flexible and reusable template of type list item or document (or inherited from some other basic type available in SharePoint) that defines the columns and behavior for an item in a list or a document in a document library. A content type can also have receivers and workflows associated with it. You can create content types either with the out-of-box options available in your SharePoint site or by using the client and server object models.

Content type overview

A partial list of the built-in content types is shown in the following table. Notice that many of these base content types correspond to types of lists that you can create. This correspondence is by design. For more information, see Default List Content Types. You can determine the line of descent for a content type by carefully inspecting its content type ID. For example, notice that all of the content types descended from Item have IDs that begin with the ID for Item. The ID for a child content type is formed by appending information to the ID of the parent content type. For more information, see Content Type IDs. For a complete list of built-in content types, see the fields of the SPBuiltInContentTypeId class.

Content Type Groups

The built-in content types are organized in groups such as List Content Types, Document Content Types, Folder Content Types, and _Hidden. You can obtain the name of the group that a given content type belongs to by reading the Group property of an SPContentType object in server code or the same property of a ContentType object in client code. Content types that belong to the "_Hidden" group are not displayed in the user interface for users to apply to lists or use as the basis for other content types. For more information, see Content Type Access Control.
C#

using System;
using System.Collections;
using Microsoft.SharePoint;

namespace Test
{
    class Program
    {
        static void Main(string[] args)
        {
            using (SPSite site = new SPSite(""))
            {
                using (SPWeb web = site.OpenWeb())
                {
                    // Create a sortable list of content types.
                    ArrayList list = new ArrayList();
                    foreach (SPContentType ct in web.AvailableContentTypes)
                        list.Add(ct);

                    // Sort the list on group name.
                    list.Sort(new CTComparer());

                    // Print a report.
                    Console.WriteLine("{0,-35} {1,-12} {2}", "Site Content Type", "Parent", "Content Type ID");
                    for (int i = 0; i < list.Count; i++)
                    {
                        SPContentType ct = (SPContentType)list[i];
                        if (i == 0 || ((SPContentType)list[i - 1]).Group != ct.Group)
                        {
                            Console.WriteLine("\n{0}", ct.Group);
                            Console.WriteLine("------------------------");
                        }
                        Console.WriteLine("{0,-35} {1,-12} {2}", ct.Name, ct.Parent.Name, ct.Id);
                    }
                }
            }
            Console.Write("\nPress ENTER to continue...");
            Console.ReadLine();
        }
    }

    // Implements the Compare method from the IComparer interface.
    // Compares two content type objects by group name, then by content type Id.
    class CTComparer : IComparer
    {
        // The implementation of the Compare method.
        int IComparer.Compare(object x, object y)
        {
            SPContentType ct1 = (SPContentType)x;
            SPContentType ct2 = (SPContentType)y;

            // First compare group names.
            int result = string.Compare(ct1.Group, ct2.Group);
            if (result != 0)
                return result;

            // If the names are the same, compare IDs.
            return ct1.Id.CompareTo(ct2.Id);
        }
    }
}

When this application runs on a website that has only the built-in content types available, it generates the following output:

Document Content Types
Folder Content Types
Group Work Content Types
List Content Types
Special Content Types

What metadata do Content Types save? Simply put, a Content Type is a set of field definitions. How are they different from standard lists in SharePoint? Lists are specific to a location; content types are not.
Content types in fact represent the blueprint or schema definition that can be applied to a list, or any other type of library for that matter. Content types can also be scoped at site level such that they are available to an entire site hierarchy.

Create Content Type Out-of-Box
Create Content Type with Visual Studio 2010 (Server Object Model)
Create Content Type using Client Object Model

To create one in your SharePoint 2010 site, follow the steps below:
- Click Site Actions ➪ Site Settings.
- Under Galleries, click on "Site content types" and then click Create. See the screen below.
- Next, click "Create" and provide a name and description on the "New Site Content Type" page. Next, from the "Parent content type from" drop-down select "List Content Types" and from "Parent content type" select the "Issue" type. In Group, select "custom content types" or select a new group and give it a name.
- Now you can navigate to your list and click New. Your custom content type will be in the drop-down for your selection.

Sample Content Type
https://tekslate.com/create-content-type-sharepoint-2010/
Net::SSLeay::OO::Functions - convert Net::SSLeay functions to methods

use Net::SSLeay::OO::Functions 'foo';

# means, roughly:
use Net::SSLeay::OO::Functions sub {
    my $code = shift;
    sub {
        my $self = shift;
        $code->($self->foo, @_);
    }
};

This internal utility module distributes Net::SSLeay functions into the calling package. Its import method takes a callback which should return a callback to be assigned into the symbol table; not providing that will mean that the Net::SSLeay function is directly assigned into the symbol table of the calling namespace. If a function is passed instead of a closure, it is taken to be the name of an attribute which refers to where the Net::SSLeay magic pointer is kept.

The difference between the version of the installed handler function and the actual installed function is that the real one checks for OpenSSL errors which were raised while the function was called.

After the first argument, options may be passed:

Specify NOT to include some functions that otherwise would be; perhaps they won't work, perhaps they are badly named for their argument types.

Import the Net::SSLeay function called func_name, as the local method method_name. This is mostly useful for functions which were missing their prefix indicating the argument types.
http://search.cpan.org/~samv/Net-SSLeay-OO-0.02/lib/Net/SSLeay/OO/Functions.pm
Laptop Price Prediction – Practical Understanding of the Machine Learning Project Lifecycle

This article was published as a part of the Data Science Blogathon.

Machine learning is a branch of artificial intelligence that deals with building applications that can make future predictions based on past data. If you are a data science enthusiast or a practitioner, this article will help you build your own end-to-end machine learning project from scratch. There are various steps involved in building a machine learning project, but not all of them are mandatory in every single project; it all depends on the data. In this article, we will build a laptop price prediction project and learn about the machine learning project lifecycle.

Table of Contents
- Describing Problem Statement
- Overview about dataset
- Data Cleaning
- Exploratory Data Analysis
- Feature Engineering
- Machine learning Modeling
- ML web app development
- Deployment of Machine learning app

Problem Statement for Laptop Price Prediction

We will build a laptop price prediction project. The problem statement is that if a user wants to buy a laptop, our application should be able to provide a tentative price according to the user's configuration. Although it looks like a simple project of just developing a model, the dataset we have is noisy and needs lots of feature engineering and preprocessing, which is what makes this project interesting to build.

Dataset for Laptop Price Prediction

You can download the dataset from here. Most of the columns in the dataset are noisy and contain lots of information, but with the feature engineering we do, we will get good results. The only drawback is that we have relatively little data, yet we will still obtain good accuracy on it; more data would of course be even better. We will develop a website that predicts a tentative laptop price based on the user's configuration.
Basic Understanding of Laptop Price Prediction Data

Now let us start working on the dataset in our Jupyter Notebook. The first step is to import the libraries and load the data. After that we take a basic look at the data: its shape, a sample of rows, and whether there are any NULL values present in the dataset. Understanding the data is an important step in any machine learning project.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

data = pd.read_csv("laptop_data.csv")
data.shape
data.isnull().sum()

It is good that there are no NULL values. We need small changes in the Ram and Weight columns to convert them to numeric by removing the unit written after the value, so we perform data cleaning here to get the correct column types.

data.drop(columns=['Unnamed: 0'], inplace=True)

## remove GB and kg from Ram and Weight and convert the cols to numeric
data['Ram'] = data['Ram'].str.replace("GB", "")
data['Weight'] = data['Weight'].str.replace("kg", "")
data['Ram'] = data['Ram'].astype('int32')
data['Weight'] = data['Weight'].astype('float32')
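The same unit-stripping idea can be tried on standalone strings before applying it to the whole dataframe. The helper names and sample values below are made up for illustration; they are not taken from the dataset.

```python
def clean_ram(value):
    # "8GB" -> 8: drop the unit suffix and keep the integer
    return int(value.replace("GB", ""))

def clean_weight(value):
    # "1.37kg" -> 1.37: drop the unit suffix and keep the float
    return float(value.replace("kg", ""))

print(clean_ram("8GB"))        # 8
print(clean_weight("1.37kg"))  # 1.37
```

This is exactly the pattern the pandas `.str.replace(...)` followed by `.astype(...)` calls apply column-wide.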
EDA of Laptop Price Prediction Dataset

Exploratory data analysis is a process to explore and understand the data and its relationships in depth, so that the feature engineering and machine learning modeling steps become smooth and streamlined. EDA involves univariate, bivariate, or multivariate analysis, and it helps to prove our assumptions true or false; in other words, it helps us perform hypothesis testing. We will start from the first column, explore each one, and understand what impact it has on the target column. At the required steps we will also perform preprocessing and feature engineering tasks. Our aim in performing in-depth EDA is to prepare and clean the data for better machine learning modeling, to achieve high-performing, generalized models. So let's get started with analyzing and preparing the dataset for prediction.

1) Distribution of target column

When working with a regression problem statement, the distribution of the target column is important to understand.

sns.distplot(data['Price'])
plt.show()

The distribution of the target variable is skewed, and it is to be expected that low-priced commodity laptops are bought and sold more than the premium ones.

2) Company column

We want to understand how the brand name impacts the laptop price, or in other words, what is the average price of each laptop brand? If you plot a count plot (frequency plot) of the Company column, the major categories present are Lenovo, Dell, HP, Asus, etc. If we then plot company against price, we can observe how the price varies between brands.

#what is avg price of each brand?
sns.barplot(x=data['Company'], y=data['Price'])
plt.xticks(rotation="vertical")
plt.show()

Razer, Apple, LG, Microsoft, Google, and MSI laptops are expensive, and the others are in the budget range.

3) Type of laptop

Which type of laptop are you looking for, like a gaming laptop, workstation, or notebook? Most people prefer notebooks because they are in the budget range, and the same can be concluded from our data.

#data['TypeName'].value_counts().plot(kind='bar')
sns.barplot(x=data['TypeName'], y=data['Price'])
plt.xticks(rotation="vertical")
plt.show()

4) Does the price vary with laptop size in inches?

A scatter plot is used when both columns are numerical, and it answers our question well. From the plot we can conclude that there is a relationship, but not a strong one, between price and the Inches column.

sns.scatterplot(x=data['Inches'], y=data['Price'])

Feature Engineering and Preprocessing of Laptop Price Prediction Model

Feature engineering is a process to convert raw data into meaningful information; many methods fall under it, like transformations, categorical encoding, etc. The columns we have are noisy, so we need to perform some feature engineering steps.
5) Screen Resolution

The screen resolution column contains lots of information, so before any analysis we first need to perform feature engineering on it. If you observe the unique values of the column, you can see that every value gives information about the presence of an IPS panel, whether the laptop has a touch screen or not, and the X-axis and Y-axis screen resolution. So we will extract the column into 3 new columns in the dataset.

Extract touch screen information

It is a binary variable, so we can encode it as 0 and 1: one means the laptop has a touch screen and zero means it does not.

data['Touchscreen'] = data['ScreenResolution'].apply(lambda x: 1 if 'Touchscreen' in x else 0)

#how many laptops in data are touchscreen
sns.countplot(data['Touchscreen'])

#Plot against price
sns.barplot(x=data['Touchscreen'], y=data['Price'])

If we plot the touch screen column against price, laptops with touch screens are more expensive, which is true in real life.

Extract IPS panel presence information

It is a binary variable and the code is the same as we used above. Laptops with an IPS panel are less common in our data, but looking at the relationship against price, laptops with an IPS panel sell at higher prices.

#extract IPS column
data['Ips'] = data['ScreenResolution'].apply(lambda x: 1 if 'IPS' in x else 0)
sns.barplot(x=data['Ips'], y=data['Price'])

Extract X-axis and Y-axis screen resolution dimensions

Both dimensions are present at the end of the string, separated by a cross sign. So first we split the string on spaces and take the last token from the list; then we split that token on the cross sign and take the zeroth and first index for the X and Y dimensions.
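The split logic just described can be exercised on a plain string before applying it to the dataframe. The sample resolution strings below are made up for illustration.

```python
def parse_resolution(screen_resolution):
    # The pixel dimensions are the last whitespace-separated token,
    # e.g. "IPS Panel Full HD 1920x1080" -> "1920x1080"
    dims = screen_resolution.split()[-1]
    x_res, y_res = dims.split("x")
    return int(x_res), int(y_res)

print(parse_resolution("IPS Panel Full HD 1920x1080"))        # (1920, 1080)
print(parse_resolution("4K Ultra HD / Touchscreen 3840x2160"))
```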
def findXresolution(s):
    return s.split()[-1].split("x")[0]

def findYresolution(s):
    return s.split()[-1].split("x")[1]

#finding the x_res and y_res from screen resolution
data['X_res'] = data['ScreenResolution'].apply(lambda x: findXresolution(x))
data['Y_res'] = data['ScreenResolution'].apply(lambda y: findYresolution(y))

#convert to numeric
data['X_res'] = data['X_res'].astype('int')
data['Y_res'] = data['Y_res'].astype('int')

Replacing Inches, X and Y resolution with PPI

If you find the correlation of the columns with price using the corr method, you can see that Inches does not have a strong correlation, but the X and Y-axis resolutions have very strong correlations. We can take advantage of this and combine the three columns into a single column known as pixels per inch (PPI). In the end, our goal is to improve performance while having fewer features.

data['ppi'] = (((data['X_res']**2) + (data['Y_res']**2))**0.5 / data['Inches']).astype('float')
data.corr()['Price'].sort_values(ascending=False)

Now when you look at the correlation with price, PPI has a strong correlation, so we can drop the extra columns which are no longer of use. At this point, we have started keeping only the important columns in our dataset.

data.drop(columns=['ScreenResolution', 'Inches', 'X_res', 'Y_res'], inplace=True)

6) CPU column

If you observe the CPU column, it also contains lots of information. If you use the unique or value counts function on the CPU column, there are 118 different categories. The information it gives is about the processor in the laptop and its speed.
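Before moving on to the CPU column, the PPI formula above (diagonal pixel count divided by diagonal screen size) can be sanity-checked in isolation. The panel specs below are made-up examples, not rows from the dataset.

```python
import math

def ppi(x_res, y_res, inches):
    # pixels per inch = sqrt(x^2 + y^2) / diagonal size in inches
    return math.sqrt(x_res ** 2 + y_res ** 2) / inches

# A 15.6-inch Full HD panel works out to roughly 141 PPI,
# while the same resolution on a smaller 13.3-inch panel is denser.
print(round(ppi(1920, 1080, 15.6), 2))
print(round(ppi(1920, 1080, 13.3), 2))
```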
#first we will extract the name of the CPU, which is the first 3 words of the Cpu column, then check which processor it is
def fetch_processor(x):
    cpu_name = " ".join(x.split()[0:3])
    if cpu_name == 'Intel Core i7' or cpu_name == 'Intel Core i5' or cpu_name == 'Intel Core i3':
        return cpu_name
    elif cpu_name.split()[0] == 'Intel':
        return 'Other Intel Processor'
    else:
        return 'AMD Processor'

data['Cpu_brand'] = data['Cpu'].apply(lambda x: fetch_processor(x))

To extract the processor we need the first three words from the string. We have Intel and AMD processors, so we keep 5 categories in our dataset: i3, i5, i7, other Intel processors, and AMD processors.

How does the price vary with processors? We can again use a bar plot to answer this question. As expected, the price of i7 processors is highest, followed by i5; i3 and AMD processors lie in almost the same range. Hence the price depends on the processor.

sns.barplot(x=data['Cpu_brand'], y=data['Price'])
plt.xticks(rotation='vertical')
plt.show()

7) Price with Ram

Again, a bivariate analysis of price with Ram. If you observe the plot, price has a very strong positive correlation with Ram; you could even call it a linear relationship.

sns.barplot(data['Ram'], data['Price'])
plt.show()

8) Memory column

The Memory column is again a noisy column that describes the storage drives. Many laptops come with both HDD and SSD, and some have an external slot to add storage after purchase. This column can disturb your analysis if it is not feature-engineered properly. If you use value counts on the column, we have 4 different categories of memory: HDD, SSD, flash storage, and hybrid.
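Before writing the dataframe version, the parsing idea can be sketched on a single memory string: normalize the units, split on "+" for dual-slot laptops, and credit each slot's size to the storage type it names. The function name and sample strings below are hypothetical, for illustration only.

```python
def parse_memory(memory):
    # Normalize units: drop "GB", turn "1TB" into "1000" (TB -> x1000)
    memory = memory.replace("GB", "").replace("TB", "000")
    totals = {"HDD": 0, "SSD": 0, "Hybrid": 0, "Flash Storage": 0}
    for slot in memory.split("+"):
        slot = slot.strip()
        # The size is the digits in the slot description
        size = int("".join(ch for ch in slot if ch.isdigit()))
        # Credit the size to whichever storage type the slot names
        for kind in totals:
            if kind in slot:
                totals[kind] += size
    return totals

print(parse_memory("128GB SSD + 1TB HDD"))
print(parse_memory("256GB SSD"))
```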
#preprocessing
data['Memory'] = data['Memory'].astype(str).replace('\.0', '', regex=True)
data["Memory"] = data["Memory"].str.replace('GB', '')
data["Memory"] = data["Memory"].str.replace('TB', '000')
new = data["Memory"].str.split("+", n=1, expand=True)

data["first"] = new[0]
data["first"] = data["first"].str.strip()

#binary encoding of the storage type in the first slot
data["Layer1HDD"] = data["first"].apply(lambda x: 1 if "HDD" in x else 0)
data["Layer1SSD"] = data["first"].apply(lambda x: 1 if "SSD" in x else 0)
data["Layer1Hybrid"] = data["first"].apply(lambda x: 1 if "Hybrid" in x else 0)
data["Layer1Flash_Storage"] = data["first"].apply(lambda x: 1 if "Flash Storage" in x else 0)

#only keep integers (digits)
data['first'] = data['first'].str.replace(r'\D', '')

data["second"] = new[1]
data["second"].fillna("0", inplace=True)

#binary encoding of the storage type in the second slot
data["Layer2HDD"] = data["second"].apply(lambda x: 1 if "HDD" in x else 0)
data["Layer2SSD"] = data["second"].apply(lambda x: 1 if "SSD" in x else 0)
data["Layer2Hybrid"] = data["second"].apply(lambda x: 1 if "Hybrid" in x else 0)
data["Layer2Flash_Storage"] = data["second"].apply(lambda x: 1 if "Flash Storage" in x else 0)

#only keep integers (digits)
data['second'] = data['second'].str.replace(r'\D', '')

#convert to numeric
data["first"] = data["first"].astype(int)
data["second"] = data["second"].astype(int)

#finalize the columns by combining slot sizes with the type flags
data["HDD"] = (data["first"]*data["Layer1HDD"] + data["second"]*data["Layer2HDD"])
data["SSD"] = (data["first"]*data["Layer1SSD"] + data["second"]*data["Layer2SSD"])
data["Hybrid"] = (data["first"]*data["Layer1Hybrid"] + data["second"]*data["Layer2Hybrid"])
data["Flash_Storage"] = (data["first"]*data["Layer1Flash_Storage"] + data["second"]*data["Layer2Flash_Storage"])

#Drop the unrequired columns
data.drop(columns=['first', 'second', 'Layer1HDD', 'Layer1SSD', 'Layer1Hybrid', 'Layer1Flash_Storage', 'Layer2HDD', 'Layer2SSD', 'Layer2Hybrid', 'Layer2Flash_Storage'], inplace=True)

First we cleaned the Memory column, then created binary indicator columns, where 1 and 0 mark whether a given storage type is present in each slot. Any laptop has a single type of memory or a combination of two.
So the "first" column holds the first slot's size, and if a second slot is present in the laptop then the "second" column holds its size; otherwise we fill the null values with zero. After that, each size column is multiplied by the corresponding binary flag. This means that if a particular memory type is present in a laptop, its flag is one and the slot size is counted toward that type, and the same goes for the second slot. For a laptop which does not have a second slot, the contribution is simply zero.

Now when we check the correlation with price, Hybrid and Flash_Storage have little or no correlation with price. We drop those columns, along with Cpu and Memory, which are no longer required.

data.drop(columns=['Hybrid', 'Flash_Storage', 'Memory', 'Cpu'], inplace=True)

9) GPU Variable

GPU (Graphical Processing Unit) has many categories in the data. We have the brand of graphics card in each laptop; we do not have its capacity (6 GB, 12 GB, etc.), so we will simply extract the brand name.

# Which brand GPU is in laptop
data['Gpu_brand'] = data['Gpu'].apply(lambda x: x.split()[0])

#there is only 1 row of ARM GPU so remove it
data = data[data['Gpu_brand'] != 'ARM']
data.drop(columns=['Gpu'], inplace=True)

If you use the value counts function, there is one row with an ARM GPU, so we remove that row; after extracting the brand, the Gpu column is no longer needed.

10) Operating System Column

There are many categories of operating systems. We will keep all Windows categories in one, Mac in one, and the rest in "others". This is a simple and commonly used feature engineering approach; you can try something else if you find more correlation with price.
#Get which OP sys
def cat_os(inp):
    if inp == 'Windows 10' or inp == 'Windows 7' or inp == 'Windows 10 S':
        return 'Windows'
    elif inp == 'macOS' or inp == 'Mac OS X':
        return 'Mac'
    else:
        return 'Others/No OS/Linux'

data['os'] = data['OpSys'].apply(cat_os)
data.drop(columns=['OpSys'], inplace=True)

When you plot price against operating system, as usual Mac is the most expensive.

sns.barplot(x=data['os'], y=data['Price'])
plt.xticks(rotation='vertical')
plt.show()

Log-Normal Transformation

We saw above that the distribution of the target variable is right-skewed. By transforming it toward a normal distribution, the performance of the algorithm will improve. We take the log of the values, which transforms them toward a normal distribution, as you can observe below. So while separating the dependent and independent variables we take the log of price, and when displaying a result we take its exponent.

sns.distplot(np.log(data['Price']))
plt.show()

Machine Learning Modeling for Laptop Price Prediction

Now we have prepared our data and hold a better understanding of the dataset, so let's get started with machine learning modeling and find the best algorithm with the best hyperparameters to achieve maximum accuracy.

Import Libraries

from sklearn.model_selection import train_test_split
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import r2_score, mean_absolute_error
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, AdaBoostRegressor, ExtraTreesRegressor
from sklearn.svm import SVR
from xgboost import XGBRegressor

We have imported libraries to split the data, along with the algorithms you can try. We do not know in advance which is best, so you can try all the imported algorithms.
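Since the model will be trained on log(price), every prediction must be exponentiated before being shown to a user. The round trip can be confirmed with the standard library alone; the price below is a made-up value.

```python
import math

price = 57000.0                   # hypothetical laptop price
log_price = math.log(price)       # the scale the model is trained on
recovered = math.exp(log_price)   # the scale shown to the user

print(log_price)
print(recovered)
```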
Split into train and test

As discussed, we take the log of the dependent variable.

X = data.drop(columns=['Price'])
y = np.log(data['Price'])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15, random_state=2)

Implement a Pipeline for training and testing

Now we implement a pipeline to streamline the training and testing process. First we use a column transformer to encode the categorical variables, which is step one. After that, we create an object of our algorithm and pass both steps to the pipeline. Using the pipeline object we predict on new data and display the accuracy.

step1 = ColumnTransformer(transformers=[
    ('col_tnf', OneHotEncoder(sparse=False, drop='first'), [0, 1, 7, 10, 11])
], remainder='passthrough')

step2 = RandomForestRegressor(n_estimators=100,
                              random_state=3,
                              max_samples=0.5,
                              max_features=0.75,
                              max_depth=15)

pipe = Pipeline([
    ('step1', step1),
    ('step2', step2)
])

pipe.fit(X_train, y_train)
y_pred = pipe.predict(X_test)
print('R2 score', r2_score(y_test, y_pred))
print('MAE', mean_absolute_error(y_test, y_pred))

In the first step, for categorical encoding, we pass the indices of the columns to encode, and passthrough means the other numeric columns are passed through as they are. The best accuracy I got was with the all-time favorite Random Forest, but you can reuse this code by changing the algorithm and its parameters. You can do hyperparameter tuning using GridSearchCV or RandomizedSearchCV. We could also do feature scaling, but it does not have any impact on Random Forest.

Exporting the Model

Now we are done with modeling. We save the pipeline object for the development of the project website, and we also export the data frame, which will be required to create the dropdowns on the website.
import pickle
data.to_csv("df.csv", index=False)
pickle.dump(pipe, open('pipe.pkl', 'wb'))

Create Web Application for Deployment of Laptop Price Prediction Model

Now we will use Streamlit to create a web app to predict laptop prices. In the web application, we need to implement a form that takes all the inputs from the user that we used in the dataset, and using the dumped model we predict the output and display it to the user.

Streamlit

Streamlit is an open-source web framework written in Python. It is the fastest way to create data apps and it is widely used by data science practitioners to deploy machine learning models. To work with it, knowledge of frontend languages is not required. Streamlit contains a wide variety of functionality and in-built functions to meet your requirements: plots, maps, flowcharts, sliders, selection boxes, input fields, a caching mechanism, etc. Install Streamlit using the pip command below.

pip install streamlit

Create a file named app.py in the same working directory; this is where we will write the Streamlit code.
import streamlit as st import pickle import numpy as np import pandas as pd #load the model and dataframe df = pd.read_csv("df.csv") pipe = pickle.load(open("pipe.pkl", "rb")) st.title("Laptop Price Predictor") #Now we will take user input one by one as per our dataframe #Brand #company = st.selectbox('Brand', df['Company'].unique()) company = st.selectbox('Brand', df['Company'].unique()) #Type of laptop lap_type = st.selectbox("Type", df['TypeName'].unique()) #Ram ram = st.selectbox("Ram(in GB)", [2,4,6,8,12,16,24,32,64]) #weight weight = st.number_input("Weight of the Laptop") #Touch screen touchscreen = st.selectbox("TouchScreen", ['No', 'Yes']) #IPS ips = st.selectbox("IPS", ['No', 'Yes']) #screen size screen_size = st.number_input('Screen Size') # resolution resolution = st.selectbox('Screen Resolution',['1920x1080','1366x768','1600x900','3840x2160','3200x1800','2880x1800','2560x1600','2560x1440','2304x1440']) #cpu cpu = st.selectbox('CPU',df['Cpu_brand'].unique()) hdd = st.selectbox('HDD(in GB)',[0,128,256,512,1024,2048]) ssd = st.selectbox('SSD(in GB)',[0,8,128,256,512,1024]) gpu = st.selectbox('GPU',df['Gpu_brand'].unique()) os = st.selectbox('OS',df['os'].unique()) #Prediction if st.button('Predict Price'): ppi = None if touchscreen == "Yes": touchscreen = 1 else: touchscreen = 0 if ips == "Yes": ips = 1 else: ips = 0 X_res = int(resolution.split('x')[0]) Y_res = int(resolution.split('x')[1]) ppi = ((X_res ** 2) + (Y_res**2)) ** 0.5 / screen_size query = np.array([company,lap_type,ram,weight,touchscreen,ips,ppi,cpu,hdd,ssd,gpu,os]) query = query.reshape(1, 12) prediction = str(int(np.exp(pipe.predict(query)[0]))) st.title("The predicted price of this configuration is " + prediction) Explanation – First we load the data frame and model that we have saved. After that, we create an HTML form of each field based on training data columns to take input from users. 
Explanation – First we load the data frame and the model that we saved. After that, we create a form field for each training-data column to take input from the user.

For categorical columns, we provide the first parameter as the input field name and the second as the select options, which are simply the unique categories in the dataset. For numerical fields, we let the user increase or decrease the value. After that, we create the prediction button; whenever it is triggered, it encodes some variables, prepares a two-dimensional array of inputs, and passes it to the model to get the prediction that we display on the screen. We take the exponential of the predicted output because we trained on the log of the output variable.

Now when you run the app file (streamlit run app.py), you will get two URLs, and it will automatically open the web application in your default browser; otherwise copy the URL and open it. Enter some data in each field and click the predict button to generate a prediction. I hope you got the desired results and the application is working fine.

Deploy Application to Heroku

Now we are ready to deploy our website and make it available for the public to use.

Prepare cloud files for deployment

1) Procfile

Create a file named Procfile, which is an initiator file for Heroku. It contains one line of code that says which file to run; it simply runs your Python file.

web: sh setup.sh && streamlit run app.py

2) requirements

Create a file named requirements.txt. It is a text file that contains the name and version of each library you used to create the project. We need to declare the libraries used so that when we deploy, the cloud builds a complete setup by installing the required packages. If you do not specify a version, the current version of the library is installed. We have used only four libraries for the Streamlit app.

streamlit
sklearn
numpy
pandas

3) setup file

Create a file named setup.sh, which sets up the Streamlit directory structure and config in the cloud.
mkdir -p ~/.streamlit/
echo "\
[server]\n\
port = $PORT\n\
enableCORS = false\n\
headless = true\n\
\n\
" > ~/.streamlit/config.toml

Upload Code to Github

Log in to your GitHub account and create a new repository with a project name of your choice. You can either use the upload button to upload all files from your local file manager, or use the git commands below. After creating the new repository, copy its SSH link to connect to it, then follow the commands line by line.

git init                  # initialize empty repository
git remote add origin     # connect to repository (use your repo's SSH link)
git pull origin master    # pull initial changes
git add -A                # add files to the staging area
git commit -m "initial commit"
git push origin master    # push all files to GitHub

Deploy to Heroku

Log in or register on Heroku if you do not have an account. After you log in, in the top-right corner you will have the option "New"; create a new app. Give your app a unique name (this name will be your website URL, followed by the Heroku domain) and let the region be United States.

Connect to repository

As you create the app, you are redirected to the app dashboard. Now it's time to connect Heroku to our GitHub repository. Select GitHub, search for the repository where you pushed the application code, and connect to it.

Deploy the code

Now scroll down and you have the option to deploy the code. We deploy from the main branch. Click on deploy branch and observe the logs. After a successful deployment it gives you the web app URL under the view app option, and your application is deployed. Now you can share the URL with anyone to use your application.

Live demo web app URL – Laptop Price predictor
Complete code files developed – GitHub

End Notes

Hurray! We have developed and deployed a machine learning application for the prediction of laptop prices.
We have learned about the complete machine learning project lifecycle through a practical implementation, and how to approach a particular problem. I hope this article motivates and encourages you to develop more applications like this, to deepen your understanding of the various methods and algorithms and to experiment with different parameters. I hope the article was easy to follow. If you have any queries, suggestions, or feedback, please feel free to post them in the comment section.

About The Author

I am pursuing a bachelor's in computer science. I am a data science enthusiast and love to learn and work with data technologies. Connect with me on Linkedin.

Thanks for giving your time!
https://www.analyticsvidhya.com/blog/2021/11/laptop-price-prediction-practical-understanding-of-machine-learning-project-lifecycle/
22 February 2011 11:29 [Source: ICIS news]

SINGAPORE (ICIS)--Linear low density polyethylene (LLDPE) futures on the Dalian Commodity Exchange (DCE) fell 1.6% on Tuesday as investors took profits, local futures brokers said.

Liquidity focused on the May contract, which closed at yuan (CNY) 11,975/tonne, 1.6% or CNY200/tonne ($30/tonne) lower than Monday's settlement price of CNY12,175/tonne, according to DCE data.

Tuesday's trading activity was also due to investors who sought to lock in their profits by taking "sell" positions on the futures market, said Jack Hua, a petrochemical analyst at CITIC Newedge Futures.

These investors have physical LLDPE cargoes on hand but are unwilling to accept the prevailing physical market prices, and hence they choose to lock in their profits by taking a "sell" position on the futures market, he said.

LLDPE was trading at around CNY11,000/tonne EXWH (ex-warehouse) in the domestic physical market, according to local distributors. An investor with physical cargoes on hand would have locked in a few hundred yuan of profit by taking a "sell" position on the futures market at Tuesday's closing price of CNY11,975/tonne.

The investors refused to accept the current physical market prices because they believe prices will increase in the weeks ahead as a result of high global crude prices, said Hua.

($1 = CNY6.58)
http://www.icis.com/Articles/2011/02/22/9437281/china-lldpe-futures-fall-1.6-on-profit-taking.html
? David Jencks <david_jencks@yahoo.com> 11/24/2005 11:11 AM Please respond to user To: user@geronimo.apache.org cc: Subject: Re: Enlisting XAResource objects On Nov 24, 2005, at 10:38 AM, Guglielmo.Lichtner@instinet.com wrote: > > >I was going to mention that :-). You still haven't told me how your > >(presumably j2ee) application finds the part of the RM to talk to. > >This could be an important part of the picture :-). > > Applications invoke our own user-transaction-like class, which keeps > track of the change set. So when that happens, if I can find the > transaction > manager then I can call getTransaction.enlistResource(XAResource). How does the application get the instance of your class? Or is it a static method call? There isn't really any way to bind anything not specified in the j2ee spec into the geronimo java:comp/env jndi namespace. We've thought about providing a way to get a proxy to any gbean from jndi but it isn't very clear how to fit this into the j2ee requirements/plans. > > It looks like any gbean can use the kernel (or some other object, I > can't remember) to invoke any method on any other gbean, so I can do > that too. Although I am going to write a gbean to emulate a startup > class so I can also use a reference. > > Thanks a lot for the detailed description of how to write a > recoverable > resource manager. I just realized that since my cache has no persistent > state independent of the db, so I don't need recovery. :-) > > If I did want to get list under the "ResourceManagers" collection I > would > have to change the j2ee-server plan, whereas I would prefer to use an > official > plan. 
> However, you could add a line in a future release for RMs which are
> not JCA-based:
>
> <references name="ResourceManagers">
>     <pattern><gbean-name>geronimo.server:j2eeType=JCAManagedConnectionFactory,*</gbean-name></pattern>
>     <pattern><gbean-name>geronimo.server:j2eeType=ActivationSpec,*</gbean-name></pattern>
>     <pattern><gbean-name> <!-- pattern for other third-party RMs --> </gbean-name></pattern>
> </references>

We might add the ability to override reference patterns in the config.xml. I was thinking you could lie and give your gbean a j2eeType=JCAManagedConnectionFactory :-)

thanks
david jencks
http://mail-archives.apache.org/mod_mbox/geronimo-user/200511.mbox/%3COF5153170F.CBF1C670-ON852570C3.006B7E4A-882570C3.006C21AC@isn.instinet.com%3E
CC-MAIN-2015-27
refinedweb
376
56.45
Image by quimby | Some Rights Reserved I recently began taking a Harvard computer science course. As pretentious as that sounds, it’s not as bad as it seems. I am taking Harvard CS50 on-line, for free, in an attempt to push my knowledge and expand my understanding of this thing I love so much, programming. The course uses Linux and C as two of the primary learning tools/environments. While I am semi-capable with Linux, and the syntax of C is very familiar, the mechanics of C compilation, linking, and makefiles are all new. If you are an experienced C developer, or if you have worked extensively with Make before, there is likely nothing new for you here. However, if you want to let me know if and when I am passing bad information, please do! This article will (hopefully) be helpful to those who are just getting started compiling C programs, and/or using the GNU Make utility. In the post, we discuss some things that are specific to the context of the course exercises. However, the concepts discussed, and the examples introduced are general enough that the post should be useful beyond the course specifics. The course makes available an “appliance” (basically, a pre-configured VM which includes all the required software and files), which I believe can be downloaded by anyone taking even the free on-line course. However, since I already know how to set up/configure a Linux box (but can always use extra practice), I figured my learning would be augmented by doing things the hard way, and doing everything the course requires manually. For better or for worse, this has forced me to learn about Make and makefiles (this is for the better, I just opened the sentence that way to sound good). - Compiling and Linking – The Simple - Compiling and Linking – A Little More Complex - Handling Errors and Warnings - What is Make? - An Example Makefile for Hello World - A More General Template for Make Files - A Note on Tabs Vs.
Spaces in Your Editor - Passing the Compilation Target Name to Make as a Command Line Argument - Additional Resources and Items of Interest Make File Examples on Github In this post we will put together a handful of example Make files. You can also find them as source at my Github repo: Compiling and Linking – The Simple This is by no means a deeply considered resource on compiling and linking source code. However, in order to understand how make works, we need to understand compiling and linking at some level. Let’s consider the canonical “Hello World!” program, written in the C programming language. Say we have the following source file, suitably named hello.c: Basic Hello World Implementation in C: #include <stdio.h> int main(void) { printf("Hello, World!\n"); } In the above, the first line instructs the compiler to include the C Standard IO library, by making reference to the stdio.h header file. This is followed by our application code, which impressively prints the string “Hello World!” to the terminal window. In order to run the above, we need to compile first. Since the Harvard course is using the Clang compiler, the most basic compilation command we can enter from the terminal might be: Basic Compile Command for Hello World: $ clang hello.c -o hello When we enter the above command in our terminal window, we are essentially telling the Clang compiler to compile the source file hello.c, and the -o flag tells it to name the output binary file hello. This works well enough for a simple task like compiling Hello World. Files from the C Standard Library are linked automatically, and the only compiler flag we are using is -o to name the output file (if we didn’t do this, the output file would be named a.out, the default output file name). Compiling and Linking – A Little More Complex When I say “Complex” in the header above, it’s all relative. 
We will expand on our simple Hello World example by adding an external library, and using some additional important compiler flags. The Harvard CS50 course staff created a cs50 library for use by students in the course. The library includes some functions to ease folks into working with C. Among other things, the staff have added a number of functions designed to retrieve terminal input from the user. For example, the cs50 library defines a GetString() function which will accept user input as text from the terminal window. In addition to the GetString() function, the cs50 library also defines a string data type (which is NOT a native C data type!). We can add the cs50 library to our machine by following the instructions from the cs50 site. During the process, the library will be compiled, and the output placed in the /usr/local/lib/ directory, and the all-important header files will be added to our /usr/local/include/ directory. NOTE: You don’t need to focus on using this course-specific library specifically here – this is simply an example of adding an external include to the compilation process. Once added, the various functions and types defined therein will be available to us. We might modify our simple Hello World!
example as follows by referencing the cs50 library, and making use of the GetString() function and the new string type: Modified Hello World Example: // Add include for cs50 library: #include <cs50.h> #include <stdio.h> int main(void) { printf("What is your name?"); // Get text input from user: string name = GetString(); // Use user input in output string: printf("Hello, %s\n", name); } Now, if we try to use the same terminal command to compile this version, we will see some issues: Run Original Compile Command: $ clang hello.c -o hello Terminal Output from Command: /tmp/hello-E2TvwD.o: In function `main': hello.c:(.text+0x22): undefined reference to `GetString' clang: error: linker command failed with exit code 1 (use -v to see invocation) From the terminal output, we can see that the linker cannot find the GetString() function; the compile step succeeded, but the link step failed. Turns out, we can add some additional arguments to our clang command to tell Clang what files to link: $ clang hello.c -o hello -lcs50 By adding the -l flag followed by the name of the library we need to include, we have told Clang to link to the cs50 library. Handling Errors and Warnings Of course, the examples of using the Clang compiler and arguments above still represent a very basic case. Generally, we might want to direct the compiler to add compiler warnings, and/or to include debugging information in the output files. A simple way to do this from the terminal, using our example above, would be as follows: Adding Additional Compiler Flags to clang Terminal Command: clang hello.c -g -Wall -o hello -lcs50 Here, we have used the -g flag, which tells the compiler to include debugging information in the output files, and the -Wall flag. -Wall turns on most of the various compiler warnings in Clang (warnings do not prevent compilation and output, but warn of potential issues).
A quick skimming of the Clang docs will show that there is potential for a great many compiler flags and other arguments. As we can see, though, our terminal input to compile even the still-simple hello application is becoming cumbersome. Now imagine a much larger application, with multiple source files, referencing multiple external libraries. What is Make? Since source code can be contained in multiple files, and also make reference to additional files and libraries, we need a way to tell the compiler which files to compile, which order to compile them, and how to link to the external files and libraries upon which our source code depends. With the addition of various compiler options and such required to get our application running, combined with the frequency with which we are likely to use the compile/run cycle during development, it is easy to see how entering the compile commands manually could rapidly become cumbersome. Enter the Make utility. GNU Make was originally created by Richard M. Stallman (“RMS”) and Roland McGrath. From the GNU Manual: “The make utility automatically determines which pieces of a large program need to be recompiled, and issues commands to recompile them.” When we write source code in C, C++, or other compiled languages, creating the source is only the first step. The human-readable source code must be compiled into binary files in order that the machine can run the application. Essentially, the Make utility utilizes structured information contained in a makefile in order to properly compile and link a program. A Make file is named either Makefile or makefile, and is placed in the source directory for your project. An Example Makefile for Hello World Our final example using the clang command in the terminal contained a number of compiler flags, and referenced one external library. The command was still doable manually, but using make, we can make life much easier.
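As an aside before the example: the GNU manual's claim that make "determines which pieces of a large program need to be recompiled" boils down, at its core, to comparing file timestamps. Here is a toy Python sketch of that rule (not from the original post, and greatly simplified; real Make also handles implicit rules, phony targets, variable expansion, and much more):

```python
import os
import tempfile

def needs_rebuild(target, deps):
    """Return True if target is missing or older than any dependency."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(dep) > target_mtime for dep in deps)

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "hello.c")
    binary = os.path.join(d, "hello")

    with open(src, "w") as f:
        f.write("/* source */")
    print(needs_rebuild(binary, [src]))   # True: target does not exist yet

    with open(binary, "w") as f:
        f.write("")                        # pretend we compiled
    os.utime(src, (1000, 1000))            # source modified "earlier"
    os.utime(binary, (2000, 2000))         # target modified "later"
    print(needs_rebuild(binary, [src]))   # False: target is up to date

    os.utime(src, (3000, 3000))            # touch the source again
    print(needs_rebuild(binary, [src]))   # True: a dependency is newer
```

This is exactly why `make` recompiles only after you edit a source file: editing bumps the source's modification time past the output's.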
In a simple form, a make file can be set up to essentially execute the terminal command from above. The basic structure looks like this: Basic Makefile Structure: # Compile an executable named yourProgram from yourProgram.c all: yourProgram.c <TAB>gcc -g -Wall -o yourProgram yourProgram.c In a makefile, lines preceded with a hash symbol are comments, and will be ignored by the utility. In the structure above, it is critical that the <TAB> on the third line is actually a tab character. Using Make, all actual commands must be preceded by a tab. For example, we might create a Makefile for our hello program like this: Makefile for the Hello Program: # compile the hello program with compiler warnings, # debug info, and include the cs50 library all: hello.c clang -g -Wall -o hello hello.c -lcs50 This Make file, named (suitably) makefile and saved in the directory where our hello.c source file lives, will perform precisely the same as the final terminal command we examined. In order to compile our hello.c program using the makefile above, we need only type the following into our terminal: Compiling Hello Using Make: $ make Of course, we need to be in the directory in which the make file and the hello.c source file are located. A More General Template for Make Files Of course, compiling our Hello World application still represents a pretty simplistic view of the compilation process. We might want to avail ourselves of the Make utility's strengths, and cook up a more general template we can use to create make files. The Make utility allows us to structure a makefile in such a way as to separate the compilation targets (the source to be compiled) from the commands and the compiler flags/arguments (together, a target, its prerequisites, and its commands form what Make calls a rule). We can even use what amount to variables to hold these values.
For example, we might refine our current makefile as follows: General Purpose Makefile Template: # the compiler to use CC = clang # compiler flags: # -g adds debugging information to the executable file # -Wall turns on most, but not all, compiler warnings CFLAGS = -g -Wall #files to link: LFLAGS = -lcs50 # the name to use for both the target source file, and the output file: TARGET = hello all: $(TARGET) $(TARGET): $(TARGET).c $(CC) $(CFLAGS) -o $(TARGET) $(TARGET).c $(LFLAGS) As we can see in the above, we can make assignments to each of the capitalized variables, which are then used in forming the command (notice that once again, the actual command is preceded by a tab in the highlighted line). While this Make File is still set up for our Hello World application, we could easily change the assignment to the TARGET variable, as well as add or remove compiler flags and/or linked files for a different application. Again, we can tell make to compile our hello application by simply typing: Compile Hello.c Using the modified Makefile: $ make A Note on Tabs Vs. Spaces in Your Editor If you, like me, follow the One True Coding Convention which states: “Thou shalt use spaces, not tabs, for indentation” Then you will have a problem with creating your make file. If you have your editor set to convert tabs to spaces, Make will not recognize the all-important Tab character in front of the command, because, well, it’s not there. Fortunately, there is a work-around. If you do not have tabs in your source file, you can instead separate the compile target from the command using a semi-colon. 
With this fix in place, our Make file might look like this: Makefile with no Tab Characters: # the compiler to use CC = clang # compiler flags CFLAGS = -g -Wall # files to link LFLAGS = -lcs50 # the name to use for both the target source file, and the output file TARGET = hello all: $(TARGET) $(TARGET): $(TARGET).c ; $(CC) $(CFLAGS) -o $(TARGET) $(TARGET).c $(LFLAGS) In the above, we have inserted a semi-colon between the definition of the target and its dependencies and the command statement (see highlighted line). Passing the Compilation Target Name to Make as a Command Line Argument Most of the time, when developing an application you will most likely need an application-specific Makefile for the application. At least, any substantive application which includes more than one source file, and/or external references. However, for simple futzing about, or in my case, tossing together a variety of one-off example tidbits which comprise the bulk of the problem sets for the Harvard cs50 course, it may be handy to be able to pass the name of the compile target in as a command line argument. The bulk of the Harvard examples include the cs50 library created by the program staff (at least, in the earlier exercises), but otherwise would mostly require the same sets of arguments. For example, say we had another code file, goodbye.c in the same directory. We could simply pass the target name like so: Passing the Compilation Target Name as a Command Line Argument: make TARGET=goodbye As we can see, we assign the target name to the TARGET variable when we invoke Make (note that we pass goodbye, not goodbye.c, because the makefile itself appends the .c in $(TARGET).c). In this case, if we fail to pass a target name, by simply typing make as we have done previously, Make will compile the program hard-coded into the Makefile – in this case, hello. Despite our intention, the wrong file will be compiled.
We can make one more modification to our Makefile if we want to require that a target be specified as a command line argument: Require a Command Line Argument for the Compile Target Name: TARGET = $(target) all: $(TARGET) $(TARGET): $(TARGET).c ; $(CC) $(CFLAGS) -o $(TARGET) $(TARGET).c $(LFLAGS) With that change, we can now run make on a simple, single-file program like so: Invoke Make with Required Target Name: $ make target=hello Of course, now things will go a little haywire if we forget to include the target name, or if we forget to explicitly make the assignment when invoking Make from the command line. Only the Beginning This is one of those posts that is mainly for my own reference. As I become more fluent with C, compilation, and Make, I expect my usage may change. For now, however, the above represents what I have figured out while trying to work with the examples in the on-line Harvard course. If you see me doing anything idiotic in the above, or have suggestions, I am all ears! Please do comment below, or reach out at the email described in my “About the Author” blurb at the top of this page. Additional Resources and Items of Interest - Taking Harvard Computer Science for Free - GNU Make Manual - Clang 3.5 Documentation - Git Quick Reference: Interactive Patch Staging with git add -p - Git: Combine and Organize Messy Commits Using Interactive Rebase
http://johnatten.com/2014/07/06/creating-a-basic-make-file-for-compiling-c-code/?replytocom=98
CC-MAIN-2022-27
refinedweb
2,683
60.14
OklyDokly Member Content Count 148 Community Reputation 122 Neutral About OklyDokly Rank: Member Can anyone see what's wrong with this code? OklyDokly replied to OklyDokly's topic in General and Gameplay Programming Of course, silly me :) Thanks for your help though, annoying how the fix is always a little more complicated :S... Can anyone see what's wrong with this code? OklyDokly posted a topic in General and Gameplay Programming Hey I thought I knew C++ pretty well, but this code seems to be crashing on the 3rd line of the main function and I can't see why. Anyone got any ideas? // Me class class Me { protected: int m_I; public: Me( int i ) { m_I = i; } virtual ~Me() {} virtual void Print() = 0; }; // Inherited you class class You : public Me { int j; public: You() : Me( 0 ) { j = m_I; } virtual ~You() {} virtual void Print() { printf( "%i\n", j ); } }; int _tmain(int argc, _TCHAR* argv[]) { Me* you = new You[ 3 ]; you[0].Print(); you[1].Print(); you[2].Print(); return 0; } Thanks Neural Networks Game Engines OklyDokly posted a topic in Artificial Intelligence Does anyone know if there are any commercial middleware engines out there to allow a games developer to quickly set up a neural network for its games' requirements? If not, do you think such an engine would take off in the market today? - Thanks for your replies. I've just scrolled to the bottom of my linked list template header and found a #include "LinkedList.cpp" at the bottom of the file. Looks like I've encountered this problem before and forgotten about it :s - Thanks, I never knew that. Learn a new thing every day :) It's a little odd though, I had a linked list template up and running with the same code structure, the compiler didn't seem to complain about that one.
Guess I better modify that too :s GCC linker and templates OklyDokly posted a topic in General and Gameplay Programming Hi I'm getting linker problems when I'm trying to implement my second template class in my project using the linux GCC linker and eclipse with CDT. I'm just wondering whether anyone's ever had similar problems. Here's the code: DataSet.h #ifndef _DATASET_H #define _DATASET_H template <class T> class CDataSet { public: CDataSet(); }; #endif //_DATASET_H_ DataSet.cpp #include "DataSet.h" template <class T> CDataSet<T>::CDataSet() { } Main.cpp #include "DataSet.h" class CTest { }; int main( int argc, char* argv ) { CDataSet<CTest> dataSet; } The error I get is: undefined reference to `CDataSet<CTest>::CDataSet[in-charge]()' Any ideas what it could be? Thanks request for comments on Rendering sprites on Symbian OklyDokly replied to fardin's topic in General and Gameplay Programming Sorry fardin, had a bit of a memory lapse the other day. It's not an asteroids game, but a shoot-em-up. It tells you how to do it here: If you're new to assembly, I recommend purchasing a book on the ARM architecture as well, to help you out. I would recommend the technique mentioned in this article, over internal ASM code, since it is much easier to write readable assembly code this way, and involves less typing. If you're wondering what to call your ASM routines, use: abld armi listing at the command line, and refer to the listing files to view the ASM subroutine name, for the corresponding C function (make sure it isn't ignored by the compiler with a #ifdef __WINS__). It can also help, if you want to optimize some of your C methods. Hope this helps and feel free to PM me if you get stuck on anything.
Boltzmann Machines vs Back-Propagation networks OklyDokly posted a topic in Artificial Intelligence I'm wondering, are there any circumstances within game AI where you would want to use a Boltzmann Machine (a kind of noisy version of the Hopfield Net, with a hidden layer) over Back-Propagation networks? Back-Propagation nets seem to be much faster, both in training and execution. I'm just wondering whether anyone sees any advantage in using the Boltzmann Machine instead? ... Advice needed for Rare Test + Interview OklyDokly replied to the_golden_gunman's topic in Games Business and Law Yeup been through the Rare test. One thing to remember about that test is you might not know everything. It would be helpful if you brush up on ARM assembler, as I recall there being some assembler questions (e.g. how to copy from one area of memory to another). Also brush up on matrix algebra, and how it applies to Direct 3D as there are likely to be some questions on that. Finally be prepared to answer questions about the games industry, what makes a good and bad game, what games Rare have done and what you think of them etc. request for comments on Rendering sprites on Symbian OklyDokly replied to fardin's topic in General and Gameplay Programming I got a significant speed improvement when I converted my bitmap routine to assembler, I really would recommend it. There is a tutorial somewhere on the Symbian site which takes you through how to do this, it's about making an asteroids game for the Communicator I think. Viewing Memory Allocation in VS.NET OklyDokly replied to StaticEngine's topic in General and Gameplay Programming Which operating system are you using? Because it appears that you can view the memory allocated to each process in the task manager, in XP. If you want to know memory that you have allocated through malloc or new statements, then one way you can approach this is by making a base class (if using C++) and have all your classes derive from that class.
In that class if you override the new operator to keep a static count of the amount of memory allocated, then you can simply watch this variable in the debugger, or if you like print it out to the screen. If you don't want to do this, because your code's too complicated, you may be able to use the _heapwalk function or something equivalent to walk through each block on its heap and test its size. How's the PPC market? OklyDokly replied to wyrd's topic in General and Gameplay Programming The link above doesn't quite paint the whole picture. It is a report about handset manufacturers and not necessarily about the distributor of the operating system themselves. Microsoft do not necessarily manufacture pocket PCs (iPaq for example is manufactured by HP), they do license the operating system however. A good bit of market research can be found at. From this it can be seen that Microsoft enabled devices occupied around 20% of the market share in 2003. That share has shrunk by around 8% since 2002, which doesn't look too promising for Microsoft. It has to be remembered that many of these devices won't be Pocket Pc's but smartphones such as the Orange SPV. Samsung's new gadgety phone! OklyDokly posted a topic in General and Gameplay Programming If you thought that location based gaming opened a whole window of innovation, take a look at this... A gadget like this could produce some truly innovative games, for example you could combine it with a location based engine to provide virtual marathons. What are your thoughts on this new technology? Should I work in the games industry? OklyDokly replied to Crazy Chicken's topic in Games Business and Law Quote: Face it, the games industry will pay you less and you'll have to work longer hours than a regular IT job. I'm not sure I agree with you entirely there, it depends on what you call a 'regular IT job.' I had to work fewer hours when I was in the games industry than some of my friends at Accenture.
The pay is usually much lower however, although there are some companies who do pay good starting salaries in the UK (considering the average graduate salary anyway). Should I work in the games industry? OklyDokly replied to Crazy Chicken's topic in Games Business and Law I've actually been to interviews for companies, where the director claims that they think that people are most productive 9am-5pm 5 days a week, and should have the weekend to rest. This is in the UK, and most places I've been to really aren't that bad in terms of hours worked.
https://www.gamedev.net/profile/35101-oklydokly/
CC-MAIN-2018-43
refinedweb
1,403
59.13
#include <gazebo++.h> List of all members. GzTruth class provides an interface to functions gz_truth_*() in the libgazebo library. The truth interface is useful for getting and setting the ground-truth pose of objects in the world; currently, it is supported only by the TruthWidget model. [inline] A constructor. A destructor. Retrieve sensory data. Move a robot to a new pose in 2D space. Move a robot to a new pose in full 3D space. Retrieve pose information in 2D space. Retrieve pose information in full 3D space. Retrieve position information in 2D space. Retrieve position information in full 3D space.
http://robotics.usc.edu/~boyoon/bjlib/d2/dde/classbj_1_1GzTruth.html
CC-MAIN-2013-20
refinedweb
104
62.95
Two simple tips on using React Hooks February 15, 2020 I have been using React Hooks for a few months now and I am absolutely loving it! Today I would like to share 2 simple tips that I have learned along the way. Hopefully you will find them useful! useEffect for API requests One common use case for the useEffect hook is to fetch some data from a REST API and display it: useEffect(() => { const fetchUserData = async () => { const response = await API.getUserData(userId); setUserData(response.userData); }; fetchUserData(); }, [userId]); This looks fine at first glance but there are actually 2 potential bugs here: - If the user navigates away before the API returns, this will cause a set state on an unmounted component, which may lead to a memory leak. - If userId changes before the previous request returns, this will cause the component to send another request, which may lead to a race condition (the first request returns after the second one and overrides the state). To prevent those bugs we can simply add a local variable ignore: useEffect(() => { let ignore = false; const fetchUserData = async () => { const response = await API.getUserData(userId); if (!ignore) { setUserData(response.userData); } }; fetchUserData(); return () => { ignore = true; }; }, [userId]); ignore will be set to true whenever the component is unmounted or the dependency array has changed (userId changed in this case). Therefore setUserData will not be called! This pattern is documented in React’s official documentation. useDeepMemo Sometimes when we write custom hooks we want that hook to be able to take a dynamic object or array as input. For example consider this useAPI custom hook: // usage const response = useAPI({ url: '/api/get-user', body: { userId }, }); // implementation function useAPI({ url, body }) { const [response, setResponse] = useState(null); useEffect(() => { // do the request here }, [url, body]); return response; } This is a nice handy custom hook, but there is just one problem.
Whenever there is a re-render, body is passed in as a new object and useEffect will get triggered again. This will cause an infinite loop! Sure, we can wrap the body object in a useMemo hook, but it becomes a pain in the ass when you have to do that every time you use useAPI! To solve this, we can write another custom hook useDeepMemo: // you can also use other deep compare functions // e.g. lodash's _.isEqual import { equal } from '@wry/equality'; function useDeepMemo(memoFn, key) { const ref = useRef(); if (!ref.current || !equal(key, ref.current.key)) { ref.current = { key, value: memoFn() }; } return ref.current.value; } I first saw this technique in Apollo's source code. This is a replacement for useMemo, but it uses deep equality to compare memo keys, and it guarantees that the memo function will only be called if the keys are unequal. With that, we can then rewrite our useAPI hook: function useAPI({ url, body }) { const [response, setResponse] = useState(null); const cachedBody = useDeepMemo(() => body, [body]); useEffect(() => { // do the request here }, [url, cachedBody]); return response; } Now we can use useAPI without worrying about accidentally causing an infinite loop!
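The recompute-only-when-the-key-changes idea is independent of React. As a framework-free illustration (hypothetical class name, not from the original post), here is the same caching rule in Python, where == on dicts and lists already performs the deep, structural comparison that @wry/equality provides in JavaScript:

```python
# Sketch of the useDeepMemo idea: recompute a value only when a deep,
# structural comparison of the key fails.
class DeepMemo:
    def __init__(self):
        self._entry = None  # (key, value) from the last computation

    def get(self, memo_fn, key):
        # Python's == compares dicts/lists structurally, so a newly built
        # but equal key does not count as a change.
        if self._entry is None or self._entry[0] != key:
            self._entry = (key, memo_fn())  # key changed: recompute
        return self._entry[1]

memo = DeepMemo()
calls = []

def compute():
    calls.append(1)  # track how often we actually recompute
    return "result"

v1 = memo.get(compute, {"url": "/api/get-user", "body": {"userId": 1}})
# A structurally equal (but newly constructed) key does NOT recompute:
v2 = memo.get(compute, {"url": "/api/get-user", "body": {"userId": 1}})
assert v1 == v2 == "result" and len(calls) == 1
# A genuinely different key does:
memo.get(compute, {"url": "/api/get-user", "body": {"userId": 2}})
assert len(calls) == 2
```

The design point is the same as in the hook above: reference identity is too strict a cache key for freshly built objects, so the cache compares by structure instead.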
https://colloque.io/two-simple-tips-on-using-react-hooks/
CC-MAIN-2022-40
refinedweb
498
57.2
2.1. Data Manipulation¶

In order to get anything done, we must have some way to manipulate data. Generally, there are two important things we need to do with data: (i) acquire them and (ii) process them once they are inside the computer. There is no point in acquiring data if we do not even know how to store it, so let us get our hands dirty first by playing with synthetic data. We will start by introducing the \(n\)-dimensional array ( ndarray), MXNet’s primary tool for storing and transforming data. In MXNet, ndarray is a class and we also call its instance an ndarray for brevity. If you have worked with NumPy, perhaps the most widely-used scientific computing package in Python, then you are ready to fly. In short, we designed MXNet’s ndarray to be an extension to NumPy’s ndarray with a few key advantages. First, MXNet’s ndarray supports asynchronous computation on CPU, GPU, and distributed cloud architectures, whereas the latter only supports CPU computation. Second, MXNet’s ndarray supports automatic differentiation. These properties make MXNet’s ndarray indispensable for deep learning. Throughout the book, the term ndarray refers to MXNet’s ndarray unless otherwise stated.

2.1.1. Getting Started¶

Throughout this chapter, our aim is to get you up and running, equipping you with the basic math and numerical computing tools that you will be mastering throughout the course of the book. Do not worry if you are not completely comfortable with all of the mathematical concepts or library functions. In the following sections we will revisit the same material in the context of practical examples. On the other hand, if you already have some background and want to go deeper into the mathematical content, just skip this section.

To start, we import the np ( numpy) and npx ( numpy_extension) modules from MXNet.
Here, the np module includes the same functions supported by NumPy, while the npx module contains a set of extensions developed to empower deep learning within a NumPy-like environment. When using ndarray, we almost always invoke the set_np function: this is for compatibility of ndarray processing by other components of MXNet. from mxnet import np, npx npx.set_np() An ndarray represents an array of numerical values, which are possibly multi-dimensional. With one axis, an ndarray corresponds (in math) to a vector. With two axes, an ndarray corresponds to a matrix. Arrays with more than two axes do not have special mathematical names—we simply call them tensors. To start, we can use arange to create a row vector x containing the first \(12\) integers starting with \(0\), though they are created as floats by default. Each of the values in an ndarray is called an element of the ndarray. For instance, there are \(12\) elements in the ndarray x. Unless otherwise specified, a new ndarray will be stored in main memory and designated for CPU-based computation. x = np.arange(12) x array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11.]) We can access an ndarray’s shape (the length along each axis) by inspecting its shape property. x.shape (12,) If we just want to know the total number of elements in an ndarray, i.e., the product of all of the shape elements, we can inspect its size property. Because we are dealing with a vector here, the single element of its shape is identical to its size. x.size 12 To change the shape of an ndarray without altering either the number of elements or their values, we can invoke the reshape function. For example, we can transform our ndarray, x, from a row vector with shape (\(12\),) to a matrix of shape (\(3\), \(4\)). This new ndarray contains the exact same values, and treats such values as a matrix organized as \(3\) rows and \(4\) columns. To reiterate, although the shape has changed, the elements in x have not. 
Consequently, the size remains the same. x = x.reshape(3, 4) x array([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.]]) Reshaping by manually specifying each of the dimensions can sometimes get annoying. For instance, if our target shape is a matrix with shape (height, width), after we know the width, the height is given implicitly. Why should we have to perform the division ourselves? In the example above, to get a matrix with \(3\) rows, we specified both that it should have \(3\) rows and \(4\) columns. Fortunately, ndarray can automatically work out one dimension given the rest. We invoke this capability by placing -1 for the dimension that we would like ndarray to automatically infer. In our case, instead of calling x.reshape(3, 4), we could have equivalently called x.reshape(-1, 4) or x.reshape(3, -1). The empty method grabs a chunk of memory and hands us back a matrix without bothering to change the value of any of its entries. This is remarkably efficient but we must be careful because the entries might take arbitrary values, including very big ones! np.empty((3, 4)) array([[ 2.1045235e-17, 4.5699146e-41, -5.3771771e-05, 3.0650601e-41], [ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00, 0.0000000e+00], [ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00, 0.0000000e+00]]) Typically, we will want our matrices initialized either with ones, zeros, some known constants, or numbers randomly sampled from a known distribution. Perhaps most often, we want an array of all zeros. 
To create an ndarray representing a tensor with all elements set to \(0\) and a shape of (\(2\), \(3\), \(4\)) we can invoke np.zeros((2, 3, 4)) array([[[0., 0., 0., 0.], [0., 0., 0., 0.], [0., 0., 0., 0.]], [[0., 0., 0., 0.], [0., 0., 0., 0.], [0., 0., 0., 0.]]]) We can create tensors with each element set to 1 as follows: np.ones((2, 3, 4)) array([[[1., 1., 1., 1.], [1., 1., 1., 1.], [1., 1., 1., 1.]], [[1., 1., 1., 1.], [1., 1., 1., 1.], [1., 1., 1., 1.]]]) In some cases, we will want to randomly sample the values of all the elements in an ndarray according to some known probability distribution. One common case is when we construct an array to serve as a parameter in a neural network. The following snippet creates an ndarray with shape (\(3\), \(4\)). Each of its elements is randomly sampled from a standard Gaussian (normal) distribution with a mean of \(0\) and a standard deviation of \(1\). np.random.normal(0, 1, size=(3, 4)) array([[]]) We can also specify the value of each element in the desired ndarray by supplying a Python list containing the numerical values. np.array([[2, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]]) array([[2., 1., 4., 3.], [1., 2., 3., 4.], [4., 3., 2., 1.]]) 2.1.2. Operations¶ This book is not about Web development—it is not enough to just read and write values. We want to perform mathematical operations on those arrays. Some of the simplest and most useful operations are the elementwise operations. These apply a standard scalar operation to each element of an array. For functions that take two arrays as inputs, elementwise operations apply some standard binary operator on each pair of corresponding elements from the two arrays. We can create an elementwise function from any function that maps from a scalar to a scalar. 
In math notation, we would denote such a unary scalar operator (taking one input) by the signature \(f: \mathbb{R} \rightarrow \mathbb{R}\) and a binary scalar operator (taking two inputs) by the signature \(f: \mathbb{R}, \mathbb{R} \rightarrow \mathbb{R}\). Given any two vectors \(\mathbf{u}\) and \(\mathbf{v}\) of the same shape, and a binary operator \(f\), we can produce a vector \(\mathbf{c} = F(\mathbf{u},\mathbf{v})\) by setting \(c_i \gets f(u_i, v_i)\) for all \(i\), where \(c_i, u_i\), and \(v_i\) are the \(i^\mathrm{th}\) elements of vectors \(\mathbf{c}, \mathbf{u}\), and \(\mathbf{v}\). Here, we produced the vector-valued \(F: \mathbb{R}^d, \mathbb{R}^d \rightarrow \mathbb{R}^d\) by lifting the scalar function to an elementwise vector operation. In MXNet, the common standard arithmetic operators ( +, -, *, /, and **) have all been lifted to elementwise operations for any identically-shaped tensors of arbitrary shape. We can call elementwise operations on any two tensors of the same shape. In the following example, we use commas to formulate a \(5\)-element tuple, where each element is the result of an elementwise operation. x = np.array([1, 2, 4, 8]) y = np.array([2, 2, 2, 2]) x + y, x - y, x * y, x / y, x ** y # The ** operator is exponentiation (array([ 3., 4., 6., 10.]), array([-1., 0., 2., 6.]), array([ 2., 4., 8., 16.]), array([0.5, 1. , 2. , 4. ]), array([ 1., 4., 16., 64.])) Many more operations can be applied elementwise, including unary operators like exponentiation. np.exp(x) array([2.7182817e+00, 7.3890562e+00, 5.4598148e+01, 2.9809580e+03]) In addition to elementwise computations, we can also perform linear algebra operations, including vector dot products and matrix multiplication. We will explain the crucial bits of linear algebra (with no assumed prior knowledge) in Section 2.4. We can also concatenate multiple ndarrays together, stacking them end-to-end to form a larger ndarray. 
We just need to provide a list of ndarrays and tell the system along which axis to concatenate. The example below shows what happens when we concatenate two matrices along rows (axis \(0\), the first element of the shape) vs. columns (axis \(1\), the second element of the shape). We can see that the first output ndarray’s axis-\(0\) length (\(6\)) is the sum of the two input ndarrays’ axis-\(0\) lengths (\(3 + 3\)), while the second output ndarray’s axis-\(1\) length (\(8\)) is the sum of the two input ndarrays’ axis-\(1\) lengths (\(4 + 4\)).

x = np.arange(12).reshape(3, 4)
y = np.array([[2, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
np.concatenate([x, y], axis=0), np.concatenate([x, y], axis=1)

(array([[ 0., 1., 2., 3.],
 [ 4., 5., 6., 7.],
 [ 8., 9., 10., 11.],
 [ 2., 1., 4., 3.],
 [ 1., 2., 3., 4.],
 [ 4., 3., 2., 1.]]),
 array([[ 0., 1., 2., 3., 2., 1., 4., 3.],
 [ 4., 5., 6., 7., 1., 2., 3., 4.],
 [ 8., 9., 10., 11., 4., 3., 2., 1.]]))

Sometimes, we want to construct a binary ndarray via logical statements. Take x == y as an example. For each position, if x and y are equal at that position, the corresponding entry in the new ndarray takes a value of \(1\), meaning that the logical statement x == y is true at that position; otherwise that position takes \(0\).

x == y
array([[0., 1., 0., 1.],
 [0., 0., 0., 0.],
 [0., 0., 0., 0.]])

Summing all the elements in the ndarray yields an ndarray with only one element.

x.sum()
array(66.)

For stylistic convenience, we can write x.sum() as np.sum(x).

2.1.3. Broadcasting Mechanism

In the above section, we saw how to perform elementwise operations on two ndarrays of the same shape. Under certain conditions, even when shapes differ, we can still perform elementwise operations by invoking the broadcasting mechanism. This mechanism works in the following way: first, expand one or both arrays by copying elements appropriately so that after this transformation, the two ndarrays have the same shape.
Second, carry out the elementwise operations on the resulting arrays. In most cases, we broadcast along an axis where an array initially only has length \(1\), such as in the following example:

a = np.arange(3).reshape(3, 1)
b = np.arange(2).reshape(1, 2)
a, b

(array([[0.],
 [1.],
 [2.]]),
 array([[0., 1.]]))

Since a and b are \(3\times1\) and \(1\times2\) matrices respectively, their shapes do not match up if we want to add them. We broadcast the entries of both matrices into a larger \(3\times2\) matrix as follows: for matrix a it replicates the columns and for matrix b it replicates the rows before adding up both elementwise.

a + b
array([[0., 1.],
 [1., 2.],
 [2., 3.]])

2.1.4. Indexing and Slicing

Just as in any other Python array, elements in an ndarray can be accessed by index. As in any Python array, the first element has index \(0\), and ranges are specified to include the first but not the last element. By this logic, [-1] selects the last element and [1:3] selects the second and the third elements. Let us try this out and compare the outputs.

x[-1], x[1:3]
(array([ 8., 9., 10., 11.]),
 array([[ 4., 5., 6., 7.],
 [ 8., 9., 10., 11.]]))

Beyond reading, we can also write elements of a matrix by specifying indices.

x[1, 2] = 9
x
array([[ 0., 1., 2., 3.],
 [ 4., 5., 9., 7.],
 [ 8., 9., 10., 11.]])

If we want to assign multiple elements the same value, we simply index all of them and then assign them the value. For instance, [0:2, :] accesses the first and second rows, where : takes all the elements along axis \(1\) (column). While we discussed indexing for matrices, this obviously also works for vectors and for tensors of more than \(2\) dimensions.

x[0:2, :] = 12
x
array([[12., 12., 12., 12.],
 [12., 12., 12., 12.],
 [ 8., 9., 10., 11.]])

2.1.5. Saving Memory

In the previous example, every time we ran an operation, we allocated new memory to host its results.
For example, if we write y = x + y, Python first evaluates x + y, allocating new memory for the result, and then redirects y to point at this new location in memory. We can verify this with Python's id function:

before = id(y)
y = y + x
id(y) == before

False

This can be undesirable: the discarded memory is not released right away, and references that still point at the old location make it possible for parts of our code to inadvertently reference stale parameters.

Fortunately, performing in-place operations in MXNet is easy. We can assign the result of an operation to a previously allocated array with slice notation, e.g., y[:] = <expression>. To illustrate this concept, we first create a new matrix z with the same shape as another y, using zeros_like to allocate a block of \(0\) entries.

z = np.zeros_like(y)
print('id(z):', id(z))
z[:] = x + y
print('id(z):', id(z))

id(z): 140064388789904
id(z): 140064388789904

If the value of x is not reused in subsequent computations, we can also use x[:] = x + y or x += y to reduce the memory overhead of the operation.

before = id(x)
x += y
id(x) == before

True

2.1.6. Conversion to Other Python Objects

Converting an MXNet ndarray to an object in the NumPy package of Python, or vice versa, is easy. The converted result does not share memory. This minor inconvenience is actually quite important: when you perform operations on the CPU or on GPUs, you do not want MXNet to halt computation, waiting to see whether the NumPy package of Python might want to be doing something else with the same chunk of memory. The array and asnumpy functions do the trick.

a = x.asnumpy()
b = np.array(a)
type(a), type(b)

(numpy.ndarray, mxnet.numpy.ndarray)

To convert a size-\(1\) ndarray to a Python scalar, we can invoke the item function or Python’s built-in functions.

a = np.array([3.5])
a, a.item(), float(a), int(a)

(array([3.5]), 3.5, 3.5, 3)

2.1.7. Summary

MXNet’s ndarray is an extension to NumPy’s ndarray with a few key advantages that make the former indispensable for deep learning.
MXNet’s ndarray provides a variety of functionalities such as basic mathematics operations, broadcasting, indexing, slicing, memory saving, and conversion to other Python objects.

2.1.8. Exercises

Run the code in this section. Change the conditional statement x == y in this section to x < y or x > y, and then see what kind of ndarray you can get. Replace the two ndarrays that operate by element in the broadcasting mechanism with other shapes, e.g., three-dimensional tensors. Is the result the same as expected?
https://www.d2l.ai/chapter_preliminaries/ndarray.html
CC-MAIN-2019-47
refinedweb
2,706
61.97
PYTHON : OOPs

Python is a multi-paradigm programming language, meaning that it supports different programming approaches. The object-oriented programming approach (OOP) is one of the easiest and most popular among them.

Introduction to OOPs

Object Oriented Programming is a programming approach where programs are organised as objects. This is to say that everything written in the program is treated as an object. OOP is based on these four principles:
- Abstraction
- Encapsulation
- Inheritance
- Polymorphism

Python Object Oriented Programming

Python follows the object-oriented programming style: it treats everything inside the code as an object. An object has two characteristics: attributes and behavior. For example, let’s consider a human as an object. A human has both attributes and behavior: name, age, color, sex, etc. are the attributes of the human object, while walking, talking, eating, etc. are its behaviors. Therefore, any object created in Python has attributes (defined in the constructor) and behavior (defined in methods).

CLASS AND OBJECT

A class is a template or blueprint for an object; an object is an instance of a class. For instance, if we want to construct a house, we require a blueprint of the house. Similarly, a class is the blueprint required to build an object, so a class has to be created before creating an object.

The syntax for creating a class is as follows:
class <class-name>:
For example:
class Mobile:

The syntax for creating an object is as follows:
object = classname(<attributes>)
For example:
m1 = Mobile()

To access members of the object created for a class, we use the dot (.) operator. For example:
m1.buy()
m1.purchase()

METHODS

Methods are functions that are defined inside the body of a class. They define the behavior of an object.
The syntax is as follows:
def <method-name>(self, <attributes>):
For example:
def buy(self, a, b):

self is the keyword that points to the particular object it is referring to. In other words, the self keyword points to the current object in use. To clarify: if there are multiple objects of the same class Mobile, then the object currently in use is what self refers to. It creates a reference to the current object. self behaves like the this operator in Java and C++.

CONSTRUCTOR

A constructor is a special method in Python that is declared inside a class. The constructor is the first method called in a class whenever an object is created for that class. The syntax is as follows:
def __init__(self, <attributes>):
Here, __init__ is the constructor. Attributes are created inside a constructor. There are two types of constructors: default and parameterised.

A default constructor does not accept any attributes, except the inevitable self:
def __init__(self):
A parameterised constructor accepts some parameters/attributes from the programmer:
def __init__(self, <attribute1>, <attribute2>, ...)

[Screenshot from the original post: a code snippet of class Mobile with a parameterised constructor taking price and brand as attributes, an object of class Mobile, and the program's output.]

ABSTRACTION

Abstraction refers to hiding the background details from the user and showing only the necessary details. The main use of abstraction is to reduce complexity and increase the efficiency of the program.

ENCAPSULATION

Encapsulation is the method of restricting access to particular data inside the code. Other programming languages like Java and C++ use access specifiers like private, public and protected to protect their data, but Python DOES NOT support these keywords. A private attribute is one that is denied access from outside the class in Python. It is represented by using __ before the name, for example __sample, self.__acc, etc.
Private attributes are not accessible directly outside the class. To access one outside the class, we have to use getter and setter methods.

A setter method sets the value of a private attribute. It always accepts parameters:
def set_value(self, <parameters>)
A getter method gets the value of a private attribute:
def get_value(self, <parameters>)

*NOTE: GETTER AND SETTER METHODS CAN ONLY BE USED FOR PRIVATE ATTRIBUTES

INHERITANCE

Inheritance is the process of inheriting the properties of one class into another class. In other words, the subclass acquires all the properties of the superclass.

CLASS A —> CLASS B(A)

Suppose we create a class A and we want to inherit all the attributes and behaviour from A into B; then we write class B(A). The inheriting class is the subclass (B) and the inherited class is the superclass (A). Therefore, all the methods and the constructor of class A are inherited into class B. CODE REUSE is the biggest advantage of inheritance.

There are four types of inheritance:
- Single level inheritance: one parent class and one child class
- Multilevel inheritance: one grandparent class, one parent class and one child class
- Hierarchical inheritance: one superclass and two or more subclasses
- Multiple inheritance: multiple parent classes, but one child class

Please refer to the above link for programs on inheritance.

PYTHON POLYMORPHISM

Polymorphism is the ability to use a common interface for multiple forms. Python supports method overriding, but it does not support method overloading.

[Screenshot from the original post: a class Phone inheriting from a superclass Mobile, with both classes defining a buy() method.] The buy() method in the subclass overrides the superclass's buy() method, so it will print "Bought a phone". This illustrates polymorphism in Python.

These topics cover the basics of Object Oriented Programming in Python. I hope this article was of some use to you. Thank you!
https://betapython.com/python-oops-concept/
Details
- Type: Bug
- Status: Closed
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: 2.1.1
- Labels:

Description

The JoinColumn doesn't pick up the referenced column's definition, which includes the length definition, so its length is assigned the default value 255.

@Entity public class Student {
    @Id @Column(name="id", length=128, nullable=false) private String id;
    @Column(name="sName", length=255) private String sName;
    @ManyToMany
    @JoinTable( name="student_course_map", joinColumns= , inverseJoinColumns= )
    public Collection getCourses() ...
}

@Entity public class Courses

We can see the student id length has been defined as 128, and there is no length definition on the JoinColumn student_id, which is therefore set to the default value 255. A warning message like this will occur:

WARN [Schema] Existing column "student_id" on table "test.student_course_map" is incompatible with the same column in the given schema definition. Existing column: Full Name: student_course_map.student_id Type: varchar Size: 128 Default: null Not Null: true Given column: Full Name: student_course_map.student_id Type: varchar Size: 255 Default: null Not Null: true

Activity

Thanks, Albert. I got your patch and applied it to my server (geronimo-tomcat7-javaee6-3.0.0). The warning message mentioned in the JIRA did disappear, but another message occurred. Same message, same problem, but the difference is that the column is defined in a ManyToOne annotation. The source is below:

@ManyToOne(optional=true, cascade={CascadeType.PERSIST, CascadeType.MERGE} )
@JoinColumn(name="classField")
private Location schoolField;

The message is below:

2012-09-05 09:19:58,293 WARN [Schema] Existing column "classField" on table "test.classField" is incompatible with the same column in the given schema definition.
Existing column: Full Name: classes.classField Type: varchar Size: 255 Default: null Not Null: false Given column: Full Name: classes.classField Type: varchar Size: 128 Default: null Not Null: false

XieZhi, can you provide a more concrete test case and the conditions reproducing the failure? I don't think the new message is in error. What the message is saying is that classField in the database has column length 255, whereas the id field on the @OneToMany side has column length 128. This means OpenJPA recognized the length=128 set on the id field. Before the patch, the join column did not pick up the id column length and defaulted to 255, therefore this message did not happen. Either the id column has to match the join column length, or the join column length needs to match the id length. This is just the reverse of the original scenario. Albert Lee.

This problem only happens when

This problem also affects the Mapping tool's create-table operation, which always assumes the default VARCHAR/CHAR length defined in the database dictionary. Attached a patch for trunk. Please try whether this has resolved your issue. For a fix/commit in the 2.2.x, 2.1.x and 2.0.x releases, you will need to work with the IBM service channel to get this fix into those releases. Albert Lee.
https://issues.apache.org/jira/browse/OPENJPA-2255
OnPreRender is called before a camera starts rendering the Scene. This message is sent to all scripts attached to the camera. Note that if you change the camera's viewing parameters (e.g. fieldOfView) here, they will only take effect the next frame. Do that in OnPreCull instead. Also note that when OnPreRender is called, the camera's render target is not set up yet, and the depth texture(s) are not rendered yet either. If you want to do something later on (when the render target is already set), try using a CommandBuffer. See Also: onPreRender delegate.

#pragma strict
public class Example extends MonoBehaviour {
    var revertFogState: boolean = false;
    function OnPreRender() {
        revertFogState = RenderSettings.fog;
        RenderSettings.fog = enabled;
    }
    function OnPostRender() {
        RenderSettings.fog = revertFogState;
    }
}
https://docs.unity3d.com/2017.4/Documentation/ScriptReference/Camera.OnPreRender.html
Subject: Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale From: Ake Sandgren (ake.sandgren_at_[hidden]) Date: 2010-02-10 09:04:00 On Wed, 2010-02-10 at 08:21 -0500, Jeff Squyres wrote: > On Feb 10, 2010, at 7:47 AM, Ake Sandgren wrote: > > > According to people who knows asm statements fairly well (compiler > > developers), it should be > > >), "2"(*addr), "1"(oldval) > > : "memory", "cc"); > > > > return (int)ret; > > } > > Disclaimer: I know almost nothing about assembly. > > I know that OMPI's asm is a carefully crafted set of assembly that works across a broad range of compilers. So what might not be "quite right" for one compiler may actually be there because another compiler needs it. > > That being said, if the changes above are for correctness, not neatness/style/etc., I can't speak for that... The above should be correct for gcc style unless i misunderstood them. Quoting from their reply: 'it should be "memory", "cc" since you also have to tell gcc you're clobbering the EFLAGS' And i don't know asm either so... -- Ake Sandgren, HPC2N, Umea University, S-90187 Umea, Sweden Internet: ake_at_[hidden] Phone: +46 90 7866134 Fax: +46 90 7866126 Mobile: +46 70 7716134 WWW:
http://www.open-mpi.org/community/lists/users/2010/02/12069.php
I was looking at this post. This led me to the Allegro development road map. I thought I would contribute one of the "Nice To Have" items.

// header file
#ifndef __al_included_allegro5_alprintf_h
#define __al_included_allegro5_alprintf_h
#include <allegro5\allegro.h>
#include <stdarg.h>
bool al_vfprintf(ALLEGRO_FILE *pfile, const char *format, va_list args);
bool al_fprintf(ALLEGRO_FILE *pfile, const char *format, ...);
#endif // __al_included_allegro5_alprintf_h

You beat me to it by actually doing work, you monster!
-----sig: "Programs should be written for people to read, and only incidentally for machines to execute." - Structure and Interpretation of Computer Programs

Is there any reason you have two functions instead of just one? The first one should probably be called al_vfprintf rather than al_fprintf. Not least of which because C doesn't let you overload. -- Trent

Been working with ALLEGRO_USTR a lot lately, and since it has two appendf functions (... & va_args), I was just copying the same style. Thomas

Fixed.

I think this is a useful addition for someone to add. Will try to remember to do it myself.

I agree. This is definitely useful. Between supporting format printing to files, it also uses the ustr code to make sure UTF8 writing works. Bump to make sure this doesn't go forgotten.

Both functions are nice.

Line 46: rv = al_fprintf(pfile, format, args); — I think you meant al_vfprintf there, since you changed the name. Did you test this code a bit? Hammer it a little with invalid input and the like? Once validated, I can apply. Edgar

Fixed. Thomas

If someone else could tackle this — I don't quite have the time for a while.

I have put the code into aintern_file, and will test it a bit. Instead of returning a bool I would prefer to return an int, indicating the number of bytes written... tobing

I had it return bool because of the use of ustr's appendf function. Of course I agree it should return the size written to file. I haven't used plain C in forever.
Does it support size_t?

// header file
#ifndef __al_included_allegro5_alprintf_h
#define __al_included_allegro5_alprintf_h
#include <allegro5\allegro.h>
#include <stdarg.h>
size_t al_vfprintf(ALLEGRO_FILE *pfile, const char *format, va_list args);
size_t al_fprintf(ALLEGRO_FILE *pfile, const char *format, ...);
#endif // __al_included_allegro5_alprintf_h

On a side note:
al_ustr_dup(al_ustr_empty_string());
vs.
al_ustr_new("");

P.S. at work so can't actually run and test the code I just changed.

size_t is valid in C as well, and most functions, especially newer ones, will use size_t over int. size_t may even be 64-bit on a 64-bit platform.

Here's a patch for current origin/5.1 in git that contains the last implementation by DanielH, but with al_ustr_new("") instead of the dup on empty string. I have tested this a bit, and it works fine with unicode characters.

This is committed. I added some quick documentation. Also changed it to return a negative number on error like stdio fprintf. Thanks for submitting and doing the cleanup.

The return is a size_t, so returning -1 is not exactly doing what it should do, so maybe we should return int instead of size_t? The libc fprintf returns the number of bytes in an int, so I'd vote for that. Someday 2147483648 bytes in a single write won't be enough, but today is not that day.

> The return is a size_t, so returning -1 is not exactly doing what it should do, so maybe we should return int instead of size_t?

Good catch, thanks. Changing it to int.

Thanks guys. However, can someone answer a previous question?
al_ustr_dup(al_ustr_empty_string());
vs.
al_ustr_new("");
I see that my code was changed for the latter case. In looking at others' code, I've seen both.

Ah, I didn't realize that this was a question. Well, I used the second variant because the result seems to be the same, but uses only one call.

You're right. I did post it, but didn't phrase it as a question.
Small patch to fix a warning about comparison between signed and unsigned integers (I hate seeing warnings):

Committed. Thanks a lot!
https://www.allegro.cc/forums/thread/614829/1008605
Type: Posts; User: Traps

I don't know why, but I GOT IT! Dictionary_RTB(sender).AppendText(Dictionary_Output(sender)); string tmp0 = Dictionary_Output(sender); // This text formatting code is...

OMG, why doesn't this work, please help me. tmp0 = "Microsoft Windows [Version 6.0.6001]" tmp = 37 this is the FIRST time that this code is hit when I run my program. Prior to that,...

Thanks, but I don't need to ask in the VB forum, as VB=C#, and any source code I provide you will be in C#. As far as using Spy++, that's impossible, I don't know what windows the end user will be sending...

Hey, thanks, I'll definitely check out those links. Funny you mention VB, because I am writing this in VB. :) Remember I'm trying to get the hwnd for any control on any application, I'm not going...

I think FindWindowEx will do it, any ideas on the parameters? Here's what I got so far: FindWindowEx(GetForegroundWindow(), null, null, null); and it's not much. If Spy++ can do it, so can...

I've done this a couple of years ago, but I certainly don't remember what APIs I used. Anyone know offhand how to do it?

Well, the formatting sticks, IF it gets applied, so it's still not working correctly. Here's my code: private void UpdateText(clsServer sender) { try { ...
I thought you were like perhaps setting focus to the vscrollbar, and using the mousewheel event of... Bah. The answer is simple and here it is in vb (yes I was writing it in vb for another forum. C# is my one true love though :)) Public Class Form1 Private pb As New PictureBox ... No, that sounds more like a hack than a solution. This is close, but not quite right, as I need to know when the picturebox has focus. using System; using System.Collections.Generic;... I feel incredibly stupid after reading darwen's post. More so than usual. You are one smart person. Do I have to pinvoke to do this? I tried public partial class Form1 : Form { public Form1() { InitializeComponent(); Hi, you need to first determine the top and left edge values for the container of your listview (this may be a form if it is, then ignore these values). Next you determine the left and top edge of... Your in the c# forum, so I assume you want to write this in c#. You will need to create your own .exe Basically you will be creating your own editor. It will have a textbox, it will detect each... Agreed. We think a like on opposite sides of the world. Now go learn WPF so you can teach me in codeguru's new WPF forum :lol:. We need some WPF guru's here, and fast! :thumb: I'm still trying... private void textBox1_KeyPress(object sender, KeyPressEventArgs e) { if (e.KeyChar == '.') { if (textBox1.Text.Contains(".")) { ... What is the advantage.... muahahaha Look at this: Or this: Perhaps this:... Where not just speaking about video learning here. Where talking about the Microsoft learning path. Most of them come with : Video Transcript Project Code files Study Guide and Exercises ... Try this: // Set text boxes for start of job txtStatus.Text = "Started ..."; textBox1.Text = ""; txtStatus.Refresh(); textBox1.Refresh(); // and for good measure... toraj58, I agree with you, and I disagree with you. The point that I disagree on is first learning .net. 
I honestly dont see ANYTHING better than the development path provided by microsoft in... I am very happy to see codeguru hosting a dedicated WPF forum. I hope it takes off ( It should! all the experts here seriously need to start thinking about WPF, if they want to preserve their... Accessing the control is not exactly the same. You will learn. I am learning as well. Consider "attached properties" and "dependency properties". WPF is a completely different beast from your...
http://forums.codeguru.com/search.php?s=9d6df9b6b91d503777c0f648c90b606e&searchid=1920923
Images not drawing in eclipse - java

I have a BufferedImage space; and I then try to initialize it by using the try-catch
try {
    space = ImageIO.read(new File("simple-star-space-background-effect-footage-023768280_prevstill.jpg"));
} catch(Exception e) {}
When I try using g.drawImage(space, 0, 0, 800, 600, null); it doesn't show up. I think it has something to do with Eclipse because when I used a basic text editor, it worked. Here is a screenshot of where my pics are relevant to the program. They are in the same directory "src" but no image shows up. The Screenshot

Please use:
try {
    File myImage = new File("simple-star-space-background-effect-footage-023768280_prevstill.jpg");
    System.out.println(myImage.getAbsolutePath());
    space = ImageIO.read(myImage);
} catch(Exception e) {
    e.printStackTrace();
}
I assume your path to your image is not correct, since you expect to have it in your user.dir system property. Also, you eat the exception, which causes you not to see what the root cause of your issue is. My hint is also to avoid inline code: define variables, which can be much more easily debugged in the Eclipse debugger than a single long line which does multiple executions (like your new File(...) inlined in reading the image).

Related: File saving path

I have created code that makes print screens and saves them as an image, but I don't really know how to change the path for saving the file to another folder in my main project folder. Any ideas?
private static void print(JPanel comp, String nazwa) {
    // Create a `BufferedImage` and create its `Graphics`
    BufferedImage image = GraphicsEnvironment.getLocalGraphicsEnvironment()
            .getDefaultScreenDevice().getDefaultConfiguration()
            .createCompatibleImage(comp.getWidth(), comp.getHeight());
    Graphics graphics = image.createGraphics();
    // Print to BufferedImage
    comp.paint(graphics);
    graphics.dispose();
    // Output the `BufferedImage` via `ImageIO`
    try {
        ImageIO.write(image, "png", new File(nazwa+".png"));
    } catch (IOException e) {
        e.printStackTrace();
    }
}

Write the full path in the File constructor:
new File("/home/cipek/images/filename.png")

However, I still have a problem. I am not sure that I understand it properly.
cipek - is my project name
images - folder where I want to keep images
But what about home?? I rewrote "home", however it doesn't work. I don't want to give the whole path because I will use this program on other computers, so the path will be different every time.
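One portable option for the question above (my suggestion, not from the thread): build the path from the user.dir system property at run time instead of hard-coding it, so it resolves correctly on every computer. The images folder name here just mirrors the poster's layout:

```java
import java.io.File;
import java.nio.file.Path;
import java.nio.file.Paths;

class SavePath {
    // Resolve an "images" folder relative to wherever the program was
    // started from (the user.dir system property), instead of
    // hard-coding something like /home/cipek/images/ into the source.
    static File saveTarget(String baseName) {
        Path dir = Paths.get(System.getProperty("user.dir"), "images");
        return dir.resolve(baseName + ".png").toFile();
    }

    public static void main(String[] args) {
        File f = saveTarget("screenshot");
        // Before actually writing, create the folder once with
        // f.getParentFile().mkdirs();
        System.out.println(f.getAbsolutePath().endsWith(
                "images" + File.separator + "screenshot.png"));
    }
}
```

The returned File can then be passed straight to ImageIO.write(image, "png", f) in the print method above.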
In the standard output I see Removing: /foo/img1.jpg Removing: /foo/img2.jpg ... No exceptions are thrown, but the image files are still there if I check the contents of the folder. All files in the folder which are not images (and are not displayed in the VBox) are removed successfully, but the images displayed in the VBox are not. I think the cause is that after imageVBox.getChildren().clear(); a background thread starts to remove the images and the .clear() method returns immediately. This way the code block which removes the files is executed before the Image resources are closed. What would be the best way to close the images? And why is there no exception thrown by the Files.delete() method? I know it's a really old question, but anyone can have the same problem; I had it a few days ago. The problem is really simple: when you create your ImageView, the image is loaded by Java, and you can't delete the file before freeing the memory. I don't know why there is no error, but you can see an error if you try to delete the image file manually during execution of the jar. To free the ImageView you have to do this: Image graph = new Image(Files.newInputStream(file)); ImageView graphView = new ImageView(graph); graph = null; graphView.setImage(null); System.gc(); Don't forget System.gc(); that calls the garbage collector, which frees the memory, and after that you can delete the file. Enjoy, and sorry for the really bad English. Having trouble loading image to screen I am making a game and I am stuck trying to get the image to the screen. It states that- java.lang.IllegalArgumentException: input == null! at javax.imageio.ImageIO.read(Unknown Source) I am sure that my image path is the right path, but it keeps stating that it's not. background = new Background("C:/hello/flappybird.png"); The "debug" section of my workspace states that there is a problem with background.render(g); specifically, with g.drawImage(img,(int)x,(int)y,null); and I have no idea why it is doing this.
I am 100% sure my build path is right! EDIT: just in case you want to know the constructor of the Background class: public Background(String s) { try { img = ImageIO.read(this.getClass().getResource(s)); } catch (IOException e) { e.printStackTrace(); } } This error indicates the image file wasn't found. To be sure that this is really the case, I urge you to execute ImageIO.read() directly and check the result: BufferedImage icon = ImageIO.read(new File("C:/hello/flappybird.png")); Also, it's not a good practice to use resources outside the classpath. I strongly suggest you change your class Background to use classpath resources. Somewhere inside your Background class, you could add: public class Background { public Background(String filename) { // Some code here BufferedImage image = ImageIO.read(getClass().getResource("/resources/images/" + filename)); // More code here } } I have found a solution! It seems that in gamePanel, after I declared my BufferedImage, I forgot to add- Graphics g = image.getGraphics(); sorry guys, I feel so stupid XD Why is there no image when running from a .jar file? I'm trying to make my panel show an image as background. I already can do that in NetBeans, but when I build my jar and run it the image doesn't show there. I know I have to access it differently. I have seen many tutorials but every one of them shows how to do it with ImageIcon, but I don't need that, I need just Image. Can anyone point out what piece of code I need to do this? Thanks. This is my code for the backgrounded JPanel: public class JPanelWB extends JPanel { // Creates JPanel with given image as background. private Image backgroundImage; public JPanelWB(String fileName){ try { backgroundImage = ImageIO.read(new File(fileName)); } catch (IOException ex) { new JDialog().add(new Label("Could not open image."+ex.getMessage())); } } @Override public void paintComponent(Graphics g) { super.paintComponent(g); // Draw the background image.
g.drawImage(backgroundImage, 0, 0, getWidth(),getHeight(),this); } } Yeah, you're trying to read in the image as a file -- don't do that since files don't exist within a Jar file. Instead read it in as a resource. Something like so: public JPanelWB(String resourceName){ try { // backgroundImage = ImageIO.read(new File(resourceName)); backgroundImage = ImageIO.read(getClass().getResource(resourceName)); } catch (IOException ex) { new JDialog().add(new Label("Could not open image."+ex.getMessage())); } } But note that resource path is different from file path. The resource path is relative to the location of your class files. If you want to read a new image and import it as background, people smarter than me already answered your question. But if your problem is similar to mine, then this might help: If you already have images to show, then the point is to call them from an absolute path. An executable class from a JAR will read the drive created inside the virtual machine, not the physical drive in your computer. Put images in a short-pathed folder like C:\J\proj\img\ and call them with an absolute path like "C:\\J\\proj\\img\\your_image.png" // (Don't forget the double backslashes.) (If you don't mind path length, leave them in the image folder inside your project package, and call them from there.) NetBeans will pack them into the JAR with the absolute path. On execution the JRE will create a JVM with that path in it, take the images from the JAR and put them to that virtual path. The class will be able to find them, because it doesn't read the path from the physical drive, but from its own virtual one newly created inside the JVM. In that case avoiding ImageIcon is just more clutter, not less. You can add "blackBoard" as a JLabel to be the background of your JFrame, set its layout to null, something like this: private JLabel blackBoard; private JLabel noteToSelf; //..... blackBoard = new JLabel(); noteToSelf = new JLabel(); //.....
// putting JLabel "blackBoard" as background into JFrame blackBoard.setIcon(new ImageIcon("c:\\Java\\images\\MarbleTable.png")); getContentPane().add(blackBoard); blackBoard.setBounds(1, 1, 400, 440); blackBoard.setLayout(null); and then add components into "blackBoard" instead of your JFrame, like this. // putting JLabel "noteToSelf" onto background noteToSelf.setIcon(new ImageIcon("c:\\Java\\images\\Sticker_a1.png")); // or: noteToSelf.setText("Remind me at 6:30am..."); blackBoard.add(noteToSelf); noteToSelf.setBounds(noteX, noteY, 64, 48); Now your JFrame is the table board and "blackBoard" is the table sheet on it. Hope this helps. Problem converting Java applet to Jar. Maybe Image loading problem I'm writing an applet in Eclipse and under the Eclipse environment it works well. While creating a jar file from this project, the problems start. After testing the jar with several options, I think the problem is with loading an image from a web page. Any other features from the applet seem to work ok in the jar. The code for loading the image in my project looks like this: MediaTracker mt = new MediaTracker(this); String photo = imagePath; URL base = null; try { base = getDocumentBase(); } catch (Exception e) { } if(base == null){ System.out.println("ERROR LOADING IMAGE"); } Image imageBase = getImage(base,photo); // Some code that works on the image (not relevant) // The rest of the code icon.setImage(image); imageLabel.setIcon(icon); But the jar cannot load the image and it doesn't display it while running, and the applet is stuck because of that (unlike in Eclipse, which loads the image and shows it). What could be the problem? A second problem is that from the applet in Eclipse the loading takes a few seconds. Is there a way to speed things up? Thanks for any help, I have no idea how this could be working in Eclipse.
The problem is that getDocumentBase() returns the location of the page in which the applet is embedded, and you are trying to load a picture from that location. Obviously, there is no picture, just an html (or php) file, and the loading fails. If your goal is to load an image from inside the jar, try: Image img = null; try { img = ImageIO.read(getClass().getResource("/images/tree.png")); } catch (IOException ex) { System.err.println("Picture loading failed!"); } where "/images/tree.png" is the path to the image file in your source tree. EDIT: If you need just to load an image from a URL, you can use: Image img = null; try { img = ImageIO.read(new URL("")); } catch (IOException ex) { System.err.println("Picture loading failed!"); } This method is a bit better than Applet.getImage(new URL(...)) - I had some problems when loading many images.
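A recurring failure mode in the threads above is getClass().getResource(...) quietly returning null, which ImageIO.read then reports as "input == null!". A small sketch (not taken from any of the answers; the class and method names are illustrative) of a null-check that turns the crash into a readable message:

```java
import java.net.URL;

class ResourceCheckDemo {
    // Returns a diagnostic string instead of letting ImageIO.read crash
    // with "IllegalArgumentException: input == null!".
    static String describe(Class<?> anchor, String path) {
        URL url = anchor.getResource(path);
        if (url == null) {
            return "missing: " + path;   // getResource found nothing on the classpath
        }
        return "found: " + url;
    }
}
```

Calling describe(...) before ImageIO.read makes it obvious whether the resource path, not the decoding, is the problem.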
https://java.develop-bugs.com/article/10000111/Images+not+drawing+in+eclipse
#include "apr.h" Go to the source code of this file. APR exposes its version information both through compile-time constants and through a run-time query function. APR version numbering follows the guidelines specified in the APR versioning rules. APR_MAJOR_VERSION - major version. Major API changes that could cause compatibility problems for older programs, such as structure size changes. No binary compatibility is possible across a change in the major version. APR_MINOR_VERSION - minor version. Minor API changes that do not cause binary compatibility problems. Should be reset to 0 when upgrading APR_MAJOR_VERSION. APR_PATCH_VERSION - patch level. Internal: string form of the "is dev" flag. apr_version() - return APR's version information in a numeric form. apr_version_string() - return APR's version information as a string.
http://apr.apache.org/docs/apr/0.9/apr__version_8h.html
Tim Bray Joins Google On his blog today, Tim Bray, the 55-year-old software developer and entrepreneur, proclaimed that he now works for Google. Bray's a Canadian. Back in the mid to late 1990s he helped nail down the XML and XML namespace specifications. Until Feb. 26, 2010, Bray had been working at Sun Microsystems, but he declined an offer by Oracle to stay with the company. "I'll maybe tell the story," Bray blogged about the Oracle offer, "when I can think about it without getting that weird spiking-blood-pressure sensation in my eyeballs." Now Bray is a "Developer Advocate" at Google focusing on the open source Android mobile operating system. "The reason I'm here is mostly Android. Which seems to me about as unambiguously a good thing as the tangled wrinkly human texture of the Net can sustain just now," he said. Bray really, really wants Android to beat Apple. Despite Apple's great iPhone hardware and software, Bray said, "I hate it."
http://www.developer.com/daily_news/article.php/394818/Tim-Bray-Joins-Google.htm
ldns_dnssec_rrsets_type (3) - Linux Man Pages SYNOPSIS #include <stdint.h> #include <stdbool.h> #include <ldns/ldns.h> ldns_dnssec_rrsets* ldns_dnssec_rrsets_new(); DESCRIPTION - ldns_dnssec_rrsets_new() - Creates a new list (entry) of RRsets. Returns the newly allocated structure. - ldns_dnssec_rrsets_free() - Frees the list of rrsets and their rrs, but *not* the ldns_rr records in the sets. rrsets: the data structure to free. - ldns_dnssec_rrsets_type() - Returns the rr type of the rrset (that is head of the given list). rrsets: the rrset to get the type of. Returns the rr type. - ldns_dnssec_rrsets_set_type() - Sets the RR type of the rrset (that is head of the given list). rrsets: the rrset to set the type of. type: the type to set. - ldns_dnssec_rrsets_add_rr() - Adds an RR to the list of rrsets. rrsets: the list of rrsets to add the RR to. rr: the rr to add to the list of rrsets. Returns LDNS_STATUS_OK on success. - ldns_dnssec_rrsets_print() - Print the given list of rrsets to the given file descriptor. out: the file descriptor to print to. rrsets: the list of RRsets to print. follow: if set to false, only print the first RRset.
https://www.systutorials.com/docs/linux/man/docs/linux/man/3-ldns_dnssec_rrsets_type/
Section 4.2 Constructors OBJECT TYPES IN JAVA ARE VERY DIFFERENT from the primitive types. Simply declaring a variable whose type is given as a class does not automatically create an object of that class. Objects must be explicitly constructed. The process of constructing an object means, first, finding some unused memory in the heap that can be used to hold the object and, second, filling in the object's instance variables. As a programmer, you will usually want to exercise some control over what initial values are stored in a new object's instance variables. There are two ways to do this. The first is to provide initial values in the class definition, where the instance variables are declared. For example, consider the class:class Mosaic { // class to represent "mosaics" consisting of // colored squares arranged in rows and columns int ROWS = 10; // number of rows of squares int COLS = 20; // number of columns of squares . . // (the rest of the class definition) . } When an object of type Mosaic is created, it includes two instance variables named ROWS and COLS, which are initialized with the values 10 and 20, respectively. This means that for every newly created object, msc, of type Mosaic, the value of msc.ROWS will be 10, and the value of msc.COLS will be 20. (Of course, there is nothing to stop you from changing those values after the object has been created.) If you don't provide any initial value for an instance variable, a default initial value is provided automatically (zero for numeric types, false for booleans, and null for object types). Of course, you can provide an alternative initial value if you like. For example, the class Mosaic might contain an instance variable of type MosaicWindow, where MosaicWindow is the name of another class. This instance variable could be initialized with a new object of type MosaicWindow:class Mosaic { int ROWS = 10; // number of rows of squares int COLS = 20; // number of columns of squares MosaicWindow window = new MosaicWindow(); // a window to display the mosaic . . // (the rest of the class definition) .
} When an object of class Mosaic is constructed, another object of type MosaicWindow is automatically constructed, and a reference to the new MosaicWindow is stored in the instance variable named window. (Note that the statement "MosaicWindow window = new MosaicWindow();" is not executed unless and until an object of class Mosaic is created. And it is executed again for each new object of class Mosaic, so that each Mosaic object gets its own new MosaicWindow object.) There is a second way to get initial values into the instance variables of a class. That is to provide one or more constructors for the class. In fact, constructors can do more than just fill in instance variables: They let you program any actions whatsoever that you would like to take place automatically, every time an object of the class is created. A constructor for a class is defined to be a subroutine in that class whose name is the same as the name of the class and which has no return value, not even void. A constructor can have parameters. You can have several different constructors in one class, provided they have different signatures (that is, provided they have different numbers or types of parameters). A constructor cannot be declared to be static, but, on the other hand, it's not really an instance method either. The only way you can call a constructor is with the new operator. In fact, the syntax of the new operator is: new constructor-call When the computer evaluates this expression, it creates a new object, executes the constructor, and returns a reference to the new object. 
As an example, let's rewrite the Student class that was used in the previous section:public class Student { private String name; // Student's name private int ID; // unique ID number for this student public double test1, test2, test3; // grades on three tests private static int nextUniqueID = 1; // next available unique ID number Student(String theName) { // constructor for Student objects; // provides a name for the Student, // and assigns the student a unique // ID number name = theName; ID = nextUniqueID; nextUniqueID++; } public String getName() { // accessor method for reading value of private // instance variable, name return name; } public int getID() { // accessor method for reading value of ID return ID; } public double getAverage() { // compute average test grade return (test1 + test2 + test3) / 3; } } // end of class Student In this version of the class, I have provided a constructor, Student(String). This constructor has a parameter of type String that specifies the name of the student. I've made the instance variable name into a private member, so that I can keep complete control over its value. In fact, by examining the class, you can see that once the value of name has been set by the constructor, there is no way for it ever to be changed: A name is assigned to a Student object when it is created, and the name remains the same for as long as the object exists. Notice that since name is a private variable, I've provided a function, getName(), that can be used from outside the class to find out the name of the student. Thus, from outside the class, it's possible to discover the name of a student, but not to change the name. This is a very typical way of controlling access to a variable. The ID instance variable in the class Student is handled in a similar way. I should note, by the way, that if you provide initial values for instance variables, those values are computed and stored in the variables before the constructor is called.
It's common to use a combination of initial values and constructors to set up new objects just the way you want them. Since the constructor in this class has a parameter of type String, a value of type String must be included when the constructor is called. Here are some examples of using this constructor to make new Student objects:std = new Student("John Smith"); std1 = new Student("Mary Jones"); You've probably noticed that the previous version of class Student did not include a constructor. Yet, we were able to construct instances of the class using the operator "new Student()". The rule is that if you don't provide any constructor in a class, then a default constructor, with no parameters, is provided automatically. The default constructor doesn't do anything beyond filling in the instance variables with their initial values. An object exists on the heap, and it can be accessed only through variables that hold references to the object. What happens to an object if there are no variables that refer to it? In Java, such an object is removed automatically by the garbage collector; in many other programming languages, it's the programmer's responsibility to dispose of unused objects.
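The ordering described above (instance-variable initializers run first, then the constructor body) can be checked with a small sketch. The Counter class and its names below are illustrative, not from the text:

```java
// Instance-variable initializers are stored before the constructor body runs,
// so the constructor can read and modify the already-initialized value.
class Counter {
    private int count = 10;          // initializer: stored before the constructor runs
    private final String label;

    Counter(String label) {
        this.label = label;
        this.count = this.count + 1; // the constructor sees the initialized value 10
    }

    int getCount() { return count; }
    String getLabel() { return label; }
}
```

After new Counter("demo"), getCount() returns 11: the initializer set 10, then the constructor added 1.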
http://math.hws.edu/eck/cs124/javanotes1/c4/s2.html
This Tutorial Explains XPath Axes for Dynamic XPath in Selenium WebDriver With the help of Various XPath Axes Used, Examples and Explanation of Structure: In the previous tutorial, we learned about XPath functions and their importance in identifying the element. However, when more than one element has a similar orientation and nomenclature, it becomes impossible to identify the element uniquely. What You Will Learn: Understanding XPath Axes Let us understand the above-mentioned scenario with the help of an example. Think about a scenario where two links with “Edit” text are used. In such cases, it becomes pertinent to understand the nodal structure of the HTML. Please copy-paste the below code into notepad and save it as .htm file. <!DOCTYPE html> <html> <body> <div class="1" align="left"> <a href="">Edit</a> <div class="2" align="left"> <a href="">Edit</a> </div> </div> </body> </html> The UI will look like the below screen: Problem Statement Q #1) What to do when even XPath Functions fail to identify the element? Answer: In such a case, we make use of the XPath Axes along with XPath Functions. The second part of this article deals with how we can use the hierarchical HTML format to identify the element. We will start by getting a little information on the XPath Axes. Q #2) What are XPath Axes? Answer: An XPath axis defines a node-set relative to the current (context) node. It is used to locate nodes relative to the context node in the tree. Q #3) What is a Context Node? Answer: A context node can be defined as the node the XPath processor is currently looking at. Different XPath Axes Used In Selenium Testing There are thirteen different axes that are listed below. However, we're not going to use all of them during Selenium testing. - ancestor: This axis indicates all the ancestors relative to the context node, also reaching up to the root node.
- ancestor-or-self: This one indicates the context node and all the ancestors relative to the context node, and includes the root node. - attribute: This indicates the attributes of the context node. It can be represented with the "@" symbol. - child: This indicates the children of the context node. - descendant: This indicates the children, and grandchildren and their children (if any) of the context node. This does NOT indicate the attribute and namespace. - descendant-or-self: This indicates the context node and the children, and grandchildren and their children (if any) of the context node. This does NOT indicate the attribute and namespace. - following: This indicates all the nodes that appear after the context node in the HTML DOM structure. This does NOT indicate descendant, attribute, and namespace. - following-sibling: This one indicates all the sibling nodes (same parent as context node) that appear after the context node in the HTML DOM structure. This does NOT indicate descendant, attribute, and namespace. - namespace: This indicates all the namespace nodes of the context node. - parent: This indicates the parent of the context node. - preceding: This indicates all the nodes that appear before the context node in the HTML DOM structure. This does NOT indicate descendant, attribute, and namespace. - preceding-sibling: This one indicates all the sibling nodes (same parent as context node) that appear before the context node in the HTML DOM structure. This does NOT indicate descendant, attribute and namespace. - self: This one indicates the context node. Structure Of XPath Axes Consider the below hierarchy for understanding how the XPath Axes work. Refer below a simple HTML code for the above example. Please copy-paste the below code in the notepad editor and save it as .html file.
<!DOCTYPE html> <html> <body> <div class="Animal" align="center"> <h2>Animal</h2> <div class="Vertebrate" align="left"> <h3 align="left">Vertebrate</h3> <div class="Fish" style="white-space:pre"> <h4>Fish</h4> </div> <div class="Mammal"> <h4>Mammal</h4> <div class="Herbivore"> <h5>Herbivore</h5> </div> <div class="Carnivore"> <h5>Carnivore</h5> <div class="Lion"> <h6>Lion</h6> </div> <div class="Tiger"> <h6>Tiger</h6> </div> </div> </div> <div class="Other"> <h4>Other</h4> </div> </div> <div class="Invertebrate"> <h3>Invertebrate</h3> <div class="Insect"> <h4>Insect</h4> </div> <div class="Crustacean"> <h4>Crustacean</h4> </div> </div> </div> </body> </html> The page will look like below. Our mission is to make use of the XPath Axes to find the elements uniquely. Let's try to identify the elements that are marked in the chart above. The context node is “Mammal” #1) Ancestor Agenda: To identify the ancestor element from the context node. XPath#1: //div[@class=’Mammal’]/ancestor::div The XPath “//div[@class=’Mammal’]/ancestor::div” throws two matching nodes: - Vertebrate, as it is the parent of “Mammal”, hence it is considered the ancestor too. - Animal, as it is the parent of the parent of “Mammal”, hence it is considered an ancestor. Now, we only need to identify one element that is “Animal” class. We can use the XPath as mentioned below. XPath#2: //div[@class='Mammal']/ancestor::div[@class='Animal'] If you want to reach the text “Animal”, below XPath can be used. #2) Ancestor-or-self Agenda: To identify the context node and the ancestor element from the context node. XPath#1: //div[@class=’Mammal’]/ancestor-or-self::div The above XPath#1 throws three matching nodes: - Animal(Ancestor) - Vertebrate - Mammal(Self) #3) Child Agenda: To identify the child of context node “Mammal”. XPath#1: //div[@class=’Mammal’]/child::div XPath#1 helps to identify all the children of context node “Mammal”. If you want to get the specific child element, please use XPath#2.
XPath#2: //div[@class=’Mammal’]/child::div[@class=’Herbivore’]/h5 #4) Descendant Agenda: To identify the children and grandchildren of the context node (for instance: ‘Animal’). XPath#1: //div[@class=’Animal’]/descendant::div As Animal is the top member in the hierarchy, all the child and descendant elements are getting highlighted. We can also change the context node for our reference and use any element we want as the node. #5) Descendant-or-self Agenda: To find the element itself, and its descendants. XPath#1: //div[@class=’Animal’]/descendant-or-self::div The only difference between descendant and descendant-or-self is that it highlights itself in addition to highlighting the descendants. #6) Following Agenda: To find all the nodes that follow the context node. Here, the context node is the div that contains the Mammal element. XPath: //div[@class=’Mammal’]/following::div In the following axes, all the nodes that follow the context node, be it the child or descendant, are getting highlighted. #7) Following-sibling Agenda: To find all the nodes after the context node that share the same parent, and are a sibling to the context node. XPath: //div[@class=’Mammal’]/following-sibling::div The major difference between the following and following-sibling is that the following-sibling takes all the sibling nodes after the context but will also share the same parent. #8) Preceding Agenda: It takes all the nodes that come before the context node. It may be the parent or the grandparent node. Here the context node is Invertebrate and highlighted lines in the above image are all the nodes that come before the Invertebrate node. #9) Preceding-sibling Agenda: To find the sibling that shares the same parent as the context node, and that comes before the context node. As the context node is the Invertebrate, the only element that is being highlighted is the Vertebrate as these two are siblings and share the same parent ‘Animal’.
#10) Parent Agenda: To find the parent element of the context node. If the context node itself is an ancestor, it won't have a parent node and would fetch no matching nodes. Context Node#1: Mammal XPath: //div[@class=’Mammal’]/parent::div As the context node is Mammal, the element with Vertebrate is getting highlighted as that is the parent of the Mammal. Context Node#2: Animal XPath: //div[@class=’Animal’]/parent::div As the Animal node itself is the ancestor, it won't highlight any nodes, and hence No Matching nodes were found. #11) Self Agenda: To find the context node, the self is used. Context Node: Mammal XPath: //div[@class=’Mammal’]/self::div As we can see above, the Mammal object has been identified uniquely. We can also select the text “Mammal” by using the below XPath. XPath: //div[@class=’Mammal’]/self::div/h4 Uses Of Preceding And Following Axes Suppose you know how many tags ahead of or behind the context node your target element is; then you can directly highlight that element and not all the elements. Example: Preceding (with index) Let's assume our context node is “Other” and we want to reach the element “Mammal”; we would use the below approach to do so. First Step: Simply use preceding without giving any index value. XPath: //div[@class=’Other’]/preceding::div This gives us 6 matching nodes, and we want only one targeted node “Mammal”. Second Step: Give the index value[5] to the div element (counting upwards from the context node). XPath: //div[@class=’Other’]/preceding::div[5] In this way, the “Mammal” element has been identified successfully. Example: following (with index) Let's assume our context node is “Mammal” and we want to reach the element “Crustacean”; we will use the below approach to do so. First Step: Simply use the following without giving any index value.
XPath: //div[@class=’Mammal’]/following::div This gives us 4 matching nodes, and we want only one targeted node “Crustacean”. Second Step: Give the index value[4] to the div element (counting ahead from the context node). XPath: //div[@class=’Other’]/following::div[4] This way the “Crustacean” element has been identified successfully. The above scenario can also be re-created with preceding-sibling and following-sibling by applying the above approach. Conclusion Object Identification is the most crucial step in the automation of any website. If you can acquire the skill to identify the object accurately, 50% of your automation is done. While there are locators available to identify the element, there are some instances where even the locators fail to identify the object. In such cases, we must apply different approaches. Here we have used XPath Functions and XPath Axes to uniquely identify the element. We conclude this article by jotting down a few points to remember: - You shouldn't apply “ancestor” axes on the context node if the context node itself is the ancestor. - You shouldn't apply “parent” axes on the context node if the context node itself is the ancestor. - You shouldn't apply “child” axes on the context node if the context node itself is the descendant. - You shouldn't apply “descendant” axes on the context node if the context node itself is the ancestor. - You shouldn't apply “following” axes on the context node if it's the last node in the HTML document structure. - You shouldn't apply “preceding” axes on the context node if it's the first node in the HTML document structure. Happy Learning!!! => Visit Here For The Exclusive Selenium Training Tutorial Series.
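These axes are not Selenium-specific; a quick way to experiment with them outside the browser is the JDK's built-in XPath engine. The sketch below is not from the tutorial; it uses a simplified XML stand-in for the Animal page and counts how many nodes each axis expression matches:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

class AxesDemo {
    // A simplified XML stand-in for the Animal hierarchy used in the tutorial.
    static final String XML =
        "<div class='Animal'>" +
        "<div class='Vertebrate'>" +
        "<div class='Fish'/>" +
        "<div class='Mammal'><div class='Herbivore'/><div class='Carnivore'/></div>" +
        "<div class='Other'/>" +
        "</div>" +
        "<div class='Invertebrate'/>" +
        "</div>";

    // Returns how many nodes the given XPath expression matches in the sample tree.
    static int count(String expr) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(XML.getBytes(StandardCharsets.UTF_8)));
            NodeList nodes = (NodeList) XPathFactory.newInstance().newXPath()
                    .evaluate(expr, doc, XPathConstants.NODESET);
            return nodes.getLength();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

For example, count("//div[@class='Mammal']/ancestor::div") returns 2 (Vertebrate and Animal), matching the tutorial's Ancestor walkthrough.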
https://www.softwaretestinghelp.com/xpath-axes-tutorial/
New Onewheel boards for Sale - UK & European customers wanted I have a number of brand new boards for sale, will arrive in the UK in a week or so. Happy to discuss deals on a Onewheel bundle that could include any accessory found on our website If you're interested PM me @ jimmersis@mac.com to reserve your board now Cheers Cracking commute this morning As to the Game being Over......I beg to differ I think it's just begun! I didn't realise how hard it would be having to part with brand spanking new boards each week, I guess I should stop punishing myself by comparing them against my currently used, battered and well worn one! If you're in the UK or Europe and are interested in a new Onewheel then I can have one with you within a week of placing an order! - senor.jonn last edited by @Jimmers75 where did you get the handles? @senor.jonn You can find them in the accessory store he mentions in his first post: Jimmy and I sell them. He ships from Britain and I ship to US customers. - detroitwheelin last edited by @Jimmers75 Wow tire on the one in the middle looks so much thinner...do they really wear that much? or is it just an illusion? Cheers again Jimmers75! You are a legend among men good Sir! :D My OneWheel ordered from Jimmers75 arrived first thing this morning.. Less than 6 days after placing my order.. And at the moment I couldn't be happier.. Although i did order some stickers from Ebay for it the same day (yes I am 5) and they have still yet to arrive. So if you need a OneWheel in the UK without all the delay Jimmers75 is your man... Just order your stickers first though to avoid disappointment! - Zen.Potatoes last edited by Zen.Potatoes Well stickers arrived... :D Now to learn to ride the thing... n stop making it look all pretty... def does feel like snowboarding.. but I feel like I'm right back to beginners skool again.... @Zen.Potatoes said in New Onewheel boards for Sale - UK & European customers wanted: Well stickers arrived... 
:D def does feel like snowboarding.. but I feel like I'm right back to beginners skool again.... Yeah it's definitely its own animal, but the more you ride and figure out how it works, your snowboarding instincts will start to kick in and fill in the gaps. I used to skateboard, but when I got on the Onewheel for the first time it felt so foreign because so many things were very different from skating (like being able to put all your weight on the front while riding... that's commonplace in skating, but a death sentence when you only have one wheel). But after about a month of riding, when I started to feel a lot more comfortable on it, I started feeling my skating instincts coming into play. That's when it starts to get REALLY fun. @Zen.Potatoes Thanks amigo appreciate the feedback. I'll be sure to get you a handle out just as soon as the next batch becomes ready and backpack too. Enjoy your full weekend with the board! Cheers Jimmers75. Just got back from first full day with the OneWheel.. 2x 1hour sessions... Caving on concrete....I still remember being 13 and seeing Back to the Future 2... Finally that day has become.. :D what a time to be alive...! Picture show what will be my daily playground from now on...... @thegreck Cheer bud... it's amazing how after only 4 days on it... (but 20+ hours riding) how at home I feel on it... After a month or so hopefully it should really just feel a part of me... "THESE WHEELS CHANGE LIVES!" I now look at where i live with new eyes after many many years.. Always wished i lived at a ski resort.. now it feels like I do.... :D
https://community.onewheel.com/topic/4071/new-onewheel-boards-for-sale-uk-european-customers-wanted
Hi,

My problem is relatively simple, but I am pretty confused as to how to implement a solution... My project will be using random numbers across various classes. I currently have a class which creates some pretty sweet randoms. What I want to do is be able to access a call to the random number generator anywhere in my program, within any class that might need it. On another board people suggested using a namespace, which I will freely admit is not something I totally understand. I get the basic concept, but I don't see quite how to implement it.

Currently I have the following files:

main.cpp
yardage.h (this is for a football game - these are yardage results)
yardage.cpp
mersenne.h (rng header)
mersenne.cpp (rng implementation)
untilites.h (this is where the namespace would live)

I will be generating randoms in both main and yardage (among others). The thing is, the mersenne class is pretty complex and I'm not sure how to approach implementing this. Any suggestions? Please remember - I'm (apparently) pretty noobish here, so as much patience as you can muster would be appreciated.
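For what it's worth, a common way to do what the post describes is to put a single shared engine behind free functions in a namespace. The sketch below is illustrative only: it uses the standard library's Mersenne Twister rather than the poster's mersenne class, and the names (Random, engine, roll) are made up for the example.

```cpp
#include <random>

// A header such as the poster's utilities header could declare a namespace
// like this; any file that includes it (main.cpp, yardage.cpp, ...) can
// then call Random::roll().
namespace Random {
    // One shared engine for the whole program. The function-local static
    // is constructed on first use, so there is no initialization-order issue.
    inline std::mt19937& engine() {
        static std::mt19937 eng{std::random_device{}()};
        return eng;
    }

    // Uniform integer in [lo, hi] -- e.g. a yardage result.
    inline int roll(int lo, int hi) {
        std::uniform_int_distribution<int> dist(lo, hi);
        return dist(engine());
    }
}
```

Callers then just write int yards = Random::roll(0, 15); wherever they need a number, and every call draws from the same shared engine.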
http://cboard.cprogramming.com/cplusplus-programming/74635-namespaces.html
Because Hashtables and HashMaps are the most commonly used nonlist structures, I will spend a little extra time discussing them. Hashtables and HashMaps are pretty fast and provide adequate performance for most purposes. I rarely find that I have a performance problem using Hashtables or HashMaps, but here are some points that will help you tune them or, if necessary, replace them:

- Hashtable is synchronized. That's fine if you are using it to share data across threads, but if you are using it single-threaded, you can replace it with an unsynchronized version to get a small boost in performance. HashMap is an unsynchronized version available from JDK 1.2.

- Hashtables and HashMaps are resized whenever the number of elements reaches the [capacity x loadFactor]. This requires reassigning every element to a new array using the rehashed values. This is not simply an array copy; every element needs to have its internal table position recalculated using the new table size for the hash function. You are usually better off setting an initial capacity that handles all the elements you want to add. This initial capacity should be the number of elements divided by the loadFactor (the default load factor is 0.75).

- Hashtables and HashMaps are faster with a smaller loadFactor, but take up more space. You have to decide how this tradeoff works best for you.

- The hashing function for most implementations should work better with a capacity that is a prime number. However, the 1.4 HashMap implementation (but not the Hashtable implementation) uses a different implementation that requires a power-of-two capacity so that it can use bit shifting and masking instead of the % operator. If you specify a non-power-of-two capacity, the HashMap will automatically find the nearest power-of-two value higher than the specified capacity. For other hash maps, always use a prime (preferably) or odd number capacity. A useful prime number to remember is 89.
The sequence of numbers generated by successively multiplying by two and adding one includes several primes when the sequence starts with 89. But note also that speedups from prime number capacities are small at best.

Access to the Map requires asking the key for its hashCode() and also testing that the key equals() the key you are retrieving. You can create a specialized Map class that bypasses these calls if appropriate. Alternatively, you can use specialized key classes that have very fast method calls for these two methods. Note, for example, that Java String objects have hashCode() methods that iterate and execute arithmetic over a number of characters to determine the value, and the String.equals() method checks that every character is identical for the two strings being compared. Considering that strings are used as the most common keys in Hashtables, I'm often surprised to find that I don't have a performance problem with them, even for largish tables. From JDK 1.3, Strings cache their hash code in an instance variable, making them faster and more suited as Map keys.

If you are building a specialized Hashtable, you can map objects to array elements to preallocate HashtableEntry objects and speed up access as well. The technique is illustrated in the "Search Trees" section later in this chapter.

The hash function maps the entries to table elements. The fewer entries that map to the same internal table entry, the more efficient the map. There are techniques for creating more efficient hash maps, for instance, those discussed in my article "Optimizing Hash Functions For a Perfect Map" (see).

Here is a specialized class to use for keys in a Hashtable. This example assumes that I am using String keys, but all my String objects are nonequal, and I can reference keys by identity. I use a utility class, tuning.dict.Dict, which holds a large array of nonequal words taken from an English dictionary.
I compare the access times against all the keys using two different Hashtables, one using the plain String objects as keys, the other using my own StringWrapper objects as keys. The StringWrapper objects cache the hash value of the string and assume that equality comparison is the same as identity comparison. These are the fastest possible equals() and hashCode() methods. The access speedups are illustrated in the following table of measurements (times normalized to the JDK 1.2 case):

[7] The limited speedup from JDK 1.3 reflects the improved performance of Strings having their hash code cached in the String instance.

If you create a hash-table implementation specialized for the StringWrapper class, you avoid calling the hashCode() and equals() methods completely. Instead, the specialized hash table can access the hash-instance variable directly and use identity comparison of the elements. The speedup is considerably larger, and for specialized purposes, this is the route to follow:

package tuning.hash;

import java.util.Hashtable;
import tuning.dict.Dict;

public class SpecialKeyClass
{
  public static void main(String[] args)
  {
    //Initialize the dictionary
    try{Dict.initialize(true);}catch(Exception e){ }
    System.out.println("Started Test");

    //Build the two hash tables. Keep references to the
    //StringWrapper objects for later use as accessors.
    Hashtable h1 = new Hashtable();
    Hashtable h2 = new Hashtable();
    StringWrapper[] dict = new StringWrapper[Dict.DICT.length];
    for (int i = 0; i < Dict.DICT.length; i++)
    {
      h1.put(Dict.DICT[i], Boolean.TRUE);
      h2.put(dict[i] = new StringWrapper(Dict.DICT[i]), Boolean.TRUE);
    }
    System.out.println("Finished building");
    Object o;

    //Time the access for normal String keys
    long time1 = System.currentTimeMillis();
    for (int i = 0; i < Dict.DICT.length; i++)
      o = h1.get(Dict.DICT[i]);
    time1 = System.currentTimeMillis() - time1;
    System.out.println("Time1 = " + time1);

    //Time the access for StringWrapper keys
    long time2 = System.currentTimeMillis();
    for (int i = 0; i < Dict.DICT.length; i++)
      o = h2.get(dict[i]);
    time2 = System.currentTimeMillis() - time2;
    System.out.println("Time2 = " + time2);
  }
}

final class StringWrapper
{
  //cached hash code
  private int hash;
  private String string;

  public StringWrapper(String str)
  {
    string = str;
    hash = str.hashCode();
  }

  public final int hashCode()
  {
    return hash;
  }

  public final boolean equals(Object o)
  {
    //The fastest possible equality check
    return o == this;

    /*
    //This would be the more generic equality check if we allowed
    //access of the same String value from different StringWrapper objects.
    //This is still faster than the plain Strings as keys.
    if (o instanceof StringWrapper)
    {
      StringWrapper s = (StringWrapper) o;
      return s.hash == hash && string.equals(s.string);
    }
    else
      return false;
    */
  }
}
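As a runnable footnote to the sizing advice earlier in this section: to add n entries without ever triggering a rehash, pass ceil(n / loadFactor) as the initial capacity. The class and method names below are illustrative, not from the book.

```java
import java.util.HashMap;
import java.util.Map;

public class PresizedMap {
    // A capacity large enough that `expected` puts never force a resize.
    static <K, V> Map<K, V> withCapacityFor(int expected) {
        float loadFactor = 0.75f;                        // the default
        int capacity = (int) Math.ceil(expected / loadFactor);
        return new HashMap<K, V>(capacity, loadFactor);
    }

    public static void main(String[] args) {
        Map<String, Integer> m = withCapacityFor(1000);
        for (int i = 0; i < 1000; i++) {
            m.put("key" + i, i);                         // no rehashing here
        }
        System.out.println(m.size());
    }
}
```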
http://etutorials.org/Programming/Java+performance+tuning/Chapter+11.+Appropriate+Data+Structures+and+Algorithms/11.3+Hashtables+and+HashMaps/
I need to parse XML documents without parsing libraries, that is, just create one myself. E.g. input: <aa><bb>H</bb></aa> I have to get rid of tags and show, if any, errors (mismatching tag etc). Any suggestion please. Thanks in advance!!

If you're writing your own parser, first you have to recognize your tokens. Parsing XML, you've pretty much got two sorts of tokens: things in angle brackets, and things not in angle brackets. If you're just validating that tags are matched, then all you need to do is check that things are properly nested.

Take a simpler case: you want to verify that a string consists of some sequence of lower-case letters followed by the reverse sequence in capitals. That is, this string obeys the rule: abcdeEDCBA and these strings do not: abcdeAEDCB abcdeEDCB abAcBCdeED The strings can be of arbitrary length. How do you verify this?

You'll probably need a LIFO stack to keep track of opening tags for matching with closing tags

> You'll probably need a LIFO stack to keep track of opening tags for matching with closing tags

Yeah, that's what I'm thinking. @tedtdu - do you see how a stack helps you with the problem?

Thanks for your attention. Really appreciated of that. Actually no need to make perfect parser and it is also requirement not to use DOM & SAX etc...

Solutions: !stack.empty() && stack.peek() != starting tag, then push it into stack, otherwise pop off, or indicate error if can't pop off.

Questions: How can I sort them with "starting tag", "ending tag" and "without tag". Extremely thankful for any help. Thanks again!

I'm not sure I understand what you mean there. To be blunt, if I do understand you, it's wrong. Work it out in English, don't try to do it in code until you have it worked out logically. Try the simpler problem: A well-formed string consists of a sequence of lower-case letters followed by the reverse sequence, upper case. How do you check whether a string (say, "abcCBD") is well-formed?
You're looking at one char at a time and you don't know how many more there are. You have a stack to work with, and you're returning a boolean to indicate "well-formed" or not. You can state the solution in two sentences, or one compound sentence, so there's no need to do it in code. Solve that, and you've got the backbone of your validation problem.

> Questions: How can I sort them with "starting tag", "ending tag" and "without tag".

<snide_mode> Hm... is there any formal marker of a closing tag versus its corresponding opening tag? Let me think, I'm sure it'll come to me... </snide_mode>

I thought, if I could sort the xml document by "open tag+element" and "close tag+element", "without tag+element" and "/+element", then push the "open tag+element" into the stack, pop off if "/+element" is encountered, otherwise indicate error. Thus from xml: E.g. <a> " "<b> " "" "<c>H</c> " "</b> </a> wanted output would be: a " "b " "" "c-H I have a deadline for this, please help. Teach me with something to move on. Tremendously grateful for helping me to solve this. please!!!!!!!!!!!!

You've got two sorts of things here, really. You've got things that have angle brackets around them, which you want to check against certain rules, and you have things that don't have angle brackets around them, which you're going to echo to the output (or, if you'd rather, you're going to build into an array of Strings for potential output, if the XML validates). Having to echo the contents of the tags complicates things in a very minor way, but not seriously. Set that aside for the moment, and just discard the tag contents.

So you're going to go through the document, and you're going to encounter tokens of these two types. In your example, the tokens would be <a>, " ", <b>, " "" ", <c>, H, </c>, " ", </b>, </a> (where comma serves as our delimiter, of course). So you encounter the tokens in this order.
When you get a token, you can push it on the stack, or you can shove it on to the output, or you can pop the stack and do a comparison. Now walk down that series of tokens and tell me what you're going to do with each one. Do that, and you should be ready to write your code.

By the way, I'm sorry about your deadline, but I have to trust your professor's judgement here - I'm going to assume that you had this assignment in time to finish it, and that you've been given the tools to solve it, so all I'm going to do is make vague suggestions, unless there's something so simple and abstruse that you couldn't be expected to know it. Trust me, you can solve this. I don't know if you've given yourself enough time to solve it before your deadline, but you can in fact solve it in finite time.

Dear jon.kiparsky

Thanks Sir, I've read it three times, got the following questions. 1) How can I remove tags and output the content of tags as a string? 2) How can I compare <kind> to </kind>, if I need to pop off when they are the same? Could it be possible for you to shed light with a few lines of code please? Sorry for taking so much of your time. I would do anything for solving it, it is kind of important for me.

The String class has a lot of useful methods in it. For your current purposes, I can suggest a few to look at particularly.

String.indexOf() returns an integer value, which is the index of the first occurrence of the argument, or -1 if the argument does not appear in the string. This can tell you whether a given character or String appears in a String.

String.replace() replaces every occurrence of one char with another. Remember, ' ' is a valid char.

String.charAt() tells you the char value at a given location in the String. This might be handy for checking whether a given char is at a given location - sort of the reverse of "indexOf".

String.substring() will extract a portion of the String - so if you know where you want to start and where you want to end, this will give you the String's contents.
I don't think you'll need to use all of these, but any of them could be useful in solving the two problems you've mentioned. Remember, if you do this right, you have a known closing tag and a known non-closing tag, and you just need to figure out whether N is the same in <N> and </N>. I think there are three easy ways to do this, coming from the methods I've just pointed you to.

Your item 2 suggests that you might not be thinking about this as I would. I can't tell, because you haven't spelled out your approach, but you say "I need to pop off when they are the same" - but you can't know if they're the same until you pop, right? Examining each token should tell you whether you're going to add it to the output, or push it to the stack (and, I guess, put part of it in the output as well - that's different, but easy), or whether you're going to take something off the stack and make a decision.

Best of luck, I'm going off station. I expect to see this marked "solved" when I sign in in the morning! :)

> How can I compare <kind> to </kind>

If one starts with < (and not </) and the other starts with </ then compare the remaining for equality.

@tedtdu - Sorry you missed your deadline. If you want to go back to the top and work through how this is done, we can still do that.

> If one starts with < (and not </) and the other starts with </ then compare the remaining for equality.

Dear NormR1

Thanks for your hint. Let me ask further questions. 1) Shall I need to tokenize the string of "<Kind>Yes</Kind>", thus "<kind" and "</kind" become independent tokens? 2) Could you please tell me how to compare the REMAINING for equality?

Sincerely, thanks in advance, tedtdu

Dear jon.kiparsky

Thanks for kind supporting. I think I have to finish it and am still working on it. Below is the work that I have done so far, but UNsuccessful yet. Help will highly be appreciated and remembered. "stream.txt" is the XML document.
import java.util.*;
import java.io.*;

public class LastHope {
    public static void main(String[] args){
        Stack<String> stack = new Stack<String>();
        try{
            BufferedReader bf = new BufferedReader(new FileReader("stream.txt"));
            String line;
            String de;
            String str;
            boolean flag=true;
            while((line=bf.readLine())!=null){"))&&str.equals(line.length())){
                System.out.println();
                //flag=false;
            }else if((str.equals(">"))&&!str.equals(line.length())){
                System.out.print("");
            }else{
                System.out.print(" "+str);
            }
        }
        }catch(Exception e){System.out.println("exception");}
    }
}

@jon.kiparsky. Yes, when "/" is encountered, the stack should be popped off. Please excuse if it sounds strange. How do I know "/" is encountered if it is replaced with ''. Could you please illustrate your comment with a few lines of code. Thanks for your attention.

The remaining part of the string is the string following the /; it would be the substring starting at index=2. Sample code to remove the leading bits for comparing the remaining:

String sub1 = begTok.substring(1); // drop leading <
String sub2 = endTok.substring(2); // drop leading </
sub1.equals(sub2) // test equality of remaining

Suggestion: For testing, create a String array in your program and use a StringReader to wrap it. That makes the testing self contained, ie not requiring a separate file.

...
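To close the loop on the approach the thread keeps circling (push opening tags, pop and compare on closing tags, echo everything else), here is a minimal sketch. It is not the OP's finished assignment: the names are illustrative, and it handles only the simple <tag>text</tag> subset discussed here (no attributes, comments, or self-closing tags).

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TagMatcher {
    // Returns true when every <tag> has a matching </tag>, properly nested.
    // Text outside tags is printed as it is encountered.
    public static boolean validate(String xml) {
        Deque<String> stack = new ArrayDeque<String>();
        int i = 0;
        while (i < xml.length()) {
            if (xml.charAt(i) == '<') {
                int close = xml.indexOf('>', i);
                if (close < 0) return false;              // unterminated tag
                String tag = xml.substring(i + 1, close);
                if (tag.startsWith("/")) {
                    // Closing tag: must match the most recent opening tag.
                    if (stack.isEmpty() || !stack.pop().equals(tag.substring(1))) {
                        return false;
                    }
                } else {
                    stack.push(tag);                      // opening tag
                }
                i = close + 1;
            } else {
                System.out.print(xml.charAt(i));          // tag contents
                i++;
            }
        }
        return stack.isEmpty();                           // no unclosed tags left
    }

    public static void main(String[] args) {
        boolean ok = validate("<aa><bb>H</bb></aa>");     // prints H
        System.out.println(" -> " + ok);
    }
}
```

Running it on the thread's example prints "H -> true"; swapping the closing tags ("<aa><bb>H</aa></bb>") makes validate return false.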
https://www.daniweb.com/programming/software-development/threads/304741/to-parse-strings-without-parsing-libraries
Clearance now uses only cookies with a long expiration as its default. The effect is always remembering the user unless they ask to be signed out.

"I'll never let go, Jack! I'll never let go!"

A better "remember" default

A couple of weeks ago, I asked how Clearance should handle "remember me". PJ Hyett's argument won the day:

Assuming people using shared computers can't remember to log out is insulting at best and annoying to everyone else that has exclusive access. Cookies with long expirations should always be the default.

Clearance, as of today's 0.8.2 release, works exactly this way.

Cleaner under the hood

Fewer conditionals. No special cases. Just do one thing well.

def current_user
-  @_current_user ||= (user_from_cookie || user_from_session)
+  @_current_user ||= user_from_cookie
end

def user_from_cookie
  if token = cookies[:remember_token]
-    return nil unless user = ::User.find_by_remember_token(token)
-    return user if user.remember?
+    ::User.find_by_remember_token(token)
  end
end

If you look through the recent commits, it's a glorious sea of red as lines of code were removed.

Deprecations of shoulda macros

Originally, we had between a dozen and two dozen shoulda macros. They're almost all deprecated now, continuing a trend over the last six months. The macros that have survived are:

Want to upgrade

If you decide to upgrade, you'll want to:

- migrate your schema
- watch out for a cookies gotcha
- regenerate Cucumber features
- remove the "remember me" checkbox!

Migrate your schema

If you decide to upgrade, you'll need to migrate your database schema, as we also finally addressed the "double duty" that token/token_expires_at used to play. It is now split into a confirmation_token and a remember_token.

Cookies gotcha

Like most things in software, this decision comes with a tradeoff. When cookies are set, they are not available until the next request. So be careful with functional tests that depend on cookies. Try to use the current_user method where possible.
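Stripped of Rails, the cookie-only lookup in the diff above boils down to something like this sketch. The hash stands in for the cookie jar and for User.find_by_remember_token; the method name mirrors the diff, everything else is illustrative.

```ruby
# Framework-free sketch of the post-0.8.2 lookup. USERS_BY_TOKEN stands in
# for User.find_by_remember_token; the cookies argument for the cookie jar.
USERS_BY_TOKEN = { "abc123" => "alice" }

def user_from_cookie(cookies)
  token = cookies[:remember_token]
  USERS_BY_TOKEN[token] if token
end

puts user_from_cookie({ remember_token: "abc123" })   # alice
puts user_from_cookie({}).inspect                     # nil
```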
Cucumber features

This is a minor change. They mostly combine "remember me" scenarios into the basic scenario. If you don't want to run the generator again, you can probably figure out what needs to be altered on your own.

Issues

As always, if you find any issues, please report them at Github Issues. Thanks and happy coding!
https://robots.thoughtbot.com/always-remember-me
Back in May, Paul Vixie and I presented a webinar in which we discussed five new extensions to and uses of the Domain Name System: the Sender Policy Framework (SPF), IPv6 support, Internationalized Domain Names, ENUM, and the DNS Security Extensions. These subjects represented most of the new topics in the fifth edition of O'Reilly's DNS and BIND, released in April 2006. At the end of the webinar, we gave our assessment of the future of each of these technologies. Six months later, after conducting a survey of the Internet's namespace, consulting experts, and generally keeping an ear to the ground, it's already time to update our original assessment.

Perhaps the best news comes from SPF. SPF is a means of storing data that authorizes certain mail servers to send email from a domain name. DNS administrators authorize mail servers to send email from a particular domain name by adding specially formatted TXT records to that domain name. For example, to authorize the hosts cerberus.infoblox.com and daneel.infoblox.com to send mail from infoblox.com email addresses, the Infoblox DNS administrator could add this TXT record to the infoblox.com zone:

infoblox.com. IN TXT "v=spf1 +a:cerberus.infoblox.com +a:daneel.infoblox.com -all"

Mail servers that support SPF will check this TXT record when they receive email from infoblox.com addresses. If the mail sender sending the infoblox.com email isn't one described in the record, the server can subject the email to additional checking. We suggested in the webinar that there was no reason not to implement SPF: it's easy to set up, and there are no disadvantages to publishing a list of mail servers that are allowed to send email from your domain names.
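As a tiny worked example of the record format above (pure illustration; real SPF checkers in mail servers do much more), the mechanisms can be pulled out of the TXT string like this:

```python
# Split the example SPF record from the text into its mechanisms.
record = "v=spf1 +a:cerberus.infoblox.com +a:daneel.infoblox.com -all"

def spf_mechanisms(txt):
    """Return the SPF mechanisms, i.e. everything after the v=spf1 tag."""
    parts = txt.split()
    if not parts or parts[0] != "v=spf1":
        raise ValueError("not an SPF version-1 record")
    return parts[1:]

print(spf_mechanisms(record))
# ['+a:cerberus.infoblox.com', '+a:daneel.infoblox.com', '-all']
```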
In our recent survey of the Internet's namespace (PDF), testing about 2 million subzones of .com and .net, we found that roughly 5 percent of those zones published SPF information. That's an impressive figure, given that a DNS administrator needs to take the initiative to learn at least a little about SPF and then manually enter the TXT records that enumerate his mail servers. With a little help--the inclusion of SPF wizards in DNS management products to make publishing SPF data simpler, for example--we believe the adoption rate could see double digits. Given that some proportion of the 2 million zones we sampled don't send email at all, an adoption rate over 10 percent could make it possible to authenticate a large share of inbound email.

Another email authentication mechanism that's gaining traction is Domain Keys Identified Mail (DKIM). DKIM is the product of the merger of Yahoo!'s DomainKeys and Cisco's Identified Internet Mail specifications, and while we neglected to cover it in the webinar, it shows a lot of promise. SPF operates at the level of domain names and hence can only tell you, for example, whether the mail server that sent you my mail is authorized to send mail from infoblox.com email addresses. DKIM, on the other hand, can tell you whether a particular message actually came from my address.

The same survey also looked at IPv6 adoption. We checked to see how many of the subzones of .com and .net had at least one name server with an IPv6 address. Now, that result is probably lower than it should be; organizations in .com and .net are disproportionately North American, and adoption of IPv6 in North America has been slower than in other parts of the world, where address space isn't as abundant. Also, many registrars for the .com and .net zones don't support specifying an IPv6 address for a name server, so an administrator running a name server that speaks IPv6 often can't even get the full benefit of that connectivity.
Nonetheless, we found that 0.2 percent of the zones under .com and .net have at least one name server with an IPv6 address. That's an impressive number given the circumstances. Had we been able to sample the children of a country code top-level domain in Europe or particularly Asia, the proportion surely would have been higher.

When covering Internationalized Domain Names, we mentioned that the forthcoming Internet Explorer 7 would include support for IDNs. IE 7 was released, of course, back in October, and allows you to enter domain names that include non-ASCII characters. Per the IDN specs, labels of domain names that include non-ASCII characters are translated into ASCII-armored equivalents before being passed to a DNS resolver. With IE 7, almost all modern browsers now provide support for IDNs, including Firefox and Opera.

Many top-level domains now allow registration of subdomains whose names include non-ASCII characters--though most restrict the allowable characters to a small subset of Unicode. For example, the German DENIC publishes a list of those Unicode characters it allows. Other registries, such as the .org top-level domain's registry, restrict characters by language. The ITU publishes a list of those country code top-level domains and generic top-level domains that support IDNs.

According to Richard Shockey, co-chair of the IETF's ENUM Working Group, adoption of ENUM is huge. However, it's not the traditional variety of ENUM--mapping telephone numbers to URIs using the e164.arpa domain--that seems to be taking off. Instead, carriers are adopting ENUM as a next-generation signaling system to facilitate direct interconnection of their networks, without using the public switched telephone network or the Internet as a transit network. Traditional ENUM is mired in conflicts over ownership of subdomains of the e164.arpa namespace and concerns about publishing what amounts to private, personal contact information where it is accessible by anyone on the Internet.
The laggard among these technologies is the DNS Security Extensions (DNSSEC). While providing source authentication and a guarantee of data integrity in DNS is enormously valuable, DNSSEC just isn't taking off. Our survey showed a paltry 16 signed zones among the 2 million sampled. The DNSSEC Monitoring Project reports only 279 signed zones Internet-wide.

There are several reasons DNSSEC's adoption has been so slow. It's hard to administer a signed zone. The only tools widely available for generating keys and signing zones are command-line-based and not particularly user-friendly. Documentation of common procedures (for example, key rollover) is scarce. Signed zones place a greater burden on both recursive and authoritative name servers, increasing the size of zones and responses as well as the computational load involved in recursive query processing.

DNSSEC is also a moving target. The standard has already undergone one overhaul, and it may face another revision to address concerns about a new type of record DNSSEC introduces, the NSEC record. If the IETF undertakes that change, we'll have to wait months for the corresponding modifications of the few name server implementations that bother to support DNSSEC at all.

If DNSSEC is ever to fulfill its mission of helping to secure the Internet's DNS namespace, it needs a swift kick in the pants. This might come in the form of government regulation, such as a NIST requirement that U.S. government agencies sign their Internet-facing zones, or a mandate that contractors working with the U.S. Department of Defense do the same. DNSSEC also needs serious work in the area of usability. Most administrators would find managing a signed zone with the existing tools and available documentation very challenging.

Still, with four new developments in DNS advancing--some fairly rapidly--we DNS administrators have plenty to keep us busy.
A prudent administrator would do well to stay on top of these technologies by reading the relevant RFCs and documentation and by following the related newsgroups. And, of course, by reading the fifth edition of DNS and BIND, which addresses each of these topics in-depth.

My thanks to Matt Larson of VeriSign and Richard Shockey of NeuStar for their contributions to this article.
http://www.onlamp.com/pub/a/onlamp/2007/01/11/dns-extensions.html
I am trying to add a viewmodel to a project because I want my view to use two separate models. I've looked at different tutorials trying to learn how to do this but I am having some trouble. Before, the view was strongly binded (typed?) to the Person model, but now when we add a person, we want them to upload a file. This file is in its own table in the database, so I had to create a new FileToBeUploaded model.

I created a new class and added the properties I wanted to it:

namespace Project.ViewModel
{
    public class ViewModel
    {
        public Person personVM { get; private set; }
        public FileToBeUploaded fileVM { get; private set; }
    }
}

Now my problem is when I want to strongly bind this to the view I am using, I write

@model Project.ViewModel

instead of the old

@model Project.Models.Person

But I get an error saying "Project.ViewModel is a namespace but is used like a type". So I don't know if I'm missing some steps in between creating the viewmodel and trying to access it in the view, and I feel like the tutorials I've seen on it are not very clear about it.
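For what it's worth, the compiler message points at the likely root cause: Project.ViewModel names both the namespace and the class, and the @model directive needs a type. A hedged sketch of the probable fix, assuming the class declaration quoted above (this is not an answer from the original thread):

@model Project.ViewModel.ViewModel

Razor resolves that fully qualified name to the class rather than the namespace. Renaming either the namespace (e.g. Project.ViewModels) or the class (e.g. PersonFileViewModel) would avoid the collision entirely; those names are illustrative only.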
http://www.howtobuildsoftware.com/index.php/how-do/boN/c-aspnet-aspnet-mvc-creating-a-viewmodel-on-an-existing-project
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.

On 17/08/2017 16:51, Paul Eggert wrote:
> On 08/17/2017 10:32 AM, Adhemerval Zanella wrote:
>> My understanding from Florian's comment is the 'decoupled' version would be the
>> code with both win32/amiga/etc code stripped and LIBC defines set to only glibc.
>> Would it be acceptable for gnulib?
>
> We don't need Amiga code any more. MS-Windows support is still used, though. However, the current style in glob.c with the forest of ifdefs is pretty bad, and it'd be good to see it go. Instead, I'd rather have the Gnulib-specific stuff put into a section that is relatively independent of the rest of the code. To do that, I suggest that you just rip out all the MS-Windows code, and I'll do my best to reintroduce it in a cleaner way.

Right, I will remove both amiga and win32 code in a subsequent patch to gnulib sync.

>> This would also remove the d_ino/d_type abstraction macros.
>
> We'll still need some form of abstraction. For fts.c Gnulib is using something like the following, and we could do this sort of thing in glob.c too. It's not much of a burden to write 'D_INO (dp)' instead of 'dp->d_ino' in the mainline code.
>
> #if defined _LIBC || defined D_INO_IN_DIRENT
> # define D_INO(dp) (dp)->d_ino
> #else
> # define D_INO(dp) 0
> #endif

I think it is feasible, I will check this out.

> By the way, I've lost track: have you looked at the Gnulib fixes for glob.c, and merged them into your glibc patch?
An additional define (GLOB_COMPAT_BUILD) to avoid building size_and_wrapv and gblo_use_alloca twice on some configurations (i368 compat code) due multiple inclusion. [1]
https://sourceware.org/ml/libc-alpha/2017-08/msg00817.html
I'm trying to write a program but am completely lost and don't know where to start with main. It's a program designed to see if people are eligible for a platinum card.

Use function Get_Credit_Limit() to prompt the user for the customer's present credit limit; the function should return the credit limit. Use Get_Acc_Bal() to prompt for the customer's account balance; the function should return the account balance. Use function Determine_Action() whose arguments are the customer's credit limit and account balance. Determine_Action() should return 0, 1, or 2, according to the following:

- If the customer has a credit limit of at least 2000 and an account balance of 500 or less, the bank will issue the customer a platinum card. In this case, the function should return 2.
- If a customer has a credit limit of at least 2000 and an account balance of more than 500, the bank will send a letter stating that if the customer's balance falls below 500, he or she will receive a platinum card. In this case, return 1.
- If the customer does not fall into either category, the bank will take no action. The function should then return 0.

Use a function Display_Action() to display the action the bank will take for the customer. Pass the value that Determine_Action() returned to Display_Action(). Based on this value, Display_Action() should display a message that shows what the bank did: issue platinum card, send letter, or take no action.

My question is how do I start off main and how do I return the values 0, 1, and 2? I don't understand that part. I'm pretty terrible at this and this is what I have so far.
any help would be greatly appreciated

#include <iostream>
using namespace std;

double Get_Credit_Limit();
double Get_Acc_Bal();
int Determine_Action(double, double);
void Display_Action(int);

double Get_Credit_Limit()
{
    double credit_limit;
    cout << "\nEnter present credit limit: ";
    cin >> credit_limit;
    return credit_limit;
}

double Get_Acc_Bal()
{
    double account_balance;
    cout << "\nEnter account balance: ";
    cin >> account_balance;
    return account_balance;
}

int Determine_Action(double credit_limit, double account_balance)
{
    if ((credit_limit >= 2000) && (account_balance <= 500))
    {
        cout << "\nIssue Platinum Credit Card";
        return 2;
    }
    if ((credit_limit >= 2000) && (account_balance > 500))
    {
        cout << "\nIf balance falls below $500, you will receive Platinum card.";
        return 1;
    }
Well, it's been an amazing PDC. Before I came I thought it would be mostly about the sessions and the technology, but it's also about catching up with people, making new relationships and socialising. So many times I've thought that I'll have to wait for the DVD to review the material and let it all settle in my head. The time here has mostly been about understanding the general directions and seeing the motivations and directions for the technology. As one friend said, "why go to the sessions to watch the PowerPoint when I can go to the Microsoft Product Pavilion and get a Product Manager or Lead Developer to take me through it?" Some outstanding points; I'm really excited to see the new technology and to get on board the Indigo wave.

ObjectSpaces introduces a mapping layer that separates the business logic from the data access logic to reduce the amount of code to maintain. It's a declarative mapping between objects and relational tables. It makes sense when you have a strong or large business logic layer. It will be available in Whidbey. They will also have a nice user interface in Visual Studio to help with the mapping. I've heard from some ThoughtWorks friends that it's not as good as the Java versions or Neo, an open source product from ThoughtWorks (this guy may have been biased ;). It works with mapping files to translate from an object query/update to a SQL query/update. The idea is that objects are responsible for saving their own data and can just call a .Persist method to save the data to the database. One of the examples shows how to use GetObjectSet to execute query strings against the objects. It's a new query language called OPath that lets you write select statements such as this to return data: The session is being given by Luca Bolognese, who has promised that his goal is to avoid PowerPoint hypnosis and to make sure that no one falls asleep. He's Italian (yes, really) and a very funny speaker. However, his accent can sound funny.
He's reading out SQL statements and adding an 'a' to every word, so when he's reading SQL statement it sounds like 'selecta ... froma ... exista' Classic quote: Someone asked why they ar eusing OPath rather than XPath. He said 'we used XPath in the prototype and gave it to the group of programmers, and they came back and said it was too hard. So we did a search and replace to switch the slashes with dots and they loved it'. Actually it appears that the two are different problem domains. I'm sitting with Peter Provost (by chance, the photographer of many of the photos in my photo blog roll), soaking up the blogging energy that he's emitting. He's actually writing the source code as we see it (a true touch typing programming god!) so perhaps he'll post it later (Peter's notes are now available). Update: Paul Wilson left a comment pointing to more examples in his article on ObjectSpaces. I've been on a mission this PDC - to build a photo blogroll of all the bloggers I read who I've met at the PDC. I'm happy to say that I've now launched it. I saw Chris Sells pull his laptop out and start coding with someone on the Windows Form table. Anders was talking to a bunch of people about C# and all just about all of the Indigo team was there. I could name drop and say that I enjoyed talking with Ingo, Don, Omri and others, but I wont as that would be showing off. I got to meet Brent Rector who autographed my copy of his book (the first time I've ever had a book signed!). It was such a lot of fun. Once I'd exhausted all of the questions I could think of I sat down to eat (I needed real food after missing lunch and trying to get by on sugar in many different forms). At that point I noticed a big whiteboard sign: Longhorn Shell Team. No one was talking to the guys so I went up and said 'are you going to make the shell a first-class citizen in Windows so that my Java friends will stop buying the Mac for its Unix shell'. Happily the answer was yes. 
This is great as it's been a long time problem with Windows that it doesn't have a decent shell architecture. Adam Barr wrote about this in his book 'Proudly Serving My Corporate Masters'. The shell guys were really pumped and visibly passionate about what they are up to (creating a new scripting language, manipulating the system through objects with properties rather than text).. It seemed like too much of a fight to get to all the sessions today, so I skipped one set of sessions to spend some time in the Hands On Labs area. It's a huge room full of computers loaded with all of the latest bits (the longhorn build, the Visual Studio Whidbey files), so it's much faster than installing them all on a laptop. Each machine has a booklet of labs that can be walked through. It's a nice break from the presentations and a useful way to build familiarity with the new concepts. I did the Indigo track and built my first Indigo application which was like an improved way of using the SoapSender/SoapReceiver from WSE. I just saw Don Box have to ask to be let into his own session. The security guy didn't believe he was the speaker so Don had to show his ID pass. There was a massive traffic jam in the corridor - I went down to the wrong level and have ended up on the floor outside watching the TV. I'm not sure why Microsoft didn't just leave this session in the same as the last one. Still I'm thankful for being able to see and hear the message. My legs are cramping though amongst the small space I have. Drew is doing an excellent job of covering all of this. I'm typing as fast as I can so that I can work through it later.'m looking forward to understanding exactly how far Indigo will go, and where we might still need to use MSMQ and Tibco. Is Remoting really dead in applications or does it just suck at Interop?There's been a lot of discussion about whether Remoting is dead. 
I'm not a big user of remoting, preferring WSE, but I'm not sure that it's fair to say that remoting will have no future. As Ingo mentions, Indigo will support whichever method you want. Brent Rector's book that was given out with the CDs shows that Indigo has RemoteObject services, which are an improvement on the .NET remoting model. The same decision matrix is involved with RemoteObjects vs. Web Services as with .NET Remoting and Web Services. RemoteObjects are useful where both ends of the wire share the same Indigo platform and when you need to marshal the object across machine boundaries.

Wow, a whole room full of bloggers. I'm filled with a sudden fear of what would happen if there was an accident that wiped the room out. Imagine blogging silence. My original warm happy glow (as a result of meeting Rory) gradually subsides through the session as I realise what a political and grumpy bunch some bloggers are (Joel agreed). See Randy Holloway for more detailed technical details.

Solving the problem of posting to different engines
Clemens is talking about how he decided to use the MetaWeblog API, which allowed cross-posting between DasBlog and .Text. It uses XML-RPC. A fantastic solution from a while ago, says Clemens, but its time is over. Many of the blogging engines have custom extensions that make it hard to cross-post. Clemens likes Userland for their spirit but not their technical ignorance. They are stuck with a 1997 view of XML; they ignore the namespaces and ignore the angle brackets. Apparently it was Dave, not the rest of the Userland crew. Clemens believes that Dave is ignorant of the XML advances. Atom is a community effort that is trying to solve these things (someone asks, 'What's the problem Atom was trying to solve?' 'Dave Winer' someone yells out). Clemens' concern is that there is a 'community discussion' that may go nowhere. We could either wait or we could define our own standard.
DasBlog, .Text are the dominant engines (in the .NET space), SharpReader, RSSBandit and Newsgator are the major readers. Perhaps a 'standard' can be created rather than waiting. Scoble reminds us of Don Box's comment that 'the only spec that matters is a spec that is being used'. The mood is that Clemens and Scott should go away and work it out between themselves. Robert Scoble 'I'm already getting a bit overloaded now [reading feeds 600 RSS feeds] - I think 1200 is the maximum'. It's reassuring for all of us to know there are limits. Clemens is asking Scoble how much the Microsoft Marketers might pay for a WinFS RSS application. Scoble: $50. How did Scoble start blogging?Scoble said MSN have been asking him why he blogs. He started with FrontPage 97 and had to know a lot about a server and the technology, not could enough for his mum. The problem originally was that no-one would see a blog - he had to add his blog to the search engine. People want to see a visit straight away. Dave used to refresh weblogs.com and view each post as a way of getting the freshest comment. On Saturdays Scoble still does this. On his first post 3.5 years ago he got 15 visits straight away, which fed his ego enough to keep feeding the machine. There's some great stuff in this road map presented by Brad Abrams and Jeffrey Richter. It's clear that Microsoft have been doing more work on the patterns, speed improvements (the team must love being able to tune a V1.1). The session was an overview of the trolley load of goodies that come with the Whidbey release of the CLR. Brad Abrams mentioned his goal of making sure that the framework uses consistent design patterns. An example of this is the WinFX poster that shows how all of the technologies relate.How .NET is being adopted Brad mentioned that blogs and .NET user groups are helping to contribute to the success of the .NET framework (along with over 400 books).Trustworthy ComputingTrustworthy Commitment is big deal for Microsoft. 
A few years ago security was just a feature of the product, with a single feature team. Microsoft now understand that security is a horizontal foundation that is thought about in each spec and code review. It's all part of the Trustworthy Computing model. Some visible impacts: thousands of hours of review, and external parties have come in to audit the security code. The goal is to help developers build secure apps on top of the platform. This is where the Prescriptive Architectural Guidance comes from.

Base Innovations
These are the most important parts of the system:

Generics
Added new IL instructions and changed metadata tables to get support for generics, so VB can share it as well. It's also in C++ (why this is better than templates I'll have to look up). Generics have also been added to the common language runtime, which means that all the 3rd party languages can adopt generics (e.g. Eiffel).

Data Access
ADO.NET has no model changes. This is so incredible that it's worth repeating: Microsoft have not invented a new way of doing architecture in the Whidbey version of the Framework. (This is extraordinary given the changes that have happened in the past - where did they shift the ADO guys to? Surely they would have been itching to do some improvements.) The focus is advanced features and performance. System.XML is core to the platform. There are some performance improvements (from 50-100+% for different areas, especially XSLT). There's also support for XQuery.

ObjectSpaces
Treat rows and columns as an object graph. Based on an XML file that defines the mapping between the relational and the object representation. It basically creates a domain/business object wrapper around the database. Having to remember the ordinals for columns in a row is annoying.
However, with ObjectSpaces the first thing is to set up a connection:

ObjectSpace os = new ObjectSpace(myConnections, myMappings);
foreach (Customer c in os.GetObjectSet<Customer>("city = Seattle"))
{
    Console.WriteLine(c.name);
}

Data Access - System.Data.SqlServer
The integration with Yukon looks like it uses attributes, such as [SqlTrigger("EmailReview", "Reviews", "FOR INSERT")]. In the example, Jeff showed how easy it is to write a stored procedure that sends an email when a new review is posted to a database. This used to be a horror in previous versions of SQL Server. Just knowing that extended stored procs won't bring the machine down in a bunch of flames is a big deal. This could seriously impact consulting revenues!

ASP.NET 2.0
A major release for ASP.NET. The team went away and looked at the common code and controls that teams built. The goal was to reduce the plumbing code by 70%. This has been achieved with the page framework and 40 new controls. Cassini - the old Personal Web Server - is back. This time it's 100% managed code. It picks a random port each time it's run to protect from someone trying to hack into it (don't they trust developers to lock down their machines?).

Where did all the code go? Building Blocks
They built a range of "Building Block" APIs:
Membership (username/password, resetting the password)
RoleManager (control access based on role)
Personalisation (customize the layout of the site; you can define a class, e.g. a profile containing name and zip code, that is associated with the logged-in user)
SiteNavigation - tracking how the users move between pages
SiteCounters - useful for sites that are paid based on behaviour such as views
Management - the IT department gets a page or email if there's something going wrong on the site
They provide an abstract Provider Model design pattern that controls the storage of the data behind these controls. Very, very nice.

Page Framework Features
MasterPages - eek. Sounds like some of the FrontPage guys escaped and let loose the nasty FrontPage themes into the ASP.NET page.
Themes/Skins - separate the UI from the logic so that it's easy to skin without changing code (like the themes in DasBlog).
AdaptiveUI - all of the controls will render to small handheld devices.
Control Buckets (over 40 new controls) - leverage the previous features to do things like Security, Data, Navigation, Web Parts. These controls know how to talk to the underlying controls. One example was the breadcrumb links at the top of the page.

Innovations on the Web
ASMX performance is being improved by making sure the server-side requests per second are much better. Secondly, there's a smaller working set required on the client to call a web service. They've also noticed that web service calls must be asynchronous (the button shouldn't stick down while the web service is working). You need to use a thread pool. This is a little complicated for some developers with the IAsyncResult pattern, so now this is much easier - this should be the mainstream way to call web services. It ends up just looking like an event.

.NET Remoting - authenticated and encrypted channels. I wonder whether this is WS-Security compliant? It doesn't have to be, since remoting is about two .NET machines rather than interop.

System.Net - this has better 'network awareness'. For example, Outlook 11 detects the type of network and adjusts the experience based on that. FTP protocol support has also been added.

Client Tier with Windows Forms - System.Windows.Forms
Lots of developers wanted to move to Windows Forms; however, deployment of client applications is still too difficult. .NET started to make it easier (each assembly has its own metadata). Whidbey is concluding this story: click-once. It should be as easy to deploy a forms project to clients as it is to deploy to a web server. XP theme support has been added.
Finally you can look like Office (why is this always a couple of months after the Office release?). Apparently the Office team will be here this week showing how to make the Outlook interface in 100% managed code. Longhorn Related Windows Forms app will work great on Longhorn. There will be a two-way interop with Avalon. You can use Avalon markup and mentions win form controls, or you can use win forms on Avalon. Don and Chris demoed the API underneath Longhorn. Don achieved his aim of getting a VP to use a text editor to build a demo live. The demos were very DM/Box with many text editors shown to build the code. Great coding, a little slow in some of the delivery. Now onto Longhorn. Basically APIs in longhorn are a simple set of managed API. Aero User InterfaceThe demo started with a simple HelloWorld window, to which they added text boxes, click events, enlarged all of the controls (they're vector graphics), rotated them and set the transparency so that a video could play underneath the controls. There's a new idea of separating the C# style code from the layout information. It seems very similar to the code-behind pages in ASP.NET. The layout information uses XAML (pronounced zamel) an XML syntax to set the properties of the controls (now that I think of it, this seems like an improved version of the form layout information from the old days of VB6) They also demoed the MSBuild tool. This is an XML-based build system. It uses three types of things - properties, tasks and item. As Scott Hanselman says, 'Holy crap it smells like NAnt. Wow, writing these build files is xml and is 90% the same concept as Nant. Learn and use Nant now (I say) and use MSBuild soon.' 
The demo showed how to do this using new namespaces such as MSAvalon.Windows and MSAvalon.Windows.Controls that seemed to be in assemblies such as PresentationCore.dll, PresentationFramework.dll and WindowsBase.dll. Here's some of the XAML to get a feel for it:

<window xmlns="" xmlns:>
<Visible source="c:\clouds.wmv" Stretch="Fill" RepeatCount="…." >
<TextPanel DockPanel.>I can embed <Bold>really</Bold> text
<TextBox id="bob" width="2in" Height="20pt"/>
<Button Click="Pushed">Push me I'm a cliean</Button>
</TextPanel>

WinFS Search Demo
Don got Jim Allchin to write the code to search the file system and return the items using the Longhorn controls. Nothing really that amazing here at the API level - it's a nice simple Find method that returns a set of objects that can be iterated through.

Using Indigo to post to Don's Blog
A demo using Indigo to post to Don's weblog. I'm puzzled as to the API under the cover (I'm assuming it's using a web service). Indigo looks like a nicer API on top of the web services. As Scott mentions, the

Using Indigo to post to the sidebar
Seemed to be a way of using Indigo to post to the sidebar of the desktop. Similar to using WSE with TCP channels, there was a bit of hassling around with the code to get it set up.

We finally got to see Aero, the Longhorn user interface. Overall, my interest is in how the flashy graphics will help people get their job done. Having movies display in a document, and showing them moving in the thumbnails, doesn't seem that useful. Video in general is hard to get information from - it's difficult to condense in time. I'm concerned the focus is on cool rather than productive. Good features: More troubling usability points: Well, we've had clapping so far for the following: My first Bill Gates keynote today. It's amazing to sit amongst 7,000 developers watching this. There are 16 massive video screens displaying shots of the presenters and the PowerPoint slides.
Here are some points that struck me from Bill's speech: A great video was shown on the history of the software, "Software Futures", with Bill Clinton (talking about the number of websites when he was president), Warren Buffett (on Bill Gates missing the software boat) and the inventor of the Newton (on the modern handhelds). He ended with a joke about a future episode showing the dangerous, challenging world of database development, over a shot of the Oracle yacht.

This session was on the Patterns and Practices Group at Microsoft. It was hosted by Jackie Goldstein and included James Newkirk (originally ThoughtWorks, and now Dev Lead for the Patterns group; aside: he's writing an exciting-looking book on Test First Development in .NET) and another guy from the team.

Easier to Contribute
Several participants mentioned wanting to make it easier to contribute changes to the patterns and application blocks, so that if a company makes changes they don't need to maintain them separately in their own versions. The team is looking at using the community workspaces. James Newkirk mentioned using the Adapter pattern to switch between the shipped source code and any local revisions.

Improve the Help Files
The documentation is currently very class based - what the members and classes are - rather than on how to use them. Sometimes the samples are too simple and don't show how to use the advanced features. Some of the QuickStart/JumpStarts are too difficult, especially the User Interface Process (UIP) model based on the Model View Controller pattern. One participant mentioned integrating the help files into the Visual Studio collections. An MCT trainer asked for Microsoft Official Content (MOC) that used the patterns group. The MS people said they are trying to get out there and integrate it with MOC courseware.

Conflict between RAD perspective and Architecture
Visual Studio promotes a RAD, drag-and-drop, visual-designer-backed tool.
One participant said they'd like a Visual Studio Add In to help integrate the patterns. Often the keynotes at the PDC are the quick RAD solutions and this comes across as what Microsoft thinks is Best Practice. Guidance is harder to market. One of the Microsoft guys said that marketing the RAD features was a more important goal for Visual Studio than marketing the architecture. Marketing the Architecture is harder. Integration with the Visual Studio and Language TeamsApparently the community around the patterns group (bloggers and speakers like Scott Stansfield from Vertigo) had forced the the Visual Studio and Language teams to work closer with the Patterns Group. There is a disconnect between internal Microsoft Groups. The Patterns group said they didn't know that the ASP.NET Starter Kits existed until after they were released). FX Cop for ArchitectureSomeone asked for an Architecture Cop like a FX Cop. Apparently someone in the patterns group also had this idea. I'd like to see improved checklists as a first step on this one. Why isn't there more focus on the GoF Patterns?James Newkirk said they will highlight them as they use it. Brad Abrams mentions at the Rotor BOF session mentions that Rotor (a free, fully functional implementation of the ECMA #335 standard for a common language infrastructure.) has moved from a skunkworks projects to being fully managed by the CLR team. When code is checked into the CLR there are tests that determine whether it might break anything in the Rotor Unix build. They are looking to release a Whidbey version of Rotor after Whidbey has shipped (someone mentioned September 2004). Other points: Brad Abrams wanted to know whether anyone was interested in using the JIT compiler and making it available in the Rotor distribution - you couldn't modify it but it would allow for better performance. It's interesting to see this effort from Microsoft based on building a community of interested people. 
Quotes: 'If you blog it they will come' I know that Developers aren't renowned for dress sense, but I've seen an alarming number of developers today wearing socks and sandals. Now there's just no need for this kind of tragic fashion sense. Just say no. Tim was talking and Don was typing. I guess Marting was blogging (I couldn't see - I was on the floor at the back with the power outputs). Aside: recently I've been watching lots of these presenters using Windows Media. It has an excellent - play at 1.4x button that means you can listen to an hours presentation in 43 odd minutes. It's great - the presenter's pauses go away and it's surprisingly understandable. However Tim Ewald is the only guy who this doesn't work for! You can play at the Raw XML level in SOAP messagingYou can, but it's messy - you have to handle the SOAP processing rules (headers, actors, etc) yourself. The benefits are that you can play in XML rather than object land - you can use transforms, Xath queries etc. Various ways of using the objects discussed in the first part of the day were mentioned. Essentially similar to Don's MSDN TV presentation. But working this way in a live demo is hardMuch fun was had trying to map a WSDL schema back into an ASMX page with the correct WebMethod parameters. Don eventually made Tim go back to the PowerPoint slides while he had a go. By now (near the end) they still haven't got it working (showing that this is a hard core thing to do. Evidence: MSDN does it but required Tim Ewald to implement it . This is gratifying, as I spent a night trying to generate my own provider of the Amazon web service so that I could demo the Amazon web service at talks where there was no Internet, but I gave up after a couple of hours hassle. SoapExtensionsA description of how to use SoapExtensions to ease the problems of processing the Mandatory headers in SOAP messages. 
See Tim's article Mandatory Headers in ASP.NET Web Services in the May 2003 MSDN Magazine, The Do's of .NET Web Services, and The Don'ts of .NET Web Services.

WSE is 100% Goodness
Tim showed how WSE provides input/output filters that serialize/deserialize objects and SOAP headers. They built a sample project that modified the filters that are applied by default. He also showed how to use the RequestSoapContext inside the ASMX WebMethod and how to set the client up to call this method using WSE.

WS-Addressing
Don mentions that interop with Tibco was one of the motivations.

WS-Security
5 slides, 20 minutes. Brain is dead now.

The XML and Web Services Perspective continues with Don, Tim and Martin. Heavy going after lunch, but much better than the first session. This session had more useful content that matched the audience's level of understanding (a very advanced audience).

The future of Remoting: the SoapFormatter SOAP stack
SoapFormatter is dead. It is the one part of the .NET platform that we'd recall if we were able to. It is the SOAP stack underneath .NET remoting. .NET remoting works in situations where you have .NET on both sides of the pipe ('living the COM dream without reference counting', as Don says).

What is SOAP about?
SOAP is primarily about extensibility; it is designed to allow us to evolve services. The service provider and consumer may evolve at different times. It is important that the server is able to evolve the service without disrupting the established clients.

SOAP 1.2 will be the last version of the protocol. Ever.
There won't ever be another version of the SOAP protocol. SOAP 1.2 is the last version that we will ever need. Don justifies this with two arguments. Firstly, no one gets a third chance at it. SOAP 1.1 is here and will be with us for a long, long time; we won't get rid of it.
SOAP 1.2 is here and we should move to it, but this will happen slowly (like the conversion of IPv4 to IPv6). It won't be practical to support more versions of SOAP. SOAP 1.2 is in the next version of .NET; it would have been in earlier (Windows 2003) if it had been in a final state as a specification.

SOAP is not just for serialized object graphs
SOAP can be used to serialize an object graph, but this is a lifestyle choice, a subjective decision. There's nothing about SOAP that says it has to be like this.

SOAP Headers and SOAP 1.2
SOAP has headers that are extensible. They are meant for the ultimate receiver rather than any intermediaries. The headers have the mustUnderstand attribute, which means that the header must be processed successfully, as well as an actor attribute that specifies which intermediary is designed to process the message. Because the headers must be processed before the body is processed, most of the SOAP stack implementations buffer the headers and then stream the body. SOAP 1.2 adds the s:relay attribute, which means that the header is targeted at the current intermediary but can be ignored. However, if it isn't processed it shouldn't be removed from the message - it should be passed on to the next intermediary (presumably changing the actor tag on the way). There's also a well-known URI '' that means the next intermediary should process it. As Don says, 'everyone plays the role of next - it's the IUnknown of SOAP'.

WSDL
Don made us stand up and recite 'WSDL sounds really fun, please tell us how it works'. WSDL is more difficult. Don mentions several times that many of the flexibility points in WSDL exist because they weren't sure that Web Services would end up using XML Schema as the type descriptions and SOAP as the binding. WSDL provides for other type descriptions and bindings, but XML Schema and SOAP are the most common. Some very funny stuff when they went through the doc/literal vs RPC/encoded choices in the WSDL.
Basically at each attribute Don said 'there is no other way'. Good laugh; obviously everyone in the room understood that document/literal is the way to go. Don also joked about the '.NET WSDL parser'. If you run the WSDL.exe tool before adding the binding, then if it's successful there will be a 'no classes generated' message, but at least no exception.

ASMX Security and Web Services
Tim did some great debugging of IIS and the directory to solve a permissions problem. When it failed the first time, Don made the joke that it was not a failure but was 'locked down by default'. Don also made the point that the way the problem was attacked was to change the server until the client message came through (by giving anonymous access to the virtual directory), rather than the better approach, which is to give the security credentials to the client (through using the proxy object).

Polymorphic data and serialization
Tim made the point that WSDL generation mandates knowledge of all types at compile time. If you want to use derived types then you have to use the [XmlInclude] attribute so that the SOAP receiver understands where it must look to see if a type has a derivation somewhere. The alternative is that the SOAP stack would have to look through all of the referenced assemblies.

Wow, my first US PDC lunch. The meal area is enormous, with teams of waiters running around like ants. I had the pleasure of dining with Ian Griffiths and Mathew Adams, authors of the great Programming .NET Windows Forms from O'Reilly. Ian was one of my interviewers for my current job. Getting interviewed by an O'Reilly author was certainly intimidating; luckily Ian's a top guy. I also saw Robert Hess from the .NET Show (he's looking older these days - I remember the launch of IE 3.0 hosted by him!) and Ingo Rammer around the traps. Martin Fowler was even having lunch on his own (could this ever happen at a Java/XP/Agile conference?).
Hopefully I'll get the courage to talk to him later in the week (he's on the architecture panel on Thurs.) Obviously there's a lot of camaraderie between these guys, all having worked at DevelopMentor together in the last millenium. While this adds for a good feeling between audience and presenters there's also an element of pranks and mucking around that sometimes has seemed more fun to the presenters than the audience. At the most interesting point in the session, where Tim finally got away from the keyboard to talk passionately about Schema, the point was sabotaged by a sniggering Martin and Don typing behind Tim's back on the screen. This was OK, but we didn't get the benefit of hearing the end of Tim's point. I don't mind the presenters having a good time, but not at the expense of the audience. Note: I missed breakfast this morning which may have contributed to these feelings. This session was slow-moving in the morning. All of the content has been available on MSDN or shown in MSDN TV episodes (e.g. Passing XML data inside the CLR). However, as a colleague mentioned, the first 2 hours of this talk were the full day of a previous PDC pre-con session, so it shows how things have developed. Here are some points that stood out: The flight over seemed to be a good time for some people such as Tim Sneath to trying out noise cancelling headphones. I'm afraid to admit it but I used the lo-tech ear plugs to get a couple of hours sleep, watched the movies on the back of the chair rather than from the DVD in a laptop and left my laptop in the overhead locker the whole flight. I enjoyed the luddite pleasure of a good book, Design Patterns Explained. It's written with Java and C++ in mind, but since C# is Java the examples are a breeze to work through. It presents a good case of why functional-based programming, where analysis is looking for nouns and verbs to turn into objects and methods, wont cut it anymore. 
I recommend the book to anyone looking for a good introduction to Design Patterns. As they say, sentences like these from the GoF book make sense as individual words, but it's hard to really get what they mean: Purpose of the Bridge pattern: To decouple an abstraction from its implementation so that the two can vary independently. (source) The book does a great job of explaining this.

Random thought: I'd also forgotten how strange US cheese is. I'm sure cheese isn't naturally orange. :)

Rather than say thank you to Jeff, I thought I'd honour his wish to see photos of the t-shirts at unique landmarks as we make our way to the PDC:

My appetite has been whetted for Doug Purdy's XmlSerializer presentation 'A tale of two type systems' at the PDC next Tuesday. I was talking with a customer today about how to version schemas in web services. It's a problem they haven't addressed yet as everything is still in the first version. I've seen Doug's TechEd presentation on loosely coupled web services and his MSDN TV episode. I understand that changing the namespace is not a versioning mechanism: it changes the type system. Doug's suggestion then was to use the open content model and author versioned schemas with a version attribute and places in the document where we can put any content. Then clients can use a switch statement and decide how to handle the message using the highest version number they understand. What I really miss with this approach is that it loses all of the goodness of schema type checking once the open content elements are used. It's like you can have schema versioning only for the first version of the message; once extra content is added it isn't described or validated. I like the idea of schema validation saving me writing application code to achieve the same result (or, in the case of XML firewalls, having them do the validation).
Radovan argued against this open content approach, saying that subtyping would be better. In his post, Doug says:

I'm really hoping that more schema versioning options will be part of the surprises Doug's got in store; his Tuesday talk 'Indigo: Using XSD, CLR Types, and Serialization in Web Services' should be informative.

Here's a good post to the microsoft.web.services.enhancements newsgroup on creating X.509 certificates with the OpenSSL toolkit that can interoperate between WSE and Java. OpenSSL is an open source tool that I've seen used in a couple of production situations and I mean to look into it more.

With the PDC sold out and the rehearsals ongoing, I think it's time to deal with the difficult issues: how to communicate the excitement about the event with loved ones. Rory has already blogged about spending the night out at a bar thinking about his love for ASP.NET rather than talking with friends. Has anyone else found a successful way of communicating the pleasure of new concepts/technologies in development to their partners? While on holiday last week I re-read Richter's Applied .NET and Juval Lowry's fantastic .NET Component Development. After a great Italian meal and a bottle of house wine I could contain my enthusiasm no longer and bored my pregnant wife with a passionate monologue about the 'fascinating' topic of generational garbage collection. Does anyone else have any suggestions? I'm hoping there won't be too much sighing and spontaneous applause at new product features. At the first PDC I went to, in Australia in 1996, we watched a video of the US keynote where the speaker had to repeatedly stop while the audience applauded the new features (cross-language debugging, I recall). This level of raw emotion is a bit embarrassing to an Australian audience. After getting over the initial silence, eventually each applause moment was met with laughter.
Let's keep it in perspective: as good as the technology is, it's just technology and there's still (hopefully) a job for developers to do at the end of the day.

Understanding what .NET and the C# compiler are doing under the covers can be both useful and interesting. While on holiday last week I was (geeking out) re-reading Applied Microsoft .NET Framework Programming by Jeffrey Richter. What I enjoyed about Richter's book was that he goes down to the Intermediate Language (IL) level to demonstrate his points. He shows how to use these tools to better understand string handling in .NET and how the switch/case statement works under the covers.

When an assembly is compiled, the compiler examines the code for literal strings and places them into a metadata table. This table is loaded by the framework as a hashtable (according to Richter), allowing strings to be compared based on hashtable references. This is obviously much faster than comparing character values in strings. The C# language compiler uses these techniques to make switch/case statements more efficient. Here's an example that can be worked through using ILDASM and Reflector to make it clear what is happening under the covers.

using System;

public class StringIntern
{
    public static void Main(string[] args)
    {
        Console.WriteLine(new StringIntern().ProcessBlogPost(args[0]));
    }

    public enum PostAction { ReadNow, StudyLater, ReadWhenever }

    public PostAction ProcessBlogPost(string author)
    {
        switch (author)
        {
            case "Don Box":
                return PostAction.ReadNow;
            case "Chris Brumme":
                return PostAction.StudyLater;
            default:
                return PostAction.ReadWhenever;
        }
    }
}

To see the string table metadata that is created when this application is compiled, run the ILDASM tool with the /ADV switch, as follows:

ILDASM /ADV StringIntern.exe

Then choose the View -> MetaInfo -> Show! menu option. The file that is displayed lists the metadata contained in the assembly.
The 'User Strings' heading shows the string metadata table contents: all of the strings used in the assembly.

User Strings
-------------------------------------------------------
70000001 : ( 7) L"Don Box"
70000011 : (12) L"Chris Brumme"

You can output this information to a file using the following command line:

ILDASM /ADV /METADATA /FILE:outputfile.txt Assembly.exe

This is a technique that Microsoft uses internally to discover code that might be susceptible to a SQL injection attack (where SQL strings are concatenated into SQL statements).

The String.IsInterned method can be used to work out whether a string is stored in the assembly's metadata table. The method returns null if the string isn't in the metadata table; otherwise it returns a reference to the string (effectively the memory address of this string). This allows string comparisons to occur with memory references, which is much faster than character-by-character comparison. This method is used by the C# compiler to make the switch/case statement more efficient.

Using Reflector (and its fantastic IL tool tips that explain each IL instruction when you move the mouse over them) it is possible to see what is happening in more detail.
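As an aside, the interning trick itself is easy to experiment with outside the CLR. Python, for instance, exposes the same idea through sys.intern: interned strings share one canonical object, so equality can be checked by identity (a single reference comparison) instead of character by character. A minimal sketch, reusing the strings from the switch statement above:

```python
import sys

# Interning maps equal strings to a single shared object, so comparison
# can be done on the object's identity (its reference) instead of
# comparing characters one by one -- the same trick the C# compiler
# uses for switch/case over strings.
case_label = sys.intern("Don Box")

# Build an equal string at runtime so it starts out as a distinct object...
author = " ".join(["Don", "Box"])
assert author == case_label   # equal by value

# ...then intern it: we get back the canonical shared object.
author = sys.intern(author)
assert author is case_label   # now equal by identity (a cheap comparison)
```

This is only an analogy for the CLR mechanism the post describes, not the same implementation: the .NET string table lives in assembly metadata, while Python's intern pool is a runtime dictionary.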
Here's the pseudo-code for what the C# compiler does with each switch/case statement:

public PostAction ProcessBlogPost(string author);

// Load the author parameter into a 'local variable'
L_000c: ldarg.1
L_000e: stloc.1
// Go to the default case if the parameter is null
L_000f: brfalse.s L_0032
// Check whether the string parameter is stored in the metadata string table
L_0012: call string.IsInterned
// Store the result (either null or a reference to the string in the metadata table)
L_0017: stloc.1
L_0018: ldloc.1
// Get the reference to the string stored in the metadata table
L_0019: ldstr "Don Box"
// Compare the reference to the switch string with the reference to the case string from the metadata table
L_001e: beq.s L_002a
// If they are not the same, repeat with the next case statement
L_0020: ldloc.1
L_0021: ldstr "Chris Brumme"

I'm off on holidays in Tuscany for the next week, getting enough sleep to survive the PDC. I'm happy to have been 'Box'ed and 'Scobelized' (twice!) in the last day. I also managed to get a ticket for Radiohead's previously sold-out November concert in London. Life is good. More when I get back.

Somehow I'd failed to update my RSS feed for John Bristowe's site, so I'd missed the last few months' postings. He's done a heap of work with WSE. John's presentations helped me get started with WSE. The post that most caught my eye was on custom policy assertions. This is a great piece of functionality that lets developers hook their own XML security tokens into a WS-Policy xml file. So you can write statements like 'only let in a SOAP message that has an X.509 digital certificate and one of my own XML tokens' in an XML file rather than having to write any custom code (I know I bang on about this point, but it's such a good one). There's very little documentation (say, none) on this one (I relied on Reflector, the copy and paste button and a judicious set of breakpoints to work out what was going on).
John also comments on using policy to describe roles. The observant amongst you may notice that my new site redesign has a resemblance to John's. I'm hoping that John will be at the PDC so I can buy him a beer.

As an audience we can use this time before Microsoft open the kimono to start thinking about what Microsoft's strategies are in each of the areas (OS/dev tools/Web Services) and what design challenges they have to think about. This helps generate questions, so that when we're listening to the endless days of PowerPoint we have a reason to listen and engage. As Dare has mentioned, there are plenty of opportunities to ask the Microsoft developers questions at these conferences (like at Tuesday night's Ask the Experts). Doing some work beforehand to develop understanding and frame some questions is a good investment. Tim Sneath has already started the questioning, asking: will Yukon kill the business tier and put it in the database? Mehran isn't convinced, but it is the dialogue that is important.

Background to Indigo: Read the speculations, and more speculations, and work out what is known from the session outlines. Try to understand the WS-* standards that Indigo will be based on, especially WS-ReliableMessaging. Look at the EAI vendors and work out what problems they are currently solving that Indigo will try to solve. I suggest CapeClear's whitepapers site, especially the Web Services and EAI paper on Web Services technologies and their impact on application integration, and the Web Services Messaging Strategy paper. Jorgen Thelin, CapeClear's Chief Scientist, has recently joined the Indigo team. Get up to speed on the architectural buzz-words: Message Oriented Middleware (MOM), Service Oriented Architecture (SOA). Roger Sessions' latest ObjectWatch newsletter has a crack at defining 'What is SOA'.
Clemens is a great source of information, as are Ingo's recent architecture briefings: 'SOAP is not a remote procedure call' and one on using other message patterns than just request/response. Enterprise Integration Patterns is a site by Gregor Hohpe from ThoughtWorks, describing 65 patterns to do with enterprise integration in a vendor-neutral way. Going through this material should help fill us in on the space that Microsoft are trying to move Indigo into. For example, since we know Indigo is a message bus, why not find out what a message bus is and how one can be usefully used in projects.

Specific questions I've got based on WSE:

It's a good thing to hear Microsoft people asking what they could do better. It's important to note that the current situation isn't necessarily a criticism of anything Microsoft are currently doing. Blogs are making a big difference to the Microsoft community (both the MS blogs and others), the conferences like the PDC are great, and the material on MSDN (MSDN-TV, the full presentations from TechEd 2003), the MSDN newsgroups with responses from the development teams, and the patterns and practices group are all great things. I think Larry nailed it when he says that the Java community has grown organically, without central sponsorship, and that communities involve people communicating with each other. I think it's a tough ask to say what Microsoft could do itself to overcome these obstacles.

What I like about the Java communities that I've seen so far:

Here are some issues I think the Microsoft communities have (based on my very self-focused point of view):

So, how could Microsoft improve? Here are some random thoughts:

These are just my thoughts; I'm interested in what others have to say (especially on how things could be improved, this is the tough one).

Good to see that Hervey Wilson, the Development Lead for WSE, has a blog. Hervey helped me out a couple of times on my last project to implement WSE in a multinational bank.
He's a great developer who really cares about how people are using his product. Lest I be accused of falling for the 'link to a Microsoft person and expect everyone to know and care who they are' trap that Cameron Purdy spelt out, here's some interesting content that Hervey's already mentioned on his blog:

Sorry to blog about blogging, but here's a humorous rant from Cameron Purdy on some annoyances in .NET and Java blogs. Some highlights:
http://benjaminm.net/default,month,2003-10.aspx
Hi, I have a datatable with rows composed of a checkbox (to perform the row selection) and data. I would like to select several rows (by clicking on the corresponding checkboxes) and call an action method on the datatable with all the selected rows. I've made a try but I use the binding tag of the datatable. Is there a way to do this without component binding? (getRowIndex returns only one selected line) Thanks in advance

Yes, you can do:

<h:form>
  <rich:dataTable ...>
    ...
    <rich:row>
      <h:selectBooleanCheckbox ... />
    </rich:row>
    ...
  </rich:dataTable>
</h:form>

public class MyBean {
  ...
  // A Collection cannot take two type parameters; to track which rows
  // are checked, use a Map keyed by row index instead:
  private Map<Integer, Boolean> forActionCollection = new HashMap<Integer, Boolean>();
  ...
}
https://developer.jboss.org/thread/7333
Running Flask on macOS with mod_wsgi/wsgi-express

Flask is a (micro) web development framework for Python. It is fairly simple to get started. All you need to do is pip install Flask into your virtualenv, point the FLASK_APP environment variable at your file and run flask run (described in detail in installation and quick start). This will launch the development server and you can instantly start hacking around.

When you want to use your Apache webserver, however, you need to install and configure a WSGI module. When I first wanted to do this I tried to install mod_wsgi via brew (brew install mod_wsgi from the homebrew/apache tap), but quickly ran into some (apparently common) issues with the XCode toolchain. Then I discovered that there is a much easier way of installing mod_wsgi as a Python package. On the PyPI page it says…

[it] will compile not only the Apache module for mod_wsgi, but will also install a Python module and admin script for starting up a standalone instance of Apache directly from the command line with an auto generated configuration.

Let's try it out

At this point I assume you have virtualenv and the XCode cli tools (xcode-select --install) installed (and of course the standard Apache from macOS). Everything else we will do together in the following steps.

Setting up virtualenv

Let's start by creating the directory for the application and setting up our virtual environment:

$ mkdir my_app
$ cd my_app/
$ virtualenv venv
New python executable in /your/path/my_app/venv/bin/python2.7
Also creating executable in /your/path/my_app/venv/bin/python
Installing setuptools, pip, wheel...done.
Now we activate our environment:

$ source venv/bin/activate
(venv) $ # <- new prompt

Installing mod_wsgi

Installing mod_wsgi is now easily done via pip:

(venv) $ pip install mod_wsgi

Let's see if it worked by launching the server:

(venv) $ mod_wsgi-express start-server
Server URL         :
Server Root        : /tmp/mod_wsgi-localhost:8000:501
Server Conf        : /tmp/mod_wsgi-localhost:8000:501/httpd.conf
Error Log File     : /tmp/mod_wsgi-localhost:8000:501/error_log (warn)
Request Capacity   : 5 (1 process * 5 threads)
Request Timeout    : 60 (seconds)
Startup Timeout    : 15 (seconds)
Queue Backlog      : 100 (connections)
Queue Timeout      : 45 (seconds)
Server Capacity    : 20 (event/worker), 20 (prefork)
Server Backlog     : 500 (connections)
Locale Setting     : de_DE.UTF-8

By pointing your browser to, you should be greeted by this page:

Looks good. Let's stop the server with ctrl-c.

Installing Flask and creating the web app

If you already have your Flask app, you can skip the next few commands, but for the sake of completeness, let's install Flask and create a small sample web app (make sure you are still in your virtualenv):

(venv) $ pip install Flask

Let's create another directory to put the code of our web application in, and fire up an editor for creating the code file:

(venv) $ mkdir my_app
(venv) $ vim my_app/__init__.py  # <- choose the cli editor you want or just create the file with a gui editor

Paste this code (from the Quick Start tutorial) into your file:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

Creating the wsgi script

To let the server know about our application, let's create the wsgi script which we will later point to when starting the server.
(venv) $ pwd
/your/path/my_app  # <- we are here
(venv) $ vim my_app.wsgi  # <- choose the cli editor you want or just create the file with a gui editor

Paste this code into the new file:

from my_app import app as application

UPDATE: I initially set sys.path.insert(0,"/your/path/my_app/") in the snippet above. This is not needed, as the directory you run mod_wsgi-express setup-server or mod_wsgi-express start-server in is added to sys.path automatically. See these three tweets.

Launching the server with the wsgi script

(venv) $ mod_wsgi-express start-server my_app.wsgi

By going to you should now see the "Hello, World!" from our Flask app. Doesn't work for you? Didn't work for me at first either, as I had a typo in the code 🙂. Check out the error logs and you will see a hint of what might be wrong: /tmp/mod_wsgi-localhost:8000:501/error_log.

Running the server in the background

If you want to continue with your shell session, but leave the server running, you can use nohup.

(venv) $ nohup mod_wsgi-express start-server my_app.wsgi &

This will leave the server running, while you can continue working in the session (output will go to the file nohup.out). To bring it back to the foreground, use fg.

UPDATE: Graham Dumpleton reached out to me and noted that the better way of running it in the background is by generating scripts via setup-server and then using apachectl to start/stop. So you might want to execute the following (as root) instead of using nohup (feel free to change the port):

mod_wsgi-express setup-server my_app.wsgi --port=8000 --user _www --group _www --server-root=/etc/mod_wsgi-express-8000

And then control the server state via:

/etc/mod_wsgi-express-8000/apachectl start
/etc/mod_wsgi-express-8000/apachectl stop

Watching for changes

When you develop your app and don't want to reboot the server for each change, there is a convenient way of starting the server with the --reload-on-changes option.
Now you can change your files and have the changes immediately served.

(venv) $ mod_wsgi-express start-server --reload-on-changes my_app.wsgi

More info

You can find more info about mod_wsgi (e.g. running it on a privileged port as root) on the PyPI page: mod_wsgi. The command documentation is available here:

(venv) $ mod_wsgi-express start-server --help
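Under the hood, mod_wsgi simply looks for a module-level callable named application in the .wsgi script, which is why the import in my_app.wsgi is aliased. If you're curious what that interface looks like without Flask, here's a minimal, stdlib-only sketch of the WSGI contract (the environ dict in the demo at the bottom is just illustrative; a real server supplies many more keys):

```python
# Minimal sketch of the WSGI contract that mod_wsgi expects from the
# "application" object in a .wsgi script. Flask's "app" implements this
# same interface; this stdlib-only version is just for illustration.

def application(environ, start_response):
    # environ is a dict of CGI-style request variables supplied by the server
    body = b'Hello, World!'
    status = '200 OK'
    headers = [('Content-Type', 'text/plain'),
               ('Content-Length', str(len(body)))]
    start_response(status, headers)
    # A WSGI app returns an iterable of byte strings
    return [body]


if __name__ == '__main__':
    # Drive the callable directly with a fake environ to see the contract in action
    captured = {}

    def start_response(status, headers):
        captured['status'] = status
        captured['headers'] = headers

    result = application({'REQUEST_METHOD': 'GET', 'PATH_INFO': '/'}, start_response)
    print(captured['status'])
    print(b''.join(result).decode())
```

Pointing mod_wsgi-express start-server at a file containing just this function would serve the same "Hello, World!" page, without Flask in the picture at all.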
https://davidhamann.de/2017/08/05/running-flask-with-wsgi-on-macos/
IFTTT

The IFTTT (If This Then That) to Particle service is planned for removal. You can, however, still use IFTTT by using Webhooks. This document shows how.

New event published

If you are using an IFTTT Applet triggered by a Particle event, you can replace it as follows, or use the same steps to set up a new Applet:

Select your applet
- Open your applet and delete the Particle integration in the IF THIS section, OR
- Create a new applet in IFTTT
- Select "Add" in the IF THIS
- Search the integrations for the Webhooks service:
- Then select Receive a web request with a JSON payload
- Name your event. This is the name in the IFTTT system, and can only consist of letters, numbers, and underscores. IFTTT event names must match exactly. The name of your Particle event is mapped into the IFTTT event name in your webhook configuration in a later step, but you may find it less confusing to use the same name for both.
- Select an action. For example, you could use the Notifications action to send an event to the IFTTT mobile app for iOS or Android.
- Go through the rest of the steps to create the applet.

Add filters (optional)
- If you want to restrict the applet to run only if the event data contains certain data, add a filter. This requires IFTTT Pro.

Get your IFTTT maker app URL and key
- Go to.
- It will show the IFTTT API key. It's the part after use/ in the URL field.
- Construct the Maker Event trigger URL. It follows this pattern:
- Replace TestEvent with the name of your IFTTT event, which may be different than your Particle event name.
- Replace bCYXXXXXXXX_YfdXXXXXeV with your secret IFTTT API key pictured above.

Create a Particle webhook
- Go to the Particle console, Integrations, New Integration, then Webhook.
- For Event Name enter the Particle event name trigger. This is the event sent by the device.
- In the URL field, enter the URL you constructed in the previous step.
- Set the Request Type to POST (this should be the default).
- Set the Request Format to JSON. You do need to change this!
- The other fields should be the default values.
- After using Create Webhook you can use the Test button to test it.
- The Particle CLI particle publish command is also good for testing.
- Of course you'll normally generate events from a Particle device using Particle.publish() in your source code.
- You can also monitor the status from the IFTTT side by viewing your applet.

Monitor a variable

This feature polled a variable on the device once a minute. We do not recommend using this option as it doesn't scale well.

Monitor a function result

This feature called a function on a device once a minute. We do not recommend using this option as it doesn't scale well.

Monitor your device status

Monitoring your device status works the same way as New event published except you need to create a new webhook for status events.

- Set the Event Name field to spark/status.
- This event is generated for both online and offline events.

Calling the Particle API as an IFTTT service

In order to publish an event or call a function there are some steps in common:

Get a Particle access token

In order to access the Particle API on your behalf, you'll need to create a Particle API access token. For developer accounts, the easiest way is to use the Particle CLI command:

particle token create --never-expires

The --never-expires option creates a token that does not expire. If you do not use this option you will need to update the access token every 90 days.

Create an IFTTT service

- After creating the trigger for a new IFTTT applet you select the service to call.
- Use the Webhooks service:
- There is only one option, Make a web request.
- The action fields will depend on whether you want to publish an event or call a function.
Publish an event

- To publish an event as an action, set the following action fields:
- In URL, enter the following URL:
- For Method select POST (this is not the default).
- For Content-Type select application/json (this is not the default).
- For Additional Headers, enter something like this. It's the string "Authorization: Bearer", a space, and the access token you created above.

Authorization: Bearer 4e130XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX45cc

- For the Body enter something like this. "name" is the name of the Particle event to publish. "data" is any data you want to send. This field is JSON formatted, so you'll need to escape any special JSON characters in the field.

{"name":"generatedEvent","data":"somedata"}

- It should look something like this:
- If you're watching the Event tab in the Particle console and generate the event, you should see something like this:

Call a function

- In order to call a function you need to know the Device ID (24-character hexadecimal) for the device you want to call. This is available in the Devices tab of the Particle console.
- You'll also need to know the name of the function to call (case-sensitive) and the access token, described above.
- In the URL field, construct a URL of the form: / . For example:
- In the Method field select POST (this is not the default).
- In the Content Type field select application/json (this is not the default).
- For Additional Headers, enter something like this. It's the string "Authorization: Bearer", a space, and the access token you created above.

Authorization: Bearer 4e130XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX45cc

- For the Body enter something like this. "arg" is the data you want to send as the function argument. This field is JSON formatted, so you'll need to escape any special JSON characters in the field.
{"arg":"function args go here"}

- It should look something like this:

Sample device firmware:

#include "Particle.h"

SYSTEM_THREAD(ENABLED);

SerialLogHandler logHandler;

int functionHandler(String cmd);

void setup() {
    Particle.function("testFunction", functionHandler);
}

void loop() {
}

int functionHandler(String cmd) {
    Log.info("function handler %s", cmd.c_str());
    return 0;
}

Serial monitor output:

0000483144 [app] INFO: function handler function args go here

Other API calls

You can use this technique to make most Particle API calls! See the Particle Cloud API reference for more information.
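The same authenticated POST that the IFTTT Webhooks action makes can of course come from any HTTP client. As a rough, stdlib-only Python sketch of building that request (the endpoint shown is Particle's documented publish-event URL; the token and event values are placeholders, and the request is only constructed here, not sent):

```python
import json
import urllib.request

# Sketch of the web request behind the "Publish an event" action above.
# ACCESS_TOKEN is a placeholder, not a real token.
ACCESS_TOKEN = "4e130XXXX"
url = "https://api.particle.io/v1/devices/events"

# Same JSON body format as in the IFTTT action fields
body = json.dumps({"name": "generatedEvent", "data": "somedata"}).encode()

req = urllib.request.Request(
    url,
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer " + ACCESS_TOKEN,
    },
    method="POST",
)

# urllib.request.urlopen(req) would actually send it; instead, inspect
# what was built to confirm it matches the action fields:
print(req.get_method())
print(req.data.decode())
```

Calling a function works the same way, with the URL changed to the device/function endpoint and the body replaced by the {"arg": ...} payload.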
https://docs.particle.io/getting-started/integrations/community-integrations/ifttt/
Ticket #2713 (closed defect: invalid)

Duplicate packets with host networking

Description (last modified by ramshankar)

Using host networking in VirtualBox attached to a LAN interface there are duplicate ICMP packets while pinging the guest IP address (which is in the same /24 as the host):

64 bytes from 172.20.40.12: icmp_seq=0 ttl=255 time=0.459 ms
64 bytes from 172.20.40.12: icmp_seq=0 ttl=254 time=0.464 ms (DUP!)
64 bytes from 172.20.40.12: icmp_seq=1 ttl=255 time=0.425 ms
64 bytes from 172.20.40.12: icmp_seq=1 ttl=254 time=0.549 ms (DUP!)
64 bytes from 172.20.40.12: icmp_seq=2 ttl=255 time=0.386 ms
64 bytes from 172.20.40.12: icmp_seq=2 ttl=254 time=0.391 ms (DUP!)

The same happens with Windows XP and OpenSolaris guests. If the host is rebooted (shutdown -i6 in Solaris, shutdown -r in Windows), after restart networking is not working from and to the guest.

Attachments

Change History

comment:1 Changed 7 years ago by cbredi

comment:2 Changed 7 years ago by ramshankar

The network connection dropping problem has been fixed already in 2.1.0. As for duplicate packets, I take it you're using VLAN interfaces with PPAs like rge123000? Will investigate.

comment:3 Changed 7 years ago by cbredi

Network connection dropping is fixed in 2.1.0. Yes, I am using VLAN interfaces (bge0, bge0:1, bge2000, bge4000). Regardless of the interface attached to, there are duplicate packets seen from hosts in the same broadcast domain. There is also an Intel PRO/1000 GT NIC but I cannot use that with VirtualBox. When shutting down a machine the whole Solaris box crashes and reboots. This might be a different problem (e1000g driver with 82541PI controller?) but I thought it's worth mentioning here.

comment:5 Changed 7 years ago by ramshankar

I don't get any duplicate packets using VLANs when pinging the host->guest or guest->host or remote->host or guest->remote machine.
Could you please attach the ifconfig -a output from both host and guest, and say which interface you have assigned to the guest. Are zones involved? Also the output of (as root/sudo):

ifconfig <interface-you-give-to-guest> modlist

comment:6 Changed 7 years ago by cbredi

ifconfig -a on Solaris host:

lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
bge0: flags=201100843<UP,BROADCAST,RUNNING,MULTICAST,ROUTER,IPv4,CoS> mtu 1500 index 2
        inet 172.20.40.3 netmask ffffff00 broadcast 172.20.40.255
bge0:1: flags=201100843<UP,BROADCAST,RUNNING,MULTICAST,ROUTER,IPv4,CoS> mtu 1500 index 2
        inet XX.XX.XX.XX netmask ffffffe0 broadcast XX.XX.XX.XX
bge2000: flags=201100843<UP,BROADCAST,RUNNING,MULTICAST,ROUTER,IPv4,CoS> mtu 1500 index 3
        inet XX.XX.XX.XX netmask ffffffc0 broadcast XX.XX.XX.XX
bge4000: flags=201100843<UP,BROADCAST,RUNNING,MULTICAST,ROUTER,IPv4,CoS> mtu 1500 index 4
        inet 89.37.227.35 netmask ffffffe0 broadcast 89.37.227.63

ifconfig -a on CentOS 5.2 guest, attached to bge4000 hostif:

eth0      Link encap:Ethernet  HWaddr 08:00:27:86:18:89
          inet addr:89.37.227.37  Bcast:89.37.227.63  Mask:255.255.255.224
          inet6 addr: fe80::a00:27ff:fe86:1889/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:481241 errors:0 dropped:0 overruns:0 frame:0
          TX packets:327712 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:104602818 (99.7 MiB)  TX bytes:119716305 (114.1 MiB)
          Base address:0xc010 Memory:f0000000-f0020000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2173464 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2173464 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:887378971 (846.2 MiB)  TX bytes:887378971 (846.2 MiB)

Solaris zones are not involved.
host# ifconfig bge4000 modlist
0 arp
1 ip
2 vboxflt
3 bge

Ping from Solaris host to Linux guest:

$ ping -s 89.37.227.37 64 2
PING 89.37.227.37: 64 data bytes
72 bytes from c2.bradiceanu.net (89.37.227.37): icmp_seq=0. time=1.02 ms
72 bytes from c2.bradiceanu.net (89.37.227.37): icmp_seq=1. time=0.801 ms
2 packets transmitted, 2 packets received, 0% packet loss
round-trip (ms) min/avg/max/stddev = 0.801/0.911/1.02/0.16

ping from guest to host:

$ ping -c 2 89.37.227.35
PING 89.37.227.35 (89.37.227.35) 56(84) bytes of data.
64 bytes from 89.37.227.35: icmp_seq=1 ttl=255 time=0.757 ms
64 bytes from 89.37.227.35: icmp_seq=2 ttl=255 time=0.928 ms

--- 89.37.227.35 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1008ms
rtt min/avg/max/mdev = 0.757/0.842/0.928/0.090 ms

Ping from a different host on the same VLAN (IP 89.37.227.33) to guest:

$ ping -s 89.37.227.37 56 4
PING 89.37.227.37: 56 data bytes
64 bytes from c2.bradiceanu.net (89.37.227.37): icmp_seq=0. time=1.41 ms
64 bytes from c2.bradiceanu.net (89.37.227.37): icmp_seq=0. time=1.72 ms
64 bytes from c2.bradiceanu.net (89.37.227.37): icmp_seq=1. time=0.582 ms
64 bytes from c2.bradiceanu.net (89.37.227.37): icmp_seq=1. time=0.640 ms
2 packets transmitted, 4 packets received, 2.00 times amplification
round-trip (ms) min/avg/max/stddev = 0.582/1.09/1.72/0.56

ping from Linux guest to 89.37.227.33:

$ ping -c 2 89.37.227.33
PING 89.37.227.33 (89.37.227.33) 56(84) bytes of data.
64 bytes from 89.37.227.33: icmp_seq=1 ttl=255 time=0.478 ms
64 bytes from 89.37.227.33: icmp_seq=1 ttl=255 time=0.496 ms (DUP!)
64 bytes from 89.37.227.33: icmp_seq=2 ttl=255 time=0.471 ms

--- 89.37.227.33 ping statistics ---
2 packets transmitted, 2 received, +1 duplicates, 0% packet loss, time 1003ms
rtt min/avg/max/mdev = 0.471/0.481/0.496/0.027 ms

Duplicates are seen from LAN to guest, and from guest to LAN, but not from host to guest or guest to host. Same results if the guest is attached to bge0 with a 172.20.40.x IP address.
There are no duplicates between any two hosts on any of the interfaces, except to and from the guest. I tried XP, Linux and FreeBSD guests with the same results.

comment:7 Changed 7 years ago by ramshankar

Eh, correct me if I'm wrong, but if you can ping the public class A address 89.37.227.37 and the same address is assigned to a machine on the VLAN, you should be getting 2 ICMP replies: one VLAN-tagged from the guest and one from the external host. Same for the guest-to-LAN/external scenario.

OpenSolaris log file
https://www.virtualbox.org/ticket/2713
Handles the menu/command system... when it detects the EmergencyStopMC is activated, it'll kick into high priority. More...

#include <Controller.h>

Console commands handled include !refreshsketchworld, !refreshsketchlocal and !refreshsketchcamera.

Definition at line 105 of file Controller.h. List of all members.

Constructor. Definition at line 107 of file Controller.h.

Constructor, sets a default root control. Definition at line 108 of file Controller.h.

[virtual] Destructor. Definition at line 109 of file Controller.h.

[private] shouldn't be called...

[protected] called when the estop switches on; causes the top control to activate, registers for button events. Definition at line 583 of file Controller.cc. Referenced by doEvent(), and setEStopID().

[static, protected] returns true when the current time and last time are in different periods. Definition at line 204 of file Controller.h. Referenced by trapEvent().

returns true if a valid control is available on the stack; if the stack is empty, will push root if it's non-null. Definition at line 608 of file Controller.cc. Referenced by activate(), push(), refresh(), takeLine(), and trapEvent().

[static] calls close() on a Java object loaded with loadGUI() (on the desktop). Definition at line 230 of file Controller.cc. Referenced by SegCam::closeServer(), RegionCam::closeServer(), RawCam::closeServer(), DepthCam::closeServer(), WorldStateSerializerBehavior::doStop(), HeadController::doStop(), ArmController::doStop(), and Aibo3DControllerBehavior::doStop().

called by wireless when someone has entered new data on the tekkotsu console (NOT cin). Definition at line 269 of file Controller.cc.

called when the estop switches off; causes the top control to deactivate, stops listening for buttons. Definition at line 596 of file Controller.cc.

just for e-stop activation/deactivation. Reimplemented from BehaviorBase. Definition at line 85 of file Controller.cc.
registers for events and resets the cmdstack. Definition at line 40 of file Controller.cc.

stops listening for events and resets the cmdstack. Definition at line 63 of file Controller.cc.

sends the stack of currently active controls. Definition at line 539 of file Controller.cc. Referenced by takeLine().

Gives a short description of what this class of behaviors does... you should override this (but don't have to). If you do override this, also consider overriding getDescription() to return it. Definition at lines 143 and 144 of file Controller.h.

called by wireless when there's new data from the GUI. Definition at line 238 of file Controller.cc. Referenced by doStart().

Definition at line 316 of file Controller.cc. Referenced by Controller().

assigns appropriate values to the static event bases. Definition at line 357 of file Controller.cc. Referenced by init().

attempts to open a Java object on the desktop. Definition at line 212 of file Controller.cc.

Definition at line 147 of file Controller.h. Referenced by WorldStateSerializerBehavior::doStart(), HeadController::doStart(), ArmController::doStart(), Aibo3DControllerBehavior::doStart(), WalkCalibration::err(), loadGUI(), SegCam::setupServer(), RegionCam::setupServer(), RawCam::setupServer(), and DepthCam::setupServer().

kills the top control, goes to previous. Definition at line 186 of file Controller.cc. Referenced by reset(), and setNext().

puts a new control on top. Definition at line 177 of file Controller.cc. Referenced by chkCmdStack(), and setNext().

refreshes the display; for times like a sub-control dying, the previous control needs to reset its display. Definition at line 159 of file Controller.cc. Referenced by pop(), reset(), setRoot(), and takeLine().

refreshes camera sketches. Definition at line 173 of file Controller.cc.

refreshes local sketches. Definition at line 169 of file Controller.cc.

refreshes world sketches. Definition at line 165 of file Controller.cc.
will take the command stack back down to the root. Definition at line 149 of file Controller.cc. Referenced by doStart(), doStop(), setRoot(), and takeLine().

called with slots (options) and a name to look up; will select the named control. Definition at line 368 of file Controller.cc.

sets a config value; some values may require additional processing (done here) to have the new values take effect. Definition at line 554 of file Controller.cc.

Sets the emergency stop MC to monitor for pausing. Definition at line 200 of file Controller.cc.

maintains top Control. Definition at line 575 of file Controller.cc. Referenced by push(), select(), takeLine(), and trapEvent().

sets the root level control. Definition at line 193 of file Controller.cc.

called with each line that's entered on the tekkotsu console or from the GUI. Definition at line 390 of file Controller.cc. Referenced by console_callback(), and gui_comm_callback().

returns the current control. Definition at line 137 of file Controller.h.

passes an event to the top control. Implements EventTrapper. Definition at line 95 of file Controller.cc.

if doReadStdIn() was already called, but the buttons are both still down. Definition at line 219 of file Controller.h.

event masks used by processEvent(). Definition at line 121 of file Controller.h. Referenced by initButtons(), and trapEvent().

the stack of the current control hierarchy; should never contain NULL entries. Definition at line 201 of file Controller.h. Referenced by activate(), chkCmdStack(), deactivate(), dumpStack(), pop(), push(), refresh(), reset(), setNext(), takeLine(), top(), and trapEvent().

the time of the current event (do*() can check this instead of calling get_time()). Definition at line 214 of file Controller.h.

invalid_MC_ID if not active, otherwise id of high priority LEDs. Definition at line 191 of file Controller.h. Referenced by activate(), chkCmdStack(), deactivate(), doStart(), doStop(), and push().
the EmergencyStopMC MC_ID that this Controller is monitoring. Definition at line 194 of file Controller.h. Referenced by setEStopID().

the socket to listen on for the GUI. Definition at line 223 of file Controller.h. Referenced by activate(), chkCmdStack(), closeGUI(), console_callback(), doStart(), doStop(), dumpStack(), loadGUI(), pop(), push(), refreshSketchCamera(), refreshSketchLocal(), refreshSketchWorld(), and takeLine().

true if the Controller is currently active (in the activate()/deactivate() sense, not the doStart()/doStop() sense - use isActive() for that...). Definition at line 220 of file Controller.h. Referenced by activate(), deactivate(), doEvent(), doStart(), setEStopID(), and takeLine().

the time of the last event. Definition at line 213 of file Controller.h.

the duration of the last next event (nextItem). Definition at line 216 of file Controller.h.

the magnitude of the last next event (nextItem). Definition at line 215 of file Controller.h.

Definition at line 116 of file Controller.h. Referenced by initButtons(), ValueEditControl< T >::pause(), ValueEditControl< T >::processEvent(), and trapEvent().

Definition at line 118 of file Controller.h.

the duration of the last prev event (prevItem). Definition at line 218 of file Controller.h.

the magnitude of the last prev event (prevItem). Definition at line 217 of file Controller.h.

Definition at line 117 of file Controller.h.

Definition at line 119 of file Controller.h.

the base control; if the cmdstack underflows, it will be reset to this. Definition at line 197 of file Controller.h. Referenced by chkCmdStack(), setRoot(), takeLine(), and ~Controller().

Definition at line 120 of file Controller.h.

currently can't pull a connection socket off of the server socket, so only one Controller. Definition at line 225 of file Controller.h.
Referenced by closeGUI(), console_callback(), doStart(), doStop(), dumpStack(), gui_comm_callback(), loadGUI(), pop(), push(), refreshSketchCamera(), refreshSketchLocal(), refreshSketchWorld(), takeLine(), and ~Controller().

true if the ControllerGUI knows how to use the buttons for menu navigation; will intercept button presses. Definition at line 221 of file Controller.h. Referenced by activate(), init(), initButtons(), and takeLine().
http://tekkotsu.org/dox/classController.html
I created a Maya Python toolchain for my team. All goes well, just on one machine I seem to have problems. I narrowed it down to the print command. Like this test library called "temp.py":

import os
# from pymel.core import *

print "Hello"

after importing it with import temp produces this output (only on that one computer!):

// Error: 9
# Traceback (most recent call last):
#   File "<maya console>", line 1, in <module>
#   File "C:\maya_scripts\temp.py", line 4, in <module>
#     print "Hello"
# IOError: [Errno 9] Bad file descriptor //

I've tried Maya versions 2016, 2016.5 and 2017, all with the same result. Python 2.5 as a standalone hasn't got that problem. To me that sounds like some kind of configuration problem, but then again it behaves the same over 3 different Maya installations, so deleting the prefs didn't help either. Any idea where to look?

Have you tried any other file name than temp.py?

Yeah, tried that, but it's not that. The only sort of lead I got was this thread about "sys.stdout is not available when not running as a console session", but I wouldn't know how I would test that in Maya. Another solution would be to eliminate all occurrences of the print command in my code, but this is quite a radical solution for a single machine...

Okay, I got some more. Actually the problem goes much deeper. It turns out this already throws the same error in the Maya script editor, on all 3 versions of Maya:

import sys, os
sys.stdout.write("Hello\n")

See the StackOverflow question. I think somebody overwrote Maya's default Output object.
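For what it's worth, the overwritten-stream theory can be probed from the script editor itself. The sketch below is plain Python, not Maya-specific, and `sys.__stdout__` can itself be `None` in some embedded interpreters, so treat it as a diagnostic aid rather than a guaranteed fix:

```python
import sys


def check_std_streams():
    """Report whether the standard streams look usable.

    In an embedded interpreter (such as Maya's), another script or
    plugin can rebind sys.stdout to a closed or invalid object, after
    which every print raises IOError: [Errno 9] Bad file descriptor.
    sys.__stdout__/__stderr__ keep the interpreter's original streams.
    """
    report = {}
    for name in ("stdout", "stderr"):
        stream = getattr(sys, name)
        original = getattr(sys, "__%s__" % name)
        report[name] = {
            # True if something replaced the interpreter's original stream
            "replaced": stream is not original,
            # True if the current object at least exposes a write() method
            "writable": hasattr(stream, "write"),
        }
    return report


def restore_std_streams():
    """Rebind sys.stdout/sys.stderr to the interpreter's originals."""
    sys.stdout = sys.__stdout__
    sys.stderr = sys.__stderr__
```

If `check_std_streams()` reports stdout as replaced and not writable, some earlier script or plugin rebound `sys.stdout`; restoring it (or launching Maya with plugins disabled one by one) should help narrow down the culprit.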
http://tech-artists.org/t/maya-python-ioerror-errno-9-bad-file-descriptor/8718
Build an SMS Groceries App with Ionic and Angular

In this article, we're going to have a closer look at the Ionic Framework, a modern web-based platform for building mobile apps. That means you don't need any knowledge of the mobile platforms to build real apps.

Ionic is a full platform which evolved from a simple UI library. It's a complete gateway for a web developer into the mobile world. It is still a UI framework, and a very good one. It's also a set of abstractions on top of Cordova, which provides JavaScript bindings for native functions. At its heart is the well-known web framework Angular, with TypeScript. Ionic runs your code in a webview (a simple browser) with the help of some native bindings. Note how this differs from React Native, which renders actual native UI components instead of a webview.

As we get familiar with Ionic, we're going to make something very practical. We're going to build a mobile app -- one that solves a real problem. The idea behind it comes from a very real problem I've been suffering from for several years, namely the groceries list. My wife sends me a list of things I need to buy in an SMS, which is usually quite long. As you can imagine, it's hard to keep track of what is already in the shopping basket and what's not. So let's build an app that reads the SMS and makes a nice to-do list with checkboxes from it.

The complete code can be found on GitHub. In case there are problems, please refer to the source code.

Prerequisites

You probably already have Node and the npm package manager. You'll also want to install Android Studio, which comes with the Android SDK. We will need it to build our app for Android and run it on an emulator or a real device.

Getting Started

Alright, first off, open the terminal and install the ionic client:

npm i -g ionic

You can also add a handy alias to your shell config file.
alias i=ionic

When creating a new project, you can specify a template to build from. For the sake of this tutorial, we'll start with the blank one:

ionic start sms-shop blank
cd sms-shop

Ionic will create a new project and install all the dependencies, and then you'll be able to run it:

ionic serve

You should see your blank app in the browser now. That is how you're supposed to develop your app: every time you make a change, the browser refreshes and you see the change on the screen. It's pretty handy to have a separate monitor with the browser always in sight.

Sometimes you will want to run your app on a real device or an emulator (because Cordova plugins won't work in a browser):

ionic cordova run android --device
ionic cordova run android --emulator

The downside is that every time you alter the app, you have to re-run it to see the changes. Unfortunately, this is the only way we can test the native functions.

Here's the plan...

OK, let's talk about what we are going to do. We will need three pages. The first one displays all the lists we have (we already have the HomePage), the second one selects an SMS to build a list from (SelectPage), and the last one is the page for a specific groceries list where you can check/uncheck items (ListPage).

What would a groceries list look like? It will have an id, a date of creation and the list of items. Each item will have a title and a boolean flag "done."

As you already know, Ionic is built on top of Angular with TypeScript support. So let's define our groceries list type. Create a new file src/app/types.ts:

export type GLIST = {
  id: string,
  created: string,
  items: Array<{
    title: string,
    done: boolean
  }>
}

Our app will also need some storage to store the lists and the items. For that, we'll use the Ionic Storage module. Ionic Storage lets us store data as key/value pairs, and underneath it will use whichever storage engine is most suitable for the platform.
To make the storage available, we'll need to register it as an import. In the src/app/app.module.ts file, let's import storage and put it into the imports section:

import { IonicStorageModule } from '@ionic/storage';
// ...
imports: [
  BrowserModule,
  IonicModule.forRoot(MyApp),
  IonicStorageModule.forRoot()
],
// ...

OK, now we're all set to actually start making things.

Home view

Let's start with the home view. Here we're going to display all the groceries lists and a button that will allow us to create a new list from an SMS. Open home.html and replace its content with the following:

<ion-header>
  <ion-navbar>
    <ion-title>SMS Groceries</ion-title>
  </ion-navbar>
</ion-header>

<ion-content padding>
  <p text-center>
    <button ion-button (click)="selectSms()">Select SMS</button>
  </p>
  <ion-list>
    <ion-item *ngFor="let list of lists" (click)="go(list.id)">
      {{ listTitle(list) }}
      <span item-end>{{ list.created }}</span>
    </ion-item>
  </ion-list>
</ion-content>

In this view, ion-header and ion-list are examples of Ionic UI components. For a full list of available components, you can refer to the documentation. The most interesting part here is:

<ion-item *ngFor="let list of lists" (click)="go(list.id)">
  {{ listTitle(list) }}
  <span item-end>{{ list.created }}</span>
</ion-item>

If you're not familiar with Angular, here's a quick overview:

- *ngFor="let list of lists" - the common directive for iterating over a list in Angular.
- (click)="go(list.id)" - this is how you bind a click event. In this case we want to navigate to the list view.
- {{ listTitle(list) }} - every time you want to output a value, you put it into double curly braces.
- ion-item, item-end - those are UI components that Ionic prepared for us to use.

OK, I hope this part is clear, now let's do the component.
Copy and paste this into home.ts:

import { Component } from "@angular/core"
import { NavController } from "ionic-angular"
import { Storage } from "@ionic/storage"
import { SelectPage } from "../select/select"
import { ListPage } from "../list/list"
import { GLIST } from "../../app/types"

@Component({
  selector: "page-home",
  templateUrl: "home.html"
})
export class HomePage {
  lists: Array<GLIST> // define our lists, which is an array of GLISTs

  constructor(
    public navCtrl: NavController, // navCtrl is used for navigation between pages
    private storage: Storage // storage to be used for... well, storing our data
  ) {}

  // this is part of the Ionic lifecycle; it will be called every time before navigating to this page
  ionViewWillEnter() {
    // we're loading all the lists that we have from the storage
    const lists = []
    this.storage.forEach((list, id) => {
      lists.push(JSON.parse(list))
      console.log(this.lists)
    }).then(() => {
      this.lists = lists
    })
  }

  selectSms() {
    this.navCtrl.push(SelectPage, {}) // this is how we change the page
  }

  listTitle(list) {
    return list.items.map(x => x.title).join(", ")
  }

  go(id) {
    this.navCtrl.push(ListPage, {id}) // another example of changing the page,
                                      // this time with a parameter
  }
}

Now, this might feel a little overwhelming, so let's break this code down piece by piece.

First of all, in Angular when you want to use a component, you use dependency injection. Basically, you're not creating the components yourself, but rather declaring which ones you need and letting the engine figure out the details. In the example above, this is how we inject navCtrl and storage. Angular will create them for us and put them on this.

The ionViewWillEnter is part of the Ionic page lifecycle. This method runs just before the page is to become active. This is usually the place where you want to load your data. This is what we are doing here by loading all of our lists from the storage. You've probably noticed that storage works asynchronously.
Instead of returning a value, it returns a promise. The same goes for the forEach method we're using here.

Finally, when we want to navigate to another page, we use the navCtrl, which works like a simple stack. It has a push method we call whenever we want the view to change. It can also accept params, which can be retrieved on the next page (we'll talk about this later).

At this point, the compiler will fail because we still haven't defined the other two pages, SelectPage and ListPage. So let's write them.

Reading the SMSes

A couple of words about Ionic Native. Loosely speaking, this is a set of JavaScript bindings for native behavior. It allows you to use some of the native phone functions (for example, the camera) from JavaScript. Underneath it is powered by Cordova, but I think nowadays you can even swap it for Xamarin, which is owned by Microsoft. The list of Native's features is astonishingly long. You can skim through the whole list here. Some of those plugins can be used on both platforms, iOS and Android, while others are platform-specific.

Our task here is to read the list of SMSes and let the user select the SMS they want. I have to say that iOS doesn't offer the ability to read SMSes (at least not that I'm aware of, please let me know otherwise). So from this point onwards, we're going Android-only.

Let's start by adding a permissions module. It allows us to ask for a particular permission at any point in the code (this is how the modern Android permission system works):

ionic cordova plugin add cordova-plugin-android-permissions
npm install --save @ionic-native/android-permissions

Then, in app.module.ts, add AndroidPermissions to the providers section:

import { AndroidPermissions } from '@ionic-native/android-permissions';
// ...
providers: [
  StatusBar,
  SplashScreen,
  {provide: ErrorHandler, useClass: IonicErrorHandler},
  AndroidPermissions // <= this change
]

We will also need the Cordova plugin for reading the SMSes:

ionic cordova plugin add cordova-plugin-sms

Now let's create the SelectPage. This time let's start with the component itself:

// app/pages/select/select.ts
import { Component } from "@angular/core"
import { NavController, Platform } from "ionic-angular"
import { AndroidPermissions } from "@ionic-native/android-permissions"
import { ListPage } from "../list/list"
import { Storage } from "@ionic/storage"
import { GLIST } from "../../app/types"

declare var SMS: any // making TypeScript happy. Otherwise, we could use `window.SMS`

@Component({
  selector: "page-select",
  templateUrl: "select.html"
})
export class SelectPage {
  messages: Array<any> // our messages definition

  constructor(
    public navCtrl: NavController,
    public androidPermissions: AndroidPermissions, // a component for requesting permissions
    public platform: Platform,
    private storage: Storage,
  ) {
    this.messages = []
  }

  ionViewWillEnter() {
    // ask for the READ_SMS permission, then load the messages
    this.androidPermissions
      .requestPermission(this.androidPermissions.PERMISSION.READ_SMS)
      .then(() => this.loadMessages())
  }

  select(m) {
    const id = m.date + ""
    const list: GLIST = {
      id,
      created: formatDate(new Date()),
      items: m.body.split(",").map(s => {
        return {
          title: s.trim(),
          done: false,
        }
      })
    }
    this.storage.set(id, JSON.stringify(list))
    this.navCtrl.push(ListPage, { id })
  }

  loadMessages() {
    this.platform.ready().then(readySource => {
      SMS.listSMS(
        { box: "inbox", indexFrom: 0, maxCount: 50 },
        messages => {
          console.log("Sms", messages)
          this.messages = messages
        },
        err => console.log("error listing smses: " + err)
      )
    })
  }
}

// A simple formatter for dates, i.e. "4 Feb"
function formatDate(date) {
  var monthNames = [
    "Jan", "Feb", "Mar", "Apr", "May", "Jun",
    "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"
  ]
  var day = date.getDate();
  var monthIndex = date.getMonth();
  return day + ' ' + monthNames[monthIndex];
}

Let's start with the ionViewWillEnter method. You already know this method runs every time a view is about to enter.
Here, we're using the permission component, which will trigger a standard Android permission dialog asking if we can read the SMSes. And once we have the permission, it will load the messages:

loadMessages() {
  this.platform.ready().then(readySource => {
    SMS.listSMS(
      { box: "inbox", indexFrom: 0, maxCount: 50 },
      messages => {
        console.log("Sms", messages)
        this.messages = messages
      },
      err => console.log("error listing smses: " + err)
    )
  })
}

Here, we're using the SMS module and one of its methods called listSMS. By the way, here you can read about the others. So, we read the first 50 SMSes and put them on this so that we can iterate over them in the view.

Then we also have the select method:

select(m) {
  const id = m.date + ""
  const list: GLIST = {
    id,
    created: formatDate(new Date()),
    items: m.body.split(",").map(s => {
      return {
        title: s.trim(),
        done: false,
      }
    })
  }
  this.storage.set(id, JSON.stringify(list))
  this.navCtrl.push(ListPage, { id })
}

It takes the SMS body, prepares the items list by splitting it on commas, puts it into the storage with this.storage.set(id, JSON.stringify(list)), and navigates to the list page, which we still need to write.

And finally, the HTML view (src/pages/select/select.html) should already be understandable:

<ion-header>
  <ion-navbar>
    <ion-title>SMS Groceries</ion-title>
  </ion-navbar>
</ion-header>

<ion-content padding>
  <h1>Select the SMS</h1>
  <ion-list>
    <button ion-item *ngFor="let m of messages" (click)="select(m)">
      {{ m.body }}
    </button>
  </ion-list>
</ion-content>

List Page

Just one little step left. Let's finally write the ListPage.
import { Component } from "@angular/core"
import { NavController, NavParams } from "ionic-angular"
import { Storage } from "@ionic/storage"
import { GLIST } from "../../app/types"

@Component({
  selector: "page-list",
  templateUrl: "list.html"
})
export class ListPage {
  list: GLIST

  constructor(
    public navCtrl: NavController,
    public navParams: NavParams,
    private storage: Storage,
  ) { }

  ionViewWillEnter() {
    const id = this.navParams.get("id") // retrieving the param, in this case the list id
    this.storage.get(id).then(list => { // loading the list from the storage
      if (list) {
        this.list = JSON.parse(list)
      } else {
        console.log("NO LIST?!?!")
      }
    })
  }

  doneCount() {
    return this.doneItems().length
  }

  allCount() {
    if (!this.list) return 0
    return this.list.items.length
  }

  toggle(item) {
    item.done = !item.done
    this.storage.set(this.list.id, JSON.stringify(this.list))
  }

  toBeDoneItems() {
    if (!this.list) return []
    return this.list.items.filter(x => !x.done)
  }

  doneItems() {
    if (!this.list) return []
    return this.list.items.filter(x => x.done)
  }
}

By this point, all of this should be familiar. Let's have a look at the HTML view:

<ion-header>
  <ion-navbar>
    <ion-title>SMS Groceries</ion-title>
  </ion-navbar>
</ion-header>

<ion-content>
  <ion-list>
    <ion-item *ngFor="let item of toBeDoneItems()" (click)="toggle(item)">
      <span>{{ item.title }}</span>
      <ion-icon></ion-icon>
    </ion-item>
    <ion-item *ngFor="let item of doneItems()" (click)="toggle(item)">
      <span>{{ item.title }}</span>
      <ion-icon></ion-icon>
    </ion-item>
  </ion-list>
</ion-content>

<ion-footer>
  <ion-toolbar>
    <h2 padding>Done: <b>{{ doneCount() }}</b> out of {{ allCount() }}</h2>
  </ion-toolbar>
</ion-footer>

Running

Let's finally try to run our app with a simple command (connect your Android device first):

i cordova run android --device

Conclusion

Look, Ma! In this article we became familiar with the Ionic Framework, got a grasp of Angular and TypeScript, with dependency injection and rather peculiar HTML markup, and learned how to use native functions from our web app with the help of Cordova and Ionic Native. But what's more important...
In a matter of hours, with zero knowledge about mobile development, we were able to build a functioning application that uses native features and actually solves a real problem. That's the beauty and power of the Ionic Framework. I strongly encourage you to give it a try and build something useful (a side project?), put it in the store, and even try to get some money. Good luck and have fun.

P.S.: Please let me know about any questions you have or if you need advice. I'll be happy to help. My Twitter is @janis_t.
https://www.cloudbees.com/blog/build-sms-groceries-app-ionic-angular
Subject: Re: [boost] [bind] Placeholders suggestion, std and boost From: Glen Fernandes (glen.fernandes_at_[hidden]) Date: 2015-05-26 20:51:31 On Tue, May 26, 2015 at 3:32 PM, Peter Dimov <lists_at_[hidden]> wrote: > Glen Fernandes wrote: >> I propose creating <boost/bind/bind.hpp>. Including it should result in >> the placeholders being in boost::placeholders namespace. >> <boost/bind.hpp> would now do nothing more than just include >> <boost/bind/bind.hpp> and bring all the placeholders into the global >> namespace. > > Done on develop (with a using directive.) Looks good! Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2015/05/222727.php
Yes, CRUD views!) The videos also get difficult to record in a longer, complex project such as this. I don't edit out any mistakes or typos I make, since debugging those presents a valuable learning opportunity for beginners. But bringing all the details together while maintaining continuity can get incredibly demanding.

Also, most tutorials would try to show you how to use the most popular package or the easiest way to implement a feature. I deliberately avoided that, perhaps inspired by Learn Python the Hard Way. It might be okay to reinvent the wheel the first time because it will help you understand how wheels work for a lifetime. So, despite many comments telling me that it is easier to use X than Y, I stuck to the alternative which helps you learn the most.

In this tutorial, we cover some interesting areas like how you can make Django forms work with AJAX and how a simple ranking algorithm works. As always, you can choose to watch the video or read the step-by-step description below, or follow both. I would recommend watching all the previous parts before watching this video.

Did you learn quite a bit from this video series? Then you should sign up for my upcoming book "Building a Social News Site in Django". It explains in a learn-from-a-friend style how websites are built and gradually tackles advanced topics like testing, security, database migrations and debugging.

Step-by-step Instructions

This is the transcript of the video. In part 3, we created a social news site where users can post and comment about rumours of "Man of Steel" but cannot vote. The outline of Part 4 of the screencast is:

- Voting with FormView
- Voting over AJAX
- Mixins
- Display Voted Status
- Ranking algorithm
- Background tasks

Voting with FormView

We will add an upvote button (with a plus sign) to each headline. Clicking on this will toggle the user's "voted" status for a link, i.e. voted or did not vote.
The safest way to implement it is using a ModelForm for our Vote model. Add a new form to links/forms.py:

from .models import Vote
...

class VoteForm(forms.ModelForm):
    class Meta:
        model = Vote

We will use another generic view called FormView to handle the view part of this form. Add these lines to links/views.py:

from django.shortcuts import redirect
from django.shortcuts import get_object_or_404
from django.views.generic.edit import FormView
from .forms import VoteForm
from .models import Vote
...

class VoteFormView(FormView):
    form_class = VoteForm

    def form_valid(self, form):
        link = get_object_or_404(Link, pk=form.data["link"])
        user = self.request.user
        prev_votes = Vote.objects.filter(voter=user, link=link)
        has_voted = (prev_votes.count() > 0)
        if not has_voted:
            # add vote
            Vote.objects.create(voter=user, link=link)
            print("voted")
        else:
            # delete vote
            prev_votes[0].delete()
            print("unvoted")
        return redirect("home")

    def form_invalid(self, form):
        print("invalid")
        return redirect("home")

Those print statements will be removed soon, and they are definitely not recommended for a production site.

Edit the home page template to add a voting form per headline. Add the lines marked with a '+' sign (removing the '+' sign) to steelrumors/templates/links/link_list.html:

  {% for link in object_list %}
+ <form method="post" action="{% url 'vote' %}" class="vote_form">
  <li>
    [{{ link.votes }}]
+   {% csrf_token %}
+   <input type="hidden" id="id_link" name="link" class="hidden_id" value="{{ link.pk }}" />
+   <input type="hidden" id="id_voter" name="voter" class="hidden_id" value="{{ user.pk }}" />
+   <button>+</button>
    <a href="{% url 'link_detail' pk=link.pk %}">
      <b>{{ link.title }}</b>
    </a>
  </li>
+ </form>

Add this view in steelrumors/urls.py:

from links.views import VoteFormView
url(r'^vote/$', auth(VoteFormView.as_view()), name="vote"),

Refresh the browser to see the '+' buttons on every headline. You can vote them as well. But you can read the voting status only from the console.
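Before wiring this up to AJAX, it's worth noticing that the branch in form_valid is really just a membership toggle. Here is a Django-free sketch of the same rule (the function name and the set-of-pairs representation are illustrative, not part of the project code):

```python
def toggle_vote(votes, voter, link):
    """Toggle a (voter, link) pair in a set of votes.

    Mirrors the two branches of VoteFormView.form_valid:
    returns True when the vote is added ("voted"),
    False when an existing vote is removed ("unvoted").
    """
    pair = (voter, link)
    if pair in votes:
        votes.remove(pair)  # had voted before -> un-vote
        return False
    votes.add(pair)         # no previous vote -> vote
    return True
```

Applying it twice with the same pair is a no-op overall, which is exactly why double-clicking the '+' button leaves the vote count unchanged.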
Voting with AJAX You have already copied the static folder of the goodies pack in the previous part. But in case you haven’t, then follow this step. Create a folder named ‘js’ under steelrumors/staticour javascript files. Copy jquery and vote.js from the goodies pack into this folder. mkdir steelrumors/static/js cp /tmp/sr-goodies-master/static/js/* ~/proj/steelrumors/steelrumors/static/js/ Add these lines to steelrumors/templates/base.htmlwithin <head>block: <title>Steel Rumors</title> <link rel="stylesheet" type="text/css" href="{{ STATIC_URL }}css/main.css" /> + <script src="{{ STATIC_URL }}js/jquery.min.js"></script> + <script src="{{ STATIC_URL }}js/vote.js"></script> </head> <body> In views.pydelete the entire class VoteFormViewand replace with these three classes. We are using a mixin to implement a JSON response for our AJAX requests: import json from django.http import HttpResponse ... class JSONFormMixin(object): def create_response(self, vdict=dict(), valid_form=True): response = HttpResponse(json.dumps(vdict), content_type='application/json') response.status = 200 if valid_form else 500 return response class VoteFormBaseView(FormView): form_class = VoteForm def create_response(self, vdict=dict(), valid_form=True): response = HttpResponse(json.dumps(vdict)) response.status = 200 if valid_form else 500 return response def form_valid(self, form): link = get_object_or_404(Link, pk=form.data["link"]) user = self.request.user prev_votes = Vote.objects.filter(voter=user, link=link) has_voted = (len(prev_votes) > 0) ret = {"success": 1} if not has_voted: # add vote v = Vote.objects.create(voter=user, link=link) ret["voteobj"] = v.id else: # delete vote prev_votes[0].delete() ret["unvoted"] = 1 return self.create_response(ret, True) def form_invalid(self, form): ret = {"success": 0, "form_errors": form.errors } return self.create_response(ret, False) class VoteFormView(JSONFormMixin, VoteFormBaseView): pass Showing the Voted state We need some indication to know 
if the headline has been voted on or not. To achieve this, we can pass the ids of all the links that have been voted on by the logged-in user. This can be passed as a context variable, i.e. voted. Add this to the LinkListView class in links/views.py:

class LinkListView(ListView):
    ...
    def get_context_data(self, **kwargs):
        context = super(LinkListView, self).get_context_data(**kwargs)
        if self.request.user.is_authenticated():
            voted = Vote.objects.filter(voter=self.request.user)
            links_in_page = [link.id for link in context["object_list"]]
            voted = voted.filter(link_id__in=links_in_page)
            voted = voted.values_list('link_id', flat=True)
            context["voted"] = voted
        return context

Change the home page template again. Add the lines with a '+' sign (removing the '+' sign) to steelrumors/templates/links/link_list.html:

  <input type="hidden" id="id_voter" name="voter" class="hidden_id" value="{{ user.pk }}" />
+ {% if not user.is_authenticated %}
+ <button disabled>+</button>
+ {% elif link.pk not in voted %}
  <button>+</button>
+ {% else %}
+ <button>-</button>
+ {% endif %}
  <a href="{% url 'link_detail' pk=link.pk %}">

Now, the button changes based on the voted state of a headline. Try it in your browser with different user logins.

Calculating Rank Score

We are going to change the sorting order of links from highest voted to highest score. Add a new method to models.py to calculate the rank score:

from django.utils.timezone import now
...

class Link(models.Model):
    ...
    def set_rank(self):
        # Based on HN ranking algo at
        SECS_IN_HOUR = float(60*60)
        GRAVITY = 1.2
        delta = now() - self.submitted_on
        item_hour_age = delta.total_seconds() // SECS_IN_HOUR
        votes = self.votes - 1
        self.rank_score = votes / pow((item_hour_age+2), GRAVITY)
        self.save()

In the same file, change the sort criteria in the LinkVoteCountManager class. The changed line has been marked with a '+' sign.
class LinkVoteCountManager(models.Manager):
    def get_query_set(self):
        return super(LinkVoteCountManager, self).get_query_set().annotate(
+           votes=Count('vote')).order_by('-rank_score', '-votes')

Ranking Job

Calculating the score for all links is generally a periodic task which should happen in the background. Create a file called rerank.py in the project root with the following content:

#!/usr/bin/env python
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "steelrumors.settings")

from links.models import Link

def rank_all():
    for link in Link.with_votes.all():
        link.set_rank()

import time

def show_all():
    print "\n".join("%10s %0.2f" % (l.title, l.rank_score, ) for l in Link.with_votes.all())
    print "----\n\n\n"

if __name__=="__main__":
    while 1:
        print "---"
        rank_all()
        show_all()
        time.sleep(5)

This runs every 5 seconds in the foreground. Turn it into a background job (nohup python -u rerank.py &) and follow its output with tail -f nohup.out.

Note that this is a very simplistic implementation of a background job. For a more robust solution, check out Celery.

Watching the News Dive

It is fun to watch the rank scores rise and fall for links. It is almost as fun as watching an aquarium, except with numbers. But the ranking function set_rank in models.py has a resolution of an hour, which makes it quite boring to watch. To see a more dramatic change in rank scores, change the SECS_IN_HOUR constant to a small value like 5.0. Now submit a new link and watch the scores drop like a stone!

Final Comments

Steel Rumors is far from being a complete Hacker News clone, but it supports voting, submission of links and user registrations. In fact, it is quite usable at this point. Hope you enjoyed this tutorial series as much as I did while making it. If you get stuck anywhere, make sure you check the GitHub source first for reference. Keep your comments flowing!

Resources

- Full Source on Github
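As an aside, the HN-style formula used in set_rank above can be exercised outside Django as a plain function. This is a sketch for experimentation only: the helper name and sample numbers are illustrative, not part of the tutorial code.

```python
# Standalone sketch of the HN-style formula from set_rank (no Django needed).
# rank = (votes - 1) / (hours_old + 2) ** GRAVITY
SECS_IN_HOUR = float(60 * 60)
GRAVITY = 1.2

def rank_score(votes, age_seconds):
    """Rank for a link with `votes` votes submitted `age_seconds` ago."""
    item_hour_age = age_seconds // SECS_IN_HOUR  # whole hours, as in set_rank
    return (votes - 1) / pow(item_hour_age + 2, GRAVITY)

# A fresh link outranks an older link with the same number of votes,
# which is what makes scores "drop like a stone" as links age:
fresh = rank_score(10, 0)
day_old = rank_score(10, 24 * 3600)
assert fresh > day_old
```

Playing with GRAVITY here shows its effect: larger values make old links sink faster, which is why the tutorial's value of 1.2 gives a gentle decay.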
http://arunrocks.com/building-a-hacker-news-clone-in-django-part-4/
Difference between revisions of "Chatlog 2010-03-09" From SPARQL Working Group Latest revision as of 13:50, 15 March 2010 See original RRSAgent log and preview nicely formatted version. Please justify/explain all edits to this page, in your "edit summary" text. 14:50:26 <RRSAgent> RRSAgent has joined #sparql 14:50:26 <RRSAgent> logging to 14:50:28 <trackbot> RRSAgent, make logs world 14:50:28 <Zakim> Zakim has joined #sparql 14:50:30 <trackbot> Zakim, this will be 77277 14:50:30 <Zakim> ok, trackbot; I see SW_(SPARQL)10:00AM scheduled to start in 10 minutes 14:50:31 <LeeF> zakim, this will be SPARQL 14:50:31 <trackbot> Meeting: SPARQL Working Group Teleconference 14:50:31 <trackbot> Date: 09 March 2010 14:50:32 <Zakim> ok, LeeF; I see SW_(SPARQL)10:00AM scheduled to start in 10 minutes 14:50:33 <LeeF> Chair: LeeF 14:50:38 <LeeF> Scribe: Souri 14:50:44 <LeeF> Agenda: 14:53:11 <AlexPassant> AlexPassant has joined #sparql 14:55:36 <Zakim> SW_(SPARQL)10:00AM has now started 14:55:42 <Zakim> +??P0 14:55:47 <AndyS> zakim, ??P0 is me 14:55:47 <Zakim> +AndyS; got it 14:57:08 <MattPerry> MattPerry has joined #sparql 14:58:17 <bglimm> bglimm has joined #sparql 14:58:26 <Zakim> +MattPerry 14:59:15 <Zakim> +kasei 14:59:23 <kasei> Zakim, mute me 14:59:23 <Zakim> kasei should now be muted 15:00:09 <ivan> zakim, dial ivan-voip 15:00:09 <Zakim> ok, ivan; the call is being made 15:00:11 <Zakim> +Ivan 15:00:30 <tommik> tommik has joined #sparql 15:00:34 <Zakim> +bglimm 15:00:42 <bglimm> Zakim, mute me 15:00:42 <Zakim> bglimm should now be muted 15:01:09 <Zakim> + +035840564aaaa 15:01:19 <tommik> zakim, aaaa is me 15:01:19 <Zakim> +tommik; got it 15:01:26 <Zakim> +LeeF 15:01:40 <LeeF> zakim, who's on the phone? 15:01:40 <Zakim> On the phone I see AndyS, MattPerry, kasei (muted), Ivan, bglimm (muted), tommik, LeeF 15:01:59 <AndyS> Hi all. 
15:02:17 <Zakim> +??P19 15:02:21 <AlexPassant> Zakim, ??P19 is me 15:02:21 <Zakim> +AlexPassant; got it 15:02:34 <AxelPolleres> AxelPolleres has joined #sparql 15:02:38 <LeeF> Scribe: MattPerry 15:02:41 <LeeF> Scribenick: MattPerry 15:02:46 <LeeF> Regrets+ Souri 15:03:01 <Zakim> +dcharbon2 15:03:18 <LeeF> PROPOSED: Approve minutes at 15:03:59 <LeeF> RESOLVED: Approve minutes at 15:04:23 <SteveH> SteveH has joined #sparql 15:04:32 <SteveH> hi all 15:04:34 <LeeF> Next meeting: 2010-03-16 @ 14:00 UK / 10:00 EDT 15:05:04 <ivan> possible regrets for me 15:05:26 <MattPerry> LeeF: 2 weeks of 4 hr difference 15:05:31 <Zakim> + +0207735aabb 15:05:43 <SteveH> Zakim, aabb is me 15:05:43 <Zakim> +SteveH; got it 15:06:49 <MattPerry> LeeF: may be a problem that Paul is not here ... but we will try to make decisions anyway 15:07:13 <LeeF> 15:07:27 <MattPerry> Topic: Blank Nodes in Delete 15:08:01 <MattPerry> Option 1: no blank nodes in delete template 15:08:16 <MattPerry> Option 2: blank nodes as wildcard in delete template 15:09:56 <MattPerry> LeeF: if prohibit blank nodes now can we change it later 15:10:16 <Zakim> +AxelPolleres 15:10:47 <bglimm> Zakim, unmute me 15:10:47 <Zakim> bglimm should no longer be muted 15:11:09 <AndyS> I don't understand why if we exclude bnode syntax now we are effectively deciding for the future. 15:11:22 <LeeF> AndyS, yes, that's what I was trying to say 15:11:28 <MattPerry> bglimm: can live with no blank nodes but would be better with them 15:11:31 <ivan> q+ 15:11:37 <LeeF> ack ivan 15:12:26 <SteveH> q+ 15:12:31 <bglimm> q+ to ask about deleting lists without allowing bnodes 15:12:33 <LeeF> ack SteveH 15:12:34 <MattPerry> ivan: there are non-entailment related use cases that need blank nodes 15:13:30 <LeeF> DELETE { ?x :hasList (1 2 3) } WHERE { ... ?x } 15:13:35 <AxelPolleres> (just to be able to follow... what is discussed - said by ivan - is that not allowing to "delete bnodes" would not allow to delete lists? 
15:14:24 <LeeF> ack bglimm 15:14:24 <Zakim> bglimm, you wanted to ask about deleting lists without allowing bnodes 15:14:45 <MattPerry> SteveH: rdf list blank node shortcut doesn't add new functionality just makes it easier 15:14:48 <ivan> q+ 15:14:50 <bglimm> ack bglimm 15:14:58 <AxelPolleres> what does this mean if the data graph has two lists (1 2 3) as value of :hasList ? (with different bnode ids?) 15:15:41 <AxelPolleres> q+ 15:16:08 <ivan> ack ivan 15:16:15 <AxelPolleres> q+ to ask whether the vars in the DELETE template don't need to be bou�nd in the WHERE 15:16:36 <LeeF> ack ivan 15:16:41 <SteveH> q+ 15:17:16 <AxelPolleres> ack axel 15:17:16 <Zakim> AxelPolleres, you wanted to ask whether the vars in the DELETE template don't need to be bou�nd in the WHERE 15:17:17 <LeeF> ack AxelPolleres 15:17:50 <LeeF> ack SteveH 15:17:59 <MattPerry> LeeF: long var-based list expression must be repeated in both Delete and Where 15:18:40 <MattPerry> SteveH: can you use property paths to get list variables? 15:18:48 <AxelPolleres> so, just to note, such a delete would indeed all the matching lists (answering my own question from further above) 15:19:37 <LeeF> straw poll: preference between (1) prohibiting blank nodes in DELETE templates and (2) blank nodes in DELETE templates act as wild cards 15:19:44 <ivan> q+ 15:19:54 <AndyS> Wheer are we on need "all triples must match" rule -- else chaos may result (half a list goes if wrong length) 15:19:55 <LeeF> ack ivan 15:20:12 <MattPerry> ivan: also same blank node cannot be in both delete and where 15:20:15 <kasei> q+ 15:20:18 <AndyS> Pref: 2 15:20:20 <bglimm> 2 15:20:23 <kasei> Zakim, unmute me 15:20:23 <Zakim> kasei should no longer be muted 15:20:23 <ivan> Pref: 2 15:20:25 <LeeF> ack kasei 15:20:32 <SteveH> pref: 1, but 2 probably ok 15:20:37 <MattPerry> Pref: 2 15:21:13 <ivan> q+ 15:21:17 <MattPerry> kasei: can rdf list be a special case? 
15:21:23 <AxelPolleres> slight pref 2, still don't feel sure about implications 15:21:55 <MattPerry> kasei: grammar can enforce this 15:22:11 <LeeF> ack ivan 15:22:13 <MattPerry> LeeF: specification still needs to give semantics for blank nodes 15:22:25 <kasei> Zakim, mute me 15:22:25 <Zakim> kasei should now be muted 15:22:38 <LeeF> My preference is for #1 15:22:50 <MattPerry> ivan: many cases for blank nodes beyond rdf lists 15:23:00 <AxelPolleres> q+ to note that conceptually, it seems strange, since it would be the only place where bnodes have a "universal variables" meaning, whereas anywhere alse they rather indicate "existential variables" 15:23:19 <AxelPolleres> ... but I have no better proposal 15:23:27 <LeeF> ack AxelPolleres 15:23:27 <Zakim> AxelPolleres, you wanted to note that conceptually, it seems strange, since it would be the only place where bnodes have a "universal variables" meaning, whereas anywhere alse they 15:23:27 <AndyS> but they are bnodes in the data. a ?var can be bound to a data bnode 15:23:31 <Zakim> ... 
rather indicate "existential variables" 15:24:00 <bglimm> I think in my examples you could always use the [] notation and as I understand it, that would be allowd even under 1 15:24:42 <bglimm> and Steve 15:25:06 <AndyS> opt 1 bans [] in delete template AIUI 15:25:07 <SteveH> I'm about -0.5 15:25:30 <LeeF> PROPOSED: Blank nodes in DELETE templates act as "wild cards", effectively as variables bound to all RDF terms; the same blank node cannot be used in the WHERE clause and the template, or in multiple BGPs 15:26:03 <kasei> +1 15:26:05 <ivan> second it 15:26:06 <bglimm> +1 15:26:07 <ivan> +1 15:26:16 <SteveH> abstain 15:26:20 <dcharbon2> abstain 15:26:42 <LeeF> RESOLVED: Blank nodes in DELETE templates act as "wild cards", effectively as variables bound to all RDF terms; the same blank node cannot be used in the WHERE clause and the template, or in multiple BGPs, SteveH, dcharbon2, LeeF abstaining 15:26:50 <AxelPolleres> +1 (lacking better solutions) 15:27:28 <LeeF> 15:27:28 <MattPerry> Topic: Data Sets in SPARQL Update 15:27:40 <SteveH> [[[ 15:27:41 <SteveH> something like: 15:27:43 <SteveH> DELETE { ?n rdf:first ?f ; rdf:rest ?n } 15:27:43 <SteveH> WHERE { <list> rdf:first*/rdf:rest* ?n . ?n rdf:first ?f ; rdf:next ?n } 15:27:44 <SteveH> ]]] 15:30:07 <MattPerry> LeeF: SPARQL query allows specification of an RDF Dataset, but SPARQL update does not allow query to select specific RDF Dataset 15:30:33 <SteveH> q+ 15:30:49 <AndyS> Is this different from using GRAPH in the pattern? 15:31:00 <SteveH> GRAPH ?g ... FILTER(?g = <a> || ...) 15:31:24 <MattPerry> LeeF: what is the scope of graphs that the WHERE is matched against? 15:31:35 <AndyS> If the default dataset is changed (to some union using FROM...), it goes make a difference. 15:32:19 <MattPerry> LeeF: No way to give default graph for update 15:32:28 <AndyS> q+ to ask how does this interact with WITH? 15:32:30 <LeeF> q? 
15:32:33 <LeeF> ack SteveH 15:33:02 <ivan> Lees' syntax example: INSERT INTO <g1> { template } FROM g2 FROM g3 FROM NAMED g4 FROM NAMED 15:33:02 <ivan> g5 WHERE { GRAPH ?g { ?s ?p ?o } } 15:33:10 <LeeF> SELECT ... FROM <g1> FROM <g2> { tp1 . tp2 } 15:33:51 <LeeF> SELECT ... { GRAPH ?g1 { tp1 } . GRAPH ?g2 { tp2 } } 15:34:16 <MattPerry> LeeF: can do this with GRAPH but WHERE gets very complicated 15:34:17 <AndyS> not quite - tp may span g1 and g2. 15:34:34 <LeeF> AndyS, one tp can span two graphs? 15:34:37 <LeeF> q? 15:34:39 <LeeF> ack AndyS 15:34:39 <Zakim> AndyS, you wanted to ask how does this interact with WITH? 15:34:57 <AxelPolleres> SELECT ... { GRAPH ?g1 { tp1 } . GRAPH ?g2 { tp2 } } FILTER (?g1 = <g1> or ?g1= <g2> and ?g2 = <g1> or ?g2= <g2> ) 15:34:59 <AxelPolleres> ? 15:35:36 <LeeF> AxelPolleres, right 15:35:37 <AndyS> tp1 can match g1 or g2, tp2 can match g1 or g2 => 4 cases 15:35:37 <LeeF> q? 15:35:50 <AndyS> (bnodes ... :-)) 15:36:12 <AxelPolleres> q+ 15:36:22 <AndyS> I'm happy to consider the design. Seems harmless (so far). 15:36:54 <AndyS> I can see that WITH != FROM (WITH is the updated graph, FROM is the queried graph) 15:37:08 <LeeF> ack AxelPolleres 15:38:12 <AndyS> Can't update a synthetic graph (e.g. merge of 2 graphs) 15:38:27 <LeeF> q? 15:38:39 <AndyS> LeeF, your suggestion is good. 15:39:01 <MattPerry> LeeF: interaction of WITH and FROM needs more investigation 15:39:16 <AxelPolleres> I see a problem with e.g. INSERT { ?X p ?Y } FROM <g1> FROM <g2> WHERE { ?X p1 o1 . ?Y p2 o2. } 15:40:31 <AxelPolleres> ... don't know what it means. A proposal should make this corner cases clear, before we can really figure out whether/where it is useful. 15:41:09 <LeeF> ACTION: Lee to work with Paul to flesh out design proposal for FROM/FROM NAMED (datasets) in SPARQL Update 15:41:09 <trackbot> Created ACTION-202 - Work with Paul to flesh out design proposal for FROM/FROM NAMED (datasets) in SPARQL Update [on Lee Feigenbaum - due 2010-03-16]. 
15:41:45 <LeeF> 15:42:00 <MattPerry> Topic: Update Fault Types 15:42:55 <MattPerry> AndyS: with HTTP there are few error codes 15:44:03 <MattPerry> dcharbon2: WSDL 2.0 has no limit on error codes 15:44:06 <dcharbon2> 15:45:41 <Zakim> -AlexPassant 15:45:43 <SteveH> q+ 15:46:09 <LeeF> ACK SteveH 15:49:07 <Zakim> +??P2 15:49:08 <AlexPassant> Zakim, ??P2 is me 15:49:08 <Zakim> +AlexPassant; got it 15:49:22 <MattPerry> LeeF: does it make sense to have a lot of faults if there are only a couple of error codes? 15:50:29 <MattPerry> dcharbon2: could start with 2 basic error codes and see what users think 15:50:30 <kasei> i'd be interested in seeing the use of many of the 2xx http codes be conformant (going beyond the idea of just faults). 202 in particular. 15:51:23 <MattPerry> LeeF: what code does drop non-existent graph map to? 15:51:26 <kasei> 400 bad request seems like a potential status code... 15:51:56 <AndyS> "202 Accepted" is not an error? 2xx are all positives? Text is "Successful 2xx" 15:52:06 <AndyS> ?? 
"304 Not Modified" 15:52:13 <SteveH> could be 15:52:20 <MattPerry> SteveH: there are many more-specific HTTP codes to use 15:53:10 <MattPerry> LeeF: lets hold off on error codes until update language is set 15:54:02 <MattPerry> AndyS: it is still useful to identify what the possible errors are even though they are not exposed in the SPARQL protocol 15:54:14 <SteveH> +1 to AndyS 15:54:21 <MattPerry> AndyS: informative text can do this 15:55:35 <MattPerry> LeeF: Next week HTTP protocol, Property Paths, what is and is not in update language, and F2F 15:55:49 <Zakim> -bglimm 15:55:50 <Zakim> -SteveH 15:55:50 <Zakim> -LeeF 15:55:50 <Zakim> -AxelPolleres 15:55:53 <Zakim> -kasei 15:55:54 <Zakim> -MattPerry 15:55:56 <Zakim> -dcharbon2 15:55:56 <ivan> zakim, drop me 15:55:56 <Zakim> Ivan is being disconnected 15:55:58 <Zakim> -Ivan 15:55:58 <Zakim> -tommik 15:56:12 <SteveH> it's spooky that zakim knows more of my phone umber than I do :) 15:56:20 <AndyS> zakim, drop me 15:56:20 <Zakim> AndyS is being disconnected 15:56:22 <Zakim> -AndyS 15:58:52 <kasei> can anyone help me parse a sentence from the RDF/XML spec? 15:59:13 <LeeF> kasei, can try :D 15:59:25 <kasei> i'm looking for a URI that identifies RDF/XML, the serialization format. 15:59:33 <kasei> section 5.1 says, "The RDF Vocabulary is identified by this namespace name" 15:59:42 <kasei> I'm trying to sort out if "RDF Vocabulary" is what I'm after. 15:59:51 <kasei> I suspect not, but not totally sure. 15:59:56 <AndyS> no, it's not. 16:00:10 <AndyS> RDF vocab is "rdf:type" etc. 16:00:23 <kasei> bah. annoyed that we've been able to get this far without URIs for some very basic stuff! 16:00:32 <AndyS> Not sure there is a URI for the synatx - is there a URI for every MIME type? 16:01:09 <AndyS> There is naming competition between MIME types and formats ... so not sure if anyone has been bold enough to go there. 
16:01:10 <kasei> well, much like the rdf/xml spec, there's probably an offical information resource for mime types, but that's different. 16:01:27 <AndyS> Fairly certain W3C hasn't - would be a nice suprise if they had. 16:01:30 <AndyS> ivan? 16:02:13 <kasei> wonder if this is the sort of thing that might be added to rdf/xml based on the upcoming workshop... 16:02:43 <AndyS> kasei, Graham Klyne would be a good person to ask - he tracks IETF and W3C. 16:02:50 <kasei> if we go ahead with this saddle:resultFormat stuff in the SDs, I'd like to be able to point to URIs for the standard formats. 16:03:06 <SteveH> kasei, didn't saddle: use mime types? 16:03:31 <kasei> it used both mime types and a link to the spec's webpage. 16:03:37 <SteveH> right 16:03:47 <SteveH> given that conneg works on mime types, that's not a bad idea 16:03:53 <kasei> which is fine if you don't have a proper URI, I suppose, but I think we can/should do better. 16:05:19 <kasei> also, with the confusion around some mime types for rdf, I'd rather not lean too heavily on them as identifiers. 16:05:47 <SteveH> true 16:06:23 <Zakim> -AlexPassant 16:06:23 <kasei> especially n-triples. text/plain isn't exactly the most useful thing for conneg. 16:06:24 <Zakim> SW_(SPARQL)10:00AM has ended 16:06:25 <Zakim> Attendees were AndyS, MattPerry, kasei, Ivan, bglimm, +035840564aaaa, tommik, LeeF, AlexPassant, dcharbon2, +0207735aabb, SteveH, AxelPolleres 16:09:21 <AndyS> agree re text/plain. They didn't plan to let it out of the WG as a format - but it escaped. Feral format. 16:09:40 <kasei> heh 16:47:00 <SteveH> SteveH has joined #sparql 17:38:03 <AndyS> AndyS has joined #sparql # SPECIAL MARKER FOR CHATSYNC. DO NOT EDIT THIS LINE OR BELOW. SRCLINESUSED=00000254
http://www.w3.org/2009/sparql/wiki/index.php?title=Chatlog_2010-03-09&diff=prev&oldid=1948
Hey Joachim,

I think you will need to put the NO_SWORD_NAMESPACE define before you include any sword header files. See sword/include/defs.h.

Martin

On Monday, 28 October 2002 at 20:44, Joachim Ansorg wrote:
> Hi all!
>
> I'm preparing the new BibleTime 1.2.2 bugfix release, but I get lots of
> undefined references if I compile BibleTime with NO_SWORD_NAMESPACE
> defined! I wanted to make the release without changing all the Sword class
> names in the BT sources, so I defined the define above to use no sword
> namespace. How can I avoid this?
>
> .....
> I think the Sword sources are compiled using the sword namespace but the
> headers define to sword namespace if NO_SWORD_NAMESPACE is defined.
> Maybe we should put a "using namespace sword" into each header file if
> NO_SWORD_NAMESPACE was defined?
>
> But is there any way to build BT with an official release (1.5.4a or later
> because I need the bugfixes of 1.5.4a).
> .....
>
> Thank you for any help!
http://www.crosswire.org/pipermail/sword-devel/2002-October/016514.html
A MultiIndex (also known as a hierarchical index) DataFrame allows you to have multiple columns acting as a row identifier and multiple rows acting as a header identifier. With MultiIndex, you can do some sophisticated data analysis, especially for working with higher dimensional data. Accessing data is the first step when working on a MultiIndex DataFrame. In this article, you'll learn how to access data in a MultiIndex DataFrame. This article is structured as follows:

- slice(None)

When doing data analysis, it is important to ensure correct data types. Otherwise, you may get unexpected results or errors. Datetime is a common data type in data science projects and the data is often saved as numbers or strings. During data analysis, you will likely need to explicitly convert them to a datetime type. This article will discuss how to convert numbers and strings to a datetime type. More specifically, you will learn how to use the Pandas built-in methods to_datetime() and astype() to deal with the following common problems:

When doing data analysis, it is important to ensure correct data types. Otherwise, you may get unexpected results or errors. In the case of Pandas, it will correctly infer data types in many cases and you can move on with your analysis without any further thought on the topic. Despite how well pandas works, at some point in your data analysis process you will likely need to explicitly convert data from one type to another. This article will discuss how to change data to a numeric type. …

In data preprocessing and analysis, you will often need to figure out whether you have duplicate data and how to deal with them. In this article, you'll learn the two methods, duplicated() and drop_duplicates(), for finding and removing duplicate rows, as well as how to modify their behavior to suit your specific needs. This article is structured as follows:

- loc
- keep

For demonstration, we will use a subset from the Titanic dataset available on Kaggle.
import pandas as pd…

When it comes to selecting data on a DataFrame, Pandas loc and iloc are two top favorites. They are quick, fast, easy to read, and sometimes interchangeable. In this article, we'll explore the differences between loc and iloc, take a look at their similarities, and check how to perform data selection with them. We will go over the following topics:

- loc and iloc
- loc and iloc are interchangeable when labels are 0-based integers

In exploratory data analysis, we often would like to analyze data by some categories. In SQL, the GROUP BY statement groups rows that have the same category values into summary rows. In Pandas, SQL's GROUP BY operation is performed using the similarly named groupby() method. Pandas' groupby() allows us to split data into separate groups to perform computations for better analysis. In this article, you'll learn the "group by" process (split-apply-combine) and how to use Pandas' groupby() function to group data and perform operations. This article is structured as follows:

- groupby() and how to access groups information
- columns attribute
- rename() function
- columns.str.replace() method
- set_axis()

For demonstration, we will use a subset from the Titanic dataset available…

Reading data is the first step in any data science project. As a machine learning practitioner or a data scientist, you would have surely come across JSON (JavaScript Object Notation) data. JSON is a widely used format for storing and exchanging data. For example, NoSQL databases like MongoDB store data in JSON format, and REST APIs' responses are mostly available in JSON. Although this format works well for storing and exchanging data, it needs to be converted into a tabular form for further analysis. You are likely to deal with 2 types of JSON structure, a JSON object or…

Numerical data is common in data analysis. Often you have numerical data that is continuous, or very large scales, or is highly skewed.
Sometimes, it can be easier to bin values into discrete intervals. This is helpful to perform descriptive statistics when values are divided into meaningful categories. For example, we can divide the age into Toddler, Child, Adult, and Elder. Pandas' built-in cut() function is a great way to transform numerical data into categorical data. In this article, you'll learn how to use it to deal with the following common tasks.

DataFrame and Series are two core data structures in Pandas. DataFrame is 2-dimensional labeled data with rows and columns. It is like a spreadsheet or SQL table. Series is a 1-dimensional labeled array. It is sort of like a more powerful version of the Python list. Understanding Series is very important, not only because it is one of the core data structures, but also because it is the building block of a DataFrame. In this article, you'll learn the most commonly used data operations with Pandas Series, which should help you get started with Pandas. …

Machine Learning practitioner | Formerly health informatics at University of Oxford | Ph.D.
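As a concrete illustration of the binning idea from the cut() teaser above, here is a minimal sketch. The bin edges and sample ages are made-up example values, not taken from the articles themselves.

```python
import pandas as pd

# Ages as continuous numerical data (made-up sample values).
ages = pd.Series([2, 10, 35, 70], name="age")

# Bin into the meaningful categories mentioned above.
# Default right=True means bins are right-inclusive: (0, 4], (4, 17], ...
labels = ["Toddler", "Child", "Adult", "Elder"]
groups = pd.cut(ages, bins=[0, 4, 17, 64, 120], labels=labels)

print(groups.tolist())  # ['Toddler', 'Child', 'Adult', 'Elder']
```

The result is a categorical Series, so the usual descriptive-statistics tools (value_counts, groupby) work on the named categories instead of the raw numbers.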
https://bindichen.medium.com/
Yampa/reactimate

From HaskellWiki. Latest revision as of 16:58, 22 September 2012.

reactimate :: IO a                           -- init
           -> (Bool -> IO (DTime, Maybe a))  -- input/sense
           -> (Bool -> b -> IO Bool)         -- output/actuate
           -> SF a b                         -- process/signal function
           -> IO ()

The Bool parameters of sense and actuate are unused if you look up the definition of reactimate, so just ignore them (cf. the explanations below). reactimate is basically an input-process-output loop and forms the interface between (pure) Yampa signal functions and the (potentially impure) external world. More specifically, a Yampa signal function of type SF a b is an abstract data type that transforms a signal of type Time -> a into a signal of type Time -> b (note that one does not have direct access to signals in Yampa, but just to signal functions). The Time parameter here is assumed to model continuous time, but to evaluate a signal function (or a signal for that matter) it is necessary to sample the signals at discrete points in time. This is exactly what reactimate does (among other things).

1 Further explanations

- The init action is rather self-explanatory; it executes an initial IO action (e.g. print a welcome message), which then yields an initial sample of type a for the signal function that is passed to reactimate as the last argument.
- The sense argument is then evaluated at False and should return an IO action yielding a pair that contains the time passed since the last sample and a new sample of type a (wrapped in a Maybe) for the signal function. If the second component of sense's return value is Nothing then the previous sample is used again.
- actuate is evaluated at True and receives the signal function's output of type b, obtained by processing the input sample previously provided by sense. actuate's job now is to process the output (e.g. render a collection of objects contained in it) in an IO action that yields a result of type Bool. If this result is True the processing loop stops (i.e.
the IO action defined by reactimate returns ()).
- Finally, the last argument of reactimate is the signal function to be run (or "animated"). Keep in mind that the signal function may take pretty complex forms, like a parallel switch embedded in a loop.

2 Example

To illustrate this, here's a simple example of a Hello World program, but with some time dependence added. Its purpose is to print "Hello... wait for it..." to the console once, then wait for 2 seconds until it prints "World!", and then stop.

module Main where

import Data.IORef (newIORef, readIORef, writeIORef)
import Data.Time.Clock (getCurrentTime, diffUTCTime)
import FRP.Yampa

main :: IO ()
main = do
    t <- getCurrentTime
    timeRef <- newIORef t
    let init = putStrLn "Hello... wait for it..."
        sense _ = do
            t' <- getCurrentTime
            t  <- readIORef timeRef
            let dt = realToFrac (t' `diffUTCTime` t)  -- Time difference in seconds
            writeIORef timeRef t'
            return (dt, Nothing)  -- we could equally well return (dt, Just ())
        actuate _ x = if x then putStrLn "World!" >> return x
                           else return x
        sf = time >>> arr (>= 2)  -- True once 2 seconds of signal time have passed
    reactimate init sense actuate sf

Note that as soon as x in the definition of actuate becomes True (that is, after 2 seconds), actuate returns True, hence reactimate returns () and the program stops. If we change the definition of actuate to always return False, the line "World!" will be printed out indefinitely.
https://wiki.haskell.org/index.php?title=Yampa/reactimate&diff=53989&oldid=42198
Its design philosophy emphasizes code readability, and its syntax allows programmers to express concepts in fewer lines of code than would be possible in languages such as C.

Scripting language created by Guido van Rossum
Published in 1991 (23 years ago)

Most places these days we know about use it for web programming. Many more at Python's website. Python is great at getting an application created quickly and cleanly.

print "hello world"
hello world

1+1
2

1/2
0

Notice for later: Not true division

1//2
0

print "hello " * 4
hello hello hello hello

item = [1,2,3,4,5,6]
print item
print type(item)
[1, 2, 3, 4, 5, 6]
<type 'list'>

print item[3:4]
print item[:4]
print item[3:]
[4]
[1, 2, 3, 4]
[4, 5, 6]

myDict = {"key":"value", "name":"chris"}
print myDict
{'name': 'chris', 'key': 'value'}

print "My name is %s" % myDict['name']
My name is chris

item = "5"
if item == 5:
    print "It's 5"
elif item == "5":
    print "It's %s" % item
else:
    print "I don't know what it is."
It's 5

item = "5"
if item is not None:
    print "item isn't None"
else:
    print "item is None"
print "---\nSetting item to None\n---"
item = None
if item is None:
    print "item is None"
item isn't None
---
Setting item to None
---
item is None

my_list = [1, 2, 3]
my_strings = ["things", "stuff", "abc"]
for item in my_list:
    print "Item %d" % item
Item 1
Item 2
Item 3

for (index, string) in enumerate(my_strings):
    print "Index of %d and value of %s" % (index, string)
Index of 0 and value of things
Index of 1 and value of stuff
Index of 2 and value of abc

print "Type of my_list is %s and the first element is %s" % (type(my_list), type(my_list[0]))
Type of my_list is <type 'list'> and the first element is <type 'int'>

print dir(my_list)
['_']

class A_old():
    pass

class A_new(object):
    pass

print "Old type %s and New type %s" % (type(A_old), type(A_new))
Old type <type 'classobj'> and New type <type 'type'>

my_old_a = A_old()
my_new_a = A_new()
print "Old object %s and New object %s" % (type(my_old_a), type(my_new_a))
Old object <type 'instance'> and New object <class '__main__.A_new'>

item = [1, 2, 3]
item is item
True
item is item[:]
False
item == item[:]
True

Question and answer

import re

user = {}
user['name'] = raw_input("What is your name? ")
user['quest'] = raw_input("What is your quest? ")
user['will-get-shrubbery'] = raw_input("We want.... ONE SHRUBBERY. ")
user['favorite-color'] = raw_input('What is your favorite colour? ')

print '-'*60

accepted = re.compile(r"^((sure|SURE)|(y|Y).*)")
accepted_status = "will acquire"
if accepted.search(user['will-get-shrubbery']) is None:
    accepted_status = "will not acquire"
print "%s is on a quest %s and %s a shrubbery. His favorite color is %s" % (user['name'].title(), user['quest'].lower(), accepted_status, user['favorite-color'].upper())

What is your name? King Arthur
What is your quest? To find the Holy Grail
We want.... ONE SHRUBBERY. sure
What is your favorite colour? blue
------------------------------------------------------------
King Arthur is on a quest to find the holy grail and will acquire a shrubbery. His favorite color is BLUE
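The identity-vs-equality cells and the "not true division" note above are worth a quick recap. The notebook runs Python 2; the snippet below restates both points in Python 3, where print is a function and / is true division.

```python
a = [1, 2, 3]
b = a        # another name for the SAME list object
c = a[:]     # a shallow copy: equal contents, different object

print(a is b)   # True  -- identity: same object
print(a is c)   # False -- different objects
print(a == c)   # True  -- equality: same contents

# The "not true division" cell above is Python 2 behaviour; in Python 3:
print(1 / 2)    # 0.5 (true division)
print(1 // 2)   # 0   (floor division)
```

This is why `item is item[:]` printed False above: slicing builds a new list, so it compares equal but is not the same object.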
http://nbviewer.jupyter.org/github/0xaio/talks/blob/master/python/intro-to-python.ipynb
RE: Excel Calculation is faster when visible than when Visible=False

- From: JeffDotNet <JeffDotNet@xxxxxxxxxxxxxxxxx>
- Date: Thu, 27 Mar 2008 16:41:00 -0700

Jialiang, thanks for your suggestions. Unfortunately, I still haven't solved the problem. Setting the calculation mode back to automatic is the CPU hog. I set EnableCalculation = False on the data worksheet, but this didn't appear to have an impact on speed. The import of the text file to the data page is relatively quick (less than a second).

Public Sub importDataFromFile(ByVal FilePath As String)
    Me.CalculationMode = Excel.XlCalculation.xlCalculationManual
    'Import CSV file into data worksheet
    importDataFromFile(FilePath, Sheets.Data)
    Me.CalculationMode = Excel.XlCalculation.xlCalculationAutomatic
    '(objApp.Calculation = Excel.XlCalculation.xlCalculationAutomatic)
    m_Calculating = False
End Sub

The chart below shows execution times for setting the spreadsheet back to automatic calculation after the CSV data import.

Run column: shows the consecutive runs. Here I'm reusing the same Excel instance, but loading new CSV files. Note the execution time gets significantly faster each run, even when manually importing the CSV files into the spreadsheet.
Manual: the same data import (text import) and calculation done in Excel without interop.
Visible: execution time when running with the Excel instance visible.
NotVisible: execution time when running with Excel not visible.

-------------Execution time in Seconds--------------
Run #   Manual   Visible   NotVisible
1       50       59.7      301.4
2       9        47.3      67.8
3       ~0.5     0.37      0.3
4       ~0.5     0.28      0.3
----------------------------------------------------

I also noticed that running with the Excel instance visible but minimized yielded the same results as running with the instance set to not visible.

I tried calculating the pages individually but didn't see a real performance improvement. Nearly all of the time is spent calculating one of the analysis worksheets. I noticed this worksheet uses DMAX, DMIN, and DAVERAGE.
Could the searching in these functions cause strange execution times? (It gets faster each time when using the spreadsheet manually.) Is there any clever way to get this waiting out of the way in advance? My users will rarely be running more than one analysis a day, so they will always experience a long wait. I really don't want to have to run with Excel visible. I think I should be able to get the cell calculation to execute invisibly at least as fast as when the instance is executing visibly.

Thanks,
Jeff

"Jialiang Ge [MSFT]" wrote:

Hello Jeff,

There seems to be something wrong with newsgroup post synchronization in this thread. I am sorry for it. Below is my initial response posted on March 25.

Hello Jeff,

From your post, my understanding of this issue is: you wonder why the ImportDataFromFile process is extremely slow only when Excel.Application.Visible = false, and how to resolve it. If I'm off base, please feel free to let me know.

First off, I suggest we identify which part of the code in ImportDataFromFile slows down the whole process when Application.Visible = false. Below are two possible approaches that can help us identify the location:

Approach 1. Debug the code lines. Step over each line of code in ImportDataFromFile, and see which line hangs for an extremely long time. If you are using Visual Studio, we can step over the code lines by pressing F10.

Approach 2. Use a stopwatch class to calculate each line's execution time. An easy-to-implement stopwatch is like:

Dim start As DateTime = DateTime.Now
'execute our code
Dim [end] As DateTime = DateTime.Now
Dim span As TimeSpan = [end] - start

I think knowing which part of the code slows down the process may help us determine the underlying reason for the performance issue.

In addition, Jeff, I suggest you call Worksheet.EnableCalculation = False. This will disable the recalculation of the sheet, and may accelerate the import process.
Does your target worksheet contain a lot of functions (UDFs) to be calculated? If Visible is set to true, is there any dialog popped up on your side when the ImportDataFromFile is processed?

Again, I am sorry for the inconveniences caused by the newsgroup system. I have reported the system problem to the system owner through internal channels. They will look into it and fix the problem.

=================================================
http://www.tech-archive.net/Archive/Excel/microsoft.public.excel.programming/2008-03/msg04219.html
ex_spec alternatives and similar packages

Based on the "Testing" category:

- hound (9.8 / 0.0): Elixir library for writing integration tests and browser automation.
- ex_machina (9.8 / 3.0): Flexible test factories for Elixir. Works out of the box with Ecto and Ecto associations.
- wallaby (9.7 / 7.0): Wallaby helps test your web applications by simulating user interactions concurrently and manages browsers.
- proper (9.6 / 6.8): PropEr (PROPerty-based testing tool for ERlang) is a QuickCheck-inspired open-source property-based testing tool for Erlang.
- meck (9.6 / 5.5): A mocking library for Erlang.
- mox (9.4 / 5.1): Mocks and explicit contracts for Elixir.
- espec (9.4 / 3.8): BDD test framework for Elixir inspired by RSpec.
- faker (9.4 / 7.2): Faker is a pure Elixir library for generating fake data.
- mix_test_watch (9.3 / 0.7): Automatically run your Elixir project's tests each time you save a file.
- bypass (9.3 / 6.5): Bypass provides a quick way to create a mock HTTP server with a custom plug.
- StreamData (9.2 / 3.8): Data generation and property-based testing for Elixir. 🔮
- ExVCR (9.2 / 5.5): HTTP request/response recording library for Elixir, inspired by VCR.
- mock (9.1 / 4.6): Mocking library for the Elixir language.
- excheck (8.4 / 0.0): Property-based testing library for Elixir (QuickCheck style).
- Quixir (8.0 / 0.0): Property-based testing for Elixir.
- white_bread (8.0 / 1.5): Story-based BDD in Elixir using the Gherkin syntax.
- amrita (7.9 / 0.0): A polite, well mannered and thoroughly upstanding testing framework for Elixir.
- ponos (7.7 / 0.0): Ponos is an Erlang application that exposes a flexible load generator API.
- power_assert (7.6 / 0.0): Power Assert in Elixir. Shows evaluation results of each expression.
- blacksmith (7.5 / 0.0): Data generation framework for Elixir.
- espec_phoenix (7.4 / 0.0): ESpec for the Phoenix web framework.
- shouldi (7.3 / 0.0): Elixir testing libraries with nested contexts, superior readability, and ease of use.
- FakerElixir (7.1 / 0.0): FakerElixir generates fake data for you.
- pavlov (7.0 / 0.0): BDD framework for your Elixir projects.
- chaperon (7.0 / 2.4): An HTTP service performance & load testing framework written in Elixir.
- katt (6.7 / 0.0): KATT (Klarna API Testing Tool) is an HTTP-based API testing tool for Erlang.
- ex_unit_notifier (6.6 / 0.0): Desktop notifications for ExUnit.
- Stubr (6.4 / 0.0): A stubbing framework for Elixir.
- FakeServer (6.2 / 0.9): FakeServer integrates with ExUnit to make external API testing simpler.
- blitzy (5.9 / 0.0): A simple HTTP load tester in Elixir.
- Mockery (5.9 / 0.4): Simple mocking library for asynchronous testing in Elixir.
- mecks_unit (5.1 / 3.4): A package to elegantly mock module functions within (asynchronous) ExUnit tests using meck.
- Walkman (4.9 / 0.1): Isolate tests from the real world, inspired by Ruby's VCR.
- factory_girl_elixir (4.7 / 0.0): Minimal implementation of Ruby's factory_girl in Elixir.
- test_selector (4.6 / 4.6): A set of test helpers that make sure you always select the right elements in your Phoenix app.
- double (4.5 / 0.0): Create stub dependencies for testing without overwriting global modules.
- definject (4.3 / 8.0): Unobtrusive dependency injector for Elixir.
- cobertura_cover (3.9 / 0.0): Writes a coverage.xml from `mix test --cover` compatible with Jenkins' Cobertura plugin.
- ex_parameterized (3.8 / 2.6): Simple macro for parameterized testing.
- mix_erlang_tasks (3.7 / 0.0): Common tasks for Erlang projects that use Mix.
- exkorpion (3.7 / 0.0): A BDD library for Elixir developers.
- mix_eunit (3.6 / 0.0): A Mix task to execute eunit tests.
- hypermock (3.4 / 0.0): HTTP request stubbing and expectation Elixir library.
- ex_unit_fixtures (3.4 / 0.0): A library for defining modular dependencies for ExUnit tests.
- ElixirMock (3.1 / 2.6): (alpha) Sanitary mock objects for Elixir, configurable per test and inspectable.
- efrisby (3.0 / 0.0): A REST API testing framework for Erlang.
- apocryphal (2.9 / 0.0): Swagger-based document-driven development for ExUnit.
- kovacs (2.1 / 0.0): A simple ExUnit test runner.
- test_that_json (2.1 / 0.0): JSON assertions and helpers for your Elixir testing needs.
- ExopData (2.1 / 0.4): A library that helps you write property-based tests by providing a convenient way to define complex custom data generators.

README

ExSpec

ExSpec is a simple wrapper around ExUnit that adds RSpec-style macros. Specifically, it adds context and it. While it takes inspiration from RSpec, ExSpec is significantly simpler.

The context macro has only two functions:

- Aid test organization
- Prepend to the message of any it defined within its do blocks

The it macro is identical to ExUnit.Case.test except that it is aware of the messages of its surrounding context blocks. It also works seamlessly with ExUnit's describe function.

Other than the functionality described above, ExSpec is just ExUnit. When using ExSpec, any options provided will be passed to ExUnit.Case (e.g. async: true).

A simple example is shown below. For more examples, see the tests.
Example

defmodule PersonTest do
  use ExSpec, async: true

  describe "name" do
    context "with first and last name" do
      it "joins the names with a space" do
        drew = %Person{first_name: "Drew", last_name: "Olson"}
        assert Person.name(drew) == "Drew Olson"
      end
    end

    context "with only a first name" do
      it "returns the first name" do
        drew = %Person{first_name: "Drew", last_name: nil}
        assert Person.name(drew) == "Drew"
      end
    end
  end
end

Installation

Add ex_spec to your mix.exs dependencies:

def deps do
  [{:ex_spec, "~> 2.0", only: :test}]
end
https://elixir.libhunt.com/ex_spec-alternatives
- FileStream—read, write, open, and close files
- MemoryStream—read and write managed memory
- NetworkStream—read and write between network connections (System.Net namespace)
- CryptoStream—read and write data through cryptographic transformations
- BufferedStream—adds buffering to another stream that does not inherently support buffering

While the streams are used to abstract the input and output from the device, the stream itself is not directly used to read and write data. Instead, a reader or writer object is used to interact with the stream and perform the physical read and write. Here is a list of classes used for reading and writing to streams:

- BinaryReader and BinaryWriter—read and write binary data to streams
- StreamReader and StreamWriter—read and write characters from streams
- StringReader and StringWriter—read and write characters from Strings
- TextReader and TextWriter—read and write Unicode text from streams

Reading and Writing Text

The following section will use StreamWriter, StreamReader, and FileStream to write text to a file and then read and display the entire contents of the file.

Sample Code to Write and Read a File

using System;
using System.IO;

namespace CodeGuru.FileOperations
{
    /// <remarks>
    /// Sample to demonstrate writing and reading a file.
    /// </remarks>
    class WriteReadFile
    {
        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        [STAThread]
        static void Main(string[] args)
        {
            FileStream fileStream = null;
            StreamReader reader = null;
            StreamWriter writer = null;
            try
            {
                // Create or open the file
                fileStream = new FileStream("c:\\mylog.txt",
                    FileMode.OpenOrCreate, FileAccess.Write);
                writer = new StreamWriter(fileStream);

                // Set the file pointer to the end of the file
                writer.BaseStream.Seek(0, SeekOrigin.End);

                // Write, force the write to the underlying file, and close
                writer.WriteLine(
                    System.DateTime.Now.ToString() + " - Hello World!");
                writer.Flush();
                writer.Close();

                // Read and display the contents of the file one
                // line at a time.
                String fileLine;
                reader = new StreamReader("c:\\mylog.txt");
                while ((fileLine = reader.ReadLine()) != null)
                {
                    Console.WriteLine(fileLine);
                }
            }
            finally
            {
                // Make sure we clean up after ourselves
                if (writer != null) writer.Close();
                if (reader != null) reader.Close();
            }
        }
    }
}

This article was originally published on September 9, 2003.
https://www.developer.com/net/cplus/article.php/3074621/accessing-files-and-directories.htm
Question: Ram has passed in certain subjects and failed in a few. Write a program to count the number of subjects he passed and the number of subjects he failed. Marks scored below 50 are considered a fail. If Ram passed all the subjects, print "Ram passed in all subjects", and if he failed all of them, print "Ram failed in all subjects". Assume the maximum size of the array is 20.

Sample Input 1:
Enter the no of subjects:
6
60 70 80 90 45 49
Sample Output 1:
Ram passed in 4 subjects and failed in 2 subjects

Sample Input 2:
Enter the no of subjects:
0
Sample Output 2:
Invalid input range

Sample Input 3:
Enter the no of subjects:
-2
Sample Output 3:
Invalid input range

Code: Count.java

import java.util.*;

public class Count {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        System.out.println("Enter the no of subjects:");
        int no_sub = sc.nextInt();
        if (no_sub > 0 && no_sub <= 20) {
            int marks[] = new int[no_sub];
            for (int i = 0; i < no_sub; i++) {
                marks[i] = sc.nextInt();
            }
            int pass = 0, fail = 0;
            for (int i = 0; i < no_sub; i++) {
                if (marks[i] < 50) {
                    fail++;
                } else {
                    pass++;
                }
            }
            if (fail == 0) {
                System.out.println("Ram passed in all subjects");
            } else if (pass == 0) {
                System.out.println("Ram failed in all subjects");
            } else {
                System.out.println("Ram passed in " + pass + " subjects and failed in " + fail + " subjects");
            }
        } else {
            System.out.println("Invalid input range");
        }
    }
}
https://quizforexam.com/java-pass-and-fail-count/
I tried building mobile-spec for the windows universal platform and I've run into a handful of problems with the wp8.1 project. Is anybody else seeing this behavior? - Some of the plugins are causing problems if checked out from the registry. For example, if I include the device plugin checked out at the latest version from the registry, deviceready does not fire - When all plugins are checked out at master, if the media plugin is added, I get this error: " File content does not conform to specified schema. The element 'Capabilities' in namespace '' has invalid child element 'Capability' in namespace ''." It is basically the same as this resolved issue <> with camera and geolocation in windows8; Capability elements must go before DeviceCapability elements in the manifest. The capabilties for camera and geolocation are sorted properly, but "<Capability Name='musicLibrary' />" needs to be moved up. - Again with all plugins on master, I see a lot of problems with mobilespec. The app crashes when running the auto test-framework tests or manual tests for battery-status, inappbrowser, and bridge. The contacts tests fail, as do the file-transfer tests related to images (video is working), many of the dialogs tests, etc etc. All of this is testing with the wp8.1 project. Anyone else seeing this or have I got something configured incorrectly?
http://mail-archives.apache.org/mod_mbox/cordova-dev/201409.mbox/%3CCAOU92vcgUkAuJw0MqTZRKvqaShKj5K8Attjk5k7KhoLaq74FPw@mail.gmail.com%3E
Reinhard Poetz wrote:
> Leszek Gawron wrote:
>> Daniel Fagerstrom wrote:
>>> Leszek Gawron skrev:
>> ...
>>>
>>> The .xconf files will be parametrized with the parts that users
>>> might be interested in changing, and a fixed .xconf will be included
>>> in the blocks jar.
>>
>> So how do I add my own cforms convertor? The only way to do it now is
>> xpatch.
>>
>> How do I parametrize the continuations manager? Its current
>> configuration is rather incoherent:
>>
>> <continuations-manager>
>>   <expirations-check type="periodic">
>>     <offset>180000</offset>
>>     <period>180000</period>
>>   </expirations-check>
>> </continuations-manager>
>>
>> (BTW type="periodic" is not even parsed by the component :))
>>
>> We'll probably need to redesign the way some parts of our systems are
>> configured. I'd love to use Carsten's idea of property files to
>> parametrize components. After all, almost every XML configuration can
>> be expressed with a set of properties.
>
> Blocks can be parameterized. I'm not sure how this will work together
> with Carsten's idea of property files.

Right now they don't. But if each block has a unique namespace, we probably could find some way to let the property org.mydomain.myblock.foo set the block parameter foo.

/Daniel
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200601.mbox/%3C43C385B8.5090507@nada.kth.se%3E
Created on 2008-01-27 22:44 by alexandre.vassalotti, last changed 2008-05-03 19:53 by lemburg. This issue is now closed.

Your description of the patch is a bit misleading. As far as I can tell, only the first chunk (Python/import.c changes) addresses a potential buffer overflow. For example, the last chunk (Modules/posixmodule.c changes) simply eliminates an unused variable. While a worthwhile change, it should not be bundled with what is potentially a security patch.

I have a few suggestions:

1. It will really help if you produce a test case that crashes the interpreter. I am sure that will get noticed.

2. If any of the buffer overflows apply to the current production versions (2.4 or 2.5) or even the alpha release (2.6a1), it would make sense to backport the fix to the trunk. Once again, security issues in the trunk will get noticed much faster than in the py3k branch.

I tried to produce a buffer overflow in get_parent (import.c), but an attempt to import a module with non-ASCII characters is aborted in getargs.c before get_parent is reached:

>>> __import__("\u0080xyz")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: __import__() argument 1 must be string without null bytes, not str

This looks like a bug. At the very least, the error message is misleading because there are no null bytes in the "\u0080xyz" string. The offending code is

if ((Py_ssize_t)strlen(*p) != PyUnicode_GetSize(arg))
    return converterr("string without null bytes", arg, msgbuf, bufsize);

at getargs.c:826. However, given the preceding "XXX WAAAAH!" comment, this is probably a sign of a not-yet-implemented feature rather than a bug.

Here are my comments on the other parts of the patch:

Python/structmember.c

The existing code is safe, but would silently produce a wrong result if a T_CHAR attribute is assigned a non-ASCII character. With the patch, this situation will be detected and an exception raised. I am not sure that would be the desired behavior for py3k.
I could not find any examples of using a T_CHAR member in the stdlib, but an alternative solution would be to change the T_CHAR code to mean a PY_UNICODE_TYPE member instead of a char member.

Objects/typeobject.c

"%s" -> ".400s" is an obviously good change. The existing __doc__ processing code is correct. The proposed code may be marginally faster, but will allow docstrings with embedded null characters, which may or may not be desirable (and may break other code that uses tp_doc). Finally, PyUnicode_AsStringAndSize always returns null-terminated strings, so the memcpy logic does not need to be altered.

Objects/structseq.c

The change from macros to enums is purely stylistic, and Python C style seems to favor macros. I don't think a repr of a Python object can contain embedded null characters, but even if that were the case, the patched code would not support it because the resulting buffer is returned with PyUnicode_FromString(buf).

Modules/datetimemodule.c

The existing code compensates for an error in the initial estimate of totalnew when it checks for overflow, but the proposed change will make the code more efficient.

Modules/zipimport.c

Since the 's' format unit in PyArg_ParseTuple does not properly support Unicode yet, it is hard to tell if the current code is wrong, but Unicode paths cannot have embedded null characters, so use of 's#' is not necessary.

Modules/timemodule.c

Supporting format strings with null characters is probably a good idea, but that would be an RFE rather than a bug fix.

Modules/parsermodule.c

Looks like there is a bug there.

Thanks for the review!

> Your description of the patch is a bit misleading. As far as I can
> tell only the first chunk (Python/import.c changes) addresses a
> potential buffer overflow.

Yes, you are right. It seems only the bug in import.c could easily be exploited.

> 1. It will really help if you produce a test case that crashes the
> interpreter. I am sure that will get noticed.

% cat pkg/__init__.py
__package__ = "\U000c9c9c9" * 900
from . import f
% ./python
Python 3.0a3+ (py3k:61164, Mar 1 2008, 19:55:42)
>>> import pkg
*** stack smashing detected ***: ./python terminated
[1] 9503 abort (core dumped) ./python

> 2. If any of buffer overflows apply to the current production
> versions (2.4 or 2.5) or even the alpha release (2.6a1), it would
> make sense to backport it to the trunk.

I don't think the trunk is affected in any way by the issues mentioned here.

> The existing __doc__ processing code is correct. Proposed code may be
> marginally faster, but will allow docstrings with embedded null
> characters, which may or may not be desirable (and may break other code
> that uses tp_doc)

Good call! I will check whether null characters may pose a problem for tp_doc and update the patch accordingly.

> I don't think a repr of a python object can contain embedded null
> characters, but even if that were the case, the patched code would not
> support it because the resulting buffer is returned with
> PyUnicode_FromString(buf).

Oh, that is true. I will remove that part from the patch, then.

> Modules/datetimemodule.c
>
> Existing code compensates for an error in initial estimate of totalnew
> when it checks for overflow, but the proposed change will make code more
> efficient.

Right again.

> Modules/zipimport.c [...]
> Modules/timemodule.c [...]
> Modules/parsermodule.c [...]

I need to check the code for these three modules again before commenting. I will clean up the patch with your recommendations and post it again. Thanks for taking the time to review my patch. It's greatly appreciated.

Any progress?

I revised the patch with respect to Alexander's comments. In summary, here is what I changed from the previous patch:

- Removed the unnecessary "fixes" to Objects/structseq.c and Modules/timemodule.c
- Updated Objects/typeobject.c to forbid null bytes in __doc__, since they cannot be handled in the `tp_doc` member.
- Removed an erroneous pointer dereference in Modules/zipimport.c
- Changed `len+1` to `len` in the memcpy() call in the Modules/parsermodule.c fixes

So, any comment on the latest patch? If everything is all right, I would like to commit the patch to py3k.

The patch looks good. Just a question: I thought the strings returned by PyUnicode_AsStringAndSize are 0-terminated, while your patch at several places attempts to explicitly 0-terminate a copy of such a string. Are you sure this is necessary?

@@ -2195,7 +2200,7 @@
        }
        return Py_None;
    }
-   len = lastdot - start;
+   len = (size_t)(lastdot - start);
    if (len >= MAXPATHLEN) {
        PyErr_SetString(PyExc_ValueError,
                        "Module name too long");

The above cast needs to be (Py_ssize_t). size_t is an unsigned length type.

BTW: The API PyUnicode_AsString() is pretty useless by itself - there's no way to access the size information of the returned string without again going to the Unicode object. I'd suggest removing the API altogether, not only deprecating it. Furthermore, the API PyUnicode_AsStringAndSize() does not follow the API signature of PyString_AsStringAndSize() in that it passes back the pointer to the string as an output parameter. That should be changed as well. Note that PyString_AsStringAndSize() already does this for both 8-bit strings and Unicode, so the special Unicode API is not really needed at all, or you may want to rename PyString_AsStringAndSize() to PyUnicode_AsStringAndSize(). Finally, since there are many cases where the string buffer contents are copied to a new buffer, it's probably worthwhile to add a new API which does the copying straight away and also deals with the overflow cases in a central place. I'd suggest PyUnicode_AsChar() (with an API like PyUnicode_AsWideChar()).

Alexander Belopolsky wrote:
> The patch looks good. Just a question: I thought the strings returned
> by PyUnicode_AsStringAndSize are 0-terminated, while your patch at
> several places attempts to explicitly 0-terminate a copy of such string.
> Are you sure this is necessary?

I wasn't sure if the strings returned by PyUnicode_AsStringAndSize were 0-terminated, so I didn't take any chances and explicitly terminated them. But I just verified it myself, and they are indeed 0-terminated. So, I modified the patch accordingly.

Marc-Andre Lemburg wrote: [SNIP]
> The above cast needs to be (Py_ssize_t). size_t is an unsigned length type.

Actually, the cast is right (even though it is not strictly necessary). It's just the patch that is confusing. Here is the relevant code:

/* Normal module, so work out the package name if any */
char *start = PyUnicode_AsString(modname);
char *lastdot = strrchr(start, '.');
size_t len;
int error;
/* snip */
len = (size_t)(lastdot - start);
if (len >= MAXPATHLEN) {
    PyErr_SetString(PyExc_ValueError,
                    "Module name too long");
    return NULL;
}

I removed the cast from the patch (I don't know why I added it, anyway) to avoid further confusion.

Committed to r62667. Thank you all for your comments!

On 2008-05-03 20:25, Alexandre Vassalotti wrote:
> Alexandre Vassalotti <alexandre@peadrop.com> added the comment:
>
> Committed to r62667.
>
> Thank you all for your comments!
>
> ----------
> resolution: -> fixed
> status: open -> closed

What about my comments regarding the PyUnicode_AsString() API? Should I open a separate tracker item for this? I don't know who added those APIs, but they are neither in line with the rest of the Unicode API, nor are they really all that helpful. I guess they were just added out of a misunderstanding of the already existing code. I'd suggest removing PyUnicode_AsString() altogether (which your patch has already done in a couple of places).
http://bugs.python.org/issue1950
What's new? (Score:5, Insightful)

People have been saying this since FORTRAN meant you didn't need to know assembly language to make use of a computer.

Its been being "dumbed down" since the start (Score:2, Insightful)

I still can't see the masses suddenly deciding that they're going to program applications now. Hell, most of the people I know think conditional formatting in Excel is just too much effort. I can see this just being used by actual programmers for users, but I don't think it will usher in a swath of uber-uber-amateur programmers all of a sudden.

Re:I could care less, it isn't truly FREE (Score:3, Insightful)

Certainly not English (Score:1, Insightful)

Natural languages are full of ambiguities, so these "natural language programming environments" always use a more formal syntax (and semantics) and only look superficially like a natural language. Until you can actually talk to a computer (and the computer can take all the context into account), programming in such a language irritates people to no end when they stumble upon one of the differences between the programming language and the natural language it imitates. Programming is the act of understanding and structuring a problem. The coding that follows is practically trivial compared to that first step. There's certainly a need for more programmers, because more and more is automated and someone has to write that software, but please don't create the impression that you can eliminate thinking from programming. Fixing bad code costs more than writing good code.

Submarine article (Score:5, Insightful)

Re:What's new? (Score:5, Insightful)

Quit complaining ups for the past decade. It's entertaining to see them screw up.
It's even better to see the outsourcing managers who have to come to us and admit that their "cost-saving" measures will now cost their companies 5x to 10x what it would have had they just done it properly and had real programmers write their code.:I could care less, it isn't truly FREE (Score:3, Insightful) ...which is largely due to external factors. (not saying .Net isn't nice, but ask yourself if it would do that well if it came from some small 3rd party dev):A Natural Progression Yet So Many Caveats (Score:4, Insightful) Isn't the best approach to develop fast, identify the bottlenecks and then rewrite those parts in a faster language, like Python C modules? Re:Interesting, yet exaggerated... :A Natural Progression Yet So Many Caveats this". It's more verbose and much less efficient, but it's both more human-readable and likely much more dummy-proof. If someone can more easily understand what they're doing, they're more likely to stop and realise it may not be what they actually intended to do. Re:A Natural Progression Yet So Many Caveats (Score:2, Insightful) I can say one thing. You've never done embedded programming. No chips that have to work in temperatures between -40C and +120C. No devices that work 120 meters under surface of water which is 1800 meters above sea level and good 4 hours of march from the last road where you can get by car. No chips for appliances, toys and small devices where $0.03 per unit savings by choosing a model with 64 bytes(!) of RAM instead of 128 bytes of RAM converts to a six-digit profit. No devices where failure to perform according to specs and fail gracefully will land you in prison for between 2 and 15 years. No devices that run a dozen sensors and send the results every hour over GPRS running off a single battery the size of a standard "A/R20" for a year. No devices where you measure time between sending out a beam of light and receiving it bounced off the obstacle, to determine distance with 5cm resolution. 
No devices where you have to do error correction, encoding and driving control and data lines at 100 megabit/second - or more precisely, at one bit per 10 nanoseconds plus/minus 1.5 nanosecond. This kind of applications won't have the hardware catching up to let you replace C, Asembly and VHDL with Ruby or Java for decades yet. Re:A Natural Progression Yet So Many Caveats around them. Text based languages had many reasons to evolve the way that they did. However, I see nothing invalid about producing code in a way and or language that defines this information in a different manner. Couldn't you just as easily replace the text editor with a flow chart where each operation or function was represented as an object in the chart? Not saying this is how I want to roll, but, I see no reason that it couldn't be made functionally equivalent. In truth, I am not sure that it will shorten the time that it takes to learn, as it will still take time to learn the skills of putting the pieces together. A calculator makes you an instant basic math wiz. Addition, subtraction, no need to learn times tables. However, its not going to obsolete learning the concepts. It can't make you an algebra god. Once you learn one or two languages, picking up another is usually easy (I never really gave lisp a fair shake, but it was the exception). The concepts are the same. I would imagine that a person who became proficient with something more hypercard like would have little trouble translating those concepts and learning some of the high level text languages. -Steve. Nope (Score:5, Insightful) The best approach is to develop fast, identify bottlenecks and then require the user to upgrade their computer.... their IT infrastructure... Worldwide network and datacenters. That's the economic history of programming.:Lowering the bar time is much more "expensive" then any lost sales due to requiring a slightly faster CPU and slightly more RAM. 
I can give you another example: An AI researcher that I know of rewrote newer versions of her algorithm in Java instead of C++. Even though garbage collection gives her some visible performance implications, she can program and test her algorithms much faster, which has tangible economic value for her company.

Re:Interesting, yet exaggerated... (Score:1, Insightful)

They really do push it. Comments, unnecessary code, helper functions, unused namespace inclusions. Here is the result of two minutes refactoring of the C# code example they provide:

class Program
{
    static void Main(string[] args)
    {
        string[] t_quote_row = new System.Net.WebClient().DownloadString(@"").Split('\n')[1].Split(',');
        System.Console.WriteLine(t_quote_row[t_quote_row.Length - 1]);
    }
}

From 42 lines down to 8. It's obviously a contrived example, but whatever; all I did was remove anything unnecessary. I could shave two more lines by moving the open brace to the line above on lines 1 and 3, and another line by not being pedantic about getting the last entry from the quote row (the original example code just indexes in with a magic number).

I can't imagine coding using that language. It seems so imprecise, forced to make assumptions about what you want. The line of revTalk they provide to do the above work is:

get the last item of line 2 of URL ""

How does it know to treat the string as CSV? I can think of a number of ways it could easily guess this, like checking the content encoding, but the point is that I can't immediately see that it is guessing. What if I want the last character in the string, or the string is tab (or some other character) delimited? Gives me the heebie jeebies just thinking about debugging it.

Re:Interesting, yet exaggerated... one thing but it's unbeatable at it :)

Re:A Natural Progression Yet So Many Caveats (Score:1, Insightful)

Depends on how you define "best".
The best practices approach is to define what your user needs -> define what the software needs to do to meet those needs -> design the software architecture to fulfill the software requirements -> implement -> review -> verify -> validate. Implementation is usually a breeze once you have already fully defined the software design. If the target audience of a programming language is people who are afraid of programming, then they will be ignoring the best practices steps anyways. Labview is a good example of this. Labview empowers people who honestly should not be empowered. Following best practices, good programs can be made using Labview (like all languages, it should be selected for certain purposes). However, when you empower people who don't know squat about good programming practices, you get garbage that works, but is impossible to maintain, is unstable, and gives the language a bad name. Re:A Natural Progression Yet So Many Caveats (Score:2, Insightful) You must be confusing Slashdot for rational thinking. You see, 'round these parts, every article that describes something new must be the only one of its kind to exist. ChromeOS? It's gotta be able to serve 500 trillion web hits, decode MRI scans, run MS Exchange for 10 people in an office, telephone you when the temperature outside goes above 50, and browse web sites from the couch. If it cannot do all that, and everything everyone in the world wants to do, it's useless. Nintendo Wii? It cannot play the simplest version of Pong 40,000 in 1080p across fourteen 10,000-inch televisions with fully realistic explosions. Therefore, it is crap. So, don't let it surprise you that this little "language" is crap since it cannot do some things. Underhanded Way to Increase Comments in Code (Score:3, Insightful)
http://developers.slashdot.org/story/09/11/26/2016255/Dumbing-Down-Programming/insightful-comments
CS1411 - Programming Principles I, Fall 2005 Lab assignment 1 Motivation This week you will get started writing and compiling simple C++ programs. The Lab Environment You will be working with the Eclipse environment on Macintosh computers. This environment will be used in all testing, so please become comfortable with it. (Instructions are below for downloading and installing it on your PC so you can have the same environment at home.) You should have 24 / 7 access to the Lab with your student id. If this is not working, please let your instructor know. Plan to test in the evening before relying on your card to work. You will have a personalized account. The username is the same as your eRaider username; your first password is your SSN. Please save your files only in your personal home directory. Everything saved anyplace else can and will be deleted at random! For more information, see the CS TTU Labs Webpage (). Finding your group For this lab, you will be working in pairs, assigned by the TA. Please find your group partner. There should be two of you in front of every computer. There should always be one person on the keyboard, and the other one helping. Log in / change your password Please check both of your accounts and verify that they are both working. Once logged in, please change your password according to the instructions on the labs webpage. If an account is not working, please let your TA know. Once you're done, continue working in one of your accounts. Starting, first project, hello world! Open Eclipse (switch to Finder, go to Applications, then the eclipse directory, double-click eclipse). When asked for your workspace, make sure it is inside your personal home directory! Once inside, create a new project (make sure it is a managed make C++ project). Use any name you like, but do not use spaces or dots. Add a new .cpp source file. Again, use any name, but no spaces.
Make sure the name ends in .cpp Write your first program, just like this: /* CS 1411-16x, lab 50x John Smith, Student ID: 123-45-6789 Lab 1, part 0 Description: sample hello world program. */ #include <iostream> using namespace std; int main() { cout << "\nHello, world!\n\n" << endl; return 0; } Save (it will compile automatically) and run your program. Check your output. If you do have compiler errors, try to fix them. Assignment 1 Now rewrite the program to output: ******************** * Welcome * * to * * Computer Science * ******************** Show your program output to your TA when you're done. Log out again. Now switch users. The person previously on the keyboard should now be the person helping, and the other way round. Log in under your name. Assignment 2 To make people feel welcome, a company wants a personalized welcome program. The program should ask for a name and welcome that person. Open up Eclipse, create a new project, create a new source file. This time, we will write a larger program using input and output. Write a program similar to this one: /* CS 1411-16x, lab 50x Jane White, Student ID: 987-65-4321 Lab 1, part 2 */ #include <iostream> #include <string> using namespace std; int main() { string name; cout << "What is your name?"; cin >> name; cout << "Hello, " << name << endl; return 0; } Compile and run the program. Test it. Since you are two students, the program should ask for both of your names. Rewrite the above program to ask for two names, and then print something like Welcome Max and Fernando! (of course, with the names you typed in). When you're done, have your TA check your work. If you still have time If you still have time, try the following things: - Try some of the escape sequences (the ones starting with a \) listed in the book: \a, \r, \n, \\, \", \t - When typing in names, type in a first and last name (e.g. "Max Berger") or a name with middle names (e.g. "George W Bush").
See what happens. Homework As homework, try to install the Eclipse environment on your home machine. Make sure you follow all the steps (Eclipse Setup instructions). If the instructions are unclear or you have any other questions, please email Max Berger (max@berger.name)
https://max.berger.name/teaching/f05/lab1.html
There are 3-4 sections in about:home that use a similar style: * Title * Subtitle (at times) * A container to show a list of items * A link to show more This can be factored out into reusable UI, thereby avoiding a lot of "findViewById()" calls in AboutHomeContent. Also, the UI for rows uses many views. They can be shrunk to use minimal views to display the content. Created attachment 605822 [details] [diff] [review] Patch This patch factors out each section as a reusable UI component. This uses the "gecko" namespace to show title, subtitle and more_text. Created attachment 605828 [details] [diff] [review] Patch 2: Remove callback This patch removes an unwanted callback that was used to load the URL. loadUrl() sends a message to Gecko, which eventually starts the spinner. Loading the spinner before Gecko sends it is not needed (or, that can be moved into loadUrl to be consistent with others). Comment on attachment 605822 [details] [diff] [review] Patch Removing a few findViewById calls might help performance a bit. I don't know if any other changes could be negatively affecting performance though. It doesn't seem like it should. The sections you are changing can have a lot of variations. They can be visible/invisible at various times in various situations. We need to test to make sure you have not broken anything in these situations. We should start thinking about how to test those situations. File a bug to add tests for about:home content. Comment on attachment 605828 [details] [diff] [review] Patch 2: Remove callback >diff --git a/mobile/android/base/GeckoApp.java b/mobile/android/base/GeckoApp.java >- mAboutHomeContent.setUriLoadCallback(new AboutHomeContent.UriLoadCallback() { >- public void callback(String url) { >- mBrowserToolbar.setProgressVisibility(true); >- loadUrl(url, AwesomeBar.Type.EDIT); >- } >- }); I don't know if you realized it or not, but you are not keeping the setProgressVisibility(true) call. This call jump-starts the throbber.
I am not a big fan of jump-starting the throbber because we can get into situations where the throbber never shuts off, but it has not hurt us yet. I do like the de-coupling that the callback gives us. Let's just keep it. I've removed jump-starting the progress bar because it starts when Gecko sends us a location-change message. I felt it was unnecessary and thought of having things work the usual way. - Patch (1/2) pushed.
https://bugzilla.mozilla.org/show_bug.cgi?id=735741
Larry Osterman's WebLogConfessions of an Old Fogey Evolution Platform Developer Build (Build: 5.6.50428.7875)2009-09-16T14:50:09ZWhat’s wrong with this code–a real world example<p>I was working on a new feature earlier today and I discovered that while the code worked just fine when run as a 32bit app, it failed miserably when run as a 64bit app.</p> <p>If I was writing code that used polymorphic types (like DWORD_PTR) or something that depended on platform specific differences, this wouldn’t be a surprise, but I wasn’t.</p> <p> </p> <p>Here’s the code in question:</p> <pre class="csharpcode"> DWORD cchString; DWORD cbValue; HRESULT hr = CorSigUncompressData(pbBlob, cbBlob, &cchString, &cbValue); <span class="kwrd">if</span> (SUCCEEDED(hr)) { cbBlob -= cbValue; pbBlob += cbValue; <span class="kwrd">if</span> (cbBlob >= cchString) { <span class="rem">// Convert to unicode</span> wchar_t rgchTypeName[c_cchTypeNameMax]; DWORD cchString = MultiByteToWideChar(CP_UTF8, 0, reinterpret_cast<LPCSTR>(pbBlob), static_cast<<span class="kwrd">int</span>>(cchString), <br /> rgchTypeName, ARRAYSIZE(rgchTypeName)); <span class="kwrd">if</span> (cchString != 0 && cchString < ARRAYSIZE(rgchTypeName)) { <span class="rem">// Ensure that the string is null terminated.</span> rgchTypeName[cchString] = L<span class="str">'\0'</span>; } cbBlob -= cchString; pbBlob += cchString; } } </pre> <p>This code parses a <a href="">ECMA 335</a> SerString. I’ve removed a bunch  of error checking and other code to make the code simpler (and the bug more obvious).</p> <p.</p> <p!</p> <p.</p> <p.</p><div style="clear:both;"></div><img src="" width="1" height="1">Larry Osterman [MSFT] vs. Unsecured<p>A high school classmate of mine recently posted on Facebook:</p> <blockquote> <p>Message just popped up up my screen from Microsoft, I guess. "This site has insecure content." Really? Is the content not feeling good about itself, or, perchance, did they mean "unsecured?" 
What the ever-lovin' ****?</p> </blockquote> <p>I was intrigued, because it was an ambiguous message and it brings up an interesting discussion.   Why the choice of the word “insecure” instead of “unsecured”?   </p> <p.  </p> <p).</p> <p> </p> <p>Well, actually I think that insecure is a better word choice than unsecured, for one reason:  If you have a page with mixed content on it, an attacker can use the unsecured elements to attack the secured elements.  <a href="">This</a> page from the IE blog (and <a href="">this</a> article from MSDN) discuss the risks associated with mixed content – the IE blog post points out that even wrapping the unsecured content in a frame won’t make the page secure.</p> <p> </p> <p>So given a choice between using “insecure” or “unsecured” in the message, I think I prefer “insecure” because it is a slightly stronger statement – “unsecured” implies that it’s a relatively benign configuration error.</p> <p> </p> <p”.</p> <p <a href="">SSL on their phishing site</a>.</p> <p.</p><div style="clear:both;"></div><img src="" width="1" height="1">Larry Osterman [MSFT] and Write-Only computer languages<p>A colleague and I were chatting the other day and we were talking about STL implementations (in the context of a broader discussion about template meta-programming and how difficult it is).</p> <p> </p> <p>During our discussion, I described the STL implementation as “read-only” and he instantly knew what I was talking about. As we dug in further, I realized that for many languages, you can characterize computer languages as read-only and write-only[1]</p> <p>Of course there’s a huge amount of variation here – it’s <a href="">always possible to write incomprehensible code</a>, but there are languages that just lend themselves to being read-only or write-only.</p> <p.</p> <p>A “write-only” language is a language where only the author of the code understands what it does. 
Languages can be write-only because of their obscure syntax, or they can be write-only because of their flexibility. The canonical example of the first type of write-only language is <a href="">Teco</a>.</p> <p> </p> <p.</p> <p> </p> <p> </p> <p> </p> <p> </p> <p>[1] I can’t take credit for the term “read-only”; I first heard the term from Miguel de Icaza at the //Build/ conference a couple of weeks ago.</p> <p>[2] “line noise” – that’s the random characters that are inserted into the character stream received by an acoustic modem – these beasts no longer exist in today’s broadband world, but back in the day, line noise was a real problem.</p><div style="clear:both;"></div><img src="" width="1" height="1">Larry Osterman [MSFT] What has Larry been doing for two years (and why has the blog been dark for so long)?<p.</p> <p.</p> <p <a href="" target="_blank">here</a>. </p> <p. </p> <p>Anyway, that's a very brief version of my job, and as I said, I hope to be able to write more often in the near future.</p> <p> </p><div style="clear:both;"></div><img src="" width="1" height="1">Larry Osterman [MSFT] Getting started with test driven development<p>I".</p> <p>This morning, I realized I ought to elaborate on my answer a bit.</p> <p "<a title="listen to" href="" target="_blank">capture monitor/listen to</a>" feature), I decided to apply TDD when developing the feature just to see how well it worked. The results far exceeded my expectations.</p> <p.</p> <p.</p> <p.</p> <p>I was really happy with how well the test development went, but the proof about the benefits of TDD really showed when it was deployed as a part of the product.
</p> <p.</p> <p.</p> <p>As I said at the beginning, I'm a huge fan of TDD - while there's some upfront cost associated with creating unit tests as you write the code, it absolutely pays off in the long run with a higher initial quality and a dramatically lower bug rate.</p><div style="clear:both;"></div><img src="" width="1" height="1">Larry Osterman [MSFT] Nobody ever reads the event logs…<p>In my last post, I mentioned that someone was complaining about the name of the bowser.sys component that I wrote 20 years ago. In my post, I mentioned that he included a screen shot of the event viewer.</p> <p>What was also interesting was the contents of the screen shot.</p> <blockquote> <p>“The browser driver has received too many illegal datagrams from the remote computer <redacted> to name <redacted> on transport NetBT_Tcpip_<excluded>. The data is the datagram. No more events will be generated until the reset frequency has expired.”</p> </blockquote> <p)).</p> <p>But you’ll note that the person reporting the problem only complained about the name of the source of the event log entry. He never bothered to look at the contents of this “error” event log entry to see if there was something that was worth reporting.</p> <p.</p> <p>There’s a pretty important lesson here: Nobody ever bothers reading event logs because there’s simply too much noise in the logs. So think really hard about when you want to write an event to the event log. Is the information in the log <em>really</em> worth generating?
Is there important information that a customer will want in those log entries?</p> <p>Unless you have a way of uploading troublesome logs to be analyzed later (and I know that several enterprise management solutions do have such mechanisms), it’s not clear that there’s any value to generating log entries.</p><div style="clear:both;"></div><img src="" width="1" height="1">Larry Osterman [MSFT] Reason number 9,999,999 why you don’t ever use humorous elements in a shipping product<p>I just saw an email go by on one of our self-hosting aliases:</p> <blockquote> <p><b>From:</b> <REDACTED> <br /><b>Sent:</b> Saturday, April 30, 2011 12:27 PM <br /><b>To:</b> <REDACTED> <br /><b>Subject:</b> Spelling Mistake for browser in event viewer</p> <p>Not sure which team to assign this to – please pick up this bug – ‘bowser’ for ‘browser’</p> </blockquote> <p>And he included a nice screen shot of the event viewer pointing to an event generated by bowser.sys.</p> <p>The good news is that for once I <em>didn’t </em>have to answer the question. Instead my co-workers answered for me:</p> <blockquote> <p>FYI: People have been filing bugs for this for years. Larry Osterman wrote a blog post about it. J</p> <p><a href=""></a></p> <p><Redacted></p> <p><strong>From:</strong> <Redacted> <br /><b>Sent:</b> Saturday, April 30, 2011 1:54 PM <br /><b>To:</b> <Redacted></p> <p><b>Subject:</b> RE: Spelling Mistake for browser in event viewer</p> <p>The name of the service is (intentionally) bowser and has been so for many releases.</p> </blockquote> <p>My response: </p> <blockquote> <p>“many releases”. That cracks me up. If I had known that I would <i>literally</i> spend the next 20 years paying for that one joke, I would have reconsidered it.</p> <p>And yes, bowser.sys has been in the product for 20 years now.</p> </blockquote> <p> </p> <p>So take this as an object lesson. Avoid humorous names in your code or you’ll be answering questions about them for the next two decades and beyond.
If I had named the driver “brwsrhlp.sys” (at that point setup limited us to 8.3 file names) instead of “bowser.sys” it would never have raised any questions. But I chose to go with a slightly cute name and…</p> <p> </p> <p>PS: After posting this, several people have pointed out that the resources on bowser.sys indicate that its name should be "browser.sys". And they're right. To my knowledge, nobody has noticed that in the past 20 years...</p><div style="clear:both;"></div><img src="" width="1" height="1">Larry Osterman [MSFT] How do people keep coming up with this stuff (mspaint as an audio track).<p>The imagination of people on the internet continues to astound me.</p> <p>Today's example: Someone took mspaint.exe and turned it into a PCM .WAV file and then played it.</p> <iframe title="YouTube video player" height="390" src="" frameborder="0" width="480" allowfullscreen="allowfullscreen"></iframe> <p>The truly terrifying thing is that it didn't sound that horribly bad. </p> <p><a href="">There’s also a version of the same soundtrack with annoying comments</a></p><div style="clear:both;"></div><img src="" width="1" height="1">Larry Osterman [MSFT] is a glutton for punishment<p>From <a href="">Long Zheng</a>, a video of someone who decided to upgrade every version of Windows from Windows 1.0 to Windows 7.</p> <p><iframe title="YouTube video player" height="390" src="" frameborder="0" width="480" allowfullscreen="allowfullscreen"></iframe></p> <p>The amazing thing is that it worked.</p><div style="clear:both;"></div><img src="" width="1" height="1">Larry Osterman [MSFT] The case of the inconsistent right shift results…<p>One of our testers just filed a bug against something I’m working on.  They reported that if they compiled code which calculated: 1130149156 >> –05701653 it generated different results on 32bit and 64bit operating systems.  
On 32bit machines it reported 0 but on 64bit machines, it reported 0x21a.</p> <p>I realized that I could produce a simple reproduction for the scenario to dig into it a bit deeper:</p> <blockquote> <pre class="csharpcode"><span class="kwrd">int</span> _tmain(<span class="kwrd">int</span> argc, _TCHAR* argv[]) { __int64 shift = 0x435cb524; __int64 amount = 0x55; __int64 result = shift >> amount; std::cout << shift << <span class="str">" >> "</span> << amount << <span class="str">" = "</span> << result << std::endl; <span class="kwrd">return</span> 0; }</pre> </blockquote> <p style="margin-right: 0px" dir="ltr">That’s pretty straightforward and it *does* reproduce the behavior.  On x86 it reports 0 and on x64 it reports 0x21a.  I can understand the x86 result (you’re shifting right more than the processor size, it shifts off the end and you get 0) but not the x64. What’s going on?</p> <p style="margin-right: 0px" dir="ltr">Well, for starters I asked our C language folks.  I know I’m shifting by more than the processor word size (85), but the results should be the same, right?</p> <p style="margin-right: 0px" dir="ltr">Well no.  The immediate answer I got was:</p> <blockquote> <p>From C++ 03, 5.8/1: The behavior is undefined if the right operand is negative, or greater than or equal to the length in bits of the promoted left operand.</p> </blockquote> <p>Ok.  It’s undefined behavior.  But that doesn’t really explain the difference.  When in doubt, let’s go to the assembly….</p> <blockquote> <pre class="csharpcode">000000013F5215D3 mov rax,qword ptr [amount] 000000013F5215D8 movzx ecx,al 000000013F5215DB mov rax,qword ptr [shift] 000000013F5215E0 <font style="background-color: #ffff00">sar rax,cl</font> 000000013F5215E3 mov qword ptr [result],rax 000000013F5215E8 mov rdx,qword ptr [shift] </pre> </blockquote> <p>The relevant instruction is highlighted.  
It’s doing a shift arithmetic right of “shift” by “amount”.</p> <p>What about the x86 version?</p> <blockquote> <pre class="csharpcode">00CC14CA mov ecx,dword ptr [amount] 00CC14CD mov eax,dword ptr [shift] 00CC14D0 mov edx,dword ptr [ebp-8] 00CC14D3 <font style="background-color: #ffff00">call @ILT+85(__allshr) (0CC105Ah)</font> 00CC14D8 mov dword ptr [result],eax 00CC14DB mov dword ptr [ebp-28h],edx </pre> </blockquote> <p>Now that’s interesting.  The x64 version is using a processor shift function but on 32bit machines, it’s using a C runtime library function (__allshr).  And the one that’s weird is the x64 version.</p> <p>While I don’t have an x64 processor manual, I *do* have a 286 processor manual from back in the day (I have all sorts of stuff in my office).  And in my 80286 manual, I found: </p> <blockquote> <p>“If a shift count greater than 31 is attempted, only the bottom five bits of the shift count are used. (the iAPX 86 uses all eight bits of the shift count.)”</p> </blockquote> <p>A co-worker gave me the current text:</p> <blockquote> <p>The destination operand can be a register or a memory location.). A special opcode encoding is provided for a count of 1.</p> </blockquote> <p>So the mystery is now solved.  The shift of 0x55 only considers the low 6 bits.  The low 6 bits of 0x55 is 0x15 or 21.  0x435cb524 >> 21 is 0x21a.</p> <p>One could argue that this is a bug in the __allshr function on x86 but you really can’t argue with “the behavior is undefined”.  Both scenarios are doing the “right thing”.  That’s the beauty of the “behavior is undefined” wording.  
The compiler would be perfectly within spec if it decided to reformat my hard drive when it encountered this (although I’m happy it doesn’t <img style="border-bottom-style: none; border-left-style: none; border-top-style: none; border-right-style: none" class="wlEmoticon wlEmoticon-smile" alt="Smile" src="" />).</p> <p>Now our feature crew just needs to figure out how best to resolve the bug.</p><div style="clear:both;"></div><img src="" width="1" height="1">Larry Osterman [MSFT] does Windows still place so much importance on filenames?<p>Earlier today, Adrian Kingsley-Hughes posted a <a href=";selector-blogs">rant</a> (his word, not mine) about the fact that Windows still relies on text filenames.</p> <blockquote> <p>The title says it all really. Why is it that Windows still place so much importance on filenames.</p> <p>Take the following example - sorting out digital snaps. These are usually automatically given daft filenames such as <em>IMG00032.JPG</em> at the time they are stored by the camera. In an ideal world you’d only ever have one <em>IMG00032.JPG</em> on your entire system, but the world is far from perfect. Your camera might decide to restart its numbering system, or you might have two cameras using the same naming format. What happens then?</p> </blockquote> <p>I guess I’m confused.  I could see a *very* strong argument against Windows dependency on file extensions, but I’m totally mystified about why having filenames is such a problem.</p> <p>At some level, Adrian’s absolutely right – it IS possible to have multiple files on the hard disk named “recipe.txt”.  And that’s bad.  But is it the fault of Windows for allowing multiple files to have colliding names? Or is it the fault of the user for choosing poor names?  Maybe it’s a bit of both.</p> <p>What would a better system look like?  Well Adrian gives an example of what he’s like to see: </p> <blockquote> <p>Why? 
Why is the filename the deciding factor?.</p> </blockquote> <p>But how would that system work?  What if we did just that.  Then you wouldn’t have two files named recipe.txt (which is good).</p> <p>Unfortunately that solution introduces a new problem: You still have two files.  One named “2B1015DB-30CA-409E-9B07-234A209622B6” and the other named “5F5431E8-FF7C-45D4-9A2B-B30A9D9A791B”. It’s certainly true that those two files are uniquely named and you can always tell them apart.  But you’ve also lost a critical piece of information: the fact that they both contain recipes.</p> <p>That’s the information that the filename conveys.  It’s human specific data that describes the contents of the file.  If we were to go with unique monikers, we’d lose that critical information.</p> <p>But I don’t actually think that the dependency on filenames is really what’s annoying him.  It’s just a symptom of a different problem.  </p> <p>Adrian’s rant is a perfect example of jumping to a solution without first understanding the problem.  And why it’s so hard for Windows UI designers to figure out how to solve customer problems – this example is a customer complaint that we remove filenames from Windows.  Obviously <em>something </em>happened to annoy Adrian that was related to filenames, but the question is: What?  He doesn’t describe the problem, but we can hazard a guess about what happened from his text:</p> <blockquote> <p>Here’s an example. I might have two files in separate folders called <em>recipe.txt</em>, but one is a recipe for a pumpkin pie, and the other for apple pie. OK, it was dumb of me to give the files the same name, but it’s in situations like this that the OS should be helping me, not hindering me and making me pay for my stupidity. After all, Windows knows, without asking me, that the files, even if they are the same size and created at exactly the same time, are different. Why does Windows need to ask me what to do? 
Sure, it doesn’t solve all problems, but it’s a far better solution than clinging to the notion of filenames as being the best metric by which to judge whether files are identical or not.</p> </blockquote> <p>The key information here is the question: “Why does Windows need to ask me what to do?”  My guess is that he had two “recipe.txt” files in different directories and copied a recipe.txt from one directory to the other.  When you do that, Windows presents you with the following dialog:</p> <p><a href=""><img style="border-bottom: ; border-left: ; margin: ; padding-left: ; padding-right: ; display: block; float: none; border-top: ; border-right: ; padding-top: " title="image" alt="Windows Copy Dialog" src="" width="232" height="240" /></a></p> <p>My suspicion is that he’s annoyed because Windows is forcing him to make a choice about what to do when there’s a conflict.  The problem is that there’s no one answer that works for all users and all scenarios.    Even in my day-to-day work I’ve had reason to chose all three options, depending on what’s going on.  From the rant, it appears that Adrian would like it to chose “Copy, but keep both files” by default.  But what happens if you really *do* want to replace the old recipe.txt with a new version?  Maybe you edited the file offline on your laptop and you’re bringing the new copy back to your desktop machine.  Or maybe you’re copying a bunch of files from one drive to another (I do this regularly when I sync my music collection from home and work).  In that case, you want to ignore the existing copy of the file (or maybe you want to copy the file over to ensure that the metadata is in sync).</p> <p>Windows can’t figure out what the right answer is here – so it prompts the user for advice about what to do.</p> <p>Btw, Adrian’s answer to his rhetorical question is “the reason is legacy”.  Actually that’s not quite it.  
The reason is that filenames provide valuable information for the user that would be lost if we went away from them.</p> <p>Next time I want to spend a bit of time brainstorming about ways to solve his problem (assuming that the problem I identified is the real problem – it might not be).  </p> <p> </p> <p> </p> <p>PS: I’m also not sure why he picked on Windows here.  Every operating system I know of has similar dependencies on filenames.  I think that’s another indication that he’s jumping on a solution without first describing the problem.</p><div style="clear:both;"></div><img src="" width="1" height="1">Larry Osterman [MSFT] Attacking Windows with Phones… I don’t get it.<p>Over the weekend, Engadget and <a href="">CNet</a> <a href="">ran a story</a> discussing what was described as a new and novel attack using Android smartphones to attack PCs.  Apparently someone took an Android smartphone and modified the phone to emulate a USB keyboard.</p> <p.</p> <p> </p> <p.</p> <p>If the novelty is that it’s a keyboard that’s being driven by software on the phone, a quick search for “<a href="">programmable keyboard macro</a>” shows dozens of keyboards which can be programmed to insert arbitrary key sequences.  So even that’s not particularly novel.</p> <p> </p> <p>I guess the attack could be used to raise awareness of plugging in devices, but that’s not a unique threat.  In fact the 1394 “FireWire” bus is well known for having <a href="">significant</a> <a href="">security issues</a> (1394 devices are allowed full DMA access to the host computer).  </p> <p>Ultimately this all goes back to <a href="">Immutable Law #3</a>.  If you let the bad guys tamper with your machine, they can 0wn your machine.  
That includes letting the bad guys tamper with the devices which you then plug into your machine.</p> <p>Sometimes the issues which tickle the fancy of the press mystify me.</p><div style="clear:both;"></div><img src="" width="1" height="1">Larry Osterman [MSFT]’s a bad idea to have a TEMP environment variable longer than about 130 characters<p>I've been working with the Win32 API for almost 20 years - literally since the very first Win32 APIs were written. Even after all that time, I'm occasionally surprised by the API behavior.</p> <p>Earlier today I was investigating a build break that took out one of our partner build labs. Eventually I root caused it to an issue with (of all things) the GetTempName API.</p> <p>Consider the following code (yeah, I don’t check for errors, <slaps wrist />):</p> <pre class="csharpcode">#include <span class="str">"stdafx.h"</span> #include <<span class="kwrd">string</span>> #include <iostream> #include <Windows.h> <span class="kwrd">using</span> <span class="kwrd">namespace</span> std; <span class="kwrd">const</span> wchar_t longEnvironmentName[] = <br /> L<span class="str">"c:\\users\\larry\\verylongdirectory\\withaverylongsubdirectory"</span> L<span class="str">"\\andanotherlongsubdirectory\\thatisstilldeeper\\withstilldeeper"</span> L<span class="str">"\\andlonger\\untilyoustarttorunoutofpatience\\butstillneedtobelonger"</span> L<span class="str">"\\untilitfinallygetslongerthanabout130characters"</span>; <span class="kwrd">int</span> _tmain(<span class="kwrd">int</span> argc, _TCHAR* argv[]) { wchar_t environmentBuffer[ MAX_PATH ]; wchar_t tempPath[ MAX_PATH ]; SetEnvironmentVariable(L<span class="str">"TEMP"</span>, longEnvironmentName); SetEnvironmentVariable(L<span class="str">"TMP"</span>, longEnvironmentName); GetEnvironmentVariable(L<span class="str">"TEMP"</span>, environmentBuffer, _countof(environmentBuffer)); wcout << L<span class="str">"Temp environment variable is: "</span> << environmentBuffer << <span class="str">" 
length: "</span> << wcslen(environmentBuffer) << endl; GetTempPath(_countof(tempPath), tempPath); wcout << L<span class="str">"Temp path: "</span> << tempPath<< <span class="str">" length: "</span> << wcslen(tempPath) << endl; <span class="kwrd">return</span> 0; }</pre> <p>When I ran this program, I got the following output:</p> <p><code>Temp environment variable is: c:\users\larry\verylongdirectory\withaverylongsubdirectory\andanotherlongsubdirectory\thatisstilldeeper\withstilldeeper\andlonger\ <br />untilyoustarttorunoutofpatience\butstillneedtobelonger\untilitfinallygetslongerthanabout130characters length: 231 <br />Temp path: C:\Users\larry\ length: 15 <br /></code></p> <p> </p> <p>So what’s going on? Why did GetTempPath return a pointer to my profile directory and not the (admittedly long) TEMP environment variable?</p> <p>There’s a bunch of stuff here. First off, let’s consider the documentation for <a href="">GetTempPath</a>:</p> <blockquote> <p>The <strong>GetTempPath</strong> function checks for the existence of environment variables in the following order and uses the first path found:</p> <ol> <li>The path specified by the TMP environment variable. </li> <li>The path specified by the TEMP environment variable. </li> <li>The path specified by the USERPROFILE environment variable. </li> <li>The Windows directory. </li> </ol></blockquote> <p.</p> <p <a href="">UNICODE_STRING</a>, we find:</p> <blockquote><dl><dt><strong>MaximumLength</strong> </dt><dd> <p>Specifies the total size, in bytes, of memory allocated for <strong>Buffer</strong>. Up to <strong>MaximumLength</strong> bytes may be written into the buffer without trampling memory.</p> </dd></dl></blockquote> <p>So the function expects at most 261 <em>bytes,</em> or about 130 <em>characters. 
</em>I often see behaviors like this in the "A" version of system APIs, but in this case both the "A" and "W" version of the API had the same unexpected behavior.</p> <p>The moral of the story: If you set your TEMP environment variable to something longer than 130 characters or so, GetTempPath will return your USERPROFILE. Which means that you may unexpectedly find temporary files scribbled all over your profile directory.</p> <p> </p> <p>The fix was to replace the calls to GetTempPath with direct calls to GetEnvironmentVariable - it doesn't have the same restriction.</p> <p> </p> <p> </p> <p>[1] Note that the 4th step is the Windows directory. You can tell that this API has been around for a while because apparently the API designers thought it was a good idea to put temporary files in the windows directory.</p> <p> </p> <p>EDIT: Significantly revised to improve readability - I'm rusty at this.</p><div style="clear:both;"></div><img src="" width="1" height="1">Larry Osterman [MSFT] does “size_is” mean in an IDL file?<p>My boss (who has spent a really long time working on RPC) and I got into a discussion the other day about the “size_is” IDL attribute (yeah, that’s what Microsoft developers chat about when they’re bored).</p> <p>For context, there are two related attributes which are applied to an array in IDL files. <a href="">size_is</a>(xxx) and <a href="">length_is</a>(xxx). 
They both relate to the amount of memory which is marshaled in a COM or RPC interface, but we were wondering the exact semantics of the parameter.</p> <p>The documentation for “size_is” says:</p> <blockquote> <p>Use the <strong>[size_is]</strong> attribute to specify the size of memory allocated for sized pointers, sized pointers to sized pointers, and single- or multidimensional arrays.</p> </blockquote> <p>The documentation for “length_is” says:</p> <blockquote> <p>The <strong>[</strong><strong>length_is</strong><strong>]</strong> attribute specifies the number of array elements to be transmitted. You must specify a non-negative value.</p> </blockquote> <p dir="ltr" style="margin-right: 0px">So the length_is attribute clearly refers to the number of elements in the array to be transmitted. But what are the units for the size_is attribute? The MSDN documentation doesn’t say – all you see is that it “specif[ies] the size of memory allocated for … single- or multidimentional arrays”. Typically memory allocations are specified in bytes, so this implies that the size_is attribute measures the number of bytes transferred.</p> <p dir="ltr" style="margin-right: 0px">And that’s what I’ve thought for years and years. length_is was the number of elements and size_is was the number of bytes.</p> <p dir="ltr" style="margin-right: 0px">But my boss thought that size_is referred to a number of elements. 
And since he’s worked on RPC for years, I figured he’d know best since he actually worked on RPC.</p> <p dir="ltr" style="margin-right: 0px"> </p> <p dir="ltr" style="margin-right: 0px">To see if the problem was just that the current MSDN documentation was incorrect, I dug into the oldest RPC documentation I have – from the original Win32 SDK that was shipped with Windows NT 3.1 way back in 1993 (I have my own personal <a href="">wayback machine</a> in my office).</p> <p>The old SDK documentation says: </p> <blockquote> <p>“the size_is attribute is used to specify an expression or identifier that designates the maximum allocation size of the array”</p> </blockquote> <p>Well, allocation sizes are always in bytes, so size_is is in bytes, right?</p> <p>Well maybe not. It further goes on to say:</p> <blockquote> <p>“the values specified by the size_is, max_is and min_is attributes have the following relationship: size_is = max_is – 1. The size_is attribute provides an alternative to max_is for specifying the maximum amount of data”</p> </blockquote> <p>So what is “max_is”? Maybe there’s a clue there…</p> <p>Go to max_is and it says “designates the maximum value for a valid array index” – so clearly it is a count of elements. And thus by induction, size_is must be in number of elements and not number of bytes… </p> <p>Ok, so the old documentation is ambiguous but it implies that both length_is and size_is refer to a count of elements.</p> <p> </p> <p>To confirm, I went to the current owner of the MIDL compiler for the definitive word on this and he said:</p> <blockquote> <p>Always in elements for all the XXX_is attributes. And everything else except allocation routines IIRC.</p> <p><Your boss> is correct that we allocate the buffer based on size_is, but we transmit elements based on length_is if they’re both present. 
BTW, [string] is basically [length_is(strlen(…))].</p> </blockquote> <p>So that’s the definitive answer:</p> <p>size_is and length_is both refer to a count of elements. size_is defines the size of the buffer allocated for the transfer and length_is specifies the number of elements transferred within that buffer.</p> <p> </p> <p> </p> <p>And yes, I’ve asked the documentation folks to update the documentation to correct this.</p> <p> </p> <p>EDIT: Oops, fixed a typo. Thanks <span class="user-name"><strong><span style="color: #666666;">Sys64738</span></strong></span>:)</p><div style="clear:both;"></div><img src="" width="1" height="1">Larry Osterman [MSFT] Office team deploys botnet for security research<p>Even though it’s posted on April 1st, this is actually *not* an April Fools prank.</p> <p>It turns out that <a href="">the Office team runs a “botnet” internally</a> that’s dedicated to file fuzzing.  Basically they have a tool that’s run on a bunch of machines that runs file fuzzing jobs in their spare time.  This really isn’t a “botnet” in the strictest sense of the word, it’s more like <a href="mailto:SETI@home">SETI@home</a> or other distributed computing efforts, but “botnet” is the word that the Office team uses when describing the effort.</p> <p> </p> <p>For those that don’t know what <a href="">fuzz testing</a>.</p> <p.</p> <p> </p> <p>I’ve known about the Office team’s effort for a while now (Tom Gallagher <a href="">gave a talk</a> about it at a recent <a href="">BlueHat</a> conference) but I didn’t know that the Office team had discussed it at CanSecWest until earlier today.</p><img src="" width="1" height="1">Larry Osterman [MSFT] Invented Here’s take on software security<p>One of my favorite web comics is <a href="">Not Invented Here</a> by Bill Barnes and Paul Southworth.  
I started reading Bill’s stuff with his other web comic <a href="">Unshelved</a> (a librarian comic).</p> <p> </p> <p>NIH is a web comic about software development and this week Bill and Paul have decided to take on software security…</p> <p>Here’s Monday’s comic:</p> <a href=""><img alt="Not Invented Here strip for 2/15/2010" src="" width="858" height="309" /></a> <p> </p> <p>Check them out – Bill and Paul both have a good feel for how the industry actually works :).</p><img src="" width="1" height="1">Larry Osterman [MSFT] owes me a new monitor<p>Because I just got soda all over my current:2259cce1-8bea-4100-bfd5-a6bdfc627e7c" class="wlWriterEditableSmartContent"><div id="53bda8b8-0000-4c78-9dc5-ce52deb16109" style="margin: 0px; padding: 0px; display: inline;"><div><a href="" target="_new"><img src="" style="border-style: none" galleryimg="no" onload="var downlevelDiv = document.getElementById('53bda8b8-0000-4c78-9dc5-ce52deb16109'); downlevelDiv.<param name=\"movie\" value=\"\"><\/param><embed src=\"\" type=\"application/x-shockwave-flash\" width=\"425\" height=\"355\"><\/embed><\/object><\/div>";" alt=""></a></div></div></div> <p>One of the funniest things I’ve seen in a while.  </p> <p> </p> <p>And yes, I know that I’m being cruel here and I shouldn’t make fun of the kids ignorance, but he is SO proud of his new discovery and is so wrong in his interpretation of what actually is going on…</p> <p> </p> <p> </p> <p> </p> <p>For my non net-savvy readers: The “<a href="">tracert</a>” command lists the route that packets take from the local computer to a remote computer.  So if I want to find out what path a packet takes from my computer to <a href=""></a>, I would issue “tracert <a href=""></a>”.  This can be extremely helpful when troubleshooting networking problems.  
Unfortunately the young man in the video had a rather different opinion of what the command did.</p><img src="" width="1" height="1">Larry Osterman [MSFT]’s up with the Beep driver in Windows 7?<p>Earlier today, someone asked me why 64bit versions of windows don’t support the internal PC speaker beeps.  The answer is somewhat complicated and ends up being an interesting intersection between a host of conflicting tensions in the PC ecosystem.</p> <p> </p> <p).</p> <p>The Beep() Win32 API is basically a thin wrapper around the 8254 PIC functionality.  So when you call the Beep() API, you program the 8254 to play sounds on the PC speaker.</p> <p> </p> <p.  </p> <p>One of the other things that happened in the intervening 25 years was that machines got a whole lot more capable.  Now machines come with capabilities like newfangled hard disk drives (some of which can even hold more than 30 <em>megabytes</em>).</p> <p> </p> <p>There’s something else that happened in the past 25 years.  PCs became commodity systems.  And that started exerting a huge amount of pressure on PC manufacturers to cut costs.  They looked at the 8254 and asked “why can’t we remove this?”</p> <p>It turns out that they couldn’t.  And the answer to why they couldn’t came from a totally unexpected place.  The <a href="">American’s with Disabilities Act</a>.</p> <p> </p> <p <a href="">StickyKeys</a> were generated using the Beep() API.   There are about 6 different assistive technology (AT) sounds built into windows, their implementation is plumbed fairly deep inside the win32k.sys driver.  </p> <p>But why does that matter?  Well it turns out that many enterprises (both governments and corporations) have requirements that prevent them from purchasing equipment that lacks accessible technologies and that meant that you couldn’t sell computers that didn’t have beep hardware to those enterprises.</p> <p> </p> <p.</p> <p”.  
</p> <p>Because the only machines with this problem were 64bit machines, this functionality was restricted to 64bit versions of Windows.  </p> <p>That in turn meant that PC manufacturers still had to include support for the 8254 hardware – after all if the user chose to buy the machine with a 32bit operating system on it they might want to use the AT functionality.</p> <p.</p> <p> </p> <p.</p> <p> </p> <p.  </p> <p.</p> <p> </p> <p> </p> <p>[1] Thus providing me with a justification to keep my old Intel component data catalogs from back in the 1980s.</p><img src="" width="1" height="1">Larry Osterman [MSFT] fun with Amazon reviews.<p>A co-worker <a href="">sent this</a> around and I just HAD to share it…   It’s not nearly as geeky as “The Story of Ping”.</p> <p> </p> <p>For those that don’t want to follow the link, these are some of the reviews for this:</p> <p><a href=""><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="image" border="0" alt="image" src="" width="644" height="272" /></a> </p> <p> </p> <blockquote> . <br /! <a href="">Universal Portable Urinal - Unisex</a> and <a href="">Reliance Products Hassock Portable Lightweight Self-Contained Toilet</a> <br />Working from my car, and home, has never been easier. I finally threw out that crappy IKEA desk and printer stand combo! Good riddance! Best of all now I don't have to "tele-commute" to work, I "auto-commute." AWESOME.”</p> <p>“My!”</p> <p>." <br />We'll definitely use this product again at our next gig, whatever and whenever that happens to be... <br />Highly recommended! “ [Editors Note: A reference to <a href="">this</a>]</p> <p>“This is definitely one of the best products out there! <br />While commuting through downtown Seattle, I always had to make my sushi on the little center console in my Honda - try making a perfect California roll there! No way! 
<br />Now, I can safely make my California rolls, Spider rolls and Rainbow rolls - all while steering with my knee. <br /.”</p> </blockquote> <p>The reviews go on and on in this vein.  Very funny.</p><img src="" width="1" height="1">Larry Osterman [MSFT] are they called “giblets” anyway?<p><img style="margin: 0px 15px 0px 0px; display: inline" align="left" src="" width="180" height="240" /></p> <p>Five years ago, I attended one of the initial security training courses as a part of the XP SP2 effort.  I wrote this up in one of my very first posts entitled “<a href="">Remember the giblets</a>” and followed it up last year with “<a href="">The Trouble with Giblets</a>”.  I use the term “giblets” a lot but I’d never bothered to go out and figure out where the term came from.</p> <p>Well, we were talking about giblets in an email discussion today and one of my co-workers went and asked Michael Howard where the term came from.  Michael forwarded the question to <a href="">Steve Lipner</a> who was the person who originally coined the term and he came back with the origin of the term.</p> <p> </p> <p”.  </p> <p>Over time Steve started using the term for the pieces of software that were incidental to the product but which weren’t delivered by the main development team – things like the C runtime library, libJPG, ATL, etc.  </p> <p.</p> <p>Thanks to Craig Gehre for the picture.</p><img src="" width="1" height="1">Larry Osterman [MSFT] 7 Reflections…<p :)).  </p> <p>I thought I’d write a bit about the WIn7 experience from my point of view.  I’ve written a bit of this stuff in <a href="">my post on the Engineering 7 blog</a> but that was more about the changes in engineering processes as opposed to my personal experiences in the process.</p> <p.  </p> <p.</p> <p>During this interim time, I also worked on a number of prototype projects and helped the SDL tools team work on the current version of the <a href="">threat modeling tool</a>.  
played around with some new (to me) development strategies – RAII and exception based programming and test driven development.  </p> <p <a href="">capture monitor</a> :)).</p> <p.   </p> <p>The third milestone for Win7 I worked on the “<a href="">Ducking</a>” feature.  Of all the features I worked on for Win7, the ducking feature is the closest to a “<a href="">DWIM</a>” feature in Windows – the system automatically decreases the volume for applications when you start communicating with other people (this feature requires some application changes to work correctly though which is why you don’t see it in use right now (although it has shown up in <a href="">at least one application</a> by accident)).</p> <p> </p> <p.</p> <p> </p> <p>I’m so happy that customers are now going to be able to play with the stuff we’ve all worked so hard to deliver to you.  Enjoy :).</p> <p> </p> <p>[1] I started writing this on the 22nd but didn’t finish it until today.</p><img src="" width="1" height="1">Larry Osterman [MSFT] Whoppers<P>Wow, one of my co-workers just sent this image out. It’s totally awesome (IMHO)…</P> <P><A title=</A></P> <P mce_keep="true"> </P> <P mce_keep="true"> </P> <P>Edit: The image tag didn't work for some reason so I removed it and just left the link...</P> <P>Bonus: The first Win7 ad: <A href=""></A>#</P> <P mce_keep="true"> </P><img src="" width="1" height="1">Larry Osterman [MSFT] for new skillz (turning the blog around)…<p.</p> <p>To repeat and be even more clear: I’m *not* leaving Microsoft.  I’m *not* leaving Windows.  </p> <p)…</p> <p>I could run out and browse the bookstores (and I might just do that) but I figured “Hey, I’ve got a blog, why don’t I ask the folks who read my blog?”.  So let me turn the blog around and ask: </p> <blockquote> <p>If I wanted to go out and learn web development, which books should I read?  
</p> </blockquote> <p>I’ve already read “<a href="">Javascript: The Good Parts</a>” and it was fascinating, but it’s more of a language book (and a very good one), not a web development book.  So what books <em>should</em> I read to learn web development?</p><img src="" width="1" height="1">Larry Osterman [MSFT] can make it arbitrarily fast if I don’t actually have to make it work.<p>Digging way back into my pre-Microsoft days, I was recently reminded of a story that I believe was told to me by <a href="">Mary Shaw</a> back when I took her Computer Optimization class at Carnegie-Mellon…</p> <p>During the class, Mary told an anecdote about a developer “Sue” who found a bug in another developer “Joe”’s code that “Joe” had introduced with a performance optimization.  When “Sue” pointed the bug out to “Joe”, his response was “Oops, but it’s WAY faster with the bug”.  “Sue” exploded “If it doesn’t have to be correct, I can calculate the result in 0 time!” [1].</p> <p>Immediately after telling this anecdote, she discussed a contest that the CS faculty held for the graduate students every year.  Each year the CS faculty posed a problem to the graduate students with a prize awarded to the grad student who came up with the most efficient (fastest) solution to the problem.  She then assigned the exact same problem to us:</p> <blockquote> <p>“Given a copy of the “Declaration of Independence”, calculate the 10 most common words in the document”</p> </blockquote> <p>We all went off and built programs to parse the words in the document, inserting them into a tree (tracking usage) and read off the 10 most frequent words.  
The next assignment was “Now make it fast – the 5 fastest apps get an ‘A’, the next 5 get a ‘B’, etc.”</p> <p>So everyone in the class (except me :)) went out and rewrote their apps to use a hash table so that their insertion time was constant and then they optimized the heck out of their hash tables[2].</p> <p>After our class had our turn, Mary shared the results of what happened when the CS grad students were presented with the exact same problem.</p> <p>Most of them basically did what most of the students in my class did – built hash tables and tweaked them.  But a couple of results stood out.</p> <ul> <li>The first one simply hard coded the 10 most common words in their app and printed them out.  This was disqualified because it was perceived as breaking the rules.</li> <li>The next one was quite clever.  The grad student in question realized that they could write the program much faster if they wrote it in assembly language.  But the rules of the contest required that they use Pascal for the program.  So the grad student essentially created an array on the stack and introduced a buffer overflow and he loaded his assembly language program into the buffer and used that as a way of getting his assembly language version of the program to run.  IIRC he wasn’t disqualified but he didn’t win because he circumvented the rules (I’m not sure, it’s been more than a quarter century since Mary told the class this story).</li> <li>The winning entry was even more clever.  He realized that he didn’t actually need to track all the words in the document.  Instead he decided to track only some of the words in the document in a fixed array.  His logic was that each of the 10 most frequent words were likely to appear in the first <n> words in the document so all he needed to do was to figure out what "”n” is and he’d be golden.</li> </ul> <p> </p> <p>So the moral of the story is “Yes, if it doesn’t have to be correct, you can calculate the response in 0 time.  
But sometimes it’s ok to guess and if you guess right, you can get a huge performance benefit from the result”.  </p> <p> </p> <p> </p> <p>[1] This anecdote might also come from Jon L. Bentley’s “Writing Efficient Programs”, I’ll be honest and say that I don’t remember where I heard it (but it makes a great introduction to the subsequent story).</p> <p>[2] I was stubborn and decided to take my binary tree program and make it as efficient as possible but keep the basic structure of the solution (for example, instead of comparing strings, I calculated a hash for the string and compared the hashes to determine if strings matched).  I don’t remember if I was in the top 5 but I was certainly in the top 10.  I do know that my program beat out most of the hash table based solutions.</p><img src="" width="1" height="1">Larry Osterman [MSFT] a flicker free volume control<p>When we shipped Windows Vista, one of the really annoying UI annoyances with the volume control was that whenever you resized it, it would flicker.  </p> <p>To be more specific, the right side of the control would flicker – the rest didn’t flicker (which was rather strange).</p> <p> </p> <p <a href="">WM_PRINTCLIENT</a> message which allowed me to direct all of the internal controls on the window to paint themselves.</p> <p.</p> <p.</p> <p <em>real </em>problem, all I’d done is to hide it. </p> <p> </p> <p>So I had to go back to the drawing board.  Eventually (with the help of one of the developers on the User team) I finally tracked down the original root cause of the problem and it turns out that the root cause was somewhere totally unexpected.</p> <p>Consider the volume UI:</p> <p align="center"><a href=""><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="image" border="0" alt="image" src="" width="493" height="353" /></a> </p> <p>The UI is composed of two major areas: The “Devices” group and the “Applications” group.  
There’s a group box control wrapped around the two areas.</p> <p>Now lets look at the group box control.  For reasons that are buried deep in the early history of Windows, a group box is actually a form of the “button” control.  If you look at the window styles for a button in SpyXX, you’ll see: </p> <p align="center"><a href=""><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="image" border="0" alt="image" src="" width="400" height="302" /></a> </p> <p> </p> <p>Notice the CS_VREDRAW and CS_HREDRAW window class styles.  The <a href="">MSDN documentation for class styles</a> says:</p> <blockquote> <p>CS_HREDRAW - Redraws the entire window if a movement or size adjustment changes the width of the client area. <br />CS_VREDRAW - Redraws the entire window if a movement or size adjustment changes the height of the client area.</p> </blockquote> <p>In other words every window class with the CS_HREDRAW or CS_VREDRAW style will <em>always </em>be fully repainted whenever the window is resized (including all the controls inside the window).  And ALL buttons have these styles.  That means that whenever you resize <em>any</em> buttons, they’re going to flicker, and so will all of the content that lives below the button.  
For most buttons this isn’t a big deal but for group boxes it can be a big issue because group boxes contain other controls.</p> <p.</p> <p <a href="">DrawThemeBackground</a> API with the <a href="">BP_GROUPBOX</a> part and if theming is disabled, you can use the <a href="">DrawEdge</a>.</p> <p>As an added bonus, now that I was no longer painting everything manually, the fade-in animations on the flat buttons started working again!</p> <p> </p> <p>PS: While I was writing this post, I ran into this <a href="">tutorial on building flicker free applications</a>, I wish I’d run into it while I was trying to deal with the flickering problem because it nicely lays out how to solve the problem.</p><img src="" width="1" height="1">Larry Osterman [MSFT]
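As a coda to the word-frequency contest a few posts up: the baseline everyone started from — parse the words, count them, read off the top ten — fits in a few lines of modern Python. This is only a sketch: the inline text is a stand-in for the Declaration of Independence, and `collections.Counter` stands in for the hand-rolled tree or hash table the students built.

```python
import re
from collections import Counter

# Stand-in for the Declaration of Independence.
text = """When in the Course of human events, it becomes necessary
for one people to dissolve the political bands which have connected
them with another, and to assume among the powers of the earth, the
separate and equal station to which the Laws of Nature and of
Nature's God entitle them..."""

# Parse the words, count them, and read off the ten most frequent --
# the same shape as the tree/hash-table solutions in the contest.
words = re.findall(r"[a-z']+", text.lower())
top_ten = Counter(words).most_common(10)
print(top_ten)
```

`Counter.most_common` does the final selection; in the contest, of course, the interesting work was making the counting step fast.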
Red Hat Bugzilla – Bug 479987 /usr/share/rhn/server/__init__.py does from rhnHandler import rhnHandler Last modified: 2010-11-20 09:24:20 EST Description of problem: While investigating why osa-dispatcher produces AVC denial avc: denied { search } for pid=22398 comm="python" name="root" dev=dm-0 ino=784129 scontext=root:system_r:osa_dispatcher_t:s0 tcontext=root:object_r:user_home_dir_t:s0 tclass=dir I found out that it is caused by the /usr/lib/librpm-4.4.so library which wants to read /root/.rpmmacros: 19641 read(11, "root:x:0:0:root:/root:/bin/bash\n"..., 4096) = 1547 19641 close(11) = 0 19641 munmap(0xb7fe6000, 4096) = 0 19641 stat64("/root/.rpmmacros", 0xbf83b5dc) = -1 EACCES (Permission denied) 19641 stat64("/usr/lib/rpm/init.lua", 0xbf83be3c) = -1 ENOENT (No such file or directory) 19641 close(10) = 0 19641 close(9) = 0 I wondered why the rpm library is loaded by osa-dispatcher in the first place. The chain looks like this: /usr/share/rhn/osad/osa_dispatcher.py imports rhnSQL (from server) /usr/share/rhn/server/__init__.py imports rhnHandler /usr/share/rhn/server/rhnHandler.py imports rhnServer /usr/share/rhn/server/rhnServer/__init__.py imports Server from server_class /usr/share/rhn/server/rhnServer/server_class.py imports rhn_rpm /usr/share/rhn/common/rhn_rpm.py imports rpm We'd need to break this chain somewhere. I looked at that /usr/share/rhn/server/__init__.py which has nothing but from rhnHandler import rhnHandler in it and Devan says that it is a trickery to get the rhnHandler class by magic somewhere. 
If I comments this line in /usr/share/rhn/server/__init__.py out, osa-dispatcher no longer gives the AVC denial, but httpd produces the following in error_log upon rhnpush: Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/mod_python/apache.py", line 299, in HandlerDispatch\n result = object(req) File "/usr/share/rhn/server/apacheServer.py", line 52, in __call__\n HandlerWrap.svrHandlers = self.get_handler_factory(req)() File "/usr/share/rhn/server/apacheServer.py", line 70, in get_handler_factory\n from apacheHandler import apacheHandler File "/usr/share/rhn/server/apacheHandler.py", line 32, in ?\n from apacheRequest import apacheGET, apachePOST, HandlerNotFoundError File "/usr/share/rhn/server/apacheRequest.py", line 33, in ?\n import rhnRepository File "/usr/share/rhn/server/rhnRepository.py", line 29, in ?\n import rhnChannel, rhnPackage File "/usr/share/rhn/server/rhnChannel.py", line 27, in ?\n from rhnServer import server_lib File "/usr/share/rhn/server/rhnServer/__init__.py", line 24, in ?\n from server_class import Server File "/usr/share/rhn/server/rhnServer/server_class.py", line 28, in ?\n from server import rhnChannel, rhnUser, rhnSQL, rhnLib, rhnAction, \\ ImportError: cannot import name rhnChannel We could also move that rhnSQL out from server namespace because it does not seem to have that much with the server (and with server's handler). Version-Release number of selected component (if applicable): # rpm -qf /usr/share/rhn/server/__init__.py spacewalk-backend-sql-0.4.10-1.el5 How reproducible: Deterministic. Steps to Reproduce: 1. Start osa-dispatcher via strace, with osa-dispatcher-selinux installed. 2. Review the strace output, and /var/log/audit/audit.log. Actual results: See /usr/share/rhn/common/rhn_rpm.py and then /usr/lib/librpmio-4.4.so loaded, and AVC denial in audit.log. Expected results: The structure of our backend classes should not load rpm libraries if program like osa-dispatcher only needs rhnSQL. 
Additional info: This later turned out to not be causing the AVC denial Jan thought it was and boiled down instead to just Python code cleanup. Time is short, bumping to space06. Devan, could you move this bugzilla to space06 and address it? It is blocking an AVC denial on Fedora 10, bug 514320. Thanks, Jan. Not blocking bug 514320 anymore, we addressed that one in jabber_lib.py code. Mass-moving to space13. Should be fixed as side effect of commit c7abc29bb1c8ba32a13ea22a2f5b050db26178a3 from bug 612581. server/__init__.py does not import rhnHandler any more. moving back to space12 as this change is already there for some time. Marking as fixed with spacewalk-backend-1.2.73-1 as that is the tag having the last commit related to bug 612581. With Spacewalk 1.2 release, marking as closed.
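A general pattern for breaking an import chain like the one above, without reshuffling the packages, is to defer the heavy import into the function that actually needs it. A rough sketch of the idea (the function name is hypothetical, and the stdlib module `colorsys` plays the role of the rpm bindings):

```python
import sys

def read_package_header(path):
    # The heavy dependency is imported here, not at module top level,
    # so programs that never call this function (osa-dispatcher only
    # needs the SQL layer) never load it.
    import colorsys  # stand-in for the rpm module
    return path

# Importing this module alone would not pull the dependency in;
# only the first call does.
before = 'colorsys' in sys.modules
read_package_header("/tmp/example.rpm")
after = 'colorsys' in sys.modules
print(before, after)
```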
"AttributeError: 'module' object has no attribute 'argv'" with windows service I have a small Flask app which connects the local host with a proxy host and uses multiprocessing to do so. This is the service.py import cx_Logging import cx_Threads import sys import win32serviceutil import win32service import win32event import servicemanager import socket from multiprocessing import freeze_support from webapp import webApp class Handler(object): # no parameters are permitted; all configuration should be placed in the # configuration file and handled in the Initialize() method def __init__(self): cx_Logging.Info("creating handler instance") self.stopEvent = cx_Threads.Event() # called when the service is starting def Initialize(self, configFileName): cx_Logging.Info("initializing: config file name is %r", configFileName) # called when the service is starting immediately after Initialize() # use this to perform the work of the service; don't forget to set or check # for the stop event or the service GUI will not respond to requests to # stop the service def Run(self): cx_Logging.Info("running service....") self.main() self.stopEvent.Wait() # called when the service is being stopped by the service manager GUI def Stop(self): cx_Logging.Info("stopping service...") self.stopEvent.Set() def main(self): freeze_support() webApp.run(port=5051) And this is the error I get whenever I try to start the service: [03284] 2014/07/22 17:27:17.293 starting logging at level ERROR [03284] 2014/07/22 17:27:17.745 Python exception encountered: [03284] 2014/07/22 17:27:17.745 Internal Message: exception running service [03284] 2014/07/22 17:27:17.745 Type => <type 'exceptions.AttributeError'> [03284] 2014/07/22 17:27:17.745 Value => 'module' object has no attribute 'argv' [03284] 2014/07/22 17:27:17.745 Traceback (most recent call last): [03284] 2014/07/22 17:27:17.745 File "ServiceHandler.py", line 32, in Run self.main() [03284] 2014/07/22 17:27:17.745 File "ServiceHandler.py", line 41, in 
main freeze_support() [03284] 2014/07/22 17:27:17.745 File "C:\Python27\lib\multiprocessing\__init__.py", line 145, in freeze_support freeze_support() [03284] 2014/07/22 17:27:17.745 File "C:\Python27\lib\multiprocessing\forking.py", line 336, in freeze_support if is_forking(sys.argv): [03284] 2014/07/22 17:27:17.745 AttributeError: 'module' object has no attribute 'argv' [01556] 2014/07/22 17:27:17.745 ending logging I posted this in the cx_Freeze mailing list and Thomas Kluyver told me to report it because it is most likely a bug, and if anyone else encounters this error, you have to add this: if not hasattr(sys, 'argv'): sys.argv = [''] before the freeze_support() call. My specs: Windows 7 Ultimate 32bits Python 2.7.8 Flask 0.9 cx_Freeze 4.3.3 Here's the mail: I think the fix will be for the Win32Service base to call PySys_SetArgv(), like the other services already do.
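The workaround is easy to exercise outside a service, too. This sketch simulates the embedded-interpreter environment from the traceback (no sys.argv, because nothing called PySys_SetArgv) and applies the guard before calling freeze_support():

```python
import sys
from multiprocessing import freeze_support

# Simulate the hosted interpreter from the bug report, where
# sys.argv was never set.
if hasattr(sys, 'argv'):
    del sys.argv

# The workaround: restore a minimal argv before multiprocessing
# inspects it.
if not hasattr(sys, 'argv'):
    sys.argv = ['']

freeze_support()  # the call that raised AttributeError in the service
print(sys.argv)   # ['']
```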
https://bitbucket.org/anthony_tuininga/cx_freeze/issues/97
Convert pandas DataFrame to NumPy Array in Python (3 Examples)

In this Python tutorial you'll learn how to transform a pandas DataFrame to a NumPy array. Let's start right away!

Example Data & Software Libraries

We first have to import the pandas library, in order to use the corresponding functions:

import pandas as pd  # Import pandas library in Python

Furthermore, we'll use the following data as the basis for this Python tutorial:

data = pd.DataFrame({'x1': range(101, 106),  # Create example DataFrame
                     'x2': ['x', 'y', 'z', 'x', 'y'],
                     'x3': range(16, 11, -1),
                     'x4': range(5, 10)})
print(data)  # Print example DataFrame

Have a look at the previous table. It shows that the example data is made of five rows and four columns called "x1", "x2", "x3", and "x4".

Example 1: Transform pandas DataFrame to NumPy Array Using to_numpy() Function

The following syntax shows how to convert a pandas DataFrame to a NumPy array using the to_numpy function. In order to use the functions of the NumPy package, we first have to load the numpy library into Python:

import numpy as np  # Import NumPy library in Python

In the next step, we can apply the to_numpy function as shown below:

data_array1 = data.to_numpy()  # Apply to_numpy function to entire DataFrame
print(data_array1)  # Print array
# [[101 'x' 16 5]
#  [102 'y' 15 6]
#  [103 'z' 14 7]
#  [104 'x' 13 8]
#  [105 'y' 12 9]]

Have a look at the previous output: it shows that we have created a new array object called data_array1 that contains the values of our pandas DataFrame.

Example 2: Transform Specific Columns of pandas DataFrame to NumPy Array

In this example, I'll show how to convert only a subset of a pandas DataFrame to a NumPy array.
For this, we can use the following Python syntax:

data_array2 = data[['x2', 'x4']].to_numpy()  # Apply to_numpy to DataFrame subset
print(data_array2)  # Print array
# [['x' 5]
#  ['y' 6]
#  ['z' 7]
#  ['x' 8]
#  ['y' 9]]

As you can see based on the previous console output, we have created a NumPy array containing the values of the variables x2 and x4 of our pandas DataFrame.

Example 3: Transform pandas DataFrame to NumPy Array Using values Attribute

So far, we have used the to_numpy function to change from the pandas DataFrame class to the NumPy array class. However, it is also possible to extract the values of a pandas DataFrame to create a NumPy array using the values attribute of our DataFrame. Have a look at the following Python code:

data_array3 = data.values  # Extract values of DataFrame
print(data_array3)  # Print array
# [[101 'x' 16 5]
#  [102 'y' 15 6]
#  [103 'z' 14 7]
#  [104 'x' 13 8]
#  [105 'y' 12 9]]

The previously shown output is exactly the same as in Example 1. However, this time we have used the values attribute instead of the to_numpy command.

Video & Further Resources

Do you need more information on NumPy arrays in Python? Then I recommend watching the following video on the YouTube channel of Joe James. In the video, he explains how to handle numerical arrays in Python.

In addition, you may read the other articles on Statistics Globe.

To summarize: in this tutorial, I have explained how to convert a pandas DataFrame to a NumPy array in the Python programming language. Please let me know in the comments below, in case you have further questions.
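One footnote to the examples above (a sketch reusing the example DataFrame): because the frame mixes integer and string columns, to_numpy and .values both return an array of dtype object. Selecting only the numeric columns first yields a proper numeric array, which is usually what you want for computation.

```python
import pandas as pd

# The same example frame as in the tutorial above
data = pd.DataFrame({'x1': range(101, 106),
                     'x2': ['x', 'y', 'z', 'x', 'y'],
                     'x3': range(16, 11, -1),
                     'x4': range(5, 10)})

full = data.to_numpy()
print(full.dtype)                            # object, due to mixed column types

nums = data[['x1', 'x3', 'x4']].to_numpy()   # numeric columns only
print(nums.dtype)                            # an integer dtype, e.g. int64
print(nums.sum())
```

An object-dtype array stores Python objects elementwise, so vectorized NumPy operations on it are slow or unsupported; converting only the homogeneous columns avoids that.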
https://statisticsglobe.com/convert-pandas-dataframe-to-numpy-array-in-python
APM normally focuses on the activity of the application, but gathering information about system usage gives you important background information that helps you understand and manage the performance of your application, so I am including the IRIS History Monitor in this series. In this article I will briefly describe how you start the IRIS or Caché History Monitor to build a record of the system-level activity to go with the application activity and performance information you gather. I will also give examples of SQL to access the information.

What is the IRIS or Caché History Monitor?

The History Monitor is available on IRIS and on earlier Caché and Ensemble platforms. It is an extension of the IRIS or Caché System Monitor. It keeps a persistent record of the metrics relating to database activity (e.g. global reads and updates) and system usage (e.g. CPU usage). There are several tools to collect these statistics, some of which go into enormous detail, but they can be hard to understand. The History Monitor is designed to be simple to use, to run continuously on live systems, and to require less effort and expertise to understand the output. It stores information in a small number of hourly and daily tables that are easily queried with SQL. So if you start it today, the historic record is there when you need it. And of course you can still supplement the historic record with more detailed data collection when you have a problem to investigate.

What are the costs and benefits?

The History Monitor is very lightweight and won't add a significant load to the running system. The disk space used is also very small, with storage adding up to about 130 MB per year even if you choose to extend the lifetime of hourly statistics as I have recommended in this article. It is easy to configure and the output needs no further analysis to make it usable. The day you turn the History Monitor on, you will see little or no advantage over other tools that can give more immediate detail.
The benefit comes weeks or months later, when you are working on a capacity plan or investigating a performance issue. It provides a historic record of many important metrics, including:

- CPU usage
- Size of database files and journals
- Global references and updates
- Physical reads and writes
- License usage

It also records a large number of more detailed technical metrics that could be helpful when investigating changes in the performance of an application.

How do I access data stored by the History Monitor?

Tables

The information is stored in the %SYS namespace. It is readily accessible using SQL and can therefore be analyzed with any popular reporting package. There are four main daily tables, SYS_History.Daily_DB, SYS_History.Daily_Sys, SYS_History.Daily_Perf and SYS_History.Daily_WD, holding the daily summaries. There are equivalent tables holding the hourly summaries.

Hourly and Daily fields

The daily and hourly tables have a Daily or Hourly field respectively of the form '64491||14400', where the two parts are the $h date and time values when the background job ran to generate the data. The time piece doesn't have much meaning in the daily tables.

Element_key field

Some tables include the average and maximum values observed in each time period and the standard deviation. The type of entry is shown by the value of the element_key field. Therefore, a typical query to see the growth in average daily CPU usage would be:

SELECT Substr(DS.Daily,1,5) as DateH, (100-DS.Sys_CPUIdle) as AvgCPUBusy
FROM SYS_History.Daily_SYS DS
WHERE element_key='Avg'
ORDER BY DateH

How do I configure and start the History Monitor?

Open a Caché terminal session and change to the %SYS namespace. Then run the command:

Do ^%SYSMONMGR

You will be offered a number of character menus.
Enter the numbers to make the following selections:

Manage Application Monitor
2) Manage Monitor Classes

Then choose the activate option twice and specify the class names:

1) Activate/Deactivate Monitor Class
%Monitor.System.HistoryPerf
Yes

1) Activate/Deactivate Monitor Class
%Monitor.System.HistorySys
Yes

There are a number of other classes, but don't be tempted to activate them without testing first. Some use PERFMON and are not suitable for running on a live system.

Then use the exit option until you get back to the first menu. To activate the changes, stop and then start the System Monitor from the first menu:

1) Start/Stop System Monitor
1) Start/Stop System Monitor

The History Monitor will run continuously even if the system is restarted.

You may want to keep the hourly statistics for longer than the default of 60 days. To do this, use the method SetPurge(). E.g. to keep hourly statistics for a year:

%SYS>do ##class(SYS.History.Hourly).SetPurge(365)

More Complex SQL Example

For a more complicated example, suppose you also want the daily average CPU, the average CPU usage between 9am and 12 noon, and information about global references and updates:

SELECT Substr(DS.Daily,1,5) Day, (100-DS.Sys_CPUIdle) as Daily_Avg_CPU,
       Round(AVG(100-H1.Sys_CPUIdle),2) Morning_Avg_CPU,
       DP.Perf_GloRef, DP.Perf_GloUpdate
FROM SYS_History.Daily_Perf DP, SYS_History.Daily_SYS DS, SYS_History.Hourly_Sys H1
WHERE DP.Daily=DS.Daily and DP.element_key='Avg' and DS.element_key='Avg'
  and H1.element_key='Avg' and substr(DS.Daily,1,5)=Substr(H1.Hourly,1,5)
  and Substr(H1.Hourly,8,12) in (32400,36000,39600)
GROUP BY DS.daily

Which on my test system gives …

Documentation

The History Monitor is described in full in the documentation. Based on feedback, I have updated the document to warn against activating other monitor classes without checking first. Some will affect the performance of your system, but the ones I describe in the article are safe.
"There are a number of of other classes but don't be tempted to activate them without testing first. Some use PERFMON and are not suitable for running on a live system." Dave Note that you need to before enabling them.
https://community.intersystems.com/post/apm-using-iris-or-cach%C3%A9-history-monitor
Anystyle-Parser

Anystyle-Parser is a very fast and smart parser for academic references. It is inspired by ParsCit and FreeCite; Anystyle-Parser uses machine learning algorithms and is designed for raw speed (it uses wapiti-based conditional random fields and Kyoto Cabinet or Redis as a key-value store), flexibility (it is easy to train the model with data that is relevant to your parsing needs), and compatibility (Anystyle-Parser exports to Ruby Hashes, BibTeX, or the CSL/CiteProc JSON format).

Web Application and Web Service

Anystyle-Parser is available as a web application and a web service at anystyle.io. For example Ruby code using the anystyle.io API, see this prototype for a style predictor.

Installation

$ [sudo] gem install anystyle-parser

During the statistical analysis of reference strings, Anystyle-Parser relies on a large feature dictionary; by default, Anystyle-Parser creates a Kyoto Cabinet file-based hash database from the dictionary file that ships with the parser. If Kyoto Cabinet is not installed on your system, Anystyle-Parser uses a simple Ruby Hash as a fall-back; this Hash has to be re-created every time you load the parser and takes up a lot of memory in your Ruby process; it is therefore strongly recommended to install Kyoto Cabinet and the kyotocabinet-ruby gem.

$ [sudo] gem install kyotocabinet-ruby

The database file will be created the first time you access the dictionary; note that you will need write permissions in the directory where the file is to be created. You can change the Dictionary's default path in the Dictionary's options:

Anystyle::Parser::Dictionary.instance.options[:cabinet]

Starting with version 0.1.0, Anystyle-Parser also supports Redis; to use Redis as the data store you need to install the redis and redis-namespace gems (optionally, the hiredis gem).
$ [sudo] gem install redis redis-namespace

To see which data store modes are available in your current environment, check the output of Dictionary.modes:

> Anystyle::Parser::Dictionary.modes
=> [:kyoto, :redis, :hash]

To select one of the available modes, use the dictionary instance options:

> Anystyle.dictionary.options[:mode]
=> :kyoto

To use Redis you also need to set the host or unix socket where your redis server is available. For example:

Anystyle.dictionary.options[:mode] = :redis
Anystyle.dictionary.options[:host] = 'localhost'

When the data store is opened in redis mode and the data store is empty, the feature dictionary will be imported automatically. If you want to import the data explicitly, you can use Dictionary#create after setting the required options.

Usage

Parsing

You can access the main Anystyle-Parser instance at Anystyle.parser; the #parse method is also available via Anystyle.parse. For more complex requirements (e.g., if you need multiple Parser instances simultaneously) you can create your own instances from the Anystyle::Parser::Parser class.

The two fundamental methods you need to know about in order to use Anystyle-Parser are #parse and #train, which both accept two arguments:

Parser#parse(input, format = :hash)
Parser#train(input = options[:training_data], truncate = true)

#parse parses the passed-in input (either a filename, your reference strings, or an array of your reference strings; files are only opened if the string is not tainted) and returns the parsed data in the format specified as the second argument (supported formats include: :hash, :bibtex, :citeproc, :tags, and :raw). #train allows you to easily train the Parser's CRF model. The first argument is either a filename (if the string is not tainted) or your data as a string; the format of training data follows the XML-like syntax of the CORA dataset; the optional boolean argument lets you decide whether to train the existing model or to create an entirely new one.
The following irb session illustrates some parser goodness:

> require 'anystyle/parser'
> Anystyle.parse 'Poe, Edgar A. Essays and Reviews. New York: Library of America, 1984.'
=> [{:author=>"Poe, Edgar A.", :title=>"Essays and Reviews", :location=>"New York", :publisher=>"Library of America", :year=>1984, :type=>:book}]

> b = Anystyle.parse 'Dong C. Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45:503–528.', :bibtex
> b[0].author[1].given
=> "Jorge"
> b[0].author.to_s
=> "Liu, Dong C. and Nocedal, Jorge"

> puts Anystyle.parse('Auster, Paul. The Art of Hunger. Expanded. New York: Penguin, 1997.', :bibtex).to_s
@book{auster1997a,
  author = {Auster, Paul},
  title = {The Art of Hunger},
  location = {New York},
  publisher = {Penguin},
  edition = {Expanded},
  year = {1997}
}
=> nil

Unhappy with the results? Citation references come in many forms, so, inevitably, you will find data where Anystyle-Parser does not produce satisfying parsing results.

> Anystyle.parse 'John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: probabilistic models for segmenting and labeling sequence data. In Proceedings of the International Conference on Machine Learning, pages 282–289. Morgan Kaufmann, San Francisco, CA.'
=> [{:author=>"John Lafferty, Andrew McCallum, and Fernando Pereira. 2001", :title=>"Conditional random fields: probabilistic models for segmenting and labeling sequence data", :booktitle=>"Proceedings of the International Conference on Machine Learning", :pages=>"282--289", :publisher=>"Morgan Kaufmann", :location=>"San Francisco, CA", :type=>:inproceedings}]

This result is not bad, but notice how the year was not picked up as a date but was interpreted as part of the author name. If you have such a problem (particularly, if the problem applies to a range of your input data, e.g., data that follows a style that Anystyle-Parser was not trained to recognize), you can teach Anystyle-Parser to recognize your format. The easiest way to go about this is to create a new file (e.g., 'training.txt'), copy and paste a few references, and tag them for training. For example, a tagged version of the input from the example above would look like this:

<author> John Lafferty, Andrew McCallum, and Fernando Pereira. </author>
<date> 2001. </date>
<title> Conditional random fields: probabilistic models for segmenting and labeling sequence data. </title>
<booktitle> In Proceedings of the International Conference on Machine Learning, </booktitle>
<pages> pages 282–289. </pages>
<publisher> Morgan Kaufmann, </publisher>
<location> San Francisco, CA. </location>

Note that you can pick any tag names, but when working with Anystyle's model you should use the same names used to train the model. You can always ask the Parser's model what names (labels) it knows about:

> Anystyle.parser.model.labels
=> ["author", "booktitle", "container", "date", "doi", "edition", "editor", "institution", "isbn", "journal", "location", "note", "pages", "publisher", "retrieved", "tech", "title", "translator", "unknown", "url", "volume"]

Once you have tagged a few references that you want Anystyle-Parser to learn, you can train the model as follows:

> Anystyle.parser.train 'training.txt', false

By passing true as the second argument, you would discard Anystyle's default model; the resulting model would then be based entirely on your own data. By default the new or altered model will not be saved, but you can do so at any time by calling Anystyle.parser.model.save to save the model to the default file. If you want to save the model to a different file, set the Anystyle.parser.model.path attribute accordingly.

After teaching Anystyle-Parser with the tagged references, try to parse your data again:

> Anystyle.parse 'John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: probabilistic models for segmenting and labeling sequence data. In Proceedings of the International Conference on Machine Learning, pages 282–289. Morgan Kaufmann, San Francisco, CA.'
=> [{:author=>"John Lafferty, Andrew McCallum, and Fernando Pereira", :title=>"Conditional random fields: probabilistic models for segmenting and labeling sequence data", :booktitle=>"Proceedings of the International Conference on Machine Learning", :pages=>"282--289", :publisher=>"Morgan Kaufmann", :location=>"San Francisco, CA", :year=>2001, :type=>:inproceedings}]

If you want to make Anystyle-Parser smarter, please consider sending us your tagged references (see below).
You can check out a copy of the latest code using Git:

$ git clone

If you've found a bug or have a question, please open an issue on the Anystyle-Parser issue tracker. Or, for extra credit, clone the Anystyle-Parser repository, write a failing example, fix the bug, and submit a pull request.

If you want to contribute tagged references, please either add them to resources/train.txt or create a new file in the resources directory, and open a pull request on GitHub.

License

Some of the code in Anystyle-Parser's post-processing (normalizing) routines was originally based on the source code of FreeCite. The CRF template is a modified version of ParsCit's original template, Copyright 2008, 2009, 2010, 2011 Min-Yen Kan, Isaac G. Councill, C. Lee Giles, Minh-Thang Luong and Huy Nhat Hoang Do.

Anystyle-Parser is distributed under a BSD-style license. See LICENSE for details.
http://www.rubydoc.info/gems/anystyle-parser/frames
Setup the Development Environment for System.Speech Speech engines are built in to Windows Vista and Windows 7. The System.Speech managed-code namespaces in the .NET Framework provide you with access to Microsoft's speech recognition and speech synthesis technologies in Windows. Use the following steps to set up the environment for speech development, for applications that will run on Windows Vista or Windows 7. To gain access to the System.Speech namespaces Download and install the Microsoft .NET Framework 4 (Web Installer). In your Visual Studio project, add a reference to System.Speech, as follows: In the Solution Explorer window, right-click References, and then click Add Reference. In the Add Reference window, on the .NET tab, scroll until you find System.Speech and select it, and then click OK. In your Visual Studio project, add a using statement for each System.Speech namespace that you want to access. For example, if you want to work with speech synthesis, enter the following using statement using System.Speech.Synthesis; To work with speech recognition, enter the following using statement: using System.Speech.Recognition; Your project should now have access to speech recognition and speech synthesis in Windows Vista or Windows 7. Try the examples under Speech Synthesis and Speech Recognition to get started.
http://msdn.microsoft.com/en-us/library/hh361618(v=office.14).aspx
CC-MAIN-2014-15
refinedweb
214
60.72
First off this is a dupe of #1303. Second, as #1303 it fails to prevent the upload of huge files, it just rejects them after they have been uploaded. The effective way to limit the file size (at least at the Django level) is using something like, with the downside that it abruptly terminates the connection instead of showing a user-friendly validation error.

# Thanks for the info. It is not exactly a duplicate of 1303, even if I used the main part of that. 1303 is a custom validation while this is a custom file field that can be used in the admin too. How would you advise implementing the method you linked?

# I'd love to be proven wrong, but I believe there is no way to get a nice validation error without uploading the whole file; otherwise the browser throws "The connection to the server was reset while the page was loading" on Firefox (or "Error 101 (net::ERR_CONNECTION_RESET): Unknown error" on Chrome).

# So would it be possible then to add a line at the end that deletes the file? I'll try it out.

# The file is stored only in the temporary directory. I think it's a good solution to use both this filefield and set FILE_UPLOAD_MAX_MEMORY_SIZE in your settings.py file so it allows a size that is slightly larger than the model field.
# If you're ok with letting people use up all your bandwidth uploading 1GB files to your servers just to delete them as soon as the upload finishes, sure, it's a great solution :-P Here is a StackOverflow answer that does a much better job than me of explaining the problem:

# To get it to work with South migrations I had to change some lines, adding the specific rule for South at the end of the file.

# It works for me! But I recommend adding the project name to the introspection rule path, something like this:

from south.modelsinspector import add_introspection_rules
add_introspection_rules([], ["^project_name.app.extra.ContentTypeRestrictedFileField"])

# oops! I forgot the slashes!!

# There is a bug: when you try to update a record in the admin panel and do not update the ContentTypeRestrictedFileField field, you will get an error on the content_type value. Here is the source of the clean method of the ContentTypeRestrictedFileField class, fixed.

# @gsakkis I designed this field to use in the admin, so I'm not worried about 1GB files because the users are already trusted. For whoever is worried about huge files: configure the max upload limit in Apache.

# I am new to Django but I would like to ask this, reading your code. You import FileField from models, but FileField in models doesn't have a clean method. FileField from forms has one. Have you been using this snippet as it is, and it worked without a problem?

# A new & dusted-off version, checking minimum size, maximum size, matching extensions and mime types.
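For reference, the validation logic being discussed can be sketched free of Django, so the content-type and size checks are easy to see in isolation. The names and limits below are illustrative, not Django's API; in the real field this code lives in clean(), reading data.file.content_type and data.file.size from the uploaded file.

```python
MAX_UPLOAD_SIZE = 2 * 1024 * 1024          # 2 MiB, an illustrative limit
ALLOWED_CONTENT_TYPES = {'application/pdf', 'image/png'}

def validate_upload(content_type, size,
                    allowed=ALLOWED_CONTENT_TYPES,
                    max_size=MAX_UPLOAD_SIZE):
    """Raise ValueError when an uploaded file should be rejected."""
    if content_type not in allowed:
        raise ValueError('Filetype not supported.')
    if size > max_size:
        raise ValueError('File too large; maximum is %d bytes.' % max_size)

validate_upload('image/png', 1024)          # accepted: no exception raised
```

As the thread points out, any such check runs only after the upload completes; a hard cap on request size (e.g. Apache's LimitRequestBody, or Django's file upload settings) is the complementary server-side control.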
http://djangosnippets.org/snippets/2206/
10 Tips and Tricks for Data Scientists Vol.10

We have started a series of articles on tips and tricks for data scientists (mainly in Python and R), in case you have missed the earlier volumes.

Python

1. How to Get the Key of the Maximum Value in a Dictionary

d = {"a": 3, "b": 5, "c": 2}
print(max(d, key=d.get))

b

2. How to Sort a Dictionary by Values

Assume that we have the following dictionary and we want to sort it by values (assume that the values are a numeric data type).

d = {"a": 3, "b": 5, "c": 2}
# sort it by value
dict(sorted(d.items(), key=lambda item: item[1]))

{'c': 2, 'a': 3, 'b': 5}

If we want to sort it in descending order:

dict(sorted(d.items(), key=lambda item: item[1], reverse=True))

{'b': 5, 'a': 3, 'c': 2}

3. How to Shuffle your Data with Pandas

We can easily shuffle our pandas data frame by taking a sample of fraction=1, where in essence we get a sample of all rows without replacement.

The code:

import pandas as pd
# assume that df is your Data Frame
df.sample(frac=1).reset_index(drop=True)

4. How to Move a Column to be the Last in Pandas

Sometimes, we want the "Target" column to be the last one in the Data Frame. Let's see how we can do it in Pandas. Assume that we have the following data frame:

import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3],
                   'Target': [0, 1, 0],
                   'B': [4, 5, 6]})
df

Now, we can reindex the columns as follows:

df = df.reindex(columns=[col for col in df.columns if col != 'Target'] + ['Target'])
df

5. How to Circular Shift Lists in Python

We can apply the roll method to numpy arrays. It supports both directions and n steps. For example:

import numpy

x = numpy.arange(1, 6)
numpy.roll(x, 1)

array([5, 1, 2, 3, 4])

Or, if we want to shift 2 steps backward:

x = numpy.arange(1, 6)
numpy.roll(x, -2)

array([3, 4, 5, 1, 2])

6. Replace Values Based on Index in Pandas Dataframes

You can easily replace a value in pandas data frames by just specifying its column and its index.
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': ['A', 'B', 'C', 'D']})

Having the dataframe above, we will replace some of its values. We are using the loc function of pandas. The first argument is the index of the value we want to replace and the second is its column.

df.loc[0, "A"] = 20
df.loc[1, "B"] = "Billy"

The loc function also lets you set a range of indexes to be replaced as follows:

df.loc[0:2, "A"] = 100

7. How to Generate requirements.txt for Your Python Project Without Environments

When I'm working on a new Python project I just want to open the Jupyter notebook in a new folder and start working. After the project is done, sometimes we have to create a requirements.txt file that contains all the libraries we used in the project, so we can share it or deploy it on a server. This is annoying, because we would have to create an environment and then re-install the libraries we used in order to generate the requirements file for this project. Fortunately, there is a package called PIGAR that can generate the requirements file for your project automatically, without any new environments.

Installation

pip install pigar

Let's use it for a project. You can clone the dominant color repo and delete its requirements file. Then, open your terminal, head over to the project's folder, and run the following:

pigar

Simple as that. You should see that a new requirements.txt file has been generated with the libraries used for the project.

8. How to Generate Random Names

When we generate random data, sometimes there is a need to generate random names, like full names, first names and last names. We can achieve this with the names library. You can also specify the gender of the name.
Let's see some examples:

pip install names

import names

names.get_full_name()
'Clarissa Turner'

names.get_full_name(gender='male')
'Christopher Keller'

names.get_first_name()
'Donald'

names.get_first_name(gender='female')
'Diane'

names.get_last_name()
'Beauchamp'

9. How to Pass the Column Names with Pandas

Sometimes we get files without headers. Let's see how we can read a csv file with pandas, specifying that there are no headers and defining the column names ourselves. We will work with the fertility dataset obtained from UC Irvine. The txt file looks like this:
We can confirm it by running a simulation in R estimating the probability of the Normal(50, 6.079567) to exceed the value 60: set.seed(5) sims<-rnorm(10000000, 50, 6.079567 ) sum(sims>=60)/length(sims) [1] 0.0500667 As expected, the estimated probability for our process to exceed the value 60.
https://www.r-bloggers.com/2021/07/10-tips-and-tricks-for-data-scientists-vol-10/
CC-MAIN-2021-39
refinedweb
1,056
67.96
zongshen motorcycle parts importers US $0.4-5 50 Pieces (Min. Order) import thailand motorcycle parts,international clutch plate,p... US $0.13-0.2 1000 Pieces (Min. Order) low price butyl tube motorcycle manufacturer inner tube 4.00-... US $0.86-1.68 1000 Pieces (Min. Order) Import brand bearing 53306 motorcycle bearing US $1-99 1 Set (Min. Order) NACHI motorcycle bearing,F608Z bearing importer email,gasolin... US $0.1-10 1 Piece (Min. Order) china imports dirt bike motorcycle with eec /sport bike for s... US $300-500 1 Set (Min. Order) 3 wheeler Euope-popular BRI-C01 chinese motorcycle imports US $400-900 1 Piece (Min. Order) new product adult electric motorcycles import motorcycle 72V/... US $565-600 32 Pieces (Min. Order) cheap import motorcycles from china made in china US $200-300 1 Piece (Min. Order) Import mini gas motorcycles for sale manufacturers in china US $980-1150 1 Piece (Min. Order) Motorcycle big sprocket,spare parts motorcycle cd70,looking f... US $0.2-3.3 1000 Pieces (Min. Order) importing dirt cheap motorcross motorcycle made in China US $560-590 32 Units (Min. Order) Cheap import motorcycles US $800-1000 1 Set (Min. Order) PT70-A Durable Chongqing Made For Adult Cheap Import ... US $200-300 90 Units (Min. Order) MSX125 Wholesale China Import Pancake engine Motorcycle US $600-750 50 Sets (Min. Order) import motorcycles from china 125cc chopper motorbikes US $200-500 50 Pieces (Min. Order) Fashionable Cheap Import Motorcycles Selling Well US $1000-1250 32 Units (Min. Order) import bicycles from china/cheap import motorcycles/chopper ... US $700-2200 10 Units (Min. Order) Distinctive 2014 New Cheap Fashional import motorcycles from ... US $280-340 24 Pieces (Min. Order) Electric 250W cheap import motorcycle 50 Pieces (Min. Order) Made in China Alibaba Supplier 2014 New Design Cheap chinese... US $900-1200 38 Units (Min. Order) cargo tricycle with cabin/cheap import motorcycles/chopper ... US $580-860 27 Pieces (Min. 
Order) 150cc China Gas Cheap Import Motorcycles US $450-550 50 Units (Min. Order) 2014 China Kingswing import scooters electric motorcycles for... US $491-869 1 Set (Min. Order) Motorcycles manufacture cheap import motorcycles 150cc dirt b... US $750-790 30 Units (Min. Order) Sale Chinese Sport Bike Cheap Import Motorcycles US $500-800 10 Pieces (Min. Order) 2016 Chinese Popular Motorized Cargo Triciclos Para Adultos,... US $1210-1380 15 Units (Min. Order) China manufacturer import used motorcycles/zongshen ... US $1400-1600 17 Pieces (Min. Order) Importing three wheel motorcycle from Japan US $800-1600 10 Units (Min. Order) 2015 spring 5 wheel cargo motorcycle/chinese three wheel ... US $800-1500 20 Sets (Min. Order) Chongqing cargo use three wheel motorcycle 250cc tricycle chi... US $738-1300 1 Set (Min. Order) Import tricycle 3 wheel motorcycle manufacturers in china US $400-600
http://www.alibaba.com/showroom/imported-motorcycles.html
Problem adapting a C code to Qt

Flavio Mesquita (last edited by VRonin):

Hi everyone, I'm a beginner in programming and Qt, but since I liked the framework I'm trying to improve my skills and write my C++ code in it. I was given the task of writing a Ricker wavelet code and then plotting it. I divided it into two tasks: first make the Ricker code work, and once it is running, implement a way to plot it (I will use QCustomPlot for that). I took a C code and I'm trying to adapt it to Qt. Although it doesn't give any errors during compilation, it crashes when executed, with the following message:

Invalid parameter passed to C runtime function.
C:/Users/Flavio/Documents/qtTest/build-ricker2-Desktop_Qt_5_11_0_MinGW_32bit-Debug/debug/ricker2.exe exited with code 255

The code I'm supposed to translate is:

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

float *rickerwavelet(float fpeak, float dt, int *nwricker);

int main(int argc, char **argv)
{
    int i;
    float dt;
    float fpeak;
    float *wricker = NULL;
    int nwricker;

    fpeak = atof(argv[1]);
    dt = atof(argv[2]);
    wricker = rickerwavelet(fpeak, dt, &nwricker);

    /* show value of ricker wavelets */
    for (i = 0; i < nwricker; i++)
        printf("%i. %3.5f \n", i, wricker[i]);

    free(wricker);
    return (1);
}

/* ricker wavelet function, return an array ricker wavelets */
float *rickerwavelet(float fpeak, float dt, int *nwricker)
{
    int i, k;
    int nw;
    int nc;
    float pi;
    float nw1, alpha, beta;
    float *wricker = NULL;

    pi = 3.141592653589793;
    nw1 = 2.2 / fpeak / dt;
    nw = 2 * floor(nw1 / 2) + 1;
    nc = floor(nw / 2);
    wricker = (float *) calloc(nw, sizeof(float));
    for (i = 0; i < nw; i++)
    {
        k = i + 1;
        alpha = (nc - k + 1) * fpeak * dt * pi;
        beta = pow(alpha, 2.0);
        wricker[i] = (1 - (beta * 2)) * exp(-beta);
    }
    (*nwricker) = nw;
    return (wricker);
}
```

The code I wrote in Qt is:

```cpp
#include <QCoreApplication>
#include <qmath.h>
#include <stdio.h>
#include <stdlib.h>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    int i, k, nw, nc;
    double *wricker = NULL;
    int nwricker = 60;
    int wavelet_freq = 30;
    int polarity = 1;
    int sampling_rate = 0.004;
    float nw1, alpha, beta;
    const double pi = 3.141592653589793238460;

    nw1 = 2.2 / wavelet_freq / sampling_rate;
    nw = 2 * floor(nw1 / 2) + 1;
    nc = floor(nw / 2);
    wricker = (double *) calloc(nw, sizeof(double));
    for (i = 0; i < nw; i++)
    {
        k = i + 1;
        alpha = (nc - k + 1) * wavelet_freq * sampling_rate * pi;
        beta = pow(alpha, 2.0);
        wricker[i] = polarity * ((1 - (beta * 2)) * exp(-beta));
    };

    /* show value of ricker wavelets */
    for (i = 0; i < nwricker; i++)
    {
        qDebug() << i << wricker[i];
    };

    free(wricker);
    return a.exec();
}
```

Analytic expression: the amplitude A of the Ricker wavelet with peak frequency f at time t is computed like so:

A = (1 - 2 pi^2 f^2 t^2) e^(-pi^2 f^2 t^2)

A Python expression for it would be:

A = (1 - 2*pi**2*f**2*t**2) * exp(-pi**2*f**2*t**2)

which seems quite simple. Does anyone have any idea what is wrong in my code??? Thanks in advance.

aha_1980 (Lifetime Qt Champion):

Hi @Flavio-Mesquita, welcome to the forums.

"Although it doesn't give any errors during compilation, when executing it crashes"

The best hint I can give you is: use a debugger to find out
- where your program crashes and
- why your program crashes.

If you find where, but don't know why, ask again here.
But if you let others search for the errors, you have zero learning effect, and that doesn't help you in the long run.

Regards

Flavio Mesquita:

@aha_1980 Ok, I used the debugger, although I'm not sure if I used it right; I followed the instructions in the help manual. According to it, the problem occurs when passing the vectors to qDebug; it gives this message:

The inferior stopped because it received a signal from the operating system.
Signal name: SIGSEGV
Signal meaning: Segmentation fault

I'll search for more information on what this means. I used qDebug only with the intention of showing the data on a terminal; actually I want to plot the arrays wricker and i.

ambershark (Moderator):

@Flavio-Mesquita So now that you have it crashing (segfault), run a backtrace and it will show you the call stack. This will tell you where you crashed in your application. If you want further help, post the backtrace here.

jsulm (Lifetime Qt Champion):

@Flavio-Mesquita said in Problem adapting a C code to Qt:

qDebug() << i << wricker[i];

Is it in this place? If so, then most probably it is not related to qDebug but to wricker[i]: I guess you have fewer entries in wricker than nwricker.
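jsulm's hunch can be checked without a debugger. In the Qt version, `sampling_rate` is declared `int`, so the initializer `0.004` truncates to `0`, and the print loop runs to the hard-coded `nwricker = 60` regardless of how many samples were actually allocated. The following Python sketch (my own illustration, not code from the thread) mirrors the same arithmetic:

```python
import math

# In C++, `int sampling_rate = 0.004;` silently truncates toward zero.
sampling_rate = int(0.004)
assert sampling_rate == 0

wavelet_freq = 30
nwricker = 60  # hard-coded bound of the qDebug() print loop

# With sampling_rate == 0, nw1 = 2.2 / 30 / 0 divides by zero.
# C++ quietly yields +inf here (and undefined behavior once that is
# converted to int for calloc); Python raises, making the bug visible.
try:
    nw1 = 2.2 / wavelet_freq / sampling_rate
except ZeroDivisionError:
    print("division by zero: sampling_rate was truncated to 0")

# Even with the intended dt = 0.004 kept as a double, the allocated
# array is smaller than the print loop assumes:
dt = 0.004
nw1 = 2.2 / wavelet_freq / dt       # ~18.33
nw = 2 * math.floor(nw1 / 2) + 1    # 19 samples actually allocated
print(nw, "samples allocated, but the loop reads", nwricker)
```

Either fix alone (declaring the sampling rate as `double`, and looping to `nw` instead of `nwricker`) lines up with jsulm's diagnosis that `wricker` holds fewer entries than the loop reads.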
```cpp
#include <vector>
#include <cmath>
#include <iostream>
#include <QCommandLineParser>
#include <QCoreApplication>

std::vector<double> rickerwavelet(double fpeak, double dt)
{
    const double nw1 = 2.2 / fpeak / dt;
    const int nwricker = (2 * std::floor(nw1 / 2.0)) + 1;
    const int nc = std::floor(nwricker / 2.0);
    const double pi = std::atan(1.0) * 4.0;
    std::vector<double> wricker;
    wricker.reserve(nwricker);
    for (int i = 0; i < nwricker; ++i) {
        const double alpha = (nc - i + 2) * fpeak * dt * pi;
        const double beta = std::pow(alpha, 2.0);
        wricker.push_back((1 - (beta * 2)) * std::exp(-beta));
    }
    return wricker;
}

int main(int argc, char **argv)
{
    QCoreApplication a(argc, argv);
    QCommandLineParser parser;
    parser.setApplicationDescription("Ricker wavelet");
    parser.addHelpOption();
    parser.addPositionalArgument("fpeak", "Peak Frequency");
    parser.addPositionalArgument("dt", "Time");
    parser.process(a);
    const QStringList args = parser.positionalArguments();
    if (args.size() != 2) {
        std::cout << "Invalid Input!" << std::endl;
        parser.showHelp(1);
    }
    bool conversionOk;
    const double fpeak = args.first().toDouble(&conversionOk);
    if (!conversionOk) {
        std::cout << "Invalid Input!" << std::endl;
        parser.showHelp(1);
    }
    const double dt = args.last().toDouble(&conversionOk);
    if (!conversionOk) {
        std::cout << "Invalid Input!" << std::endl;
        parser.showHelp(1);
    }
    const std::vector<double> wricker = rickerwavelet(fpeak, dt);
    for (size_t i = 0; i < wricker.size(); ++i)
        std::cout << wricker[i] << std::endl;
    return 0;
}
```

Flavio Mesquita:

Thanks everyone for your help, the question was solved.

jsulm (Lifetime Qt Champion):

@Flavio-Mesquita Then please mark this thread as solved.
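As a cross-check outside Qt (not part of the thread), the same wavelet can be computed in a few lines of Python, following the indexing of the original C routine. With the question's parameters (fpeak = 30 Hz, dt = 0.004 s) it yields 19 samples with the peak value 1.0 at the center:

```python
import math

def ricker_wavelet(fpeak, dt):
    """Sampled Ricker wavelet, mirroring the C routine from the question."""
    nw1 = 2.2 / fpeak / dt
    nw = 2 * math.floor(nw1 / 2) + 1   # odd number of samples
    nc = math.floor(nw / 2)            # center index
    w = []
    for i in range(nw):
        k = i + 1
        alpha = (nc - k + 1) * fpeak * dt * math.pi
        beta = alpha ** 2
        w.append((1 - 2 * beta) * math.exp(-beta))
    return w

w = ricker_wavelet(30.0, 0.004)
print(len(w))          # 19
print(w[len(w) // 2])  # 1.0 (alpha is 0 at the center sample)
```

The center-sample value and the symmetry of the wavelet (w[nc - j] == w[nc + j]) are handy sanity assertions before worrying about the plotting side.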
https://forum.qt.io/topic/91344/problem-adapting-a-c-code-to-qt/1
A while ago I was looking for a Calico Helm chart, and there were some deployment files floating around the web using kubectl etc., but I really like to automate things with the press of a button. So here it is: a Calico Helm chart that needs some love, in case you are into Kubernetes.

mrzobot / calico-helm-chart: Helm Chart for Calico

Calico Helm Chart

I took the AWS/EKS Calico installation file (which you can find here) and split it into a few files that make up the whole Helm chart. I have not worked much on creating the values.yaml file and templating it, so if you have suggestions or time, feel free to update it. I just used the Helm chart to test a few things with automated deployment.

Installation

Clone this repository and then run:

```
helm install . --name=calico --namespace=kube-system
```

If you need to reference Tiller, just add --tiller-namespace=NamespaceWhereTillerIsInstalled

Notes

This Helm chart is really straightforward, but feel free to fork it or make changes. I'll see to making updates to the values file and start dissecting the YAML infrastructure within the template.
https://dev.to/joehobot/calico-helm-chart-for-kubernetes-5127
It appears that the call to Super() (to move into supervisor mode) executes correctly, but the executable then bombs out on executing the inline assembly to disable all interrupts. It's as if the machine hasn't entered supervisor mode at all. In order to try and get to the bottom of the issue, I've created a minimal test case based upon the ctest.c in the bigbrownbuild repo:

```c
//======================================================================================================================
// BrownELF GCC example: C++ startup/shutdown tests
//======================================================================================================================

// ---------------------------------------------------------------------------------------------------------------------
// system headers

#include <mint/sysbind.h>
#include <stdio.h>
#include <stdlib.h>     // for atexit()
#include <stdarg.h>     // for printf va_args etc.

// force GCC to keep functions that look like they might be dead-stripped due to non-use
#define USED __attribute__((used))

int main(int argc, char **argv)
{
    Super(0);

    __asm__ __volatile__ (
        "move.w #0x2700,%%sr;"
        : : : "cc"
    );

    while (1 == 1) {}
}
```
http://www.atari-forum.com/viewtopic.php?f=70&t=32298&p=327563&sid=e9278e6d9c0752c74b9d966e6f865c5a
CC-MAIN-2018-34
refinedweb
237
66.54