text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
On Tue, 16 Aug 2005, Walton A. Green wrote: Tried to compile ddrescue 1.0 and 1.0.1 on a Mac 10.2.3 with gcc 3.1 and got the following errors (output from 1.0)...I know this is an old system, but I don't have immediate access to a more up-to-date installation. Is there a reasonably quick fix, or should I find a newer computer? You would need newer C++ libraries, not a newer compiler. (They may come together though.)You would need newer C++ libraries, not a newer compiler. (They may come together though.) ddrescue.cc:185: `llabs' undeclared in namespace `std' ddrescue.cc:187: `snprintf' undeclared in namespace `std' Option 1:Both of those are in the format_num function which is not critical. Just have it output the raw number, not the pretty format - it will mess up the output formating, but it will work. Change: for( int i = 0; i < 8 && std::llabs( num ) > std::llabs( max ); ++i ) { num /= factor; p = prefix[i]; } std::snprintf( buf, sizeof( buf ), "%lld %s", num, p ); return buf; To: std::sprintf( buf, "%lld", num); And change (in the same function): static char buf[16]; To: static char buf[21];PS. This is not a tested change, but if it compiles and look OK when run, it should be fine. Option 2:Change std::llabs to just llabs, and try a recompile. If the llabs error then goes away, change the size of buf to 25, as above, and rename snprintf to sprintf. If the llabs error doesn't go away, go with option 1. -Ariel
http://lists.gnu.org/archive/html/bug-ddrescue/2005-08/msg00001.html
CC-MAIN-2014-42
refinedweb
271
84.47
Ticket #1139 (closed bug: worksforme) installation bug python 2.7.2 blokker Description Python 2.7.2 (default, Jun 24 2011, 12:03:25) [GCC 4.2.4 (Ubuntu 4.2.4-1ubuntu4)] on linux2 Type "help", "copyright", "credits" or "license" for more information. from Orange import OWGUI Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name OWGUI import OWGUI Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/site-packages/Orange/OrangeWidgets/OWGUI.py", line 1820, in <module> class BarItemDelegate(QStyledItemDelegate): NameError: name 'QStyledItemDelegate' is not defined Change History comment:2 Changed 2 years ago by ales Which version of Qt4 library do you have. Orange needs at least 4.4 (4.6 and later are recommended). From a quick google for the python's version string it seems you are using hardy heron, and it seems to have Qt 4.3. Do you have newer Qt library's in the PPA's? Doesn't appear to be built? Strange
http://orange.biolab.si/trac/ticket/1139
CC-MAIN-2014-10
refinedweb
175
67.65
(Note: If you get compiler errors that you don't understand, be sure to consult Google Mock Doctor.) When you write a prototype or test, often it's not feasible or wise to rely on real objects entirely. A mock object implements the same interface as a real object (so it can be used as one), but lets you specify at run time how it will be used and what it should do (which methods will be called? in which order? how many times? with what arguments? what will they return? etc). Note: It is easy to confuse the term fake objects with mock objects. Fakes and mocks actually mean very different things in the Test-Driven Development (TDD) community: If all this seems too abstract for you, don't worry - the most important thing to remember is that a mock allows you to check the interaction between itself and code that uses it. The difference between fakes and mocks will become much clearer once you start to use mocks. Google C++ Mocking Framework (or Google Mock for short) is a library (sometimes we also call it a “framework” to make it sound cool) for creating mock classes and using them. It does to C++ what jMock and EasyMock do to Java. Using Google Mock involves three basic steps: While mock objects help you remove unnecessary dependencies in tests and make them fast and reliable, using mocks manually in C++ is hard: In contrast, Java and Python programmers have some fine mock frameworks, which automate the creation of mocks. As a result, mocking is a proven effective technique and widely adopted practice in those communities. Having the right tool absolutely makes the difference. Google Mock was built to help C++ programmers. It was inspired by jMock and EasyMock, but designed with C++'s specifics in mind. It is your friend if any of the following problems is bothering you: We encourage you to use Google Mock as: Using Google Mock is easy! Inside your C++ source file, just #include "gtest/gtest.h" and "gmock/gmock.h", and you are ready to go. Let‘s look at an example. Suppose you are developing a graphics program that relies on a LOGO-like API for drawing. How would you test that it does the right thing? Well, you can run it and compare the screen with a golden screen snapshot, but let’s admit it: tests like this are expensive to run and fragile (What if you just upgraded to a shiny new graphics card that has better anti-aliasing? Suddenly you have to update all your golden images.). It would be too painful if all your tests are like this. Fortunately, you learned about Dependency Injection and know the right thing to do: instead of having your application talk to the drawing API directly, wrap the API in an interface (say, Turtle) and code to that interface: class Turtle { ... virtual ~Turtle() {} virtual void PenUp() = 0; virtual void PenDown() = 0; virtual void Forward(int distance) = 0; virtual void Turn(int degrees) = 0; virtual void GoTo(int x, int y) = 0; virtual int GetX() const = 0; virtual int GetY() const = 0; }; (Note that the destructor of Turtle must be virtual, as is the case for all classes you intend to inherit from - otherwise the destructor of the derived class will not be called when you delete an object through a base pointer, and you'll get corrupted program states like memory leaks.) You can control whether the turtle's movement will leave a trace using PenUp() and PenDown(), and control its movement using Forward(), Turn(), and GoTo(). Finally, GetX() and GetY() tell you the current position of the turtle. Your program will normally use a real implementation of this interface. In tests, you can use a mock. If you are lucky, the mocks you need to use have already been implemented by some nice people. If, however, you find yourself in the position to write a mock class, relax - Google Mock turns this task into a fun game! (Well, almost.) Using the Turtle interface as example, here are the simple steps you need to follow: MockTurtlefrom Turtle. Turtle(while it‘s possible to mock non-virtual methods using templates, it’s much more involved). Count how many arguments it has. public:section of the child class, write MOCK_METHODn();(or MOCK_CONST_METHODn();if you are mocking a constmethod), where nis the number of the arguments; if you counted wrong, shame on you, and a compiler error will tell you so. After the process, you should have something like: #include "gmock/gmock.h" // Brings in Google Mock. class MockTurtle : public Turtle { public: ... MOCK_METHOD0(PenUp, void()); MOCK_METHOD0(PenDown, void()); MOCK_METHOD1(Forward, void(int distance)); MOCK_METHOD1(Turn, void(int degrees)); MOCK_METHOD2(GoTo, void(int x, int y)); MOCK_CONST_METHOD0(GetX, int()); MOCK_CONST_METHOD0(GetY, int()); }; You don‘t need to define these mock methods somewhere else - the MOCK_METHOD* macros will generate the definitions for you. It’s that simple! Once you get the hang of it, you can pump out mock classes faster than your source-control system can handle your check-ins. Tip: If even this is too much work for you, you‘ll find the gmock_gen.py tool in Google Mock’s scripts/generator/ directory (courtesy of the cppclean project) useful. This command-line tool requires that you have Python 2.4 installed. You give it a C++ file and the name of an abstract class defined in it, and it will print the definition of the mock class for you. Due to the complexity of the C++ language, this script may not always work, but it can be quite handy when it does. For more details, read the user documentation. When you define a mock class, you need to decide where to put its definition. Some people put it in a *_test.cc. This is fine when the interface being mocked (say, Foo) is owned by the same person or team. Otherwise, when the owner of Foo changes it, your test could break. (You can't really expect Foo's maintainer to fix every test that uses Foo, can you?) So, the rule of thumb is: if you need to mock Foo and it's owned by others, define the mock class in Foo's package (better, in a testing sub-package such that you can clearly separate production code and testing utilities), and put it in a mock_foo.h. Then everyone can reference mock_foo.h from their tests. If Foo ever changes, there is only one copy of MockFoo to change, and only tests that depend on the changed methods need to be fixed. Another way to do it: you can introduce a thin layer FooAdaptor on top of Foo and code to this new interface. Since you own FooAdaptor, you can absorb changes in Foo much more easily. While this is more work initially, carefully choosing the adaptor interface can make your code easier to write and more readable (a net win in the long run), as you can choose FooAdaptor to fit your specific domain much better than Foo does. Once you have a mock class, using it is easy. The typical work flow is: testingnamespace such that you can use them unqualified (You only have to do it once per file. Remember that namespaces are a good idea and good for your health.). Here's an example: #include "path/to/mock-turtle.h" #include "gmock/gmock.h" #include "gtest/gtest.h" using ::testing::AtLeast; // #1 TEST(PainterTest, CanDrawSomething) { MockTurtle turtle; // #2 EXPECT_CALL(turtle, PenDown()) // #3 .Times(AtLeast(1)); Painter painter(&turtle); // #4 EXPECT_TRUE(painter.DrawCircle(0, 0, 10)); } // #5 int main(int argc, char** argv) { // The following line must be executed to initialize Google Mock // (and Google Test) before running the tests. ::testing::InitGoogleMock(&argc, argv); return RUN_ALL_TESTS(); } As you might have guessed, this test checks that PenDown() is called at least once. If the painter object didn't call this method, your test will fail with a message like this: path/to/my_test.cc:119: Failure Actual function call count doesn't match this expectation: Actually: never called; Expected: called at least once. Tip 1: If you run the test from an Emacs buffer, you can hit <Enter> on the line number displayed in the error message to jump right to the failed expectation. Tip 2: If your mock objects are never deleted, the final verification won‘t happen. Therefore it’s a good idea to use a heap leak checker in your tests when you allocate mocks on the heap. Important note: Google Mock requires expectations to be set before the mock functions are called, otherwise the behavior is undefined. In particular, you mustn't interleave EXPECT_CALL()s and calls to the mock functions. This means EXPECT_CALL() should be read as expecting that a call will occur in the future, not that a call has occurred. Why does Google Mock work like that? Well, specifying the expectation beforehand allows Google Mock to report a violation as soon as it arises, when the context (stack trace, etc) is still available. This makes debugging much easier. Admittedly, this test is contrived and doesn't do much. You can easily achieve the same effect without using Google Mock. However, as we shall reveal soon, Google Mock allows you to do much more with the mocks. If you want to use something other than Google Test (e.g. CppUnit or CxxTest) as your testing framework, just change the main() function in the previous section to: int main(int argc, char** argv) { // The following line causes Google Mock to throw an exception on failure, // which will be interpreted by your testing framework as a test failure. ::testing::GTEST_FLAG(throw_on_failure) = true; ::testing::InitGoogleMock(&argc, argv); ... whatever your testing framework requires ... } This approach has a catch: it makes Google Mock throw an exception from a mock object‘s destructor sometimes. With some compilers, this sometimes causes the test program to crash. You’ll still be able to notice that the test has failed, but it's not a graceful failure. A better solution is to use Google Test‘s event listener API to report a test failure to your testing framework properly. You’ll need to implement the OnTestPartResult() method of the event listener interface, but it should be straightforward. If this turns out to be too much work, we suggest that you stick with Google Test, which works with Google Mock seamlessly (in fact, it is technically part of Google Mock.). If there is a reason that you cannot use Google Test, please let us know. The key to using a mock object successfully is to set the right expectations on it. If you set the expectations too strict, your test will fail as the result of unrelated changes. If you set them too loose, bugs can slip through. You want to do it just right such that your test can catch exactly the kind of bugs you intend it to catch. Google Mock provides the necessary means for you to do it “just right.” In Google Mock we use the EXPECT_CALL() macro to set an expectation on a mock method. The general syntax is: EXPECT_CALL(mock_object, method(matchers)) .Times(cardinality) .WillOnce(action) .WillRepeatedly(action); The macro has two arguments: first the mock object, and then the method and its arguments. Note that the two are separated by a comma ( ,), not a period ( .). (Why using a comma? The answer is that it was necessary for technical reasons.) The macro can be followed by some optional clauses that provide more information about the expectation. We'll discuss how each clause works in the coming sections. This syntax is designed to make an expectation read like English. For example, you can probably guess that using ::testing::Return;... EXPECT_CALL(turtle, GetX()) .Times(5) .WillOnce(Return(100)) .WillOnce(Return(150)) .WillRepeatedly(Return(200)); says that the turtle object's GetX() method will be called five times, it will return 100 the first time, 150 the second time, and then 200 every time. Some people like to call this style of syntax a Domain-Specific Language (DSL). Note: Why do we use a macro to do this? It serves two purposes: first it makes expectations easily identifiable (either by grep or by a human reader), and second it allows Google Mock to include the source file location of a failed expectation in messages, making debugging easier. When a mock function takes arguments, we must specify what arguments we are expecting; for example: // Expects the turtle to move forward by 100 units. EXPECT_CALL(turtle, Forward(100)); Sometimes you may not want to be too specific (Remember that talk about tests being too rigid? Over specification leads to brittle tests and obscures the intent of tests. Therefore we encourage you to specify only what‘s necessary - no more, no less.). If you care to check that Forward() will be called but aren’t interested in its actual argument, write _ as the argument, which means “anything goes”: using ::testing::_; ... // Expects the turtle to move forward. EXPECT_CALL(turtle, Forward(_)); _ is an instance of what we call matchers. A matcher is like a predicate and can test whether an argument is what we'd expect. You can use a matcher inside EXPECT_CALL() wherever a function argument is expected. A list of built-in matchers can be found in the CheatSheet. For example, here's the Ge (greater than or equal) matcher: using ::testing::Ge;... EXPECT_CALL(turtle, Forward(Ge(100))); This checks that the turtle will be told to go forward by at least 100 units. The first clause we can specify following an EXPECT_CALL() is Times(). We call its argument a cardinality as it tells how many times the call should occur. It allows us to repeat an expectation many times without actually writing it as many times. More importantly, a cardinality can be “fuzzy”, just like a matcher can be. This allows a user to express the intent of a test exactly. An interesting special case is when we say Times(0). You may have guessed - it means that the function shouldn't be called with the given arguments at all, and Google Mock will report a Google Test failure whenever the function is (wrongfully) called. We've seen AtLeast(n) as an example of fuzzy cardinalities earlier. For the list of built-in cardinalities you can use, see the CheatSheet. The Times() clause can be omitted. If you omit Times(), Google Mock will infer the cardinality for you. The rules are easy to remember: WillOnce()nor WillRepeatedly()is in the EXPECT_CALL(), the inferred cardinality is Times(1). n WillOnce()'s but no WillRepeatedly(), where n>= 1, the cardinality is Times(n). n WillOnce()'s and one WillRepeatedly(), where n>= 0, the cardinality is Times(AtLeast(n)). Quick quiz: what do you think will happen if a function is expected to be called twice but actually called four times? Remember that a mock object doesn't really have a working implementation? We as users have to tell it what to do when a method is invoked. This is easy in Google Mock. First, if the return type of a mock function is a built-in type or a pointer, the function has a default action (a void function will just return, a bool function will return false, and other functions will return 0). In addition, in C++ 11 and above, a mock function whose return type is default-constructible (i.e. has a default constructor) has a default action of returning a default-constructed value. If you don't say anything, this behavior will be used. Second, if a mock function doesn‘t have a default action, or the default action doesn’t suit you, you can specify the action to be taken each time the expectation matches using a series of WillOnce() clauses followed by an optional WillRepeatedly(). For example, using ::testing::Return;... EXPECT_CALL(turtle, GetX()) .WillOnce(Return(100)) .WillOnce(Return(200)) .WillOnce(Return(300)); This says that turtle.GetX() will be called exactly three times (Google Mock inferred this from how many WillOnce() clauses we‘ve written, since we didn’t explicitly write Times()), and will return 100, 200, and 300 respectively. using ::testing::Return;... EXPECT_CALL(turtle, GetY()) .WillOnce(Return(100)) .WillOnce(Return(200)) .WillRepeatedly(Return(300)); says that turtle.GetY() will be called at least twice (Google Mock knows this as we've written two WillOnce() clauses and a WillRepeatedly() while having no explicit Times()), will return 100 the first time, 200 the second time, and 300 from the third time on. Of course, if you explicitly write a Times(), Google Mock will not try to infer the cardinality itself. What if the number you specified is larger than there are WillOnce() clauses? Well, after all WillOnce()s are used up, Google Mock will do the default action for the function every time (unless, of course, you have a WillRepeatedly().). What can we do inside WillOnce() besides Return()? You can return a reference using ReturnRef(variable), or invoke a pre-defined function, among others. Important note: The EXPECT_CALL() statement evaluates the action clause only once, even though the action may be performed many times. Therefore you must be careful about side effects. The following may not do what you want: int n = 100; EXPECT_CALL(turtle, GetX()) .Times(4) .WillRepeatedly(Return(n++)); Instead of returning 100, 101, 102, ..., consecutively, this mock function will always return 100 as n++ is only evaluated once. Similarly, Return(new Foo) will create a new Foo object when the EXPECT_CALL() is executed, and will return the same pointer every time. If you want the side effect to happen every time, you need to define a custom action, which we'll teach in the CookBook. Time for another quiz! What do you think the following means? using ::testing::Return;... EXPECT_CALL(turtle, GetY()) .Times(4) .WillOnce(Return(100)); Obviously turtle.GetY() is expected to be called four times. But if you think it will return 100 every time, think twice! Remember that one WillOnce() clause will be consumed each time the function is invoked and the default action will be taken afterwards. So the right answer is that turtle.GetY() will return 100 the first time, but return 0 from the second time on, as returning 0 is the default action for int functions. So far we‘ve only shown examples where you have a single expectation. More realistically, you’re going to specify expectations on multiple mock methods, which may be from multiple mock objects. By default, when a mock method is invoked, Google Mock will search the expectations in the reverse order they are defined, and stop when an active expectation that matches the arguments is found (you can think of it as “newer rules override older ones.”). If the matching expectation cannot take any more calls, you will get an upper-bound-violated failure. Here's an example: using ::testing::_;... EXPECT_CALL(turtle, Forward(_)); // #1 EXPECT_CALL(turtle, Forward(10)) // #2 .Times(2); If Forward(10) is called three times in a row, the third time it will be an error, as the last matching expectation (#2) has been saturated. If, however, the third Forward(10) call is replaced by Forward(20), then it would be OK, as now #1 will be the matching expectation. Side note: Why does Google Mock search for a match in the reverse order of the expectations? The reason is that this allows a user to set up the default expectations in a mock object‘s constructor or the test fixture’s set-up phase and then customize the mock by writing more specific expectations in the test body. So, if you have two expectations on the same method, you want to put the one with more specific matchers after the other, or the more specific rule would be shadowed by the more general one that comes after it. By default, an expectation can match a call even though an earlier expectation hasn‘t been satisfied. In other words, the calls don’t have to occur in the order the expectations are specified. Sometimes, you may want all the expected calls to occur in a strict order. To say this in Google Mock is easy: using ::testing::InSequence;... TEST(FooTest, DrawsLineSegment) { ... { InSequence dummy; EXPECT_CALL(turtle, PenDown()); EXPECT_CALL(turtle, Forward(100)); EXPECT_CALL(turtle, PenUp()); } Foo(); } By creating an object of type InSequence, all expectations in its scope are put into a sequence and have to occur sequentially. Since we are just relying on the constructor and destructor of this object to do the actual work, its name is really irrelevant. In this example, we test that Foo() calls the three expected functions in the order as written. If a call is made out-of-order, it will be an error. (What if you care about the relative order of some of the calls, but not all of them? Can you specify an arbitrary partial order? The answer is ... yes! If you are impatient, the details can be found in the CookBook.) Now let's do a quick quiz to see how well you can use this mock stuff already. How would you test that the turtle is asked to go to the origin exactly twice (you want to ignore any other instructions it receives)? After you‘ve come up with your answer, take a look at ours and compare notes (solve it yourself first - don’t cheat!): using ::testing::_;... EXPECT_CALL(turtle, GoTo(_, _)) // #1 .Times(AnyNumber()); EXPECT_CALL(turtle, GoTo(0, 0)) // #2 .Times(2); Suppose turtle.GoTo(0, 0) is called three times. In the third time, Google Mock will see that the arguments match expectation #2 (remember that we always pick the last matching expectation). Now, since we said that there should be only two such calls, Google Mock will report an error immediately. This is basically what we've told you in the “Using Multiple Expectations” section above. This example shows that expectations in Google Mock are “sticky” by default, in the sense that they remain active even after we have reached their invocation upper bounds. This is an important rule to remember, as it affects the meaning of the spec, and is different to how it‘s done in many other mocking frameworks (Why’d we do that? Because we think our rule makes the common cases easier to express and understand.). Simple? Let‘s see if you’ve really understood it: what does the following code say? using ::testing::Return; ... for (int i = n; i > 0; i--) { EXPECT_CALL(turtle, GetX()) .WillOnce(Return(10*i)); } If you think it says that turtle.GetX() will be called n times and will return 10, 20, 30, ..., consecutively, think twice! The problem is that, as we said, expectations are sticky. So, the second time turtle.GetX() is called, the last (latest) EXPECT_CALL() statement will match, and will immediately lead to an “upper bound exceeded” error - this piece of code is not very useful! One correct way of saying that turtle.GetX() will return 10, 20, 30, ..., is to explicitly say that the expectations are not sticky. In other words, they should retire as soon as they are saturated: using ::testing::Return; ... for (int i = n; i > 0; i--) { EXPECT_CALL(turtle, GetX()) .WillOnce(Return(10*i)) .RetiresOnSaturation(); } And, there's a better way to do it: in this case, we expect the calls to occur in a specific order, and we line up the actions to match the order. Since the order is important here, we should make it explicit using a sequence: using ::testing::InSequence; using ::testing::Return; ... { InSequence s; for (int i = 1; i <= n; i++) { EXPECT_CALL(turtle, GetX()) .WillOnce(Return(10*i)) .RetiresOnSaturation(); } } By the way, the other situation where an expectation may not be sticky is when it's in a sequence - as soon as another expectation that comes after it in the sequence has been used, it automatically retires (and will never be used to match any call). A mock object may have many methods, and not all of them are that interesting. For example, in some tests we may not care about how many times GetX() and GetY() get called. In Google Mock, if you are not interested in a method, just don‘t say anything about it. If a call to this method occurs, you’ll see a warning in the test output, but it won't be a failure. Congratulations! You‘ve learned enough about Google Mock to start using it. Now, you might want to join the googlemock discussion group and actually write some tests using Google Mock - it will be fun. Hey, it may even be addictive - you’ve been warned. Then, if you feel like increasing your mock quotient, you should move on to the CookBook. You can learn many advanced features of Google Mock there -- and advance your level of enjoyment and testing bliss.
https://chromium.googlesource.com/external/github.com/google/googletest/+/release-1.8.0/googlemock/docs/ForDummies.md
CC-MAIN-2022-05
refinedweb
4,167
63.29
I have recently become very interested in the area of genetic algorithms and Ant Colony Optimization techniques. I was determined to write a complete program demonstrating these two techniques. In particular I wanted to compare the efficiency of these two approaches in the area of finding solutions to the Traveling Salesman Problem (TSP). Buckland (2002, pp. 118) succinctly summarizes the problem as, “Given a collection of cities, the traveling salesman must determine the shortest route that will enable him to visit each city precisely once and then return back to his starting point.” He goes on to explain that this is an example of what mathematicians call NP-Complete problems. As more cities are added, the computational power required to solve the problem increases exponentially. An algorithm implemented on a computer that solves the TSP for fifty cities would require an increase in computer power of a thousand-fold just to add an additional ten cities. The following table demonstrated the problem: Clearly, a "brute force" way becomes impossible for a large number of cities and alternate algorithms need to be employed if a solution is to be found for this problem. The algorithm consists of the following fundamental steps: I am not going to discuss the details of genetic algorithms, as these are better explained elsewhere, apart from discussing the mechanisms used to produce valid crossover operators and Roulette Wheel Selection. Each chromosome is encoded as a possible tour. For example, a tour of five cities might be encoded as 3,4,0,1,2. A difficulty with the TSP is that a simple crossover will not work. Consider the following situation where crossover occurs at position 3. A problem that has occurred here is that Child 1 has visited city 1 two times, which is not allowed, and Child 2 has not visited city 1 at all. A defacement encoding must be used in order to make sure that only valid tours are produced. A well-known and perhaps the simplest to understand crossover operator is called the Partially Matched Crossover. Buckland (2002, pp. 130-132) explains this technique as follows:. Roulette Wheel Selection is a technique of choosing members from the population of chromosomes in a way that is proportional to their fitness. Buckland illustrates, “Imagine that the population’s total fitness score is represented by a pie chart or roulette wheel. Now you assign a slice of the wheel to each member of the population. The size of the slice is proportional to that chromosome’s fitness score: the fitter a member is, the bigger the slice of pie it gets. Now, to choose a chromosome, all you have to do is spin the wheel, toss in the ball and grab the chromosome that the ball stops on.” Buckland (2002, pp. 100) This algorithm is inspired by observation of real ants. Individually, each ant is blind, frail and almost insignificant. Yet, by being able to co-operate with each other, the colony of ants demonstrates complex behavior. One of these is the ability to find the closest route to a food source or some other interesting landmark. This is done by laying down special chemicals called "pheromones." As more ants use a particular trail, the pheromone concentration on it increases, hence attracting more ants. In our example, an artificial ant is placed randomly in each city and, during each iteration, chooses the next city to go to. This choice is governed by the following formula, as described by Dorigo (1997): Each ant located at city i hops to a city j selected among the cities that have not yet been visited according to the probability: where: is the probability that ant k in city i will go to city j. is the set of cities that have not yet been visited by ant k in city i. is the relative importance of the pheromone trail. is the relative importance of the distance between cities. Each ant located at city i hops to a city j selected among the cities that have not yet been visited according to the probability: where: Therefore the probability that a city is chosen is a function of how close the city is and how much pheromone already exists on that trail. It is further possible to determine which of these has a larger weight by tweaking with the and parameters. Once a tour has been completed (i.e. each city has been visited exactly once by the ant), pheromone evaporation the edges is calculated. Then each ant deposits pheromone on the complete tour by a quantity which is calculated from the following formula (Dorigo 1991): if , where: multiplies the pheromone concentration on the edge between cities i and j by p(RHO), which is called the "evaporation constant." This value can be set between 0 and 1. The pheromone evaporates more rapidly for lower values. is the amount of pheromone an ant k deposits on an edge, as defined by which is the length of the tour created by this ant. Intuitively, short tours will result in higher levels of pheromone deposited on the edges. if , where: At the heart of the application lies the two algorithms which are used to solve the traveling salesman problem. Each algorithm is brought into a view by selecting it from a toolbar of the main frame. A view is a window which is presented to the user. The genetic algorithm view is shown below: Each view is divided into two horizontal sections. Each button is self-explanatory. The Start button is used to start the application and the Stop button is used to stop the simulation. The Reset button resets the simulation while the Step button is used to step through the simulation. The text information shows the following information: Graph of the above information: The blue line is a plot of Epoch vs. the shortest distance found by the algorithm to date. The green line represents the shortest possible tour (optimal solution). When the two lines converge, a solution has been found. I choose C++ as the language to program this application in. A compiled program will always be faster than an interpreted program. Both of the algorithms are computationally quite expensive and so is the process of visualization. Potentially C would be faster still, but by choosing this language the benefits of object orientation are lost. Other possible languages were Delphi, Visual Basic and C#, but I don’t have enough experience in any of these languages to attempt a project of this size. I chose to use the Microsoft Foundation Classes (MFC) application framework. The benefits of using this are: One of the main disadvantages is that it takes a long time to understand the complexity of the underlying model. Also, once the developer tries to move away from wizard-generated code, it is very easy to get lost in the code with the debugger. The project was developed and compiled using Visual Studio .NET Academic Version 7.1.3088. The code used to implement the genetic algorithm is based on the work of Mat Buckland (2002) in his book "AI Techniques for Game Programming." The code for the ACO algorithm is partially based on the work of M Jones (2003) in his book "AI Application Programming." Several other code sources were also used: CSideBannerWnd Initially I thought that the document would hold information common to both algorithms, including the number of cities, city position and shortest possible tour. However, as the project progressed, I realized that each algorithm is sufficiently different to warrant its own data structure. What was needed was a way to switch between different views (derived from a common ancestor). “Sometimes the way a document is being visualized needs to be significantly changed. For example, in PowerPoint, you can see the slides in a WYSIWYG form or view only the text in a text edit window. In the MFC world, one can find an amazingly large quantity of programs that implement this behavior, defining one CView descendant and making it responsible for all visualization changes. This path has several disadvantages: CView The author then goes on to explain how this is possible and gives an example of code that defines different Views and how to switch between them. The following sequence diagram shows the basic interaction between the CACOView and CAntSystem classes. CACOView CAntSystem The following sequence diagram shows the basic interaction between the CGAView and CGASystem classes. CGAView CGASystem I next turned my attention to the ACO and looked at how certain parameters in the algorithm affect performance. Parameters used here are City No: 40, alpha: 1, beta: 1 and rho: 0.6. These findings suggest that it might be possible to write a genetic algorithm that would optimize the parameters. This might make an interesting project somewhere down the track. This has been both a rewarding and difficult project. I previously had little knowledge of MFC programming and thought that it would be relatively easy to master. It turns out that I was wrong and it took me a very long time to get the program up and running. Despite the steep learning curve, I was thrilled to actually produce a working program and learned a lot along the way about genetic algorithms and ant colony optimization algorithms. I am very grateful for all the people who have contributed to this fantastic site. I have found many of the articles very useful in this project. Keep up all the good work, guys. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) #include "Chart.h" #include "MemDC.h" #include "Utils.h" // CANN_TRAINDlg dialog class CANN_TRAINDlg : public CDialog CChart* m_pChart; void CANN_TRAINDlg::OnBnTrain_ANN() { // TODO: Add your control notification handler code here m_pChart->Create(WS_VISIBLE|WS_CHILD,m_rectCln,pwnd,12000); } General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/5436/Genetic-and-Ant-Colony-Optimization-Algorithms?msg=4028678
CC-MAIN-2014-42
refinedweb
1,670
61.87
Smart Tags: Dumb Technology? (2/4) - exploring XML Smart Tags: Dumb Technology? Smart Tag Type Declaration and Usage Once you have declared a smart tag namespace URI, you must declare at least one smart tag type, which includes properties such as the tag name, the associated namespace URI, and the URL from where to download the accompanying smart tag DLL or smart tag XML list description file if it isn't already installed on the user's computer. I am waiting for the first Smart Tag Virus DLL to appear... Here is a sample smart tag type declaration: <o:SmartTagType </o:SmartTagType> Smart tag type declarations are placed in the Web page's HTML after the TITLE element if present, or after the HEAD element's opening tag. Smart Tag OBJECT Element Insertion To enable the Smart Tag Options button to display on the Web page as appropriate, you must add the following OBJECT element to the Web page's HTML, preferably in the HEAD element, as follows: <OBJECT classid ="clsid:38481807-CA0E-42D2-BF39-B33AF135CC4D" id=ieooui> </OBJECT> In the OBJECT element, the classid and id attributes must appear exactly as shown. Smart Tag Element Behavior Declaration In Internet Explorer 5 and above, Dynamic HTML (DHTML) behaviors are used to enhance the default behavior of specific elements. Office XP provides a DLL that associates smart tag namespace URIs to DHTML behaviors, and this association is required in your Web page's HTML code to properly activate the smart tag's shortcut menu. For example, the following DHTML behavior associates the Smart Tag Options button in the OBJECT element to any element that begins with myterms: <STYLE> myterms\:*{behavior:url(#ieooui)} </STYLE> In this example, myterms maps to the alias declared earlier, and ieooui maps to the id attribute of the OBJECT element. Making Terms Smart-Tag Aware One final step remains to adding a smart tag to a Web page: you must designate which terms on the Web page will display the Smart Tag Options button. To designate these terms, enclose each term on the Web page within a smart tag alias/tag name element. This element will take the form of the smart tag alias, followed by a colon, followed by the tag name. For example, if a smart tag's alias is myns, the tag name is myterms, and the smart-tag actionable term is Term, then the smart tag alias/tag name element would be myns:myterms and would be coded in HTML as follows: <myns:myterms>Term</myns:myterms>. An explanation of the inner workings follows. Produced by Michael Claßen URL: Created: Aug 29, 2001 Revised: Aug 29, 2001
http://www.webreference.com/xml/column38/2.html
CC-MAIN-2015-14
refinedweb
444
51.82
102283/how-to-remove-specific-characters-from-a-string-in-python I'm trying to remove specific characters from a string using Python. This is the code I'm using right now. Unfortunately it appears to do nothing to the string. for char in line: if char in " ?.!/;:": line.replace(char,'') How do I do this properly? Strings in Python are immutable (can't be changed). Because of this, the effect of line.replace(...) is just to create a new string, rather than changing the old one. You need to rebind (assign) it to line in order to have that variable take the new value, with those characters removed. Also, the way you are doing it is going to be kind of slow, relatively. It's also likely to be a bit confusing to experienced pythonators, who will see a doubly-nested structure and think for a moment that something more complicated is going on. Starting in Python 2.6 and newer Python 2.x versions *, you can instead use str.translate, (but read on for Python 3 differences): line = line.translate(None, '!@#$') or regular expression replacement with re.sub import re line = re.sub('[!@#$]', '', line) The characters enclosed in brackets constitute a character class. Any characters in line which are in that class are replaced with the second parameter to sub: an empty string. In Python 3, strings are Unicode. You'll have to translate a little differently. kevpie mentions this in a comment on one of the answers, and it's noted in the documentation for str.translate. When calling the translate method of a Unicode string, you cannot pass the second parameter that we used above. You also can't pass None as the first parameter. Instead, you pass a translation table (usually a dictionary) as the only parameter. This table maps the ordinal values of characters (i.e. the result of calling ord on them) to the ordinal values of the characters which should replace them, or—usefully to us—None to indicate that they should be deleted. So to do the above dance with a Unicode string you would call something like translation_table = dict.fromkeys(map(ord, '!@#$'), None) unicode_line = unicode_line.translate(translation_table) Here dict.fromkeys and map are used to succinctly generate a dictionary containing {ord('!'): None, ord('@'): None, ...} Even simpler, as another answer puts it, create the translation table in place: unicode_line = unicode_line.translate({ord(c): None for c in '!@#$'}) Or create the same translation table with str.maketrans: unicode_line = unicode_line.translate(str.maketrans('', '', '!@#$')) * for compatibility with earlier Pythons, you can create a "null" translation table to pass in place of None: import string line = line.translate(string.maketrans('', ''), '!@#$') Here string.maketrans is used to create a translation table, which is just a string containing the characters with ordinal values 0 to 255. Use re.sub. Just match all the chars ...READ MORE class Solution: def firstAlphabet(self, s): self.s=s k='' k=k+s[0] for i in range(len(s)): if ...READ MORE If you are talking about the length .. 1886 Use del and specify the index of the element ...READ MORE Hey, Web scraping is a technique to automatically ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/102283/how-to-remove-specific-characters-from-a-string-in-python
CC-MAIN-2021-21
refinedweb
538
60.61
12 October 2012 03:09 [Source: ICIS news] MELBOURNE (ICIS)--?xml:namespace> The operating rate of the phenol/acetone plant, which has a phenol capacity of 200,000 tonnes/year and an acetone capacity of 120,000 tonnes/year, is being gradually ramped up, the source said. The plant was shut on 10 October after a fire broke out at the hydrogenation unit of the plant’s fuel oil refinery. The phenol/acetone plant did not sustain any damage from the fire. Shipment of phenol/acetone cargoes resumed on the afternoon of 10 October, the source added. Shiyou Chemical is a subsidiary of Kingboard Chemical Holdings, which operates a 200,000 tonne/year phenol/acetone plant at Huizhou in
http://www.icis.com/Articles/2012/10/12/9603294/chinas-shiyou-chemical-restarts-yangzhou-phenolacetone.html
CC-MAIN-2014-42
refinedweb
119
53.71
Important: Please read the Qt Code of Conduct - Error unknown typ name 'DS18B20' Re: Read DS18B20 with Raspberry pi I'm sorry to bring up a 3 year old topic again if i try to add "DS18B20 *tempReader_;" to my private variables in mainwindow.h i get the error "Error unknown typ name 'DS18B20'" i downloaded and includes #include "ds18b20.h" where's the mistake? @mrjj Now it works. i reached the slave-address each sensor, made a code like this char w1_address[16]; const char* adr1 = "10-000803673bfb"; DS18B20 w1Device1 (adr1); actualTemp1 = w1Device1.getTemp(); And it's working. But unfortunately if i add a new sensor. i have to find out his address and add it to the code. its not only plug and play^^ But now a basic c++ problem. I created a class: checktemperature.h #ifndef CHECKTEMPERATURE_H #define CHECKTEMPERATURE_H #include <settemperature.h> #include <readtemperature.h> #include <wiringPi.h> class CheckTemperature { private: SetTemperature S; ReadTemperature R; public: void CheckTemp(); float TempEin; float TempAus; }; #endif // CHECKTEMPERATURE_H checktemperature.cpp #include "checktemperature.h" void CheckTemperature::CheckTemp() { TempEin = S.getHystTemperature(); TempAus = S.getHystTemperature(); if (R.actualTemp1 <= TempEin) digitalWrite(31, LOW); else if (R.actualTemp1 >= TempAus) digitalWrite(31, HIGH); if (R.actualTemp2 <= TempEin) digitalWrite(26, LOW); else if (R.actualTemp2 >= TempAus) digitalWrite(26, HIGH); if (R.actualTemp3 <= TempEin) digitalWrite(27, LOW); else if (R.actualTemp3 >= TempAus) digitalWrite(27, HIGH); if (R.actualTemp4 <= TempEin) digitalWrite(28, LOW); else if (R.actualTemp4 >= TempAus) digitalWrite(28, HIGH); if (R.actualTemp5 <= TempEin) digitalWrite(29, LOW); else if (R.actualTemp5 >= TempAus) digitalWrite(29, HIGH); } i call that funktion every 500ms in loop. But the values of R.actualTemp and S.getHystTemperature the value does not adjust, so they still at the moment when i start the program. Do you know whats wrong, cause i call that funktion every 500ms and at the beginning "TempEin= ..." and S.getHystTemperature(); is already changing^ Hi and welcome to devnet, Can you show your code ? Can you also show the version of the header you are using ? Hi and welcome to forums Did you add the also right click the top project name and select add exiting file and point to this .h file ? - AlexKrammer last edited by SGaist #ifndef MAINWINDOW_H #define MAINWINDOW_H #include <QMainWindow> #include "ds18b20.h" namespace Ui { class MainWindow; } class MainWindow : public QMainWindow { Q_OBJECT public: explicit MainWindow(QWidget *parent = nullptr); ~MainWindow(); public slots: void LOOP(); private slots: void on_Button1_clicked(); void on_Button2_clicked(); void on_Button3_clicked(); void on_horizontalSlider_valueChanged(int position); private: Ui::MainWindow *ui; QTimer * m_timer; DS18B20 *tempReader_; }; #endif // MAINWINDOW_H I took this file: As silly as it may sound, the file on github at least is all uppercase, is it also the case for yours ? If so, then I would start by updating the include. Depending on your machine and configuration, the file system might be sensible to that. Hi Also when you extracted it, did you put it directly in your project folder or is in a sub folder called src ? @SGaist no it wasn't. But if i changed it, i got the error "file not found" only i write it like ds18b20 i dont get an error. what do you mean it depends on machine and configuration? @mrjj First i put it in the project folder with an folder called "Libraries" in it. now i put it directly und after that in a folder called src. Nothing changed. Some file systems are case insensitive and others not. @SGaist Yes. I know what you mean. But i think the mistake is not to include the file. because there is no error message. @AlexKrammer Hi I just tried putting DS18B20.h and DS18B20.cpp directly in the project folder. (where .pro file is ) and then add them to the project using the Right-click menu as shown and it seems to like it just fine. @mrjj Oh yes. Now it works. But if i tried the Library in github it didnt work. i created a new .h and .cpp file and copied the .h und .cpp file from the post 3 years ago into that. now the bugs are fixed. Thanks for that. Now the next problem ^^. i just added "tempReader_ = new DS18B20(address);" to my MainWindow::MainWindow. Now, what's adress? The second thing: If i try to create a new class called check temperature and i try to create a new object with "DS18B20 sensor;" i receive the following error massage: "No matching constructor for initialization of'DS18B20'. My background. Im trying to create an app with RaspberryPi in QT with using C++ und wiringpi. In order to control the temperature of 5 sections of an area im going to use the sensor DS18B20. At the first problem. is the adress the pin that i chose to use for the sensor. e.g. digitalRead(22); but this cant be i think. thanks for help! @AlexKrammer well the constructor is DS18B20::DS18B20(uint8_t pin) so it does seem like the pin. - DS18B20 sensor;" i receive the following error message: "No matching constructor for initialization of'DS18B20'. if we have the same .h , the compiler is correct. There is no default constructor and you must create it with and uint8_t as a parameter. also looking at this its seems correct :) @mrjj no mine is a other one. when i use the arduino library i got this. there are a lot of errors. then i delete it and created a .h and .cpp file with that code. ds18b20.h #define DS18B20_H_ #include <inttypes.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <errno.h> #define CELCIUS 0 #define FAHRENHEIT 1 #define BUS "/sys/bus/w1/devices/" #define TEMPFILE "/w1_slave" class DS18B20 { public: DS18B20(const char* address); virtual ~DS18B20(); uint8_t getUnits(); void setUnits(uint8_t); float getTemp(); float CtoF(float); private: uint8_t unit_; char* address_; char path[47]; // path should be 46 chars }; ds18b20.cpp #include "ds18b20.h" DS18B20::DS18B20(const char* address) { address_ = strdup(address); unit_ = CELCIUS; snprintf(path, 46, "%s%s%s", BUS, address_, TEMPFILE); } DS18B20::~DS18B20() { } float DS18B20::getTemp() { FILE *devFile = fopen(path, "r"); if (devFile == NULL) { printf("Count not open %s\n", path); perror("\n"); } float temp = -1; if (devFile != NULL) { if (!ferror(devFile)) { unsigned int tempInt; char crcConf[5]; fscanf(devFile, "%*x %*x %*x %*x %*x %*x %*x %*x %*x : crc=%*x %s", crcConf); if (strncmp(crcConf, "YES", 3) == 0) { fscanf(devFile, "%*x %*x %*x %*x %*x %*x %*x %*x %*x t=%5d", &tempInt); temp = (float) tempInt / 1000.0; } } } fclose(devFile); if (unit_ == CELCIUS) { return temp; } else return CtoF(temp); } uint8_t DS18B20::getUnits() { return unit_; } void DS18B20::setUnits(uint8_t u) { unit_ = u; } float DS18B20::CtoF(float temp) { return temp * 1.8 + 32; } now im on this problem: what kind of .h file do u use and where did u download that, that there are no errors. Hi I downloaded from your link But i don't have a board so don't know if it actually works. It does complain about me not having #include "Arduino.h" #include <OneWire.h> but that is expected. The actual .h and .cpp seems happy enough. I goggled the use of DS18B20::DS18B20(const char* address) to see what they would put as adresss but could not find a single example of that. But plenty of DS18B20 examples otherwise @mrjj now i removed the .h and .cpp file i just downloaded the libraries again and i got the same problem like in the first picture. why dont you have #include "Arduino.h" #include <OneWire.h> and what did you do that you dont get the error unkown type name "uint8_t" the error is still hundreds time here. hi I dont have a board so i dont have the Arduino software / libs installed so no #include "Arduino.h" etc. the uint8_t comes from #include <stdint.h> @mrjj i dont use arduino too. Raspberry Pi is what i use. The stdint.h is just a standart library or where can i get it? If this change remove the error do u thing when i type in a raspberry pin into the constructor e.g. 22 it will work? @AlexKrammer Oh I just assume since it had Arduino included and most google searches talked about Arduino :) Well i just added #include <stdint.h> and it knew it. So its included on newer compilers. If this change remove the error do u thing when i type in a raspberry pin into the constructor e.g. 22 it will work? Yes if it has that chip then yes. @mrjj And if i create a new class called "CheckTemlerature" how could i use the DS18B20 sensor. The first thing to open the constructor with input the Raspberry Pin. Second step Funktion getTemp? Third step close with destructor? Thanks for halp @AlexKrammer Yes, it seems that way looking over the examples. are you using the DS18B20::DS18B20(const char* address) version or the other version using uint pin ? @mrjj I deleted the DS18B20::DS18B20(const char* address) version and try it with your idea. I hope it will work but ill see it tomorrow. Its enough for today. Thanks a lot. @mrjj Now i removed the #include "Arduino.h" #include <OneWire.h> and added #include <stdint.h> But now i receive the error massage with OneWire again. How did you solve that problem? The second fault is that the .cpp produced a lot of errors too. I think it depends on the OneWire file. How did you solve that problem? Hi That include file comes from But im not sure that works with a RaspberryPi directly. @AlexKrammer Yes but im not sure it will just work with a pi board. @mrjj There are still error. i dont know, why you dont receive that errors respectively when you try to compile the code, why its work. if i try to run it, there are hundreds of error messages. - mrjj Lifetime Qt Champion last edited by mrjj @AlexKrammer Well i didnt download and added OneWire to see. Maybe why. Just checkout DS18B20 as question started with. I dont have a board to test on so i would never be able to run it anyway. Thats just #include <stdio.h> it complains about. Very odd the .h does not include that. Also, you do understand that these libs are meant to be used ON the board.? They can never read a sensor on a pi board where apps run on win pc and have the pi board connected via USB or similar. But you do seem to have a raspberry Debian running so you are compiling directly on the board, right ? @mrjj Now I includes #include <stdio.h> in ds18b20.cpp but nothing changed. the error C1083 does not depend on stdio.h, does it? yes i have both systems running. - mrjj Lifetime Qt Champion last edited by mrjj @AlexKrammer Ok. Good. Just checking No that error comes from some missing .h file. As it shows. From the utils folder so you should add those too it seems. oh thanks. that i missed. but its funny. i have the same errors on windows and raspien. i think it dont depend where to compily. The only different i think will be, on raspberry will the outputs work. Dont you think? @AlexKrammer Well it should give same errors from compile on both systems if they are related to missing includes. What are the errors now ? Still bitRead etc ? And yes, the biggest difference is when you run it. On win pc there would be no hardware to to talk to. Hi Seems to come from So I think we are down the wrong path using this with a pi board as it seems tied to Arduino. @mrjj and what will be the right way. is it possible to use qt for this? is it possible to create a own library to convert the input of the sensor in a temperatur? @AlexKrammer Hi But do you have such sensor ? and hook it up to the board like they do here ? @AlexKrammer I think you need a c++ lib then like to interface with the hardware. @mrjj to interface with the hardware ill use wiringpi. thats work good. i can turn on und off leds and i can read pins. how does this lib work? @mrjj ou i just saw. it looks like that one right? "are you using the DS18B20::DS18B20(const char* address) version"
https://forum.qt.io/topic/116761/error-unknown-typ-name-ds18b20
CC-MAIN-2020-40
refinedweb
2,061
77.84
remember a trip to Tijuana, where on Revolución Avenue, a carved chess set caught my eye. For several minutes I gazed admiringly at it, feeling the rough texture of the carved stone pieces. It was a very special one, the shopkeeper assured me, but since he got a such a good deal on it he could let it go for $100. The rest of my group had already left, so I politely headed for the door to join them. "Wait!" the shopkeeper called. "$80?" Twenty dollars off wasn't bad, but still, I had to get going. I moved toward the door again without saying anything. "$60?" he called out to me. I hesitated. "$40?" In conversations on the general topic of XML annoyances--and I've been in more than my fair share of them--the discussion inevitably centers around lament over specific features. "Why did they add (insert feature name here) to XML?" or less gentle variations thereof. Yet the forces that aggravate XML pros and newbies alike can almost always be attributed to a great power I first noticed on that trip to Mexico: the Power of No, which I think might actually be the primary natural force in the universe. Let's take a closer look at the Power of No and some examples. XML itself is based on the Power of No: XML imposes a level of structure beyond plain text. The vast majority of random strings of characters won't qualify as XML. This ties in with basic definitions of information, uncertainty, and entropy. I think my favorite is the definition of uncertainty as "weighted average surprisal." By way of rules that reject non-well-formed sequences, what's left over--that is, XML itself--has less potential for "surprisal". As a result, writing a conforming XML parser is orders of magnitude easier than writing a parser for, say, general stuff you might find on the Web. Or so the theory goes. Specific XML vocabularies too are based on the Power of No. Defining a specific application of XML narrows down a set of possible XML documents to ones meeting specific criteria: element names, content models, attribute names, namespaces, and so on. Further, a well-designed vocabulary will have even more constraints, though possibly harder to quantify, such as separation of concerns, or use of intentional markup. Microformats are even more constrained, since they exist within the framework of an encompassing vocabulary--a rejection of more flexible schools of markup language design. A large portion of the criticism directed at microformats can be described in terms of taking constraints too far. Advocates of REST also use the Power of No, but again phrased in the language of constraints. These examples show that the Power of No is pervasive and fundamental, manifesting itself time and again in widely varying systems and subsystems. The examples so far also haven't been very controversial, showing what many would consider important features of the Web and modern computing practice. Discussion to this point has focused on explicitly technical levels--architectures, patterns, and designs. Can the Power of No extend to areas outside of computer science, into the slippery realm of nature and human nature? Think about a software product launch in its entire sprawling form over time--much more than engineering. Involved are levels of product managers, marketing folks, beta testers, QA staff, and others all working towards a common goal. The overall effectiveness of the project depends to a great deal on the ability of individual players to make wise decisions. As a result, the process has countless potential outcomes in which a product might have flaws or annoyances, but very few outcomes that come out flawlessly. The nos push the result toward the ideal. One more example: take a typical fluff question in an interview, like, "If money were no object, what would you do?" Don't get too carried away on a shopping spree--even if money is unlimited, one person can enjoy only so many things in life. Available time is constrained, so careful trade-offs need to be made--deciding not whether but when to say no. I could cite more examples, like the Tijuana experience. Suffice it to say that the Power of No appears in lots of places if you're willing to look hard enough. Getting back to the arena of standards, recall that the XML specification was forged by a process that has strong hints of natural selection and managing constrained resources, modifying an existing standard to meet a new set of requirements. The W3C maintains an elaborate process document describing in painstaking detail the official process used at any given time to develop Recommendations and other technical reports. The primary tenet of the process, though, lies in consensus, which the document also describes in detail. Consensus works because the ability of participants to say no virtually ensures that incoming proposals get optimized before getting signed off. The most successful projects I've seen at the W3C had a small core of talented architects, often just a single person, surrounded by a larger group that had relatively few problems forming consensus around the brilliant original design. In groups that lacked a cohesive architectural center, the results tended more towards the dreaded "design by committee." In particular, Working Groups with large numbers of participants early on seem to have a poor track record in the area of focus. Diverse subsets of the overall group each push for features they care about, with the resulting technical report covering the entire union of many different proposals. Things that come from this sort of group often become popular due to sheer political force, but leave developers gnashing their teeth along the way. The opposite can be a problem too--a group can become too focused, relying on an echo-chamber effect in lieu of genuine consensus from a wide spectrum of interested parties. Such groups also tend to have a limited or overly literal view of what problem they really are trying to solve. One result of this is the word "standard" becoming nearly an epithet in certain circles. A badly formed or "rubber-stamped" standard is far more annoying than no standard at all. Thinking in terms of the Power of No can offer a fresh perspective on many aspects of XML and XML projects. Many times removing code, features, or requirements can make the difference between a good product and a great one. Pay attention to the negative space in a vocabulary or product design. Keep in mind specific anti-goals that should be avoided. It goes without saying that a major development project should have a solid architect at the core. A good architect needs to be able to separate personal preferences and prejudices from legitimate good design points--in other words, get the focus right. A concrete measure of an architect is whether outside groups--even ones that wouldn't normally be directly involved--are able to understand and accept the architecture. There will be times where it's just not possible to please everyone to the level of consensus, but still it provides a measuring rod against which to evaluate a design (or a designer). To anyone considering designing a new XML vocabulary: remember the Power of No. If you absolutely have to design a new vocabulary, shoot for a minimum of weighted average surprisal. Finally, for those annoying things which we have no power to change, at least understanding a bit more about how they came to be can help ease the suffering of dealing with them on a recurring basis. And as for the Tijuana chess set--I ended up saying no. © , O’Reilly Media, Inc. (707) 827-7019 (800) 889-8969 All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
http://www.xml.com/pub/a/2006/02/01/the-power-of-no.html?CMP=OTC-TY3388567169&ATT=The+Power+of+No
CC-MAIN-2014-42
refinedweb
1,312
52.39
Hello, address@hidden writes: Thank you for all the debugging. > org-export-with-current-buffer-copy calls org-clone-local-variables > which uses a regexp to detect buffer-local variables, but > *org-babel-use-quick-and-dirty-noweb-expansion* is not detected, so it > gets dropped. > > Solution add "\\*org-babel-use-.*dirty.*\\*\\|" or something like that > to the regexp. Before doing that, I'd like to know if there's a particular reason for this variable to not belong to the regular namespace. I think this is confusing and error-prone. Thus, I'd rather have the variable renamed instead. Eric, is that ok with you? Regards, -- Nicolas Goaziou
https://lists.gnu.org/archive/html/emacs-orgmode/2012-11/msg00650.html
CC-MAIN-2021-39
refinedweb
109
53.47
Software Development Kit (SDK) and API Discussions Hello all, In a special use case, one of our customer needs that snapshots created with snapshot-create API call are usable to create a CIFS share in the snapshot namespace. However, from time to time (right after snapshot creation), the share creation fails with the error : The specified path "/vol/xxxxxxx/.snapshot/testsharertv/xxxxxx/yyyy/zzzzzz" does not exist in the namespace belonging to Vserver "xxxxxxx". The snapshot-create API call has the value 'async' set to false, but the share creation fails when issued right after the snapshot creation, and 1-2 retries later (1-2 secs apart) eventually passes. I've tried to check the state of the created snapshots using snapshot-list-info/snapshot-get-iter, but there's no information returned by the API that would match the output of the diag snap status command (creating/complete/deleting status). Is there some way to make sure a newly created snapshot can be used as a path for a CIFS share? Kind regards. Does the share created from the snapshot have to be available immediately, or is 3-5 seconds of delay acceptable? If it is acceptable, then why not just put a brief sleep into the script? Or, use a try/catch statement to test the operation until it succeeds or exceeds some threshold. Andrew Hello assuliva, In fact, this is the solution I've come to, but my customer isn't happy with this as the ckeck/retry code is associated with the creation of the share and not with the snapshot creation itself. They argue that the snapshot creation could be used for other purposes and would like to ensure created snapshots are indeed usable even if a share is not created for it. That's the reason why I'm looking for some way to ensure a snapshot is created and complete independantly of share creation. I haven't tried this.. but i believe this shuld work.. aleast in theory. After creating the snpshot can't you do a "snpshot-get-ltr" and look for the snapshot *state" (which is inside snapshot-info) If the state is 'valid' ... The snapshot is complete and consistent. Then proceed with share creation, otherwise wait.. Here are the three different state you might find.. (according to API Doc) 'valid' ... The snapshot is complete and consistent'invalid' ... The namespace constituent snapshot is missing'partial' ... One or more data constituent snapshots are missing Robin. Hello robinpeter, I saw these in the documentation : The default value is valid. Isn't the state attribute value only relevant for Infinite Volumes? (that was my assumption so far) I don't think the "state" attribute is only for Infinite Volumes. Might worth checking. I'll be surpriced if its only for Infinite Volumes. If that didn't work.. it may worth trying to look in to the value of percentage-of-used-blocks or percentage-of-total-blocks (in snapshot-info) In the assumption.. of Snapshot creation should complete to return these values. Thanks Robin, I'll give these ideas a try, thanks a lot for the help!
https://community.netapp.com/t5/Software-Development-Kit-SDK-and-API-Discussions/making-sure-snapshot-create-snapshots-are-usable/td-p/132367
CC-MAIN-2020-45
refinedweb
523
64.81
Lesson 27 of 27By John TerraLast updated on Jun 22, 202045748 Devised back in 1989, Python wasn’t one of the programming languages until the onset of digitalization. Today, the world of Big Data and analytics has gained enormous popularity and consequently, Python has become the preferred programming language among data scientists. Scalability, simpler coding, and its collection of libraries and frameworks are just some of the features that make Python suitable for companies engaged in Big Data or machine learning-based initiatives. Are you applying for a job that involves knowledge of Python? Here are some of the important interview questions that you may face in your Python-related interview. Dive into these Python interview questions and see just how well-versed you are in this programming language. Learn data operations in Python, strings, conditional statements, error handling, and the commonly used Python web framework Django with the Python Training course. Deepcopy creates a different object and populates it with the child objects of the original object. Therefore, changes in the original object are not reflected in the copy. copy.deepcopy() creates a Deep Copy. Shallow copy creates a different object and populates it with the references of the child objects within the original object. Therefore, changes in the original object are reflected in the copy. copy.copy creates a Shallow Copy. Multithreading usually implies that multiple threads are executed concurrently. The Python Global Interpreter Lock doesn't allow more than one thread to hold the Python interpreter at that particular point of time. So multithreading in python is achieved through context switching. It is quite different from multiprocessing which actually opens up multiple processes across multiple threads. Django is a web service used to build your web pages. Its architecture is as shown: Numpy is written in C so that all its complexities are backed into a simple to use a module. Lists, on the other hand, are dynamically typed. Therefore, Python must check the data type of each element every time it uses it. This makes Numpy arrays much faster than lists. Numpy has a lot of additional functionality that list doesn’t offer; for instance, a lot of things can be automated in Numpy. If you just created a neural network model, you can save that model to your hard drive, pickle it, and then unpickle to bring it back into another software program or to use it at a later time. Python has a private heap space that stores all the objects. The Python memory manager regulates various aspects of this heap, such as sharing, caching, segmentation, and allocation. The user has no control over the heap; only the Python interpreter has access. Arguments are passed in python by a reference. This means that any changes made within a function are reflected in the original object. Consider two sets of code shown below: In the first example, we only assigned a value to one element of ‘l’, so the output is [3, 2, 3, 4]. In the second example, we have created a whole new object for ‘l’. But, the values [3, 2, 3, 4] doesn’t show up in the output as it is outside the definition of the function. To generate random numbers in Python, you must first import the random module. The random() function generates a random float value between 0 & 1. > random.random() The randrange() function generates a random number within a given range. Syntax: randrange(beginning, end, step) Example - > random.randrange(1,10,2) In Python, the / operator performs division and returns the quotient in the float. For example: 5 / 2 returns 2.5 The // operator, on the other hand, returns the quotient in integer. For example: 5 // 2 returns 2 The ‘is’ operator compares the id of the two objects. list1=[1,2,3] list2=[1,2,3] list3=list1 list1 == list2 🡪 True list1 is list2 🡪 False list1 is list3 🡪 True The pass statement is used when there's a syntactic but not an operational requirement. For example - The program below prints a string ignoring the spaces. var="Si mplilea rn" for i in var: if i==" ": pass else: print(i,end="") Here, the pass statement refers to ‘no action required.’ Python has an inbuilt method isalnum() which returns true if all characters in the string are alphanumeric. Example - >> "abcd123".isalnum() Output: True >>”abcd@123#”.isalnum() Output: False Another way is to use regex as shown. >>import re >>bool(re.match(‘[A-Za-z0-9]+$','abcd123’)) Output: True >> bool(re.match(‘[A-Za-z0-9]+$','abcd@123’)) Output: False There are three types of sequences in Python: Example of Lists - >>l1=[1,2,3] >>l2=[4,5,6] >>l1+l2 Output: [1,2,3,4,5,6] Example of Tuples - >>t1=(1,2,3) >>t2=(4,5,6) >>t1+t2 Output: (1,2,3,4,5,6) Example of String - >>s1=“Simpli” >>s2=“learn” >>s1+s2 Output: ‘Simplilearn’ Python provides the inbuilt function lstrip() to remove all leading spaces from a string. >>“ Python”.lstrip Output: Python The replace() function can be used with strings for replacing a substring with a given string. Syntax: str.replace(old, new, count) replace() returns a new string without modifying the original string. Example - >>"Hey John. How are you, John?".replace(“john",“John",1) Output: “Hey John. How are you, John? Learn data operations in Python, strings, conditional statements, error handling, and the commonly used Python web framework Django with the Python Training course. Here is an example to understand the two statements - >>lis=[‘a’, ‘b’, ‘c’, ‘d’] >>del lis[1:3] >>lis Output: [“a”,”d”] >>lis=[‘a’, ‘b’, ‘b’, ‘d’] >>lis.remove(‘b’) >>lis Output: [‘a’, ‘b’, ‘d’] Note that in the range 1:3, the elements are counted up to 2 and not 3. You can display the contents of a text file in reverse order using the following steps: >>def addToList(val, list=[]): >> list.append(val) >> return list >>list1 = addToList(1) >>list2 = addToList(123,[]) >>list3 = addToList('a’) >>print ("list1 = %s" % list1) >>print ("list2 = %s" % list2) >>print ("list3 = %s" % list3) Output: list1 = [1,’a’] list2 = [123] lilst3 = [1,’a’] Note that list1 and list3 are equal. When we passed the information to the addToList, we did it without a second value. If we don't have an empty list as the second value, it will start off with an empty list, which we then append. For list2, we appended the value to an empty list, so its value becomes [123]. For list3, we're adding ‘a’ to the list. Because we didn't designate the list, it is a shared value. It means the list doesn’t reset and we get its value as [1, ‘a’]. Remember that a default list is created only once during the function and not during its call number. Lists are mutable while tuples are immutable. Example: List >>lst = [1,2,3] >>lst[2] = 4 >>lst Output:[1,2,4] Tuple >>tpl = (1,2,3) >>tpl[2] = 4 >>tpl Output:TypeError: 'tuple' the object does not support item assignment There is an error because you can't change the tuple 1 2 3 into 1 2 4. You have to completely reassign tuple to a new value. Docstrings are used in providing documentation to various Python modules, classes, functions, and methods. Example - def add(a,b): " " "This function adds two numbers." " " sum=a+b return sum sum=add(10,20) print("Accessing doctstring method 1:",add.__doc__) print("Accessing doctstring method 2:",end="") help(add) Output - Accessing docstring method 1: This function adds two numbers. Accessing docstring method 2: Help on function add-in module __main__: add(a, b) This function adds two numbers. The solution to this depends on the Python version you are using. Python v2 >>print(“Hi. ”), >>print(“How are you?”) Output: Hi. How are you? Python v3 >>print(“Hi”,end=“ ”) >>print(“How are you?”) Output: Hi. How are you? The split() function splits a string into a number of strings based on a specific delimiter. Syntax - string.split(delimiter, max) Where: the delimiter is the character based on which the string is split. By default it is space. max is the maximum number of splits Example - >>var=“Red,Blue,Green,Orange” >>lst=var.split(“,”,2) >>print(lst) Output: [‘Red’,’Blue’,’Green, Orange’] Here, we have a variable var whose values are to be split with commas. Note that ‘2’ indicates that only the first two values will be split. Python is considered a multi-paradigm language. Python follows the object-oriented paradigm Python follows the functional programming paradigm The function prototype is as follows: def function_name(*list) >>def fun(*var): >> for i in var: print(i) >>fun(1) >>fun(1,25,6) In the above code, * indicates that there are multiple arguments of a variable. *args *kwargs fun(colour=”red”.units=2) It means that a function can be treated just like an object. You can assign them to variables, or pass them as arguments to other functions. You can even return them from other functions. __name__ is a special variable that holds the name of the current module. Program execution starts from main or code with 0 indentations. Thus, __name__ has a value __main__ in the above case. If the file is imported from another module, __name__ holds the name of this module. A numpy array is a grid of values, all of the same type, and is indexed by a tuple of non-negative integers. The number of dimensions determines the rank of the array. The shape of an array is a tuple of integers giving the size of the array along each dimension. >>import numpy as np >>arr=np.array([1, 3, 2, 4, 5]) >>print(arr.argsort( ) [ -N: ][: : -1]) >>train_set=np.array([1, 2, 3]) >>test_set=np.array([[0, 1, 2], [1, 2, 3]) Res_set 🡪 [[1, 2, 3], [0, 1, 2], [1, 2, 3]] Choose the correct option: Here, options a and b would both do horizontal stacking, but we want vertical stacking. So, option c is the right statement. resulting_set = np.vstack([train_set, test_set]) Answer - 3. from sklearn.tree import DecisionTreeClassifier We can use the following code: >>link =... >>source = StringIO.StringIO(requests.get(link).content)) >>data = pd.read_csv(source) df[‘Name’] and df.loc[:, ‘Name’], where: df = pd.DataFrame(['aa', 'bb', 'xx', 'uu'], [21, 16, 50, 33], columns = ['Name', 'Age']) Choose the correct option: Answer - 3. Both are copies of the original dataframe. Want to get skilled in working with Python classes and files? Then check out the Python Training Course. Click to enroll now! Error: Traceback (most recent call last): File "<input>", line 1, in<module> UnicodeEncodeError: 'ascii' codec can't encode character. Choose the correct option: The error relates to the difference between utf-8 coding and a Unicode. So option 3. pd.read_csv(“temp.csv”, encoding=’utf-8′) can correct it. >>import matplotlib.pyplot as plt >>plt.plot([1,2,3,4]) >>plt.show() Choose the correct option: Answer - 3. In line two, write plt.plot([1,2,3,4], lw=3) Answer - 3. df.reindex_like(new_index,) The function used to copy objects in Python are: copy.copy for shallow copy and copy.deepcopy() for deep copy The attribute df.empty is used to check whether a pandas data frame is empty or not. >>import pandas as pd >>df=pd.DataFrame({A:[]}) >>df.empty Output: True This can be achieved by using argsort() function. Let us take an array X; the code to sort the (n-1)th column will be x[x [: n-2].argsoft()] The code is as shown below: >>import numpy as np >>X=np.array([[1,2,3],[0,5,2],[2,3,4]]) >>X[X[:,1].argsort()] Output:array([[1,2,3],[0,5,2],[2,3,4]]) The code is as shown: >> #Input >>import numpy as np >>import pandas as pd >>mylist = list('abcedfghijklmnopqrstuvwxyz’) >>myarr = np.arange(26) >>mydict = dict(zip(mylist, myarr)) >> #Solution >>ser1 = pd.Series(mylist) >>ser2 = pd.Series(myarr) >>ser3 = pd.Series(mydict) >>print(ser3.head()) >> #Input >>import pandas as pd >>ser1 = pd.Series([1, 2, 3, 4, 5]) >>ser2 = pd.Series([4, 5, 6, 7, 8]) >> #Solution >>ser_u = pd.Series(np.union1d(ser1, ser2)) # union >>ser_i = pd.Series(np.intersect1d(ser1, ser2)) # intersect >>ser_u[~ser_u.isin(ser_i)] >> #Input >>import pandas as pd >>np.random.RandomState(100) >>ser = pd.Series(np.random.randint(1, 5, [12])) >> #Solution >>print("Top 2 Freq:", ser.value_counts()) >>ser[~ser.isin(ser.value_counts().index[:2])] = 'Other’ >>ser >> #Input >>import pandas as pd >>ser = pd.Series(np.random.randint(1, 10, 7)) >>ser >> #Solution >>print(ser) >>np.argwhere(ser % 3==0) The code is as shown: >> #Input >>p = pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) >>q = pd.Series([10, 9, 8, 7, 6, 5, 4, 3, 2, 1]) >> #Solution >>sum((p - q)**2)**.5 >> #Solution using func >>np.linalg.norm(p-q) You can see that the Euclidean distance can be calculated using two ways. >> #Input >>df = pd.DataFrame(np.arange(25).reshape(5, -1)) >> #Solution >>df.iloc[::-1, :] Yes. One common beginner mistake is re-tuning a model or training new models with different parameters after seeing its performance on the test set. Seaborn is a Python library built on top of matplotlib and pandas to ease data plotting. It is a data visualization library in Python that provides a high-level interface for drawing statistical informative graphs. Did you know the answers to these Python interview questions? If not, here is what you can do. Cracking a job interview requires careful preparation apart from the right blend of skills and knowledge. There are a number of emerging job opportunities that demand proficiency in Python. As recruiters hunt for professionals with relevant skills, you need to ensure that you have a thorough knowledge of Python fundamentals and the ability to answer all the Python interview questions. You can enroll in Simplilearn’s Data Science with Python course to gain expertise in this language and become a potential candidate for your next job interview.
https://www.simplilearn.com/tutorials/python-tutorial/python-interview-questions?source=sl_frs_nav_playlist_video_clicked
CC-MAIN-2020-40
refinedweb
2,337
67.25
Look at this code I dont know what is wrong I make a object p2 which is copy of p1I make a object p2 which is copy of p1Code:#include<iostream> using namespace std; class Bird { private: int *years; char *name; public: int Getyears() const { return *years;} char* Getname() {return name;} void Setyears(int year) { *years=year;} void Setname(char i[]) { delete [] name; name=new char [strlen(i)+1]; strcpy(name,i);} Bird() { years=new int; *years=5; name=0; } ~Bird() { delete years; years=0; delete [] name; *name=0; } }; int main() { Bird p1; cout<<"The first bird is called Tweety"<<endl; p1.Setname("Tweety"); cout<<p1.Getname()<<" is old:"<<p1.Getyears()<<"years"<<endl; cout<<"I incrase the years of Tweety at 6..."<<endl<<endl; p1.Setyears(6); cout<<"I create second bird (copy of Tweety) called Daffy"<<endl<<endl; Bird p2(p1); p2.Setname("Daffy"); cout<<"The first bird is called "<<p1.Getname()<<" and it's old:"<< p1.Getyears()<<" years"<<endl; cout<<"The second bird is called"<<p2.Getname()<<" and it's old:"<<p2.Getyears()<<" years"<<endl<<endl; cout<<"I incrase the years of Daffy at 7..."<<endl<<endl; p2.Setyears(7); cout<<"The first bird is called"<<p1.Getname()<<" and it's old:"<<p1.Getyears()<<" years"<<endl; cout<<"The second bird is called "<<p2.Getname()<<" and it's old:"<<p2.Getyears()<<" years"<<endl<<endl; cin.get(); cin.get(); return 0; } when i try to change the name of the bird on object p2 from tweety to Daffy i change the name on p1(i dont want that , i want p1 to have name tweety and p2 to have name daffy) The same thing happens with the years... Where is my mistake? pls answer me soon
https://cboard.cprogramming.com/cplusplus-programming/89549-problem-class.html
CC-MAIN-2017-22
refinedweb
286
92.53
The QErrorMessage class provides an error message display dialog. More... #include <QErrorMessage> Inherits QDialog. The QErr. For Qt/Embedded developers, the static qtHandler() installs a message handler using qInstallMsgHandler() and creates a QErrorMessage that displays qDebug(), qWarning() and qFatal() and QStatusBar::showMessage(). Constructs and installs an error handler window with the given parent. Destroys the error message dialog. Returns a pointer to a QErrorMessage object that outputs the default Qt messages. This function creates such an object, if there isn't one already. Shows the given message, and returns immediately. This function does nothing if the user has requested that message should not be shown again. Normally, message is shown at once, but if there are pending messages, message is queued for later display.
http://doc.trolltech.com/4.0/qerrormessage.html
crawl-001
refinedweb
124
50.73
Putting SEO First with Gatsby Wesley Handy Originally published at wesleylhandy.net on ・12 min read I was drawn to Gatsby out of my love for React, and my desire to serve fast, perfomant, responsive applications. I started learning React just as the library started embracing ES6 classes, and just as create-react-app was taking off. It was a challenge at first to learn the simplicity of proxying to get the front end to run concurrently with a node / express API. And yet, to get from that point to server-side rendering took a little more research and effort. It has been fun learning with and from the larger coding community. I could continue to enumerate the other technical issues I have encountered and solved, and yet some others I still have to learn, but note something important so far regarding my journey—it has been a techinical-first-journey, focused on knowledge and skill of the various langauges and libraries, necessary yes, but perhaps not primary. The last thing on mind has been SEO , accessbility , and security. Setting An SEO-First Mindset I’ll leave accessibility and then security future posts, but what has impressed me thus far in my dive into the Gatbsy ecosystem has been the ease to configure search engine optimization. In fact, you can create an architecture for your site or app that is SEO driven long before developing your UI. I want to walk you through the things I have learned so far in optimizing a Gatsby site for SEO right from the start. Before We Begin You need to have some familiarity with Gatsby. To learn how to install the gatbsy-cli and create a new Gatsby project from one of the many Gatsby Starters, please consider completing the Gatsby tutorial. Otherwise, pick a starter from the SEO category, and open the SEO component or just use the gatsby-starter-default by running from the command line: gatsby new seo-blog The SEO component is dependent upon react-helmet, if your starter does not come with SEO initialized be sure to add it. We also want to add some other features like sitemap, google analytics, and RSS feed, for syndication. Finally, we need to create robots.txt to manage how search engines crawl the site. I’ve split up the commands below for spacing purposes, but they can all run as one yarn add command: yarn add react-helmet gatsby-plugin-react-helmet yarn add gatsby-plugin-feed gatsby-plugin-google-analytics gatsby-plugin-sitemap yarn add gatsby-plugin-robots-txt With your starter and these plugins installed, we are ready to set up our site for SEO Performance. Setting up gatsby-config.js Within the root of your new Gatsby project sits the file Gatsby uses to configure siteMetaData and plugins and several other important features. This file, gatsby-config.js is going to do the heavy lifting of importing all your vital SEO related content into GraphQL or create necessary files directly (like robots.txt). Site Metadata Metadata is as it sounds, data that extends across or throughout the entirity of your site. It can be used anywhere, but will come most in handy when configuring your SEO component as well as Google Structured Data. Initialize your metadata as an Object literal with key/value pairs that can be converted into <meta> tags, or can be passed to sitemaps or footers, wherever you might need global site data: <meta name="title" content="My Super Awesome Site"/> <meta name="description" content="My Super Awesome Site will blow your mind with radical content, extravagant colors, and hip designs."/> Here is where you need to plan what might use across your site: - title - description - keywords - site verification - social links - other With siteMetadata set up, your can now query this data to be used within your plugin initialization, even within the same gatsby-config.js file. I’ve organized my siteMetadata so that I can make the following query: query: ` site { siteMetadata { title description author siteUrl siteVerification { google bing } social { twitter } socialLinks { twitter linkedin facebook stackOverflow github instagram pinterest youtube email phone fax address } keywords organization { name url } } } ` This query returns an object matching this query structure: site: { siteMetadata: { title: String, description: String, author: String, siteUrl: String, siteVerification: { google: String, bing: String }, social: { twitter: String }, socialLinks: { twitter: String, linkedin: String, facebook: String, stackOverflow: String, github: String, instagram: String, pinterest: String, youtube: String, email: String, phone: String, fax: String, address: String }, keywords: [String], organization: { name: String, url: String } } } Plugin Setup We are going to focus on four plugins, each for the sitemap, RSS feed, robots.txt file, and Google Analytics (for tracking the SEO success of your site). We’ll initialize the plugins immediately following siteMetaData. module.exports = { siteMetadata: { / **SEE ABOVE** / }, plugins: [/ **An ARRAY** /], } gatsby-plugin-sitemap The sitemap plugin can be used simply or with options. Generally, you want to include anything and everything with high quality content, and exclude duplicate content, thin content, or pages behind authentication. Gatsby provides a detailed walkthrough for setting up your sitemap. plugins: [ { resolve: `gatsby-plugin-sitemap`, options: { exclude: [`/admin`, `/tags/links`] } }, ] gatsby-plugin-feed An RSS feed helps with syndication of content on your site, like if you had a blog and wanted to cross-post to another better established blog, and it helps communicate changes to your site to search engines. This plugin allows you to create any number of different feeds utlizing the power of GraphQL to query your data. Some of what is below is dependent on how you structure your pages and posts in gatsby-node.js. The feed will walk through each of your pages in markdowngenerate an XML RSS Feed. Gatsby provides a detailed walkthrough for adding an RSS feed. Note particularly the use of queries below to complete the feed. { resolve: `gatsby-plugin-feed`, options: { // graphQL query to get siteMetadata query: ` { site { siteMetadata { title description siteUrl site_url: siteUrl, author } } } `, feeds: [ // an array of feeds, I just have one below { serialize: ({ query: { site, allMarkdownRemark } }) => { const { siteMetadata : { siteUrl } } = site; return allMarkdownRemark.edges.map(edge => { const { node: { frontmatter: { title, date, path, author: { name, email }, featured: { publicURL }, featuredAlt }, excerpt, html } } = edge; return Object.assign({}, edge.node.frontmatter, { language: `en-us`, title, description: excerpt, date, url: siteUrl + path, guid: siteUrl + path, author: `${email} ( ${name} )`, image: { url: siteUrl + publicURL, title: featuredAlt, link: siteUrl + path, }, custom_elements: [{ "content:encoded": html }], }) }) }, // query to get blog post data query: ` { allMarkdownRemark( sort: { order: DESC, fields: [frontmatter___date] }, ) { edges { node { excerpt html frontmatter { path date title featured { publicURL } featuredAlt author { name email } } } } } } `, output: "/rss.xml", title: `My Awesome Site | RSS Feed`, }, ], }, }, gatsby-plugin-robots-txt With robots.txtfiles, you can instruct crawlers to ignore your site and/or individual paths, based on certain conditions. See the plugin description for more detailed use cases. { resolve: 'gatsby-plugin-robots-txt', options: { policy: [{ userAgent: '*', allow: '/' }] } }, gatsby-plugin-google-analytics Google analytics will help you track how users find and interact with your site, so you can manage and update your site over time for better SEO results. Verify your site with Google and store your analytics key here: { resolve: `gatsby-plugin-google-analytics`, options: { trackingId: ``, }, }, Optimizing the SEO Component The secret sauce behind the SEO Component is the well-known react-hemlet package. Every page and every template imports the SEO Component and thus you can pass specific information to adjust the metadata for each public facing page. Use your imagination—what can you pass to this component to create the perfect SEO enabled page? Is the page a landing page, a blog post, a media gallery, a video, a professional profile, a product? There are 614 types of schemas listed on. You can pass specific schema related information to the SEO component. If any of these values you pass to the component, a StaticQuery would fill in what’s missing with siteMetaData. From this data, react-helmet creates all the <meta> tags, including open graph and Google Structured Data. Rather than including a long code-snippet, refer back to the Before We Begin section, or refer to the gatsby-plugin-react-helment page for installation. But please note, the structure of my SEO Component follows that outlined by this excellent post by Dustin Schau. As I have tinkered with the SEO Component, here are the features I added: - Additional Fields Passed to Component to: distinguish between types of pages, such as blog posts, to handle authors of content other than main site author, and to handle changes in date modified for pages and posts. More could be added in the future. function SEO({ description, lang, meta, keywords, image, title, pathname, isBlogPost, author, datePublished = false, dateModified = false }) { - Setting og:typeconditionally { property: `og:type`, content: isBlogPost ? `article` : `website`, }, - Always setting an altproperty on the image object // ALWAYS ADD IMAGE:ALT { property: "og:image:alt", content: image.alt }, { property: "twitter:image:alt", content: image.alt }, - Handling Secure Images .concat( // handle Secure Image metaImage && metaImage.indexOf("https") > -1 ? [ { property: "twitter:image:secure_url", content: metaImage, }, { property: "og:image:secure_url", content: metaImage }, ] : [] ) - Adding a Component to handle Google Structured Datausing schema.org categories. I came across an example of such a component from reading through various documentation and articles, I can’t recall where I saw the link first, but I borrowed and adapted from Jason Lengstorf. I made two small adaptations to his excellent work. Configuring the SchemaOrg Component You will import and call the SchemaOrg Component from within the SEO Component and place it just after the closing tag of the Helmet Component and pass the following properties: function SEO(.....) { ... return ( <> {/* Fragment Shorthand */} <Helmet {/* All the Meta Tag Configuration */} /> <SchemaOrg isBlogPost={isBlogPost} url={metaUrl} title={title} image={metaImage} description={metaDescription} datePublished={datePublished} dateModified={dateModified} canonicalUrl={siteUrl} author={isBlogPost ? author : siteMetadata.author} organization={organization} defaultTitle={title} /> </> ) } I won’t copy and paste the entire SchemaOrg Component here. Grab it from the link above, and give Jason Lengstorf some credit in your code. Below are the few additions I made: - I added author email to the Schema. This will come from siteMetadatafor most pages and from post frontmatterfrom blog posts. This will support multiple authors for your site and each page can reflect that uniquely if you so choose. author: { "@type": "Person", name: author.name, email: author.email, }, - I updated the organization logo from a simple URI to an ImageObjecttype. While the StringURI is acceptable to the Organizationtype, Google has specific expectations and was throwing an error until I changed it to an ImageObject. publisher: { "@type": "Organization", url: organization.url, logo: { "@type": "ImageObject", url: organization.logo.url, width: organization.logo.width, height: organization.logo.height }, name: organization.name, }, - Add dataModifiedto reflect changes to the page after initial publication for the BlogPostingtype. datePublished, dateModified, When you have more complexity within you site, you can pass flags to this Component to return differing types of schema as needed. But no matter what you do, do not put your site into production without first passing through the script generated by the component to the Google Structured Data Testing Tool. Concluding Thoughts When I configured my site according to the plan I outlined above, not only do I get image rich, descriptive Social Sharing cards, I get perfect SEO scores when running a Lighthouse Audit on my site: You also see that I scored 100% for Accessibility on my site. This is so easy to score with Gatsby as well, and I’ll talk about what I learned on this topic in the future. Originally Posted on my blog at How to set up a short feedback loop as a solo coder Strategies for continuous improvement when you're a freelance developer. Proper I18n with Gatsby, i18next and Sanity.io Johannes Spohr - How to Use Gatsby With PHP Shakti Singh - Working on some new freelance jobs with requirements of gatsby and react. Any tips, resources, hacks, blogs are appreciated. Laksh Arora -
https://practicaldev-herokuapp-com.global.ssl.fastly.net/wesleylhandy/putting-seo-first-with-gatsby-3n2g
CC-MAIN-2019-39
refinedweb
1,976
50.67
Miklos Szeredi <miklos@szeredi.hu> writes:> I'm still not sure, what your problem is.My problem right now is that I see a serious complexity escalation inthe user interface that we must support indefinitely.I see us taking a nice powerful concept and seriously watering it down.To some extent we have to avoid confusing suid applications. (I wouldso love to remove the SUID bit...).I'm being contrary to ensure we have a good code review.I have heard it said that there are two kinds of design. Somethingso simple it obviously has no deficiencies. Something so complex it hasno obvious deficiencies. I am very much afraid we are slipping themount namespace into the latter category of work. Which is a badbad thing for core OS feature.> With the v3 of the usermounts patchset, by default, user mounts are> disabled, because the "allow unpriv submounts" flag is cleared on all> mounts.>> There are several options available to sysadmins and distro builders> to enable user mounts in a secure way:>> - pam module, which creates a private namespace, and sets "allow> unpriv submounts" on the mounts within the namespace>> - pam module, which rbinds / onto /mnt/ns/$USER, and chroots into> /mnt/ns/$USER, then sets the "allow unpriv submounts" on the> mounts under /mnt/ns/$USER.In part this really disturbs me because we now have two mechanisms forcontrolling the scope of what a user can do.A flag or a new namespace. Two mechanisms to accomplish the samething sound wrong, and hard to manage.> - sysadmin creates /mnt/usermounts writable to all users, with> sticky bit (same as /tmp), does "mount --bind /mnt/usermounts> /mnt/usermounts" and sets the "allow unpriv submounts" on> /mnt/usermounts.>> All of these are perfectly safe wrt userdel and backup (assuming it> doesn't try back up /mnt).I also don't understand at all the user= mount flag and options.All it seemed to be used for was adding permissions to unmount. In userspace to deal with the lack of any form of untrusted mounts I can understandthis. In kernel space this seems to be more of a problem.Eric-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
http://lkml.org/lkml/2007/4/17/294
CC-MAIN-2015-35
refinedweb
384
65.62
This week’s EAP brings you important bug fixes and improvements in all areas, with following changes most notable: - PHP 5.4 is now fully supported – this build will recognize trait use section’s method renames and conflict resolution rules and infer $this type in closures. There still some minor things to refine.. Anyway please pay attention to problems with 5.4 features and file all reports as new separate issues - PHP “Language Level” setting is added to project – see Settings | PHP. It will also set features for Language Level inspection to mark as unavailable - PHPDoc inheritance support added – both for classes/methods with missing docs or tagged with @inheritDoc superclasses/interfaces/traits will be looked up. Also PHPDoc presentation got some facelifting, with more coming - PHPUnit tests are now generated using phpunit-skelgen, see WI-9340 - CSS editor got a significant speedup on large files - JavaScript completion got much requested ability to order suggestions by inheritance rather than alphabetically, completion for JQuery selector inside $() and ExtJS4/Sencha implied methods and configuration properties More details available in build changelog and. Great work Now waiting for the instanceof type inference Guys thank you! It’s really the best IDE i’ve ever used! Your the best! When a Phpdoc is generated, all classes are printed with namespaces. Can I disable it? I don’t use namespaces at all. I’m waiting for the instanceof type inference too GREAT IDE!! I think being able to refactor ID & class HTML attributes while changing CSS it’s using is brilliant! However we use jQuery extensively & I was wondering: is there currently a way to refactor/rename selectors used with jQuery in JS, CSS, HTML & PHP ? If not, is there a feature request list somewhere ?
http://blog.jetbrains.com/webide/2012/03/phpstorm-webstorm-4-0-eap-116-101/
CC-MAIN-2015-11
refinedweb
290
55.13
I would have no clue how to do that.. xD This is a discussion on Need help learning C++. within the C++ Programming forums, part of the General Programming Boards category; I would have no clue how to do that.. xD... I would have no clue how to do that.. xD would you like a clue? indeed If you need this explaining, then please ask.If you need this explaining, then please ask.Code:#include <iostream> using namespace std; int main() { int age; char ans; do { cout<<"Please input your age : "; cin>> age; cin.ignore(); if ( age < 5 || age == 5 ) { cout<<"Are you potty trained yet?!\n"; } else if ( age > 5 && age < 12 || age == 12 ) { cout<<"Woo! You're maturing.\n"; } else if ( age > 12 && age < 20 ) { cout<<"You're a teenager, welcome to hell.\n"; } else if ( age > 21 && age< 30 ) { cout<<"Party it up, you can drink, your prolly in college, or just out of it. Have fun.\n"; } else if ( age == 20 ) { cout<<" You are the ........tiest age ever. You are in college, alot of work.. and you can't drink yet. Hold int here one more year.\n"; } else if ( age > 30 && age < 40 || age == 30 || age == 40) { cout<<"You are getting towards middle aged, start to settle down.\n"; } else if ( age > 40 && age < 47 ) { cout<<"You're now middle aged. Get used to it.\n"; } else if ( age == 47 ) { cout<<"I love you mom!!!!!\n"; } else if ( age > 100 || age == 100 ) { cout<<"Plan your funeral, make sure your will is correct, etc.\n"; } else if ( age > 47 && age < 60 ) { cout<<"You're getting up there in years!!!\n"; } else if ( age > 60 || age == 60 && age < 100 ) { cout<<"You're old.\n"; } cout<<"Continue (y/n)"<<endl; cin>>ans; }while(ans!='n'); cin.get(); } thanks i get it. Now that I'm done I was wondering if I wanted to give an answer like. "you are ___ (whatever they entered) years young." how would I do something like that? Code:cout<<"you are "<< age << " years young."<< endl;
http://cboard.cprogramming.com/cplusplus-programming/74088-need-help-learning-cplusplus-2.html
CC-MAIN-2014-41
refinedweb
342
95.17
Join devRant Pipeless API From the creators of devRant, Pipeless lets you power real-time personalized recommendations and activity feeds using a simple APILearn More Search - "rocket science" - - - SERIOUSLY: FUCK YOU PAYPAL... 🖕For your 500 Apis that seemingly do the same fucking thing 🖕For your fucking Webhooks that get dispatched every fucking century 🖕For needing a fucking degree in PayPal sciences to understand which fees apply and when 🖕For doc links that seemingly lead to nowhere 🖕For having to plow through 500 pages on your fucking retarded website to be able to execute or receive a fucking payment 🖕For your casual internal server errors 🖕For your fucking ancient sandbox account design and dysfunctional features therein Making payments is not fucking rocket science you fucking cunts. 🖕FUCK YOU!🖕22 - Me: *opens a terminal in front of parents and starts a build script from command line. Logs start rapidly flooding the screen* Mom: *whispers to dad proudly* “look at how much she has worked. Look how fast the lines are running on the screen!! I didn’t wanna burst their bubble by explain them that their child is NOT doing any rocket science, and is something even they can do(maybe better). So I responded back with a fake serious tone “Yeah it’s all code.” If only they knew what I was actually doing...11 - - - - A few of Stux's !dev pet peeves 1) People that walk slow as fuck in the middle of a side walk. Like hurry uppppp. I've gotta get 0.5 miles in like 8 minutes and you blocking the walkway doesn't help. 2) People that don't understand how side walks work. Treat it like the fucking road. ⬇️⬆️ Is the pattern in which you should walk. It's not rocket science. 3) People that start walking up the bus steps as I'm clearly walking forward to get off. Ffs let me off and THEN get on you stupid bitch 4) people that bike or ride their skateboard/longboard around campus but are moving slower than I am while walking. If you're gonna do that hop the fuck off and carry the damn thing. 5) people that don't try to solve an issue with their code on their own BEFORE they call the professor over. (There goes the !dev lol) 6) people that act like their favorite musician or athelete or actor or anyone fucking famous they play kiss ass with can't be criticized. Just bc they're famous and/or good at what they do sure asf doesn't make them perfection and I retain the right to voice my opinion. My name is Stuxnet and you're watching Disney Channel.12 - - - Why do people say "Well, I don't know about that" to voice disagreement? If you admit your own naivety on a subject compared to your peers, if you admit that you do not have the required knowledge to have formed an opinion, how can you disagree? So it can either be expressed with genuine innocence, like 'Well, I don't know about that, tell me more!', which is never the case. Or it means "Well I don't know anything about that... and I'm ashamed of the fact that I can't find any counter argument, so I refuse to trust your fucking expertise, shut the fuck up until I give you the right to voice your knowledge" Which is a bit rude. Now that we're on the topic of annoying expressions and platitudes... "It's not rocket science" -- Rocket science, understanding how a rocket works, is surprisingly simple. You fill a cylinder with fuel and oxygen, add a pump or two, put some sparks underneath. Chemical reaction equals energy, direct energetic particles using a nozzle, Newton's first law does the rest. It's so simple that people don't actually study rocket science. They study aerospace engineering, or astrodynamics, which are difficult topics. So if someone says "Devops is not rocket science", they're right, but for the wrong reason. It's actually harder than rocket science. Maybe easier than developing thermal protection system materials or solving n-body orbital problems with a slide ruler though. "Great minds think alike" -- No, great minds actually think creatively and generate unique thoughts, if two minds think alike, the solution was just fucking obvious. "Don't reinvent the wheel" -- First of all, pretty much nothing in code looks or even remotely functions like a simple wheel. Even metaphorically, all existing code equates to oval or square wheels. If you said "Hey, don't bother making better wheels, I like my ride to be bumpy because it stimulates my asshole", say no more, who am I to come between a product manager and their anal stimulation. Anyway, those were four coworkers who I would've strangled with an Ethernet cable if it weren't for a certain pandemic and the risk of infection which comes with choke-coughing. What are your linguistic pet peeves you get homicidal over?31 - - - Question everything! Comments lie.. sometimes code does too.. Customers..they lie the most..and are sloppy.. Don't be like customers, don't be sloppy. If you were sloppy own it & don't lie about it! Pick your fights (trying to fix vs rewrite the shit out of it)..you will know what to do more with experience.. RTFM & docs.. If things still unclear, ask before your dick gets stuck in a toaster! Ask away, learn about the customers & how they use your product.. you'll be surprised how something intuitive to you might be a rocket science for them..meaning more room to fuck things up when using it..more ways you can adapt & prevent things.. Most of all, don't fuckin lie.. ever!! If you lie on you're CV, we will find out.. If you fuck up something & lie about it, we will find out.. but it will cost us precious time when solving it from scratch.. People fuck up..that's a fact..how you go about it is what makes/breaks it for me. So don't ever fuckin lie to me!! And don't be arogant.. if you complain about fixing bugs, this is not a job for you.. if you can't even fix the obvious ones you've put there in the first place..twice as bad.. So think before you code..what do you want to do, how you want to accomplish this, is it reusable, can it be extended, does it introduce new technology into the project, will it fuck up current setup.. once you have this shit figured out, code will write itself.. Did I mention already you're not to lie to me, ever?! And don't try talking about me behind my back either..I've seen it backfire before, results were not good..3 - - - When I became a dev my parents said: "it's not rocket science, or chemistry or anything good, but at least it's not stressful, or dangerous and we can get free tech support. I hope you are able to pay your own bills and find rent" A few years, a degree and finding a supportive woman who is now my wife and and I make 4 time as much as my parents make now they want me to move back so they can get their tech support and see my wife I guess she is more social than I am - My programming teacher is explaining functions in C++ as if they were rocket science... Complicating things when they're easy5 - Fuck. I just want to fucking use OpenCV on CLion on Windows. Why the fuck do I need twenty fucking PhD's in quantum rocket science to set up a simple project with Cmake? During the time I just wasted trying to get the correct library to link properly, I could have rewritten all of Tesla's fucking CV functionality from scratch, but instead here I am spending literal fucking hours googling why the fuck does 'recursive_mutex' not name a fucking type in namespace 'std' on mingw. Fuck C++ I'm going back to C# where I can literally install OpenCV and all of its fucking dependencies from nuget with ***ONE*** fucking click - [before] Client: Throughput is very less. ~50K communications/min. Me: 🤷♂️ [after some rocket science] Client: Throughput is very high. ~600K communications/min. Our servers crash under high load. Me: 🤷♂️1 - after these final year exams , i would need to unlearn some of the garbage am stuffing into my brain. Particularly Human Computer Iteraction(HCI) , that shit is even worse than the state of buzzwords words like blockchain and agile, this stuff is just so meta and boring, i am feeling like sitting in a rocket science class mixed with ancient literature3 - - I live in an apartment building and ordered a DVD from Amazon, 2-day delivery on Friday. So was supposed to arrive yesterday but got a "Delivery was Attempted". I said ok probably the postal service being lazy. Some days they just don't deliver even though they should... They tried again today but I get notified of the same problem. Now I'm pissed so finally contact Amazon. Turns out they didn't use USPS or any of the big shippers. I'm going WTF... isn't it common sense... all these rocket science engineers and they can't add a simple if? if(address.HasApartmentNumber) shippers.Select( x => x.CanAccessApartmentBuildings)9 -. - Apparently Facebook can put ads everywhere saying fake accounts are not their friend etc etc. However, they can't figure out that a new profile with a new post that contains a link is potentially a scam account... Seriously, it isn't rocket science. -? - Sometimes i feel that i am the only person who knows the type-2 trick in a dwh design. Why don't you read a book about dwh101 before doing your work? I am not a genius and its not a rocket science. - ! Top Tags
https://devrant.com/search?term=rocket+science
CC-MAIN-2021-17
refinedweb
1,677
72.97
One of the new features we’re adding to Visual Basic is called Operator Overloading. This feature allows the programmer to create a class that knows what +, -, *, and other operators mean. There are two main components to operator overloading: consumption and declaration. (Actually, I’ve just described a major precept of language design. Although they go hand-in-hand, the declaration form of a feature and the consumption form of a feature are two intrinsically different concepts. The dichotomy reveals itself even in .NET executables. When people talk about "metadata", they are referring to declarations. When people talk about intermediate language (IL) opcodes, they are referring to consumption.) Let’s take a look at consumption first with an example.The System.Drawing namespace contains the types Point and Size. A new point can be calculated by adding or subtracting a size from an already existing point. Programmatically, the + and – operators lend themselves naturally to this calculation: Dim p1 As New class=GramE>Point(10, 10) Dim s1 As New class=GramE>Size(20, -10) Dim p3 As Point = p1 + s1 Dim p4 As Point = p1 – s1MsgBox class=GramE>(p3.ToString) class=GramE>MsgBox class=GramE>(p4.ToString) Because Point overloads the + and – operators, this code compiles and works as expected, outputting: {X=30,Y=0} {X=-10,Y=20} How was this done? When the compiler analyzes the expression "p1 + s1", it discovers the type of "p1" is Point and that Point defines Operator +. Operators are actually just functions with funny names, so the compiler turns "p1 + s1" into a function call, equivalently written as: Dim p3 As Point = System.Drawing.Point.op_ class=GramE>Addition(p1, s1)This is the key to understanding overloaded operators: they are just functions. An expression like "a + b" is really a call to the + operator with two arguments: "+(a, b)". Because member names cannot contain symbols like "+", the compiler uses a special name behind the scenes: "op_Addition class=GramE>(a, b)". And as functions, the compiler uses normal overload resolution rules to pick the correct overload (in exactly the same way the compiler uses these rules to pick the correct overload of Console.WriteLine, which at last count had 19 overloads!). Perhaps you can guess what the declarations of overloaded operators look like class=GramE>; they’re an awful lot like functions: The Visual Basic compiler will allow you to overload several operators, summarized in the table below: CType, IsTrue and IsFalse are special and I’ll cover those in a future entry. But quickly, you can overload conversions between types using the class=SpellE>CType operator (instead of overloading the = assignment operator). And IsTrue and IsFalse are for overloading AndAlso and OrElse which are themselves not directly overloadable. A few rules greatly simplify the design and use of operator overloading: each operator declaration must be Public and Shared, and must be declared in the same type as one of its operands. And some operators, like <= and >=, must be pair-wise declared, i.e., a declaration of one requires an identical declaration of the other. You’ll find, as I did, that these simple rules make understanding operator overloading much, much easier. More info to follow… Nice to see a VB blog. There’s far too many by that C# rabble… What is the form of the function declaration going to be? I assume op_blahblahblah to keep things consistent. Declarations will be just like functions except substitute Operator for Function and use the actual operator itself for the name: Public Shared Operator >=(a as foo, b as foo) as foo End Operator Operator overloading! w00t! Finally. : ) Now, if only we could get generics (I wish…)… Although I agree with Mark, I’m so glad to finally see a VB blog… the way MS carries on, you’d think VB.NET doesn’t really exist. On a related note, I don’t suppose we’ll see the reintroduction of those lovely Bitwise AND, OR, XOR, etc. operators? I spent about twenty minutes trying to work out why bitmasking wasn’t working… Daniel: I’ll talk about generics shortly. The bitwise operators still exist, just like in VB6. And, Or, Xor still do bit masking on integral types. AndAlso and OrElse were added in VB.NET for logical short-circuiting. So when is this coming out!? This looks interesting! Quick question. If the operator declarations are shared subs/functions, that means we cant access instance members. That will probably be a rather cripling limitation. Another suggestion is requiring special syntax markers in expressions with types that overload the standard operators. E.g Dim p1 As New Point(10, 10) Dim s1 As New Size(20, -10) Dim p3 As Point = | p1 + s1 | or similar, where the vertical bar indicates that the + operator is user defined for variables p1 and s1. This will resolve a major problem with overloaded operators (that of not knowing whether the operator is overloaded or not just by looking at the expression) without adding too much clutter. When can we expect these facilities? Its looking good already and I like the simplified rules you have at the moment. Lets just be careful with the feature and try to design it so we avoid operator overloading nightmares. Great stuff! Cheers. Eric. Question: Which version of VB is this supposed to be in? It is not in VB7 (for reasons I dont understand). Will it be in VB8? Or .Net 2004 (if such a thing exists) as an upgrade? Mot: This will be included in the next version of VB. Operator Overloading wasn’t included in VB7 because, unfortunately, we didn’t have enough time to do it. Eric: Right, operators are shared, but that doesn’t mean you can’t access instance members. Operators work by accessing the instance members of their parameters. "Operator +(a As Foo, b As Foo)" works by doing something useful with the instance members of a and b. You can even access the Private members of a and b given that the operator is declared in Class Foo. As for the special syntax, we hope that the operator overloading feature is designed sufficiently well that, along with some IDE hints, we can avoid the overloading nightmares present in other languages. Hey, the only problem I have with VB.NET is that you called it VB. That’s like calling PASCAL ‘C’. I’ve got applications with tens of thousands of lines of VB6 code, none of which can be converted reliably to VB.net. So while Bill Gates saves a few bucks by writing half a language to replace VB, we orphaned application developers are looking at spending many millions of dollars rewriting our applications from scratch, while Mr. Gates has his attention on Sun instead of on his customers. My code has hundreds of interacting features, and rewriting it is not going to be any fun, particularly in a language that has taken the simplest things and made them difficult. Things were changed to make them more ‘C’-like, which is wonderful from a religious point of view, I suppose. And while web services are a great concept, the overhead is horrendous, making it impractical for most systems. We are already using XML in VB6 for interprogram communication, but not as web services. Using web services for interprogram communicaiton would be like going to the phone book to look up your girlfriend’s number every time you call her. I suppose a class-action lawsuit against MS would be futile, so I’m going to contact my Senators and Congressman instead. Sounds nice, but… Instead of the work of declaring operators one by one (even forcing pair-wise declarations!), wouldn’t it be nicer to implement them in terms of one standard declaration (like in Ruby)? E.g. class Foo include Comparable attr_accessor :val def initialize(val) self.val = val end # comparison method; # all others defined in terms of this def <=>(other) val <=> other.val end end Oh, well… Maybe there’s a not-too-ugly workaround. (Macros?) And Brad, this guy is trying to improve things… My former manager, Matt Gertz, has put together a paper describing VB operator overloading in greater detail. Check it out: Operator overloading is great for scientific applications and I greatly appreciate the ability to use in it VB.NET and C#. However, I REALLY wished there as a way to overload the assignment operator (=) as this would make a library I am developing now much more intuitive and bullet proof. Any thoughts for anyone out there?
https://blogs.msdn.microsoft.com/cambecc/2003/10/20/operator-overloading/
CC-MAIN-2016-30
refinedweb
1,427
55.03
TerminalPrompt Synopsis [Startup] TerminalPrompt=n Description This is a comma separated string of values which set the default terminal prompt for the system. The default is 8,2 Values: The order of the values in the string determines the order the values appear in the prompt. For example TerminalPrompt="2,1" gives you a terminal prompt of %SYS:HostName> 0 - Use only ">" for the prompt. 1 - Host name, also known as the current system name. The name assigned to your computer. For example, LABLAPTOP>. This is the same for all of your terminal processes. 2 - Namespace name. For example, %SYS>. The current namespace name is contained in the $NAMESPACE special variable. It can be an explicit namespace name or an implied namespace name. 3 - Config name. The name of your system installation. For example, CACHE2>. This is the same for all of your terminal processes. 4 - Current time, expressed as local time in 24-hour format with whole seconds. For example, 15:59:36>. This is the static time value for when the prompt was returned. This value changes for each prompt. 5 - pid. The Process ID for your terminal. For example, 2336>. This is different for each terminal process. This value can also be returned from the $JOB special variable. 6 - Username. For example, fred>. This is the same for all of your terminal processes. 7 - Elapsed time executing the last command, in seconds.milliseconds. For example, .000495>. Leading and trailing zeros are suppressed. This changes for each prompt. 8 - Transaction Level. For example, TL1>. TerminalPrompt=0 This gives you a right-angle bracket as a prompt. 0–8, as described above. On the page System Administration > Configuration > Additional Settings > Startup, in the TerminalPrompt row, select Edit. Enter a comma-separated string of values.
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=RCPF_TERMINALPROMPT
CC-MAIN-2021-10
refinedweb
295
63.05
XDK for C: Specifications and Cheat Sheets, 3 of 8 Table D-1 lists the XML Parser for C revision history. XML Parser 2.0.4.0.0 (C) This is the first production V2 release. This changes in this release were mainly bug fixes. For the XML parser, the following bugs were fixed: For the XSLT processor, the following bugs were fixed: XML Parser 2.0.3.0.0 (C) SAX memory usage: Much smaller, and flat for any input size and multiple parses (memory leaks plugged). XSLT memory usage: Improved. Validation warnings: Validity Constraint (VC) errors have been changed to warnings and do not terminate parsing. For compatibility with the old behavior (halt on warnings as well as errors), a new flag XML_FLAG_STOP_ON_WARNING (or '-W' to the xml program) has been added. Performance improvements: Switch to finite automata VC structure validation yields 10% performance gain. HTTP support: HTTP URIs are now supported; look for FTP in the next release. For other access methods, the user may define their own callbacks with the new xmlaccess() API. Oracle XML Parser 2.0.2.0.0 (C) XSLT improvements: Various bugs fixed in the XSLT processor; error messages are improved; xsl:number, xsl:sort, xsl:namespace-alias, xsl:decimal-format, forwards-compatible processing with xsl:version, and literal result element as stylesheet are now available; the following XSLT-specific additions to the core XPath library are now available: current(), format-number(), generate-id(), and system-property(). Bug fixes: Some problems with validation and matching of start and end tags with SAX were fixed (1227096). Also, a bug with parameter entity processing in external entities was fixed (1225219). Oracle XML Parser 2.0.1.0.0 (C) Performance improvements: Major performance improvement over the last, about two and a half times faster for UTF-8 parsing and about four times faster for ASCII parsing. Comparison timing against previous version for parsing (DOM) and validating various standalone files (SPARC Ultra 1 CPU time): File size Old UTF-8 New UTF-8 Speedup Old ASCII New ASCII Speedup 42K 180ms 70ms 2.6 120ms 40ms 3.0 134K 510ms 210ms 2.4 450ms 100ms 4.5 247K 980ms 400ms 2.5 690ms 180ms 3.81M 2860ms 1130ms 2.5 1820ms 380ms 4.82 7M 10550ms 4100ms 2.6 7450ms 1930ms 3.9 10.5M 42250ms 16400ms 2.6 29900ms 7800ms 3.8. Conformance improvements: Stricter conformance to the XML 1.0 spec yields higher scores on standard test suites (Jim Clark, Oasis, ...). Lists, not arrays: Internal parser data structures are now uniformly lists; arrays have been dropped. Therefore, access is now better suited to a firstChild/nextSibling style loop instead of numChildNodes/getChildNode. DTD parsing:A new API call xmlparsedtd() is added which parses an external DTD directly, without needing an enclosing document. Used mainly by the Class Generator. Error reporting: Error messages are improved and more specific, with nearly twice as many as before. Error location is now described by a stack of line number/entity pairs, showing the final location of the error and intermediate inclusions (e.g. line X of file, line Y of entity). NOTE: You must use the new error message file (lpxus.msb) provided with this release; the error message file provided with earlier releases is incompatible. See below. XSL improvements: Various bugs fixed in the XSLT processor; xsl:call-template is now fully supported. Oracle XML Parser 2.0.0.0.0 (C) Oracle XML v2 parser is a beta release and is written in C. The main difference from the Oracle XML v1 parser is the ability to format the XML document according to a stylesheet via an integrated an XSLT processor. The XML parser will check if an XML document is well-formed, and optionally validate it against a DTD. The parser will construct an object tree which can be accessed via a DOM interface or operate serially via a SAX interface. Supported operating systems are Solaris 2.6, Linux 2.2, HP-UX 11.0, and NT 4 / Service Pack 3 (and above). Be sure to read the licensing agreement before using this product.
http://docs.oracle.com/cd/A97335_02/apps.102/a86030/appdxdk3.htm
CC-MAIN-2017-17
refinedweb
690
59.19
John Shooter998 Points Did you define your Square class as an inner class of the Polygon class? Be sure to define the Square class before ? What Am i doing wrong? namespace Treehouse.CodeChallenges { class Square : Polygon; { public Square(int numSides, int sideLength) base: 4; SideLength = sideLength; } class Polygon { public readonly int NumSides; public readonly int SideLength; public Polygon(int numSides) { NumSides = numSides; } } } 2 Answers sigfredo santiago1,502 Points Switch it around. Define the Polygon class first then the Square since Square class is a sub class of Polygon. Steven Parker177,483 Points Here's a few hints: - define "Square" after "Polygon" - there should not be a semicolon on the "class" line - you still need to create the readonly field named "SideLength" - the colon goes before the word "base" - the argument to "base" should be in parentheses - the body of the constructor should be enclosed by a pair of braces
https://teamtreehouse.com/community/did-you-define-your-square-class-as-an-inner-class-of-the-polygon-class-be-sure-to-define-the-square-class-before
CC-MAIN-2019-51
refinedweb
149
64.24
Finally we need to create a color value to store in the pixel. The problem here is that the format used depends on the number of bits used for each pixel and how the RGBA values are packed. This is coded by the fields in fb_var_screeninfo: struct fb_bitfield red; struct fb_bitfield green; struct fb_bitfield blue; struct fb_bitfield transp; and the fb_bitfield stuct is: struct fb_bitfield { __u32 offset; /* beginning of bitfield */ __u32 length; /* length of bitfield */ __u32 msb_right; /* != 0 :Most significant bit is right */ }; This is officially a legacy way of doing the job but it has been in use for so long that it is widely supported. This allows you to pack the RGBA values correctly no matter what the format is. Pixels are always stored in an integer number of bytes and padding bits are added according. Suppose the red fb_bitfield was offset=16, length=8, msb_right=0 this would mean that the red color value was stored in the pixel starting at bit 16 and was 8 bits e.g 0x00RR0000. Suppose we have the color value stored in variables r, g, b and a, then we could assemble a 32 bit pixel value using: uint32_t r=0x00,g=0x00,b=0xFF,a=0xFF; uint32_t pixel =(r<<vinfo.red.offset) | (g<<vinfo.green.offset) | (b<<vinfo.blue.offset) | (a<<vinfo.transp.offset); That is, shift by the offset and OR the results together. Notice that this simple way of packing a pixel color value doesn't take care of all of the possibilities - it doesn't deal with the possibility that the color values could be less than 8-bits or that the most significant bit could be on the right. However, this is typical of a 32 bit RGBA pixel format. In any real application you would have to do a lot more bit manipulation to ensure that your program worked with a range of display modes. We can now store the pixel value in the framebuffer: *((uint32_t*) (fbp + location)) = pixel; Notice that we have to cast the pointer to ensure that the pixel is stored as a 4-byte value. The modern way of doing the same job is to specify a FOURCC code. This is a set of four-character codes that specify the pixel format without specifying it in the way that the old API does. You simply have to know what a particular FOURCC code means and implement it. For example, the FOURCC code RGBA or 0x41424752 specifies the same format as used in the example above. The framebuffer signals that it supports FOURCC by setting a bit in the capability field - most don't. It is time to put all of this information together and draw something on the screen. As an example we can create a function that draws a pixel of a given color value on the screen and then use it to draw a line: #include <stdio.h> #include <stdlib.h> #include <linux/fb.h> #include <fcntl.h> #include <sys/ioctl.h> #include <sys/mman.h> #include <inttypes.h> struct fb_fix_screeninfo finfo; struct fb_var_screeninfo vinfo; size_t size; uint8_t *fbp; void setPixel(uint32_t x, uint32_t y, uint32_t r, uint32_t g, uint32_t b, uint32_t a) { uint32_t pixel = (r << vinfo.red.offset)| (g << vinfo.green.offset)| (b << vinfo.blue.offset)| (a << vinfo.transp .offset); uint32_t location = x*vinfo.bits_per_pixel/8 + y*finfo.line_length; *((uint32_t*) (fbp + location)) = pixel; } int main(int argc, char** argv) { int fd = open("/dev/fb0", O_RDWR); ioctl(fd, FBIOGET_VSCREENINFO, &vinfo); ioctl(fd, FBIOGET_FSCREENINFO, &finfo); size = vinfo.yres * finfo.line_length; fbp = mmap(0, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); uint32_t x = 0; uint32_t y = 400; for (x = 0; x < 800; x++) { setPixel(x, y, 0xFF, 0xFF, 0x00, 0xFF); } return (EXIT_SUCCESS); } The program assumes that the graphics are in 32-bit per pixel color mode. If this isn't the case then to set the mode, just after the two existing ioctl calls, add: vinfo.grayscale = 0; vinfo.bits_per_pixel = 32; ioctl(fd, FBIOPUT_VSCREENINFO, &vinfo); ioctl(fd, FBIOGET_VSCREENINFO, &vinfo); Notice that the line drawn on the screen goes over any windows that might be in its way and it isn't persistent in the sense that if anything writes over it, such as a window, it is wiped out. As a very simple demonstration of using the framebuffer, let's bounce a ball around the screen - the whole screen, not just in a window. The basic idea is very simple - save the contents of a small block of the screen, draw a "ball" in this block and finally restore the original contents of screen. We need some graphics utility functions to get started. The getRawPixel and setRawPixel functions simply work with a 32-bit RGBA value: uint32_t getRawPixel(uint32_t x, uint32_t y) { uint32_t location = x * (vinfo.bits_per_pixel / 8) + y * finfo.line_length; return *((uint32_t*) (fbp + location)); } uint32_t setRawPixel(uint32_t x, uint32_t y, uint32_t pixel) { uint32_t location = x * (vinfo.bits_per_pixel / 8) + y * finfo.line_length; *((uint32_t*) (fbp + location)) = pixel; } Using these it is easy to write a setPixel to a color function: void setPixel(uint32_t x, uint32_t y, struct color c) { uint32_t pixel = (c.r << vinfo.red.offset)| (c.g << vinfo.green.offset) | (c.b << vinfo.blue.offset) | (c.a << vinfo.transp .offset); setRawPixel(x, y, pixel); } We don't need a getPixel function in this program. We do need a setBlock function to draw the ball: void setBlock(uint32_t x, uint32_t y, uint32_t L, struct color c) { for (int i = 0; i < L; i++) { for (int j = 0; j < L; j++) { setPixel(x + i, y + j, c); } } } It draws a square block of width L in color c with top left-hand corner at x,y.
https://www.i-programmer.info/programming/cc/12839-applying-c-framebuffer-graphics.html?start=1
CC-MAIN-2020-34
refinedweb
942
61.26
Rest assured api automation step by step procedure REST Assured Tutorial: How to test API with Example What is Rest Assured? Rest Assured enables you to test REST APIs using java libraries and integrates well with Maven. It has very efficient matching techniques, so asserting your expected results is also pretty straight forward. Rest Assured has methods to fetch data from almost every part of the request and response no matter how complex the JSON structures are. For the testing community, API Automation Testing is still new and niche. The JSON complexities keep API testing unexplored. But that does not make it less important in the testing process. Rest Assured.io framework has made it very simple using core java basics, making it a very desirable thing to learn. In this tutorial, you will learn, Why need Rest-Assured? Imagine you open your google map view and look for a place you want to go, you immediately see closeby restaurants, you see options for the commute; from some leading travel providers, and see so many options at your fingertips. We all know they are not google products, then how does Google manage to show it. They use the exposed APIs of these providers. Now, if you are asked to test this kind of setup, even before the UI is built or is under development, testing APIs becomes extremely important and testing them repeatedly, with different data combinations makes it a very suitable case for automation. 28.7M 243 What is Software Testing Why Testing is Important Earlier, we were using dynamic languages such as groovy, ruby to achieve this, and it was challenging. Hence API testing was not explored by functional testing. But using a open source with a lot of additional methods and libraries being added has made it a great choice for API automation. Step by step guide for the setup of Rest Assured.io Step 1) Install Java. Step 2) Download an IDE to begin: eclipse Step 3) InstallMaven and set up your eclipse. Setup Rest Assured 1. Create a Maven Project in your IDE. We are using Intellij, but you will get a similar structure on any IDE you may be using. 2. Open your POM.xml Project structure For Rest Assured.io: For Java version < 9 users: Add the below dependency to your POM.xml: <dependency> <groupId>io.rest-assured</groupId> <artifactId>json-path</artifactId> <version>4.2.0</version> <scope>test</scope> </dependency> <dependency> <groupId>io.rest-assured</groupId> <artifactId>xml-path</artifactId> <version>4.2.0</version> <scope>test</scope> </dependency> <dependency> <groupId>io.rest-assured</groupId> <artifactId>json-schema-validator</artifactId> <version>4.2.0</version> <scope>test</scope> </dependency> For Rest Assured.io : For Java version 9+ users : <dependency> <groupId>io.rest-assured</groupId> <artifactId>rest-assured-all</artifactId> <version>4.2.0</version> <scope>test</scope> </dependency> Troubleshooting: In case you see errors and not sure if the dependencies got downloaded well, 1. Perform a maven build to import all dependencies, again you will find help on Maven set up on test99. 2. Still, you see errors, then do a maven clean followed by a maven install, and it should build without any errors. 3. You can add the below lines in your java class and see no compile errors are present. import io.restassured.RestAssured.*; import io.restassured.matcher.RestAssuredMatchers.*; import org.hamcrest.Matchers.*; First simple Rest Assured script Syntax: The syntax of Rest Assured.io is the most beautiful part, as it is very BDD like and understandable. Given(). param(“x”, “y”). header(“z”, “w”). when(). Method(). Then(). statusCode(XXX). body(“x, ”y”, equalTo(“z”)); Explanation: Code Explanation Given() ‘Given’ keyword, lets you set a background, here, you pass the request headers, query and path param, body, cookies. This is optional if these items are not needed in the request When() ‘when’ keyword marks the premise of your scenario. For example, ‘when’ you get/post/put something, do something else. Method() Substitute this with any of the CRUD operations(get/post/put/delete) Then() Your assert and matcher conditions go here Now that you have the setup and some background to the syntax, let’s create our first simple test. It is okay if so far the structure seems new to you, as you code further interpret each line, you will get the hang of it. What will you fetch? Open your browser and hit -. Ensure you see something as below. In case you get an error on the browser when you try to get a response for the request, 1. See if you have used Https or Http. Your browser might have settings to not open insecure websites. 2. See if you have any proxy or firewall blocks your browser from opening websites. *Note — you did not use any headers here, no body, and no cookie. It was a URL and also you are getting content from the API and not posting or updating any existing content, so that makes it a GET call. Remember this to understand our first test better. The Objective of your test: The goal of the script is to print the same output on your IDE console as what you received on the browser through Rest Assured. Let us code this with the below steps: Getting the response Body Step 1) Create a class named as “myFirstRestAssuredClass” Step 2) Create a method called “getResponseBody” Step 3) Similar to the structure learned earlier of given, when and then, type the below code given(). -> No headers required, no query or path param. when(). -> No specific condition setup get(‘)->only the url needs to be supplied then(). -> No specific assertions required log(). all() -> Once all the response is fetched, log response, headers, essentially everything that the request returns to you. public static void getResponseBody(){ given().when().get(“").then().log() .all(); } Now notice that the URL used is long and less readable, if you look closely, you will notice that 3 query parameters are being used which are 1. Customer_ID 2. Password 3. Account_No Rest Assured, helps us pass every part(query, path, header param) separately, making the code more readable and easy to maintain. Also, we can parameterize the data from an external file as required. For using query param, we go back to our definition of the syntax and see that all of them are passed as a part of given. public static void getResponseBody(){ given().queryParam(“CUSTOMER_ID”,”68195") .queryParam(“PASSWORD”,”1234!”) .queryParam(“Account_No”,”1") .when().get(“").then().log() .body(); } **Note that we used “body” instead of “all”; this helps us to extract only the body of the response. Output: Output for getResponseBody Getting the response status code The next method that we script will be to get the status code and also put an assertion to validate the same. Step 1) Create a method called getResponseStatus() Step 2) Use the same request structure used above. Copy and paste it. Step 3) Instead of logging it, we use the ‘getStatusCode’ inbuilt method of Rest Assured to fetch the status code value Step 4) In order to assert that your status code is 200, we use the keywords — assertThat().statusCode(expectedCode) **Note — URL is a variable used for simplicity. URL holds the entire API request URL); } Output: Output for getResponseStatus Business need One of the basic rules of automation is that we have to put checkpoints so that the test proceeds only if all the required conditions are met. In API testing, the most basic validation is to check if the status code of the request is in 2XX format. The complete code, so far: import java.util.ArrayList; import static io.restassured.RestAssured.*; import static java.util.concurrent.TimeUnit.MILLISECONDS; public class myFirstRestAssuredClass { final static String url=”"; public static void main(String args[]) { getResponseBody(); getResponseStatus(); ; } //This will fetch the response body as is and log it. given and when are optional here public static void getResponseBody(){ given().when().get(url).then().log() .all(); given().queryParam(“CUSTOMER_ID”,”68195") .queryParam(“PASSWORD”,”1234!”) .queryParam(“Account_No”,”1") .when().get(“").then().log().body(); }); } } *Note: 1. 200 is a successful response for this scenario. At times, you need the request to fail as well, and then you might use 4XX or 5XX. Do try to change the status code by supplying invalid parameters and check. 2. When we assert a condition, there will be no printing on the console unless there is an error. Script to fetch different parts of a response Fetching response body and response status code is already covered in the above segment. It is worthy to note that to fetch different parts of the response, the keyword ‘extract’ is very important. Header Rest Assured is a very straightforward language, and fetching headers is just as simple. The method name is headers(). Like before, we will create a standalone method to do the same. public static void getResponseHeaders(){ System.out.println(“The headers in the response “+ get(url).then().extract() .headers()); } Please note that ‘given().when()’ is skipped here, and the code line starts from get(), this is because there is no precondition or verification made here to hit the request and get a response. In such cases, it’s optional to use the same. Output : Output for getResponseHeader Business need: Quite a few times, you would need to use the authorization token, or a session cookie for the subsequent request, and mostly, these details are returned as headers of the response. Response Time To get the time needed to fetch the response from the backend or other downstream systems, Rest Assured provides a method called ‘timeIn’ with a suitable timeUnit to get the time taken to return the response. public static void getResponseTime(){ System.out.println(“The time taken to fetch the response “+get(url) .timeIn(TimeUnit.MILLISECONDS) + “ milliseconds”); } Output: Output for getResponseTime Business need: A very important feature of testing APIs is their response time, to measure the performance of the application. Note that the time taken for your call may take more or less time depending on your internet speed, the performance of the API at that time, server load, and other factors impacting the time. Content-Type You can get the content-Type of the response returned using the method is “contentType ()”. public static void getResponseContentType(){ System.out.println(“The content type of response “+ get(url).then().extract() .contentType()); } Output Output for getContentType Business Need: At times getting the content-type is essential for ensuring there are no security gaps for any cross-origin threats or just to ensure the content passed is as per the standards of the API. Fetch Individual JSON Element From the given response, you are asked to calculate the total amount, you need to fetch every amount and sum it up. selenium. Step 4) Fetch all amounts in a collection, and then loop for all values to calculate the sum public static void getSpecificPartOfResponseBody(){ ArrayList<String> amounts = when().get(url).then().extract().path(“result.statements.AMOUNT”) ; int sumOfAll=0; for(String a:amounts){ System.out.println(“The amount value fetched is “+a); sumOfAll=sumOfAll+Integer.valueOf(a); } System.out.println(“The total amount is “+sumOfAll); } Note: Since the amount value is in string data type, we convert to integer and use it for summation. Output: Output for getSpecificPartOfResponse Summary: - Rest Assured is a group of java libraries which enables us to automate Rest API testing - Rest Assured is Java-based, and knowledge of core Java suffices for learning it - It helps fetch values of request and response from complicated JSON structures - The API request can be customized with a variety of header, query, path param, and any session or cookies to be set. - It helps set assert statements and conditions. - While Rest Assured is very helpful when the response is JSON type, it’s methods may not work seamlessly if content type id HTML or plain text. - If you need any help please contact us at +91–93 92 91 89 89 or sales@qaprogrammer.com -
https://raghavendrar672.medium.com/rest-assured-api-automation-step-by-step-procedure-58414bee56b2
CC-MAIN-2022-33
refinedweb
1,998
63.7
You can embed boardgame.io, an open source game engine for turn based games, in the Tabletop Playground JavaScript environment. If you already have a game implemented using boardgame.io, this allows you to use Tabletop Playground as visualization and networking layer, while you can reuse all the game logic you already have! If you create a new game in Tabletop Playground, using boardgame.io can help you to keep track of the game state and rules. And when you use boardgame.io to create your package, you can easily port your game to be playable in a web browser by connecting to a different frontend. Using boardgame.io together with Tabletop Playground means that you won’t use some of the functionality that boardgame.io offers: the multiplayer and lobby features are taken care of by Tabletop Playground, and your JavaScript code only runs on the host. You also don’t need the React bindings, instead you’ll be writing code to use Tabletop Playground as your view layer! Set up boardgame.io This tutorial will assume that you have some JavaScript knowledge and you are familiar with how scripting in Tabletop Playground works (see Scripting Basics). In order to set up boardgame.io for your package, you first need to make sure that you have npm installed. Then open up a command line in your package’s “Scripts” folder and enter: npm install boardgame.io This will install boardgame.io and it’s dependencies in the “node_modules” folder in your scripts directory. You now have boardgame.io available in the scripts for your package. Create the game logic As an example of how boardgame.io can be used, we’ll adapt the tic-tac-toe tutorial at to work within Tabletop Playground. You can download the completed package at mod.io: For the game logic, we can use the code from the tutorial almost unchanged, we only need to change the first line from import to require and use module.exports instead of export: import { INVALID_MOVE } from 'boardgame.io/core'; const TicTacToe = { setup: () => ({ cells: Array(9).fill(null) }), turn: { moveLimit: 1, }, moves: { clickCell: (G, ctx, id) => { if (G.cells[id] !== null) { return INVALID_MOVE; } G.cells[id] = ctx.currentPlayer; }, }, endIf: (G, ctx) => { if (IsVictory(G.cells)) { return { winner: ctx.currentPlayer }; } if (IsDraw(G.cells)) { return { draw: true }; } }, }; // Return true if `cells` is in a winning configuration. function IsVictory(cells) { const positions = [ [0, 1, 2], [3, 4, 5], [6, 7, 8], [0, 3, 6], [1, 4, 7], [2, 5, 8], [0, 4, 8], [2, 4, 6], ]; const isRowComplete = row => { const symbols = row.map(i => cells[i]); return symbols.every(i => i !== null && i === symbols[0]) }; return positions.map(isRowComplete).some(i => i === true); } // Return true if all `cells` are occupied. function IsDraw(cells) { return cells.filter(c => c === null).length === 0; } module.exports = TicTacToe; This gives us the basic mechanics of a tic-tac-toe game and a check whether the game is over and who has won. Connect with Tabletop Playground Instead of writing a view layer for a browser, we now connect the game logic to Tabletop Playground objects. For this example, we can use a board with snap points, and two card stacks to represent the X and O symbols that players can place. To connect these objects with the game logic represented in boardgame.io, we create a global script. First, we import the Tabletop Playground API, our TicTacToe object, and the boardgame.io client. Then we initialize the client object: const {world, globalEvents} = require('@tabletop-playground/api'); const { TicTacToe } = require('./Game'); const { Client } = require('boardgame.io/client'); client = Client({ game: TicTacToe }); client.start(); Now we need to get the information about player actions to boardgame.io. Since we created a board with snap points, the relevant event is onSnapped: when an X or O card is snapped to the board, a player has placed a mark. We add the onSnapped callback new objects of the card types – new objects are created when taking a card from a stack. o_id = world.getObjectById('O Cards').getTemplateId(); x_id = world.getObjectById('X Cards').getTemplateId(); function snapped(obj, player, snap) { client.moves.clickCell(snap.getIndex()); }; globalEvents.onObjectCreated.add(function(obj) { if (obj.getTemplateId() === o_id || obj.getTemplateId() === x_id) { obj.onSnapped.add(snapped); } }); Finally, we can subscribe to state updates from boardgame.io to react to events in the game. For example, we can show a message when the game is over: client.subscribe(function (state) { if (state.ctx.gameover) { if (state.ctx.gameover.winner !== undefined) { console.log("Winner: " + state.ctx.gameover.winner); } else { console.log("Draw"); } } }) And that’s it! With just a few lines of code, we’ve connected an existing boardgame.io game to Tabletop Playground. There’s still a lot that could be improved, of course. Most importantly, actions that players can take are usually not restricted in Tabletop Playground, so the scripts could check whether a player action is valid and react appropriately if it is not. For the tic-tac-toe game, this could mean moving a card back to the stack if it is snapped to an invalid position or dropped somewhere else.
https://tabletop-playground.com/knowledge-base/using-boardgame-io/
CC-MAIN-2020-50
refinedweb
858
68.77
The following instructions assume that you have already picked a program to write. First of all, I'm sure you recall that you should not begin writing code before you've thoroughly read the problem assignment -- including the thought-provoking questions. The assignment itself describes the 'problem' you are trying to solve. The sample run (if present) gives clues to details that might not have been obvious from the description (often including notes to point these issues out). But the thought-provoking questions bring all of these details into perspective and try to make sure you are clear on all aspects of the program you are about to (or, at worst, are) writing. So at least read them first! Also take note (the TPQs will help here, too) of the topics/techniques you are supposed to be demonstrating with your program. These are listed at the very top of the assignment under 'Topical Information'. It is often possible to solve a programming assignment with many different approaches, but I've created these assignments with the goal of helping you understand certain ideas. If you solve a problem with approach B and I was trying to get you to see technique A more clearly, I can't give you full credit. Certainly you may have learned something, but it wasn't the point of the exercise. *shrug* Sorry... Now you need to decide about the layout your program is going to have: its User Interface (UI). How will you ask the user for information? Are your prompts on the same line as their typing or the line above? Are you mundane, acerbic, witty, or something else entirely? Are your results simply labeled or layed out in fancy columns? Are your results simply stated in a set of English sentences? What is the title of your program? Once you've decided these things, actually writing the program will be easier because the 'cout' parts will flow from this guiding design decision. Your job in writing a program to solve an assignment is NOT to mimic the sample run exactly. Your program must accomplish the same aims (have the same results), but can do it in a different 'way'. You can design the look-and-feel of your program. The only thing that must remain the same is the overall structure (opening, input, results, closing) and the results that are printed (but not how they are printed). That said, I know that not everyone feels creative all the time, so if you want to mimic the sample run, go ahead. *shrug* Who am I to stifle your boredom? If your UI choice above wasn't 'mundane', perhaps you'd like a theme? There's the usual science fiction, fantasy, country & western, horror, mystery, romance, etc. But there are cross-genres, too: anime, puppets, live-action, etc. (*grin* Just kidding, kinda hard to do those in a text interface. *chuckle*) The text in your program shouldn't be an eclectic mish-mash of styles but rather should have a flow to it. A theme isn't necessary, of course, but can help to accomplish that flow. Looking through the problem description, sample run, and TPQs, you should get an idea of what results your program should produce. Note those as 'output variables'. These will be our entire reason for existence during this programming process. We may eventually decide that they can be simply printed from their calculation instead of stored as variables, but for now it is fine to just say 'variables' for all of them. You should also be able to decipher what information the program is supposed to read in from the user. Note those as 'input variables'. Everything must have a beginning and an ending: these will be our beginning. Also, these must be variables because cin can't store values directly into a calculation -- it needs a [writable] memory location to put the values into. Look through the description for any formulas you might need to transform the input into the output. If none are given and yet are clearly needed, they are probably standard math or science formulas that are considered 'common knowledge'. A quick Google search or look through an old textbook should bring them to light. Also, while looking for/through your formulas, take note of any likely candidates for constants. (Remember, if it is a literal value with a name-able meaning, we'd prefer it to be a constant. When you can't come up with a name for it, just place the original formula in a comment to show that it is a standard value.) Our general setup is to: All of this will be placed inside the main function/program between the open curly brace ( {) and the return statement ( return 0;). And, of course, we can't use all those variables/constants we just noted without first declaring them. These declarations go immediately after the open curly brace of main. (Constants can also go between the ' using namespace std;' line and the ' int main(void)' line.) Refer to your notes to see the proper syntax of declaration statements for variables and constants; of cout statements for prompts, labels, and messages; of cin statements for reading; and of assignment statements for calculations. Remember that you must declare variables/constants before you can use them. You must prompt before you read. You must read before you calculate. And you must calculate before you print results. As the semester progresses and you are using more functions from standard libraries, don't forget to #include them at the top of the program (above the ' using namespace std;' line). Also, once you learn to write your own functions (and forever after!), try to break longer processes down into more manageable steps and try to make 'repeated' code into parameterized functions that you can re-use (within a single program and/or across multiple programs). Don't forget to consider style at every turn. Make sure you are using blank space effectively both within your code and in your UI. Make sure your code is indented a [consistently-sized] level within each pair of curly braces ( {}). Make sure you've named your variables, constants, etc. with clear, understandable names. If any calculations are at all tricky, make sure you've commented before (or to the right) of them. (Also comment any functions, branches, or loops you may have written...see your notes about these.) And, make sure your lines are no longer than 72-78 characters so that the printer doesn't chop them off. (See also the printing notes for more on detecting and avoiding this problem...as well as saving a small forest by the end of the course.) When in doubt about a style issue, always look it up on the master style list. In fact, why don't you just go ahead and add that page to your 'Top 10 Things to Read before Bed' list. *smile* *grin* In the 121 examples directory under 'basics', you'll notice one called 'math'. In this example are an actual assignment, the info file for the solution, the answers to the thought-provoking questions, the actual program to solve the problem, and even a completed script for handing in.
http://home.earthlink.net/~craie/program.submission/121/how.write.prog.html
CC-MAIN-2013-20
refinedweb
1,211
71.95
Over, if we were to build a web crawler to spider a series of webpages (a task that is, by definition, I/O bound), our main program would spawn multiple threads to handle downloading the set of pages in parallel instead of relying on only a single thread (our “main thread”) to download the pages in sequential order. Doing this allows us to spider the webpages substantially faster. The same notion applies to computer vision and reading frames from a camera — we can improve our FPS simply by creating a new thread that does nothing but poll the camera for new frames while our main thread handles processing the current frame. This is a simple concept, but it’s one that’s rarely seen in OpenCV examples since it does add a few extra lines of code (or sometimes a lot of lines, depending on your threading library) to the project. Multithreading can also make your program harder to debug, but once you get it right, you can dramatically improve your FPS. We’ll start off this series of posts by writing a threaded Python class to access your webcam or USB camera using OpenCV. Next week we’ll use threads to improve the FPS of your Raspberry Pi and the picamera module. Finally, we’ll conclude this series of posts by creating a class that unifies both the threaded webcam/USB camera code and the threaded picamera code into a single class, making all webcam/video processing examples on PyImageSearch not only run faster, but run on either your laptop/desktop or the Raspberry Pi without changing a single line of code! Looking for the source code to this post? Jump right to the downloads section. Use threading to obtain higher FPS The “secret” to obtaining higher FPS when processing video streams with OpenCV is to move the I/O (i.e., the reading of frames from the camera sensor) to a separate thread. You see, accessing your webcam/USB camera using the cv2.VideoCapture function and the .read() method is a blocking operation. The main thread of our Python script is completely blocked (i.e., “stalled”) until the frame is read from the camera device and returned to our script. I/O tasks, as opposed to CPU bound operations, tend to be quite slow. While computer vision and video processing applications are certainly quite CPU heavy (especially if they are intended to run in real-time), it turns out that camera I/O can be a huge bottleneck as well. As we’ll see later in this post, just by adjusting the the camera I/O process, we can increase our FPS by as much as 379%! Of course, this isn’t a true increase of FPS as it is a dramatic reduction in latency (i.e., a frame is always available for processing; we don’t need to poll the camera device and wait for the I/O to complete). Throughout the rest of this post, I will refer to our metrics as an “FPS increase” for brevity, but also keep in mind that it’s a combination of both a decrease in latency and an increase in FPS. In order to accomplish this FPS increase/latency decrease, our goal is to move the reading of frames from a webcam or USB device to an entirely different thread, totally separate from our main Python script. This will allow frames to be read continuously from the I/O thread, all while our root thread processes the current frame. Once the root thread has finished processing its frame, it simply needs to grab the current frame from the I/O thread. This is accomplished without having to wait for blocking I/O operations. The first step in implementing our threaded video stream functionality is to define a FPS class that we can use to measure our frames per second. This class will help us obtain quantitative evidence that threading does indeed increase FPS. We’ll then define a WebcamVideoStream class that will access our webcam or USB camera in a threaded fashion. Finally, we’ll define our driver script, fps_demo.py, that will compare single threaded FPS to multi-threaded FPS. Note: Thanks to Ross Milligan and his blog who inspired me to do this blog post. Increasing webcam FPS with Python and OpenCV I’ve actually already implemented webcam/USB camera and picamera threading inside the imutils library. However, I think a discussion of the implementation can greatly improve our knowledge of how and why threading increases FPS. To start, if you don’t already have imutils installed, you can install it using pip : Otherwise, you can upgrade to the latest version via: As I mentioned above, the first step is to define a FPS class that we can use to approximate the frames per second of a given camera + computer vision processing pipeline: On Line 5-10 we define the constructor to our FPS class. We don’t require any arguments, but we do initialize three important variables: - _start : The starting timestamp of when we commenced measuring the frame read. - _end : The ending timestamp of when we stopped measuring the frame read. - _numFrames : The total number of frames that were read during the _start and _end interval. Lines 12-15 define the start method, which as the name suggests, kicks-off the timer. Similarly, Lines 17-19 define the stop method which grabs the ending timestamp. The update method on Lines 21-24 simply increments the number of frames that have been read during the starting and ending interval. We can grab the total number of seconds that have elapsed between the starting and ending interval on Lines 26-29 by using the elapsed method. And finally, we can approximate the FPS of our camera + computer vision pipeline by using the fps method on Lines 31-33. By taking the total number of frames read during the interval and dividing by the number of elapsed seconds, we can obtain our estimated FPS. Now that we have our FPS class defined (so we can empirically compare results), let’s define the WebcamVideoStream class which encompasses the actual threaded camera read: We define the constructor to our WebcamVideoStream class on Line 6, passing in an (optional) argument: the src of the stream. If the src is an integer, then it is presumed to be the index of the webcam/USB camera on your system. For example, a value of src=0 indicates the first camera and a value of src=1 indicates the second camera hooked up to your system (provided you have a second one, of course). If src is a string, then it assumed to be the path to a video file (such as .mp4 or .avi) residing on disk. Line 9 takes our src value and makes a call to cv2.VideoCapture which returns a pointer to the camera/video file. Now that we have our stream pointer, we can call the .read() method to poll the stream and grab the next available frame (Line 10). This is done strictly for initialization purposes so that we have an initial frame stored in the class. We’ll also initialize stopped , a boolean indicating whether the threaded frame reading should be stopped or not. Now, let’s move on to actually utilizing threads to read frames from our video stream using OpenCV: Lines 16-19 define our start method, which as the name suggests, starts the thread to read frames from our video stream. We accomplish this by constructing a Thread object using the update method as the callable object invoked by the run() method of the thread. Once our driver script calls the start method of the WebcamVideoStream class, the update method (Lines 21-29) will be called. As you can see from the code above, we start an infinite loop on Line 23 that continuously reads the next available frame from the video stream via the .read() method (Line 29). If the stopped indicator variable is ever set, we break from the infinite loop (Lines 25 and 26). Again, keep in mind that once the start method has been called, the update method is placed in a separate thread from our main Python script — this separate thread is how we obtain our increased FPS performance. In order to access the most recently polled frame from the stream , we’ll use the read method on Lines 31-33. Finally, the stop method (Lines 35-37) simply sets the stopped indicator variable and signifies that the thread should be terminated. Now that we have defined both our FPS and WebcamVideoStream classes, we can put all the pieces together inside fps_demo.py : We start off by importing our necessary packages on Lines 2-7. Notice how we are importing the FPS and WebcamVideoStream classes from the imutils library. If you do not have imutils installed or you need to upgrade to the latest version, please see the note at the top of this section. Lines 10-15 handle parsing our command line arguments. We’ll require two switches here: --num-frames , which is the number of frames to loop over to obtain our FPS estimate, and --display , an indicator variable used to specify if we should use the cv2.imshow function to display the frames to our monitor or not. The --display argument is actually really important when approximating the FPS of your video processing pipeline. Just like reading frames from a video stream is a form of I/O, so is displaying the frame to your monitor! We’ll discuss this in more detail inside the Threading results section of this post. Let’s move on to the next code block which does no threading and uses blocking I/O when reading frames from the camera stream. This block of code will help us obtain a baseline for our FPS: Lines 19 and 20 grab a pointer to our video stream and then start the FPS counter. We then loop over the number of desired frames on Line 23, read the frame from camera (Line 26), update our FPS counter (Line 35), and optionally display the frame to our monitor (Lines 30-32). After we have read --num-frames from the stream, we stop the FPS counter and display the elapsed time along with approximate FPS on Lines 38-40. Now, let’s look at our threaded code to read frames from our video stream: Overall, this code looks near identical to the code block above, only this time we are leveraging the WebcamVideoStream class. We start the threaded stream on Line 49, loop over the desired number of frames on Lines 53-65 (again, keeping track of the total number of frames read), and then display our output on Lines 69 and 70. Threading results To see the affects of webcam I/O threading in action, just execute the following command: As we can see, by using no threading and sequentially reading frames from our video stream in the main thread of our Python script, we are able to obtain a respectable 29.97 FPS. However, once we switch over to using threaded camera I/O, we reach 143.71 FPS — an increase of over 379%! This is clearly a huge decrease in our latency and a dramatic increase in our FPS, obtained simply by using threading. However, as we’re about to find out, using the cv2.imshow can substantially decrease our FPS. This behavior makes sense if you think about it — the cv2.show function is just another form of I/O, only this time instead of reading a frame from a video stream, we’re instead sending the frame to output on our display. Note: We’re also using the cv2.waitKey(1) function here which does add a 1ms delay to our main loop. That said, this function is necessary for keyboard interaction and to display the frame to our screen (especially once we get to the Raspberry Pi threading lessons). To demonstrate how the cv2.imshow I/O can decrease FPS, just issue this command: Using no threading, we reach 28.90 FPS. And with threading we hit 39.93 FPS. This is still a 38% increase in FPS, but nowhere near the 379% increase from our previous example. Overall, I recommend using the cv2.imshow function to help debug your program — but if your final production code doesn’t need it, there is no reason to include it since you’ll be hurting your FPS. A great example of such a program would be developing a home surveillance motion detector that sends you a txt message containing a photo of the person who just walked in the front door of your home. Realistically, you do not need the cv2.imshow function for this. By removing it, you can increase the performance of your motion detector and allow it to process more frames faster. Summary In this blog post we learned how threading can be used to increase your webcam and USB camera FPS using Python and OpenCV. As the examples in this post demonstrated, we were able to obtain a 379% increase in FPS simply by using threading. While this isn’t necessarily a fair comparison (since we could be processing the same frame multiple times), it does demonstrate the importance of reducing latency and always having a frame ready for processing. In nearly all situations, using threaded access to your webcam can substantially improve your video processing pipeline. Next week we’ll learn how to increase the FPS of our Raspberry Pi using the picamera module. Be sure to enter your email address in the form below to be notified when the next post goes live! Thank you ! Great tuto ! I’m wondering if (in a production app) we should use a lock or something to synchronize access to the frame which is a shared resource, right ? If it’s a shared resource, then yes, you should absolutely use a lock on the image data, otherwise you can run into a synchronization issue. Same here – I assumed you should have a thread acquire and release so it isn’t reading the image wihile it is being written? Apparently assinging a value to be a numpy array is atomic – or it doesn’t really matter if it was the last frame, not the very latest? Looks like if you have ANY processing you need to have it out of that fetching image thread, and it runs pretty fast. Hi Adrian, looks like the increase in fps is a fake; you get a frame immediately when required, but looks like it is still the former frame when the program body is executed faster than the physical frame rate of the camera. What do you think? Jürgen Very true — and at that point it depends on your physical camera. If your loop is faster than the physical frame rate of the camera, then the increase in FPS is not as realistic. This is further evidenced when we use the cv2.imshowfunction to simulate a more “realistic” scenario. In either case though, threading should be used since it can increase FPS. This is 2 years later and I am using opencv 3.2. Seems to me that it is already using threading, at lest all cores are in use. I agree with Jürgen, that the higher framerate is just fake. In the 1st (“non-threaded”) case, the code assumedly waits for the next frame of the camera to arrive, then calculates, framerate is calculated from the resulting speed of the loop. So the framerate is limited by a) camera framerate b) calculation time. If calculation time is much smaller than camera-framerate, you are only limited by a) In the 2nd case, you decouple the two. that means your framerate only depends on b), but you still only get another new frame according to a) and so you keep re-calculating uselessly on the same frame. I don’t see an advantage of threading the way it is done here, at least as long as your calculation time is shorter than the time between frames. (But I think that it *is* a good way to measure how long your image processing takes). You even lose some time, because you are (obviously) busy needlessly re-calculating at the time that the next frame comes in. What happens if your calculation time is larger than the time between frames from the camera is not so clear. In your threaded case, you keep fetching images and throw away any previous ones… But I guess in the non-threaded case, the kernel does just the same? Else you’d fill kernel buffer space with frames if you cannot process fast enough (or wait long enough between fetches with waitKey). You can, btw., set and query the framerate in opencv 3.2 with e.g. cap.set(cv2.CAP_PROP_FPS,150) After setting, reading the value with cap.get only returns what you set, you have to measure fps in the read-loop, similar to what you did. You probably also have to set smaller cv2.CAP_PROP_FRAME_WIDTH and cv2.CAP_PROP_FRAME_HEIGHT for higher fps rates or so (at least that is the case with the camera here). Cheers, I. Hi Iridos — thanks for the comment. I’ve tried to make this point clear in the blog post and in the comments section, but what you’re actually measuring here is the frame processing throughput rate of your pipeline. There are physical limitations to the number of frames that can be read per second from a given camera sensor, so it’s often helpful to instead look at how fast our process pipeline (and see if we can speed it up). By using threading, we don’t have to worry about the video I/O latency. In the purposely simple examples in the blog post, the whileloop runs faster than the next frame can be read. Keep in mind that for any reasonable video processing pipeline this wouldn’t be the case. Hi, My query is. If I set it, will it be set for the session until cv object is obsolete / video is cleared. Will the “set” scope be limited to certain conditions? Like, the script that includes the cv2, or is there a way to globally set it to say 30 fps. Right now,the camera runs 20 fps and I need it to work on 30. Can I permanently set it for always ? Thanks, ReemaRaven Hi Adrian This is great. I am myself experimenting with a multithreaded app that runs opencv and other libraries and I’m already using your video stream class. Special note for OSX users: I’ve run into a limitation in opencv on OSX, the command cv2VideoCapture(0) can only be issued in the main thread, so take this into account when designing your app. See for more info Thanks for sharing David — that’s a great tip regarding cv2.VideoCapturecan only be executed from the main thread. Awesome, Adrian!! Can’t wait to read the tutorial about fps increase for Raspberry Pi using the picamera module! Great tutorial (as allways)! and good timing too… I’m just trying to make the image gathering threaded for my raspberry pi project to improve the framerate. Without threading but with the use of a generator type of code for the image handling I improved the framerate by around 2 times but hopefully threading will do more. Another thing that is interesting is how to optimize the framerate vs the opencv computational time to reach a good balance. Jurgen mentioned that several frames could be similar and then it is no need to make calculation on that second frame (at least not in my case). On a raspberry pi 2 there is 4 cores and distributing the collection of frame data and calculations in a good way would improve the performance. Do you have any thoughts or advice about that? If you’re using the Pi 2, then distributing the frame gathering to a separate thread will definitely improve performance. In fact, that’s exactly what next week’s blog post is going to be about 😉 Hi Adrian In the single threaded case you’re limited to 30fps because that is the framerate of the camera in this case and you’re not really achieving 143fps in the multi-threaded case since you’re simply processing the same frame multiple times. The 143fps is really a measure of the amount of time the imutils.resize() takes, i.e. ~6.9ms. So the comparison between 30fps and 143fps isn’t really a fair and accurate comparison. I recently had a project where we ended up using the same approach, i.e. grabbing the webcam frames on a secondary thread and doing the OpenCV processing on the main python thread. However this wasn’t in order to increase the fps processing rate, rather it was to minimize the latency of our image processing. We were measuring aircraft control positions by visually tracking targets on the aircraft controls to record their position during flight and needed to synchronize the recording of the control positions with another instrument we use to record the aircraft’s attitude etc. So we needed as little latency as possible in order to keep the control positions synchronized with the other aircraft data we were recording. We didn’t need a high sampling rate, i.e. we were happy with 10Hz as long as the latency was as small as possible. Our camera output at 1080@30Hz and our image processing (mainly Hough circles) took longer than the frame period of ~33ms and if we read the camera frames on the main thread the OS would buffer roughly 5 frames if we didn’t read them fast enough. So going with the multithreaded approach we could always retrieve the latest frame as soon as our image processing was complete, so at a lower rate than the camera rate but with minimizing latency. Cheers Indeed, you’re quite right. The 143 FPS isn’t a fair comparison. I was simply trying to drive home the point (as you suggested) of the latency. Furthermore, simply looping over a set of frames (without doing any processing besides resizing) isn’t exactly fair of what a real-world video processing would look like either. Hi Adrian But I think that overall you’ve made it more confusing mixing up fps and latency. If your main point that you were trying to drive home is the win in terms of latency then that should be in the title, in the examples your provide etc. Sort of like mixing up a disk’s transfer rate and latency. Cheers I’ll be sure to make the point more clear in the next post on Raspberry Pi FPS and latency. Hi Sean, I have the same problem as yours. I need in my project the minimum latency as possible. Due to the opencv internal buffer I have to use threads. I am working with several 8Mp cameras, each of them with its own thread. But using threads then I face the “select timeout” problem. Did you have the same problem? By the way, did you use locks to access the variable “frame”? Nice tutorial – thanks for the mention! I was experiencing a subtly different problem with webcam frames in my apps, which led me to use threading. I was not so concerned with the speed of reading frames, more that the frames were getting progressively stale (after running app for a minute or so on Raspberry Pi, the frame served up was a number of frames behind the real world). Perhaps my app loop was too slow interrogating the webcam and was being served up a cached image? By using a thread I was able to interrogate the webcam constantly and keep the frame fresh. Thanks for the tip Ross — you’re definitely the inspiration behind where this post came from. Hi Ross See my comment above, we saw the same issue as you on the ODROID we were using. On our system it looked like the OS/v4l/OpenCV stack was maintaining a buffer on the order of 5 frames if we didn’t retrieve frames as fast as the camera’s frame rate, which meant we ended up with an extra latency on the order of 5x33ms = 165ms. So we ended up pulling the usb web camera images at the camera’s frame rate on a secondary thread so that we were always processing the latest web camera image even though overall our main video processing only ran at 10fps. We initially tried to see if there was a way to prevent this buffering but weren’t able to find a way to disable it, so we ended up with the multi-threading approach. Cheers I installed imutils but still get this error when I run the program: ImportError: No module named ‘imutils.video’ Running python 3.4 and opencv2 Make sure you have the latest version of imutils: $ pip install --upgrade imutils --no-cache-dir I have the latest version installed, but I’m still getting the error. Please help if you can. All I need is something simple that can display an image on the screen from a USB webcam, and can start automatically at boot. I am running a Raspberry Pi Zero and Raspbian Jessie. The webcam is a rather cheap GE model, with YUYV 640×480 support. I have already tried multiple programs, but only luvcview gave a usable picture, and it broke itself when attempting at auto-start script. Any help at all would be useful! Thank you in advance! I detail how to create an autostart script here. This should also take care of the error messages you’re getting since you’ll be accessing the correct Python virtual environment where ideally you have installed the imutils package. Thank you for a great tutorial. I am working with applications like SURF, Marker detection etc. but i need to increase the FPS for the above mentioned applications. Will this approach work with OpenCV C++? If yes, how? Yes, the same approach will work for C++. The exact code will vary, but you should do some research on creating and managing separate threads in C++. I completed a C ++ version, but don’t know it is suitable with C++ mutex. Hope this is useful. [mutexvideocapture.cpp]() This is a classic producer/consumer problem. Here we have a camera (the producer) that is delivering frames at a constant rate in real time and a frame reading program (the consumer) that is not processing the frames in real time. In this case we must have a frame queue with a frame count. If the frame count is > 0 then the consumer consumes a frame – reducing the frame count. If the frame count is zero then the consumer must wait until the frame count rises above zero before consuming a frame. There can be any number of consumers but the producer must serialize access to the frame queue (using locks/semaphores). I’m wondering if Adrian has fully covered this aspect in his book and tutes… I don’t cover this in detail inside Practical Python and OpenCV, but it’s something that I do plan on covering in future blog posts. Unless you want to process every frame that is read by the camera, I actually wouldn’t use a producer-consumer relationship for this. Otherwise, as you suggested, if your buffer is > 0, then you’ll quickly build up a backlog of frames that need to be processed, making the approach unsuitable for real-time applications. Thank you sooo much for sharing your blog post. My use case is using Tensorflow model evaluation from a webcam stream. Each instance of model evaluation is taking about 80ms on Mobilenet based model. My question is, if i used the threaded stream reader, would i run into an issue wherein my model is evaluating older frames because it is slow while, the threaded stream reader is quickly building up backlog… If that happens the model would never be able to catch up. In that situation, i would have to figure out how to reduce the FPS to match the model’s evaluation time of 80ms. The threaded reader isn’t building a backlog of frames, it’s grabbing the most recent frame. I don’t see a performance increase on Windows 7. Process explorer (confirmed by cpu usage graph in Task Manager) confirms cv2 is already divided in multiple threads?? I’m not a Windows user, so I’m not entirely sure how Windows handles threading. The Python script won’t be divided across multiple process, but a new thread should be spawned. Again, I haven’t touched a Windows system in over 9+ years, so I’m probably not the right person to ask regarding this question. Hello, Thank you very much for sharing this information. One drawback of this method is as mentioned in the comments that you kind of lose the timestamp / counter information when a frame was shot by the camera. Now the funny thing is a timesteamp is provided if you use the capture_continous function and provide a string as argument. So diving a bit deeper in the code of this function, don’t you think we could just add a timestamp / counter in the “else” condition? It might make the code that later processes these images a bit more efficient since a mechanism can be made to avoid processing the same frame twice. Not sure if you have an opinion on this one / ever experimented with it 🙂 ? hey is there a use for the variable “grabbed”? I can’t see it being used anywhere… I might be misunderstanding a lot though! The stream.readfunction returns a 2-tuple consisting of the frame itself along with a boolean indicating if the frame was successfully read or not. You can use the grabbedboolean to determine if the frame was successfully read or not. Hello Adrian, Thank you for the very useful blog post. You explain everything very clearly, especially to someone very new to python and image processing. Have you ever made a blog post regarding packages/module hierarchy? In some of your other blog posts you explicitly tell us how to name a file, but I’m confused about how the FPS and WebcamVideoStream classes are defined in the directory. More specifically, what names should those files have (is there a naming convention?) Where in the project are they typically located? How are they pulled “from imutils.video”? I know these are very basic questions, but I haven’t found a resource online that explains this clearly. Thanks again for your work. Hey Kevin — this is less of a computer vision question, but more of a Python question. I would suggest investing some time reading about Python project structure and taking a few online Python courses. I personally like the RealPython.com course. The course offered by CodeAcademy is also good. hi! Thanks for the great tutorial. Is there any way to make the .read() method block until there is a new frame to return? If not, is there any efficient way to determine if the old frame is the same as the new frame so I can ignore it? thanks again As far as I know, you can’t block the .read()method until a new frame is read. However, a simple, efficient method to determine if a new frame is ready is to simply hash the frame and compare hashes. Here is an example of hashing NumPy arrays. Hi, I followed up on the tutorial, after executing “python picamera_fps_demo.py” the results were: [INFO] sampling frames from ‘picamera’ module… [INFO] elapsed time: 3.57 [INFO] approx. FPS: 28.32 [INFO] sampling THREADED frames from ‘picamera’ module… [INFO] elapsed time: 0.48 [INFO] approx. FPS: 208.95 but when I ran “python picamera_fps_demo.py –display 1” the results were: [INFO] sampling frames from ‘picamera’ module… [INFO] elapsed time: 8.54 [INFO] approx.FPS: 11.83 [INFO] sampling THREADED frames from ‘picamera’ module… [INFO] elapsed time: 6.54 [INFO] approx. FPS: 15.29 So it ran slower, I’m not sure what is the real issue, but I’m using a Pi 3 fresh out of the box, using a 5V 1A power source, I’m connected the Pi 3 to the laptop using Xming and putty via LAN cable. Looking at your results, it seems that in both cases the threaded version obtained faster FPS processing rate. 15.29 FPS is faster than 11.83 FPS and 208.95 FPS is faster than 28.32 FPS. So I’m not sure what you mean by running slower? Thanks for the reply Adrian, when I see the video feed from the Pi camera on my laptop, there is a large delay, when I wave my hand over the camera it almost takes 2 to 3 seconds then I see my hand in the video feed, is this delay normal or is it an issue. So you’re accessing the video feed via X11 forwarding or VNC? That’s the problem then. Your Pi is reading the frames just fine, it’s just the network overhead of sending the frames from the Pi to your VNC or X11 viewer. If you were to execute the script on your Pi with a keyboard + HDMI monitor, the frames would look much more fluid. Hi Adrian, I’m accessing the video feed via X11 forwarding, thanks for helping me identify what the problem really is. A lot of your tutorials has provided me with the basic foundation for my project, there is no other place I would recommend a beginner like me to start off learning image and video processing. Great job resolving the issue Peni, I’m glad it was a simple fix. And thank you for the kind words, I’m happy I can help out 🙂 Hi Adrian, thank you for the post! I have a question that is currently bothering me a bit, that is: when we use multi threads as this post’s approach, does that mean we are using multi cores? or are we just using single core with 2 threads? cuz I’m currently up to some real-time project and trying to do it using multi threads, what surprised me is that the number of frames i can process per second actually decreased a bit, cuz i thought if i’m using a different core for reading in the video i should at least save the reading time and be able to process a bit more frames per second right? Thanks again for your help! Hey Samuel — we are using a single core with 2 threads. Typically, when you work with I/O operations it’s advisable to utilize threads since most of your time is spent waiting for new frames to be read/written. If you’re looking to parallelize a computation, spread it across multiple cores and processors. Hi.! How to use this python code with other image processing task. should we run parallel these two codes using terminal? how to do this? please help me. Anyway your blog is the best Hey Shirosh — I’m not sure what you mean by “use this Python code with other image processing task”. Can you please elaborate? Hello Adrian, I’ve noticed that cv2.imshow on OS X is muuuuch slower than its counterpart on windows. The following benchmark runs in 15 seconds on a virtualised windows inside my mac, but it takes as long as 2 minutes to run on OS X itself! Do you know what could be the reason and possible fix? Thanks a lot! Best, Cassiano That’s quite strange, I’m not sure why that may be. I don’t use Windows, but I’ve never noticed an issue with cv2.imshowbetween various operating systems. Hi, Adrian. I have a question. Where can I save ‘fps_demo.py’ ? You can save it anywhere on your system that you would like. I would suggest using the “Downloads” section of this tutorial to download the code instead though. hi adrian thank you for this good project when i run this project after increse fps show a core dump segmentation fault error . can you help me ??.. If you’re getting a segmentation fault then the threading is likely causing an issue. My best guess is that the stream object is being destroyed while it’s in the process of reading the next frame. What OS are you using? Hi, I am trying to use Ps3 EYE Camera on ubuntu for my OpenCv project. This camera support 640×480 up to 60fps and 320×240 resolution up to 187fps. I am sure you know what I mention about. I can set each one of this values on windows with CodeLaboratories driver. But on ubuntu, I use ov534.c driver and QT v4L2 software. Even though I ‘m seeing all of configuration settings of this camera on v4L2, I can’t set over 60fps. I can set any value under 60fps. Do you have an idea about this problem. What can I do for setting at least 120fps? Unfortunately, I don’t have any experience with the PS3 Eye Camera, so I’m not sure what the proper parameter settings/values are for it. I hope another PyImageSearch reader can help you out! Hey Adrian, I’m working on an image processing pipeline with live display that runs at 75FPS without cv2.imshow() and 13 FPS with no screen output (OS-X FPS #s). I need a live output to the screen, and I want to maximize framerate. I already tried using Pygame’s blit command as an imshow() replacement, but got about the same speed. Are you aware of a module/command that will get an efficient screen refresh? If I’m lucky there will be an approach that will transfer from OS X to Raspberry Pi without too many hitches. Thanks. Hey Jon — when it comes to displaying frames to screen I’ve always used cv2.imshow. Unfortunately I don’t know of any other methods to speedup the process. Best of luck with the project! I found that PySDL2 as an interface to the SDL2 library fixes the problem on OS-X and may be viable on Raspberry Pi too. For any interested readers, this post has code that can be integrated into the code here to speed things up when displaying all frames to the screen. Thanks for a great resource Adrian! Thanks for sharing Jon! Hello adrian.. i recently mixing this code with image filtering.. but the image had a quite delay,, the image is like quite frezee.. not like cv2.videocapture(-1). What happen..? I dont know this..Can you explain me..? and if i use only serial usart communication i/o to send x y coordinate out.. where the code part must be change.. ? Thanks Hey Hanifudin — I’m honestly not sure what you are trying to ask. Can you please try to elaborate on your question? I don’t understand. 1. I send coordinate x and y target with serial pin. But when i using threading,, raspberry pi cannot send data via serial..so, how i activate serial tx rx in threading mode..? 2. In threading mode,, the image is quite freeze,, like paused and lagging (not smooth)..it doesnt like non threading mode.. so how i make it smooth like non threading mode..? Im sorry i’ve bad english Unfortunately, I don’t have much experience sending serial data via threading in this particular use case. I would try to debug if this is a computer vision problem by removing any computer vision code and ensure your data is being properly sent. As for the slow/freezing video stream, I’m not sure what the exact problem is. You might be overloading your hardware or there might be a bug in your code. It’s impossible for me to know without seeing your machine. Oh,, i know how why captured image is quite freeze,, im using usb hub to connect webcam,, wifi adapter and wireless mouse.. when i connect without usb hub it solved.. And sending serial data in threading mode is waiting to solved Hi Adrian, Thanks for the codes! I tried to run the code. it works perfectly without display but I get an error “Segmentation fault: 11” when sampling threaded frames with display on. what should I do? And I’m also doing a project with python and opencv try to record 120fps video, my camera is good for the fps but the utmost I can get is 30fps, any recommendations? Chris Regarding the segmentation fault, that sounds like a threading error. Most likely either the OpenCV window is being closed before the thread is finished or vice versa. It’s hard to debug that error, especially without access to your machine. Also, keep in mind that you cannot increase the physical FPS of your camera. The code used in this blog post shows how you can increase the frame processing rate of your pipeline. If your camera is limited to 30 FPS, you won’t be able to go faster than that. However, you will use this technique when you add more complex steps to your video processing pipeline and need to milk every last bit of performance out of it. Hi Adrian, your post is impressive! I have similar problem right now but I’m working on 16 bits sensors/cameras instead. Can I apply this to 16 bits? Do you know how to use opencv to capture 16 bits frames from usb device? Thanks! Hi Guille — I haven’t tried this code using a 16 bit raw sensor to OpenCV, so unfortunately I don’t have any insight here. Hi, Its a great tutorial. I am new to python and opencv and your explanation is great . Thanks for posting and explaining everything line by line. I understood the program, but if i want to store the frame using videowriter instead of viewing them, where do i do it. I am thinking wither in the webcamcapture class so that it is done in the thread itself or should i do it in the main thread where you use cv2.show. Currently i am using a webcam to capture at 1280×720 @ 30 hz, but i only get 8Hz. I am trying to improve this and want 30 Hz. Is this possible. If you would like to save the frames to disk, please see this blog post where I discuss cv2.VideoWriter. Hi , Thank you for this awesome tutorials ! I’m using RPi3 with Raspbian Stretch. When I run fps_demo.py, I get the error message as shown: [INFO] sampling frames from webcam… Traceback (most recent call last): … (h, w) = image.shape[:2] AttributeError: ‘NoneType’ object has no attribute ‘shape’ I saw your OpenCV: Resolving Nonetype errors post but I’m still confused. Could you help me solve this error? Oh, figured out that it was for the webcam, not the picamera. I saw your increasing picamera fps and it was all good. Is it possible to use this method if I unify picamera and cv2.VideoCapture as you posted in here ? I would suggest simply using the VideoClassdiscussed in the link you included. This is the best method to achieve computability with both the Raspberry Pi camera module and USB cameras. Hi Adrien, Thank you for the wonderful tutorial. I’ve developed an app to get 4 camera streams and show and save one video with all 4 streams. I’ve also used your threaded streams approach for better performance. The problem is, I cannot figure out correct FPS to use for the output video (in the VideoWriter). Any thoughts on this ? Please see this blog post, in particular the comments section. The gist is that you’ll likely need to tune the FPS experimentally. instead cv2.VideoCapture(0) I replaced it with WebcamVideoStream(src=0) then @ ret,frame = cap.read() ERROR :too many value to unpack why it happen and how to fix it ? Please refer to this blog post. Notice how the .readmethod of WebcamVideoStreamonly returns a single value: frame = cap.read by use this code, my program running noticeable faster , but~ still have some delay, what others I can do ? or do u have any tutorial about use GPU to make it faster ? I’m not sure what you’re asking. What in particular are you trying to optimize via the GPU? basically, you code made my program running much faster , thanks for that , but it still not fast enough, I wonder if any other ways can make it even better ? What type of hardware are you using? Laptop? Desktop? Raspberry Pi? I would suggest trying to optimize the OpenCV library itself, as we do in this post. Hi Adrian, Awesome tutorials. I have learned a ton. I wish all online tutorials were as thoughtful and well-designed. Thank you. I was trying to time the latency of a few of my web cams. Basically, I took your program and revised it a bit. It starts off by reading about 100 frames to warm the camera up. After 100 frames, it starts a timer and then uses serial via an Arduino to light a blue LED in front of the lens. The program then counts the number of frames read after the LED is lit and the time that elapses. Using 640×480 and 60fps, without threading, it takes about .09 secs between turning on the LED and identifying the LED. (It does this in one frame, which I gather means there’s no buffer.) With threading, it takes about .06 seconds and reads about five frames. With 720p images, there’s almost no difference in latency bw threaded and unthreaded approaches. So, have three questions please: 1) If the frames come in faster than the program can process them, will a queue build up and will that queue get longer and longer as the program runs? Is there any way to clear it out? 2) I am using this for robotics, so I would actually want to read the newest frame and dump the older frames, even though they haven’t been read. (I actually want last in first out rather than first in first out.) 3) Why would threading not improve the latency of capturing video at higher resolutions? Sorry for the long question, but I thought more detail was better. Thanks Jim, I appreciate that. 1. That really depends on the data structure you use. With a traditional list, yes. Other data structures place limits on the number of items to store in the queue. It’s really implementation dependent. 2. If that’s the case, use a FIFO queue. The Python language has them built-in my default. 3. It’s an issue of more data to transfer. The less data we transfer, the faster the pipeline. The more data we transfer, the slower the pipeline. Then there are overheads on packing/unpacking the data which we can’t really get around. Hi Adrian Thank you for the tutorial! It helps a lot (i am actually a greenhorn in programming :-)) I have a quick question: I have a C920 USB Webcam attached to a raspberry 3b. Before i start the script, i can set the cameras resolution to 1920×1080 (v1l2-ctl –set-fmt-video=width=1920,height=1080,pixelformat=1). I check the settings with: vl2-ctl -d /dev/video0 -V. Then i can see that the resolution was changed to 1920×1080 and the codec to H264. When i start your script, the resolution gets changed to 640×480 and the codec is changed to YUYV. I cannot set the resolution while your script is running (“Device is busy”). So i have to set it in your script. Do you have a clue where to set the resolution/codec? Best regards, Roman Hey Roman — you would need to download a copy of the WebcamVideoStreamclass and then modify the self.streamobject in the constructor after Line 9. The capture properties can be set via the .setfunction but they are a bit buggy depending on the camera and install. You can read up more on them here. Hi Adrian, I use Logitech 920, and I put some more lines after self.stream = cv2.VideoCapture(src) def __init__(self, sec=0): self.stream = cv2.VideoCapture(src) self.stream.set(cv2.CAP_PROP_FRAME_WIDTH, 1280) self.stream.set(cv2.CAP_PROP_FRAME_HEIGHT, 720) self.stream.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*”MJPG”)) but the resolution remains the same. On the other hand, iit works without threading webcam = cv2.VideoCapture(0) webcam.set(cv2.CAP_PREP_FRAME_WIDTH, 1280) webcam.set(cv2CAP_PREP_FRAME_HEIGHT, 720) Is that a buggy thing you mentioned? IF you’re modifying the actual imutils library itself then it should use the dimensions you supplied. Make sure you’re not accidentally editing a different imutils install on your system and make sure you’re importing the correct ones (a few “print” statements will help you debug that). I was checking imutils code and the things is, that using it doesn’t give you a bigger framerate. When you call read() it will return the last read frame. You can call it 5 times, and get the same frame back over and over. So this does not improve the framerate, instead if gives you repeated frames creating the illusion of a bigger framerate Hey Javier — you are correct that .read() will return whatever the newest frame is in the buffer. If your processing pipeline is so incredibly fast that you can process a frame before the next one is ready you could process duplicate frames. For most processing pipelines that will not be the case. Perhaps I didn’t make it clear enough in this blog post (but I tried to in the follow up one) that we are increasing the frame rate of the processing pipeline, not necessarily the frame rate of the actual camera sensor. Hello Sir I cant detect face using rpi. Help me. Which code are you using to detect faces? For what it’s worth, I demonstrate how to perform face detection on the Raspberry Pi inside Practical Python and OpenCV. Hi Adrian, I love your blog and all the posts in it. I am working with Python 3.6 on a Windows 8.1 machine. The code runs perfectly except that it doesn’t exit from command-line on completion. Is it because some threads are still active in the background? Please help me resolve this tiny problem. Thank you for your help! Keep posting 🙂 Hey “Menaka” — can you elaborate on what you mean by “doesn’t exit from command-line completion”? I’m not sure what you are referring to. how to reduces frame per second in video file Just to clarify, are you referring to writing a frames to an output file to disk? Thanks for reply Actually we are doing object detection in video using tensorflow. As we are using cpu the video file hardly runs. so we just tried to reduce the fps read by opencv So the TensorFlow model is running too quickly on each of the input frames? In that case, just add a time.sleep call to the end of your loop that is responsible for grabbing new frames. This is great tutorial, but I have a couple questions: 1) How can I make use of cv2.waitKey() on a headless system (ie: with no windowing system, accessed over SSH)? and 2) None of this works with the PiCamera module. What would be the equivalent code for that? Oh wait, nevermind! I see there’s a separate tutorial for the PiCamera module ( ). Thanks! Yep, no problem Phil. Let me know if you have any other questions. Hi everybody ! I used this method to read a video not from my cam but from another stream. The problem is that if you read directly from the file, opencv will try to read it as fast as possible. If you want to stick with the right frame rate, add a sleep(1/fps) in the thread update method 🙂 Thank you Adrian for this great tuto again ! I am doing a project where my robot follows a person using a color tracking method, i am using the camera , using this camera I am getting 5-second delay to show the streaming using cv2.videocapture(0) Can u tell me suggestion to reduce the delay? i am using raspberry pi 3 and python code. can u tell me to get a low latency video using cv2.videocapture() A five second delay sounds incredibly high. Can you tell us more about the stream? You mentioned using a robot, but is the robot streaming the video stream to another system where the processing takes place? Hi Adrian, I love your blog, it is really usefull. I have a technical question: I am doing an pupil diameter project and with every action I do to measure the pupil (aka: morphological transformation, median filter, among others) I lose FPS, so I start reading the video at 30 FPS and at the end it is only 4 FPS. This is do to all the action I do in the frame video? If I use a camera with a higher default FPS, will I get at the end more FPS? The problem isn’t your camera, it’s your processing pipeline. Think of your pipeline as a staircase. The less steps there are (i.e., computer vision or programming operations), the less stairs on the staircase and the faster you can clime it. The more operations, the more stairs, and the longer it takes. You’re in the same boat here — you need to optimize your actual processing pipeline. Adrian, wow. I can’t thank you enough for all of your tutorials on OpenCV with dnn + Python. My question to you is, I am looking to consume an rtsp stream from IP Cameras on my network. I’ve seen some basic examples of how to capture frames in to memory but not sure I am confident in integrating the two (Your real time object detection and the rtsp frame capture). Do you have any advice or know of a tutorial that showcases this? I guess it doesn’t necisarily have to be rtsp, even http (if that’s all the camera supports). My initial thought was to have a different script (or thread) capturing frames and feeding them through the script we wrote in your tutorial for “Object detection with deep learning and OpenCV”. Would love to know your thoughts. Hey Mike, thanks for the comment! I’m so happy to hear you are enjoying the PyImageSearch blog 🙂 I do not have any tutorials on RTSP but I’ve added it to my queue for later this year. I’ll try to do a post on it sooner rather than later. As far as combining the two, is your goal to apply object detection to the frame after you grab it from an RTSP stream? Or are you trying to pull an image from your webcam, apply object detection, and then write it back out to the RTSP stream? Thanks Adrian I using the code “Real-time object detection with deep learning and OpenCV” for object detect from IPCAM hikvision throught RTSP stream. Its great solution, but after a few minutes I have problems with the get frame: the error is: ————————————————————- frame = imutils.resize(frame, width=400) File “/usr/local/lib/python2.7/dist-packages/imutils/convenience.py”, line 45, in resize (h, w) = image.shape[:2] AttributeError: ‘NoneType’ object has no attribute ‘shape’ ————————————————————— I think the problem is the processing pipeline and frame delay. I will hope your blog about RTSP stream. Cheers from Loja-Ecuador Rodrigo I think it may be an issue related to the RTSP stream — perhaps the stream is dropping. That said you can handle the error gracefully by checking if the frame is “None” before you continue processing it: That way if a frame drops during the stream you can still handle the issue and ideally the next frame will not be dropped. Hi Adrian. If my VideoCapture source is an RTSP stream, does that make it pointless to use threading because there is no real processing of frames on my side? I use your code for web cam capture and must say that it bounded by physical speed of sensor so in my case (c270 + ubuntu 16.04) the quality of the video the same with blocking capture and without it. But somehow guvcview retrieve delivers much better FPS/ As far as I understand it might be the case that when you read a frame you skip may over previous ones. I think you need a queue to make sure you’ve e.g. displayed everything you’ve read. It is spectacularly optimized but practically I am not sure if it’s that useful. :/ 🙂 * To add more, I think your code is physically indeed reading all these frames but if you decide to process of show each of them, it will fail as it’s always fetching the most recent read-in frame. It might have read 2-3 others in that time, which will remain hidden. This is a feature not a bug. Some computer vision pipelines cannot run in true real-time. If you used a queue you would end up storing all frames that need to be processed. The queue would eventually build up and your script would never complete. The point of this script is to process the most recent frames only. If you want to process every single frame you can use the raw cv2.VideoCapturefunction. Hi,i am using this in combination with an object detection api. It works well but it would be kind of you if you can let me know how do I manage skipping of frames.i mean I don’t want to skip too many frames and make it a bit faster. For example with yolov3 python wrapper I get fps of around 11 I don’t want a dramatic increase it would be nice if I can get it upto 15 fpsor so. Do i need to use a queue instead? Instead of the obvious things like “check to see if the frame should be displayed to our screen”, i’d rather read why strange things like “& 0xFF” are there. Is it a Mac thing? If so, other code appears to use Windows style, then. Code needed You can use the “Downloads” section of this post to download the source code. Great blog, thanks for the clear, thorough, and complete instructions and useful tips. You say in this post that using threading this way reduces “latency” because the program “always [has] a frame ready for processing.” I think that processing the same frame repeatedly is likely to *increase* latency (compared to not re-processing the same frame) rather than reduce it because processing a genuinely new frame will be delayed until after (re-)processing an old frame has finished. If processing takes, say, 90% of the frame period, the first frame would be processed twice and the second not until 80% of a period after it’s ready. The FPS numbers collected by re-processing the same frame are a bit meaningless. I’m new to python but this worked for me: add a threading.Condition() to the WebcamVideoStream; then wrap the block in the loop in update() in a “with self.cv:”; then call “self.cv.notify_all()” when the new frame is ready; then, in read(), call self.cv.wait() before returning the frame. This causes the caller to wait until a *new* frame is ready but to process it *immediately* when it is actually ready. In my 90% example, the second frame will be processed immediately and the result will be ready much sooner than with the re-processing method. If what matters is reacting to what the camera sees, this would seem to be a further improvement in (reduced) latency. Hey Adrian, I don’t know if you still reply to old posts, but might as well give it a try. I’m using your tutorial to thread my opencv image subtraction code for multi-object tracking. On my computer, I get well above 90fps, but on the pi, I can only get 0.5-0.9fps… I assume it’s because of my code, but anyway I’m trying to improve it step by step… During an image subtraction, what should the threads be? how to separate the capturing and the processing of the images? The way I’m doing it is capturing two grayscale frames, absolute diff and thresholding, then find the contours, and draw the bounding boxes… I just can’t seem to understand how and what to separate in different threads for a better fps thanks a lot for your wonderful blogs! Hello adrian, i implemented this kind of threading for an IP camera stream. But i see that the one using threading uses more CPU than the one that does not. Do you know what might cause this, and how i can alleviate this? Thanks! It’s because the cv2.VideoCapturemethod is constantly running in the background. You might instead want to use a blocking operation or manually update the class to only read new frames every N seconds. what difference does it have between a threaded videocapture and just a videocapture in a while loop? can you elaborate a bit? Is it simply because in my videocapture while loop, the cv2.imshow is there, preventing the update from running at super high speed? The “cv2.imshow” call is going to take up a lot of time. If you remove it you’ll see your pipeline speedup. As for the threading, I’m not sure I understand your question. This tutorial shows you how to wrap cv2.VideoCapture in a threaded class to make sure it doesn’t block. A normal cv2.VideoCapture in a while loop would be a blocking operation but it may reduce CPU load as you don’t have a thread in the background constantly polling for new frames.
https://www.pyimagesearch.com/2015/12/21/increasing-webcam-fps-with-python-and-opencv/
CC-MAIN-2019-35
refinedweb
10,437
72.26
Introducing Liftbridge: Lightweight, Fault-Tolerant Message Streams If you are looking for a tool to add to your data arsenal, take a look at this open source project that helps fortify the NATS Streaming Service. Join the DZone community and get the full member experience.Join For Free Last week I open sourced Liftbridge, my latest project and contribution to the Cloud Native Computing Foundation ecosystem. Liftbridge is a system for lightweight, fault-tolerant (LIFT) message streams built on NATS and gRPC. Fundamentally, it extends NATS with a Kafka-like publish-subscribe log API that is highly available and horizontally scalable. I've been working on Liftbridge for the past couple of months, but it's something I've been thinking about for over a year. I sketched out the design for it last year and wrote about it in January. It was largely inspired while I was working on NATS Streaming, which I'm currently still the second top contributor to. My primary involvement with NATS Streaming was building out the early data replication and clustering solution for high availability, which has continued to evolve since I left the project. In many ways, Liftbridge is about applying a lot of the things I learned while working on NATS Streaming as well as my observations from being closely involved with the NATS community for some time. It's also the product of scratching an itch I've had since these are the kinds of problems I enjoy working on, and I needed something to code. At its core, Liftbridge is a server that implements a durable, replicated message log for the NATS messaging system. Clients create a named stream which is attached to a NATS subject. The stream then records messages on that subject to a replicated write-ahead log. Multiple consumers can read back from the same stream, and multiple streams can be attached to the same subject. The goal is to bridge the gap between sophisticated log-based messaging systems like Apacha Kafka and Apache Pulsar and simpler, cloud-native systems. This meant not relying on external coordination services like ZooKeeper, not using the JVM, keeping the API as simple and small as possible, and keeping client libraries thin. The system is written in Go, making it a single static binary with a small footprint (~16MB). It relies on the Raft consensus algorithm to do coordination. It has a very minimal API (just three endpoints at the moment). And the API uses gRPC, so client libraries can be generated for most popular programming languages (there is a Go client which provides some additional wrapper logic, but it's pretty thin). The goal is to keep Liftbridge very lightweight — in terms of runtime, operations, and complexity. However, the bigger goal of Liftbridge is. NATS Streaming provides a similar log-based messaging solution. However, it is an entirely separate protocol built on top of NATS. NATS is an implementation detail – the transport – for NATS Streaming. This means the two systems have separate messaging namespaces – messages published to NATS are not accessible from NATS Streaming and vice versa. Of course, it's a bit more nuanced than this because, in reality, NATS Streaming is using NATS subjects underneath; technically messages can be accessed, but they are serialized protobufs. These nuances often get confounded by first-time users as it's not always clear that NATS and NATS Streaming are completely separate systems. NATS Streaming also does not support wildcard subscriptions, which sometimes surprises users since it's a major feature of NATS. As a result, Liftbridge was built to augment NATS with durability rather than providing a completely separate system. To be clear, it's still a separate server, but it merely acts as a write-ahead log for NATS subjects. NATS Streaming provides a broader set of features such as durable subscriptions, queue groups, pluggable storage backends, and multiple fault-tolerance modes. Liftbridge aims to have a relatively small API surface area. The key features that differentiate Liftbridge are the shared message namespace, wildcards, log compaction, and horizontal scalability. NATS Streaming replicates channels to the entire cluster through a single Raft group, so adding servers does not help with scalability and actually creates a head-of-line bottleneck since everything is replicated through a single consensus group (n.b. NATS Streaming does have a partitioning mechanism, but it cannot be used in conjunction with clustering). Liftbridge allows replicating to a subset of the cluster, and each stream is replicated independently in parallel. This allows the cluster to scale horizontally and partition workloads more easily within a single, multi-tenant cluster. Some of the key features of Liftbridge include: - Log-based API for NATS - Replicated for fault-tolerance - Horizontally scalable - Wildcard subscription support - At-least-once delivery support - Message key-value support - Log compaction by key (WIP) - Single static binary (~16MB) - Designed to be high-throughput (more on this to come) - Supremely simple Initially, Liftbridge is designed to point to an existing NATS deployment. In the future, there will be support for a "standalone" mode where it can run with an embedded NATS server, allowing for a single deployable process. And in support of the "cloud-native" model, there is work to be done to make Liftbridge play nice with Kubernetes and generally productionalize the system, such as implementing an Operator and providing better instrumentation – perhaps with Prometheus support. Over the coming weeks and months, I will be going into more detail on Liftbridge, including the internals of it – such as its replication protocol – and providing benchmarks for the system. Of course, there's also a lot of work yet to be done on it, so I'll be continuing to work on that. There are many interesting problems that still need solved, so consider this my appeal to contributors. Published at DZone with permission of Tyler Treat, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/introducing-liftbridge-lightweight-fault-tolerant
CC-MAIN-2021-31
refinedweb
992
51.28
How can I get Pygame's Mixer to play my .mp3? In my TBAG, (Text-Based Adventure Game) I have attempted to use Pygame's Mixer module to play it. I use VSC (Visual Studio Code) and have opened the file in another tab, but when I use the following code, import pygame from pygame import mixer mixer.init() mixer.music.load('Horoscope.mp3') mixer.music.set_volume(1) mixer.music.play() I get this error; pygame.error: mpg123_seek: Invalid RVA mode. (code 12) Anyone know how to fix this? Most sources I've found have given me the answers I've already tried, like doing pygame.mixer.music.play(). Help! -Frustrated) - What is the most efficient/effective way to host a pygame application on HTML? I'm currently doing my dissertation where I'm producing games for a cognitive webtool. However I have hit a snag by finding that flask cannot host pygame as I thought (never trust 2015 reddit posts you read late at night). Originally I decided to scrap my pygame games and begin creating very basic Javascript games which I could host on HTML through node.js. I'm not too familiar with JS and the progress was very slow to say the least. My tutors advised me to continue trying to find a way to host the 4 games I've produced on pygame so I don't loose that progress, but also since I was more familiar with that platform, that I could continue using it. So far I've found a website called trinket.io, which as far as I can tell, embeds a python environment into the HTML through a javascript-like application. It costs about $4 (or £3 for me) per month to have, and the only other downsides are that the screen size is very restrictive (being only about 600x600 and cutting off anything below that) and from what I can work out, no audio can be played. Essentially to use this I would have to resize all of my games. I was also told of Phaser.JS by my python tutor but was struggling to wrap my head around the various functions and Javascript, as aforementioned, since I have a lack of knowledge with JS as a whole. I've attempted to look at Docker, but as I use Windows, from the documentation and reports I could find, I can't dockerize my pygame applications, as it seems it only works in Linux, of which I'm not familiar using. Essnetially my question was, is there a way for me to include pygame applications in my HTML so that a game can be played in browser and if so, what is the most efficient way to show these pygame applications. Ideally, I don't want to have a download of the game, as I require eyetracking data from the webpage alongside the pygame being played. Thanks all for the help in advance, and I hope you have a good day! - Mask collision between polygon and sprite This is the first time I try to use mask collisions and can't find reference for this specific situation (polygon and sprite mask collision). I want the capability for sprites to detect other sprites using a 'field-of-view' method. Here, is an example of my Pygame application: A bunch of worms with their FOV shown for debugging. In this scenario, I want them to see the little green dots. Firstly, I can generate that fov as a polygon. I draw it on the pygame.display Surface (referred to as program.screen). Then, I check each 'target' in the sprite group and check for rect collision between the polygon and the target - works fine. So all the math and calculations are correct, but I cannot get the next bit with mask to mask collision to work. With the code I attach, the print("Detection") statement is never called. #Check if entity can detect its target given its fov # #@param entity: pygame.sprite.Sprite object performing the look() function #@param fov: The angle and depth of view of 'entity', in the form of [angle (degrees), depth (pixels)] #@param target: pygame.sprite.Sprite object the 'entity' is looking out for #@param program: the main instance that runs and handles the entire application = pygame.draw.polygon(program.screen, WHITE, poly_coords) # Draw the fov white for indication(program.screen).overlap(mask, (0, 0)): # For now, colour the polygon red as indication signal = pygame.draw.polygon(program.screen, RED, [(entity.x, entity.y), coords_1, coords_2]) print("Detection") pygame.display.update(signal) When I use if pygame.sprite.collide_mask(mask, pygame.mask.from_surface(program.screen)): for the mask check, I get an AttributeError: 'pygame.mask.Mask' object has no attribute 'rect' I also tried to draw the polygon on a different surface with the same result as the first code (no detection):_layer = pygame.Surface((500, 500)) # some random size big enough to hold the fov #view_layer.fill(BLUE) view = pygame.draw.polygon(view_layer, WHITE, poly_coords) # Draw the fov white for indication pygame.display.update(view) program.screen.blit(view_layer, (min(poly_coords[0]), max(poly_coords[1])))(view_layer).overlap(mask, (0, 0)): # For now, colour the polygon red as indication signal = pygame.draw.polygon(program.screen, RED, [(entity.x, entity.y), coords_1, coords_2]) print("Detection") pygame.display.update(signal) What is wrong with the ways I'm trying to do it, or is there a better way for doing it? Thanks, Kres - How to make an enemy follow the player in pygame using vectors and rect? class monkey(object): def init(self, x, y, width, height): self.walkRight = [ list of images] self.walkLeft = [list of images] self.walkDown = [list of images] self.walkUp = [list of images] #get rect for each image using list comprehension. I do this 4 times for each arrow key self.rect_Right = [img.get_rect() for img in self.walkRight] #repeat for other arrows class enemy(object): (basically the same thing as player with enemy images) def move_towards_player(self, monkey): dirvect = pygame.math.Vector2(monkey.rect.x - self.rect.x, monkey.rect.y - self.rect.y) dirvect.normalize() dirvect.scale_to_length(self.speed) self.rect.move_ip(dirvect) I am running into 2 problems. 1st, since I am uploading multiple images into a list, I don't know whether I am using the get_rect() correctly. I am currently using a list comprehension. How do I get the rect.x and rect. y positions? How do I get the center position? 2nd, I don't know how to get the move_towards_player to get working. I passed the player class into the function, but I don't think it can read the rects. -. - Mixer module won't initiate I've been trying to get into pygame, and I want to play a sound. I try to initiate the mixer module like this: import pygame import time pygame.init() pygame.mixer.init() When I run it, it gives this error: Traceback (moce recent call last): File "main.py", line 5 in <module> pygame.mixer.init() pygame.error: ALSA: Couldn't open audio device: No such file or directory I am running the code on repl.it, and I'm unsure if its repl's issue, as it has caused issues on code that ran perfectly on the default IDLE. - How do you reserve a pygame.mixer Channel in pygame? How does reserving channels work? Can I reserve specific channels or are they picked randomly? There is no clear documentation on how it works and I seem to do it wrong because mixer.findChannel()still picks reserved channels. Here's my code: self.music1 = pygame.mixer.Channel(0) self.music2 = pygame.mixer.Channel(1) self.sound1 = pygame.mixer.Channel(2) self.sound2 = pygame.mixer.Channel(3) self.sound3 = pygame.mixer.Channel(4) self.sound4 = pygame.mixer.Channel(5) self.sound5 = pygame.mixer.Channel(6) self.sound6 = pygame.mixer.Channel(7) pygame.mixer.set_reserved(2) I'd like to reserve music1 and music2. The documentation states that the argument of mixer.set_reserved()defines the number of channels that will be reserved. If I can't pick which channels will be reserved, is there a way around it? Thanks in advance
https://quabr.com/67075447/how-can-i-get-pygames-mixer-to-play-my-mp3
CC-MAIN-2021-21
refinedweb
1,365
66.13
Creating a monster is similar to creating a player. In fact, we are going to use the same basic class code but change its name and namespace. Create a new file called zombie.js in the entities folder. Now, copy the following code into the monster class: 1 ig.module( 2 'game.entities.zombie' 3 ) 4 .requires( 5 'impact.entity' 6 ) 7 .defines(function(){ 8 EntityZombie = ig.Entity.extend({ 9 10 }); 11 }); As you can see, we simply changed the entity name and class name, but everything else is the same as the code we used to start the player class. Now we are ready to add our monster’s animation and set its initial properties: animSheet: new ig.AnimationSheet( 'media/zombie.png', 16, 16 ), size: {x: 8, y:14}, offset: {x: 4, y: 2}, maxVel: {x: 100, y: 100}, flip: false, Now we need to set up the animations just like we did for the player. This is a simple monster, so there are only a few sprites representing its animation. Let’s create a new init() method with the following code: init: function( x, y, settings ) { this.parent( x, y, settings ); this.addAnim('walk', .07, [0,1,2,3,4,5]); }, With our default animation in place, we can start adding instances of the monster to test the level. Let’s switch over to Weltmeister, select the entities layer, and then add a monster by clicking into the layer and pressing the space bar, just as we did when adding the player. You can then click on the map to add the monster where you want it. Feel free to add a few of them, as shown in Figure 4-16. Once you have done this, refresh the game in your browser and you should see your new monsters. We haven’t added any movement logic yet, so they don’t do much right now. Let’s add some basic code to make them walk back and forth, but be smart enough not to fall off ledges. We’ll need to create an update function that will handle the basic movement logic or AI (Artificial Intelligence) for our monster: update: function() { // near an edge? return! if( !ig.game.collisionMap.getTile( this.pos.x + (this.flip ? +4 : this.size.x −4), this.pos.y + this.size.y+1 ) ) { this.flip = !this.flip; } var xdir = this.flip ? −1 : 1; this.vel.x = this.speed * xdir; this.currentAnim.flip.x = this.flip; this.parent(); }, This function tests to see if the monster hits anything in the collision map. If it does, we toggle the value of the class flip property. After testing, the direction and velocity are updated before this.parent() is called. We will also need to define the monster’s friction and speed. You can add that toward the top of the class just under where we define the flip property: friction: {x: 150, y: 0}, speed: 14, Refresh the game to take a look at it in action. You will see the monster instances moving around, and when they hit the edge of a ledge, they flip and go the other way. We just need to add a few more lines of code to clean this up. Add the following block of code to the end of your defines() function: handleMovementTrace: function( res ) { this.parent( res ); // collision with a wall? return! if( res.collision.x ) { this.flip = !this.flip; } }, This helps make sure that if a monster runs into a wall, that it also turns around. Collisions with walls and the collision map are handled through the handleMovementTrace function. Now we have covered all our bases and made sure our zombies will not fall off ledges or platforms, but we still have one issue. There is no collision detection between the monster and the player. Before we get into adding more code to the monster, we need to talk a little bit about entity-based collision detection in Impact. So far, we’ve handled simple interactions with walls and platforms manually. However, Impact has built-in collision detection that we can use for interaction between our entities. That is, we can focus on setting up collision relationships instead of creating all that collision code from scratch. Let’s look a little closer at how we can use Impact to do this work for us.
https://www.safaribooksonline.com/library/view/building-html5-games/9781449331207/ch04s08.html
CC-MAIN-2016-50
refinedweb
726
74.08
Tell us what’s happening: When i run the tests the formatting looks correct with all the spaces and everything, however i still keeps getting 6 failed tests. It would be amazing if anyone can check out my code and help out thanks! Your code so far def arithmetic_arranger(problems, solve = False): #final solution strings firstline = "" secondline = "" lines = "" sumdif = "" strings = "" if (len(problems) >= 6): return "Error: Too many problems." for problem in problems: first_number = problem.split(" ")[0] operator = problem.split(" ")[1] second_number = problem.split(" ")[2] #operator can only be + or - if (operator == "/" or operator == "*"): return "Error: Operator must be '+' or '-'." #numbers can only be a digit if (first_number.isdigit() == False or second_number.isdigit() == False): return "Error: Numbers must only contain digits." #max of 4 digit width if (len(first_number) >= 5 or len(second_number) >= 5): return "Error: Numbers cannot be more than four digits." # if solve is true solutions answer = "" if (operator == "+"): answer += str(int(first_number) + int(second_number)) elif (operator == "-"): answer += str(int(first_number) - int(second_number)) #find the length of the longer number line = "-" long_length = max((len(first_number)), len(second_number)) + 1 for d in range(long_length): line += "-" #returning/printing final values if problems != problem[-1]: firstline += first_number.rjust(len(line)) + " " secondline += operator+ " " + second_number.rjust(len(line)-2) + " " lines += line + " " sumdif += answer.rjust(len(line)) + " " else: firstline += first_number.rjust(len(line)) secondline += operator + " " +second_number.rjust(len(line)-2) lines += line sumdif += answer.rjust(len(line)) if solve: strings += firstline + "\n" + secondline + "\n" + lines + "\n" + sumdif else: strings += firstline + "\n" + secondline + "\n" + lines print(strings) return strings Challenge: Arithmetic Formatter Link to the challenge:
https://forum.freecodecamp.org/t/arithmetic-arranger-help-needed/508965
CC-MAIN-2022-40
refinedweb
261
51.34
WTF... ... is interface overloading. Or Overriding. I kinda toggle between the terms. Right now I'm favoring Interface Overloading; so that's what the title is. Interface Overloading Interface overloading is the process I've found in C# and JAVA and Swift to allow TDD when using Libraries and OS Systems. I figure it'll work in most (if not all) languages with interfaces. Except GoLang; this is kinda how GoLang works. It's a mechanism I stumbled across while TDDing up an android app. I wrote this tdd-against-android-widgets post while I first found it. It was part of a long and twisted path... Which I think resulted in an excellent tool. Simple Definition Extending a class and implementing an interface of a base class method allowing passing around as an interface. The driving forces behind this discovery is TDD, Extreme Encapsulation, and Clean Architecture. Mix-In The basic idea really comes into a mix-in style that gives functionality as required to classes you don't control. Where I discovered this, and I'm confident it will have significant impact is the UI. It can be utilized against Library and OS classes as easily, but I haven't encountered any cases yet that scream "WIN" as loudly as UI does. Example Let's go for a simple example that was TDD'd. The code link is available at the end of the post. The base class public class BaseClass { public string TheMethod(int intVal, bool boolVal) => $"StringVal {intVal} {boolVal}"; } A simple example with just one method. TheMethod. It builds a string and returns it. TheMethod is not a virtual method so it's unable to be mocked. We need a way to pass this around. Really; this example is overly simplified. A MAJOR reason it works so well for UI, is that those often tie themselves to the UI thread, which makes calling textBox.Text = "example"; in a unit test will throw an exception. In Android, it's due to the OS not actually being there. For C#, I'm working in UWP; and it's a "Ui Changed on non-ui thread" exception. I suspect very similar if working in WinForms. In general; the UI is thread restricted which forces what I've seen from some frameworks; the ability to force a specific thread. Now that we have a method we want to use in a method; we need to create the interface to hook into it. public interface ITheInterface { string TheMethod(int intVal, bool boolVal); } Yep; same signature. Not a lot to go over here. It's an interface with the same definition... yeah. OK; next up! public class WrapperClass : BaseClass, ITheInterface { } And here we have our Wrapper class. It's the object we'll use in our UI. In android it'll look like <com.quantityandconversion.widget.WrapperClass android: that can be seen here and in XAML it'll look like <userControls:WrapperClass x: and can be seen here The base class of these are have been TextBox in actual implementation; and did the Interface Overload for setting text. Both of the above repo's can be explored to see how I've done used it. Use this WrapperClass control Using Android; we'll look at the TopItemsAdapter from my HackerNews Reader experiment project. We retrieve the object like so in our TopItemsAdapter .ViewHolder /* package */ static class ViewHolder extends RecyclerView.ViewHolder{ private QacTextView points; private QacTextView comments; private QacTextView time; ... /* package */ ViewHolder(final View itemView) { super(itemView); ... points = (QacTextView)itemView.findViewById(R.id.tv_score_value); comments = (QacTextView)itemView.findViewById(R.id.tv_comments); time = (QacTextView)itemView.findViewById(R.id.tv_posted_time); } } And we'll use these control in the Item class; which I won't reproduce the entirety of here; just some relevant methods public class Item { ... public void postTimeInto(final SetText item) { postTime.postTimeInto(item); } public void commentCountInto(final SetText item) { itemComments.commentCountInto(item); } public void scoreInto(final SetText item) { itemScore.scoreInto(item); } ... } We can see the use of the SetText interface in these methods. The way this is used by the `TopItemsAdapter' is by passing the correct control into each method @Override public void onBindViewHolder(final ViewHolder viewHolder, final int position) { final Item story = topItemsActivityMediator.itemAt(position); ... story.commentCountInto(viewHolder.comments); story.scoreInto(viewHolder.points); story.postTimeInto(viewHolder.time); ... } and with this pattern in place; you maintain encapsulation, and can still produce the data to be displayed. Examples The available simple examples are available in my InterfaceOverride git repo. Summary This isn't a complex way to abstract classes you don't control. It's just a really useful one... at least very useful in the UI... for me. I hope it can help others TDD more of the application. Other tricks may be required for other things; but that's a post for another day. This was a really quick write up; happy to improve and answer questions about the process. UPDATE This is a major component in my Hotel Pattern to maintain Clean Architecture.
https://quinngil.com/2017/05/21/interface-overloading/
CC-MAIN-2021-10
refinedweb
832
58.79
{- | Module : $Header$ Description : Internalities of NNTP modules Copyright : (c) Maciej Piechotka License : LGPL 3 or later Maintainer : uzytkownik2@gmail.com Stability : none Portability : portable This module contains internalities of NNTP library -} module Network.NNTP.Internal ( -- * Types Article(..), Group(..), NntpT(..), NntpState(..), NntpConnection(..), NntpError(..), NntpParser, -- * Functions runNntpParser, nntpPutStr, nntpPutStrLn, nntpSendCommand, tryCommands ) where import Control.Applicative hiding (empty) import Control.Arrow import Control.Monad.Error import Control.Monad.State hiding (State) import Data.ByteString.Lazy.Char8 hiding (foldl) import Data.Monoid import Text.Parsec hiding ((<|>), many) import Text.Parsec.Pos {- | Represents a single article. Please note that except the splitting into header and body no parsing is done. -} data Article = Article { -- | Returns the article ID articleID :: String, -- | Returns the article header. 'Data.Maybe.Nothing' indicates not -- fetched header. articleHeader :: Maybe ByteString, -- | Returns the article body. 'Data.Maybe.Nothing' indicates not -- fetched body. articleBody :: Maybe ByteString } instance Show Article where show = ("Article "++) . articleID instance Eq Article where (==) = curry (articleID *** articleID >>> uncurry (==)) {- | Represents a single group. -} data Group = Group { -- | Returns the group name. groupName :: String, -- | Returns the number of first article avaible. groupArticleFirst :: Integer, -- | Returns the number of last article avaible. groupArticleLast :: Integer } instance Show Group where show = ("Group "++) . groupName instance Eq Group where (==) = curry (groupName *** groupName >>> uncurry (==)) {- | NntpConnection represents a connection in a NntpT monad. Please note that for 'runNntpWithConnection' you need to supply both 'input' and 'output' functions. -} data Monad m => NntpConnection m = NntpConnection { -- | Input is an stream which is from a server. input :: ByteString, -- | Output is a function which sends the data to a server. output :: ByteString -> m () } {- | NntpState represents a state at given moment. Please note that this type is not a part of stable API (when we will have one). -} data Monad m => NntpState m = NntpState { connection :: NntpConnection m } {- | NntpT represents a connection. Since some servers have short timeouts it is recommended to keep the connections short. -} data Monad m => NntpT m a = NntpT { runNntpT :: StateT (NntpState m) (ErrorT NntpError m) a } instance Monad m => Applicative (NntpT m) where pure = return (<*>) = ap instance Monad m => Functor (NntpT m) where f `fmap` m = NntpT $ f <$> runNntpT m instance Monad m => Monad (NntpT m) where m >>= f = NntpT $ runNntpT m >>= runNntpT . f return = lift . return instance MonadTrans NntpT where lift = NntpT . lift . lift instance MonadIO m => MonadIO (NntpT m) where liftIO = lift . liftIO {- | Indicates an error of handling NNTP connection. Please note that this should indicate client errors only (with the exception of 'ServiceDiscontinued', in some cases 'PostingFailed' and 'NoSuchCommand'. The last one if propagated outside NNTP module indicates a bug in library or server.). -} data NntpError = NoSuchGroup -- ^ Indicates that operation was performed on group that does -- not exists. | NoSuchArticle -- ^ Indicates that operation was performed on article that does -- not exists. | PostingFailed -- ^ Indicates that posting operation failed for some reason. | PostingNotAllowed -- ^ Indicates that posting is not allowed. | ServiceDiscontinued -- ^ Indicates that service was discontinued. | NoSuchCommand -- ^ Indicates that command does not exists. deriving (Eq, Show, Read) instance Error NntpError where noMsg = undefined strMsg = read type NntpParser m a = ParsecT ByteString () (NntpT m) a -- | Transforms "NntpParser" into "NntpT" monad taking care about input -- position runNntpParser :: Monad m => NntpParser m a -> NntpT m a runNntpParser}} -- | Puts an argument to output. nntpPutStr :: Monad m => ByteString -> NntpT m () nntpPutStr s = lift . ($ s) =<< NntpT (gets $ output . connection) -- | Puts an argument to output followed by end-of-line. nntpPutStrLn :: Monad m => ByteString -> NntpT m () nntpPutStrLn = nntpPutStr . (`mappend` pack "\r\n") -- | Sends a commad. nntpSendCommand :: Monad m => String -- ^ Command. -> ByteString -- ^ Arguments. -> NntpParser m a -- ^ Parser of output. -> NntpT m a -- ^ Returned value from parser. nntpSendCommand c a p = nntpPutStrLn (pack (c ++ " ") `mappend` a) >> runNntpParser p -- | Try commands one by one to check for existing command. tryCommands :: Monad m => [NntpT m a] -- ^ Possible command. -> NntpT m a -- ^ Result tryCommands = foldl (\a b -> NntpT $ runNntpT a `catchError` \e -> if e == NoSuchCommand then runNntpT b else throwError e) (NntpT $ throwError NoSuchCommand)
http://hackage.haskell.org/package/nntp-0.0.2.1/docs/src/Network-NNTP-Internal.html
CC-MAIN-2016-07
refinedweb
637
51.95
This package contains a number of utilities that are used inside of openmdao. It does not depend on any other openmdao package. A script to add a group of required packages to the current python environment. Routines to help out with obtaining debugging information Try to dump out the guts of an object, and optionally its children. A function useful for tracing Python execution. Wherever you want the tracing to start, insert a call to sys.settrace(traceit). Some useful decorators A class decorator that takes delegate classes or (name,delegate) tuples as args. For each tuple, an instance with the given name will be created in the wrapped __init__ method of the class. If only the delegate class is provided, then the instance created in the wrapped __init__ method will be named using an underscore (_) followed by the lower case name of the class. All of the public methods from the delegate classes will be added to the class unless there is an attribute or method in the class with the same name. In that case the delegate method will be ignored. Returns a method that forwards calls on the scoping object to calls on the delegate object. The signature of the delegate method is preserved in the forwarding method. Returns a function with a new body that replaces the given function. The signature of the original function is preserved in the new function. A class decorator that will try to import the specified modules and in the event of failure will stub out the class, raising a RuntimeError that explains the missing dependencies whenever an attempt is made to instatiate the class. Routines analyzing dependencies (class and module) in Python source. Bases: object Take module pathnames of base classes that may be. finfo: list Bases: ast.NodeVisitor Collects info about imports and class inheritance from a Python file. Take module pathnames of classes that may. This executes every time a class definition is parsed. This executes every time an “import foo” style import statement is parsed. This executes every time a “from foo import bar” style import statement is parsed. Bases: object Returns a list of names of classes that inherit from the given base class. Bases: ast.NodeVisitor Importing this file will fix problems we’ve found in distutils. Current fixes are: Update the library_dir_option function in MSVCCompiler to add quotes around /LIBPATH entries. A utility to extract Traits information from the code and get it into the Sphinx documentation. Note No traits docs will be generated unless the class containing the traits has a doc string! Gets traits info. Connect the doctools to the process-docstring hook. If run as main, dumpdistmeta.py will print out either a pretty-printed dict full of the metadata found in the specified distribution or just the value of a single piece of metadata if metadata-item is specified on the command line. The distribution can be in the form of an installed egg, a zipped egg, or a gzipped tar file containing a distutils distribution. usage: dumpdistmeta.py distribution [metadata-item] Example output: $ dumpdistmeta.py pyparsing-1.5.1-py2.5.egg {'SOURCES': ['README', 'pyparsing.py', 'setup.py', 'pyparsing.egg-info/PKG-INFO', 'pyparsing.egg-info/SOURCES.txt', 'pyparsing.egg-info/dependency_links.txt', 'pyparsing.egg-info/top_level.txt'], 'author': 'Paul McGuire', 'author-email': 'ptmcg@users.sourceforge.net', 'classifier': 'Programming Language :: Python', 'dependency_links': [], 'description': 'UNKNOWN', 'download-url': '', 'entry_points': {}, 'home-page': '', 'license': 'MIT License', 'metadata-version': '1.0', 'name': 'pyparsing', 'platform': None, 'py_version': '2.5', 'summary': 'Python parsing module', 'top_level': ['pyparsing'], 'version': '1.5.1', 'zip-safe': False} Example output: $ dumpdistmeta.py pyparsing-1.5.1-py2.5.egg license MIT License Retrieve metadata from within a distribution. Returns a dict. Retrieve metadata from a file or directory specified by path, or from the name of a distribution that happens to be installed. path can be an installed egg, a zipped egg file, or a zipped or unzipped tar file of a python distutils or setuptools source distribution. Returns a dict. A generator that retrieves resource file pathnames from within a distribution. Egg loading utilities. Load object(s) from an input stream (or filename). If instream is a string that is not an existing filename or absolute path, then it is searched for using pkg_resources. Returns the root object. Extracts files in egg to a subdirectory matching the saved object name. Then loads object graph state by invoking the given entry point. Returns the root object. Load object graph state by invoking the given package entry point. Returns the root object. Display requirements (if logger debug level enabled) and note conflicts. Returns a list of unavailable requirements. Bases: object Provides a convenient API for calling an observer of egg operations. observer will be called with: Observe add of file. If observer returns False, raises RuntimeError. Observe analysis of file. If observer returns False, raises RuntimeError. Observe operation complete. Observe copy of file. If observer returns False, raises RuntimeError. Observe exception. Observe extraction of file. If observer returns False, raises RuntimeError. Egg save utilities. Note that pickle can’t save references to functions that aren’t defined at the top level of a module, and there doesn’t appear to be a viable workaround. Normally pickle won’t handle instance methods either, but there is code in place to work around that. When saving to an egg, the module named __main__ changes when reloading. This requires finding the real module name and munging references to __main__. References to old-style class types can’t be restored correctly. Save the state of root and its children to an output stream (or filename). If outstream is a string, then it is used as a filename. The format can be supplied in case something other than cPickle is needed. For the pickle formats, a proto of -1 means use the highest protocol. Save state and other files to an egg. Analyzes the objects saved for distribution dependencies. Modules not found in any distribution are recorded in an egg-info/openmdao_orphans.txt file. Also creates and saves loader scripts for each entry point. Returns (egg_filename, required_distributions, orphan_modules). Writes Python egg files. Supports what’s needed for saving and loading components/simulations. Returns name for egg file as generated by setuptools. Write egg in the manner of setuptools, with some differences: Returns the egg’s filename. envirodump.py USAGE: "python envirodump.py", text file generated in that spot TO DO: get package VERSIONS attempt to use pip freeze if available fix aliases problem do a WHICH on compilers? deal with sys.exit on bad Python version? X make docstrings consistent X make Exception handling consistent This function will find compilers, print their paths, and reveal their versions. Gets aliases on Unix/Mac This function will call out for specific information on each compiler. Writes the time and date of the system dump This function will list python packages found on sys.path This function will capture specific platform information, such as OS name, architecture, and linux distro. This function will capture specific python information, such as version number, compiler, and build. This function captures the values of a system’s environment variables at the time of the run, and presents them in alphabetical order. This function runs the rest of the utility. It creates an object in which to write and passes it to each separate function. Finally, it writes the value of that object into a dumpfile. Misc. file utility routines Bases: object Supports using the ‘with’ statement in place of try-finally for entering a directory, executing a block, then returning to the original directory. Create a directory structure based on the contents of a nested dict. The directory is created in the specified top directory, or in the current working directory if one isn’t specified. If a file being created already exists, a warning will be issued and the file will not be changed if force is False. If force is True, the file will be overwritten. The structure of the dict is as follows: if the value at a key is a dict, then that key is used to create a directory. Otherwise, the key is used to create a file and the value stored at that key is written to the file. All keys must be relative names or a RuntimeError will be raised. delete the given files or directories if they exists Copy a file or directory. Return filenames (using a generator). Walks all subdirectories below each specified starting directory. Search the given list of directories for the specified file. Return the absolute path of the file if found, or None otherwise. Search for a given file in all of the directories given in the pathvar string. Return the absolute path to the file if found, None otherwise. Return the pathname of the file corresponding to the given module name, or None if it can’t be found. If path is set, search in path for the file, otherwise search in sys.path Search upward from the starting path (or the current directory) until the given file or directory is found. The given name is assumed to be a basename, not a path. Returns the absolute path of the file or directory if found, None otherwise. Return the name of the directory that is ‘num_levels’ levels above the specified path. If num_levels is larger than the number of members in the path, then the root directory name will be returned. Attempts to get the /OpenMDAO-Framework/config/testhosts.cfg first, then gets the ~/.openmdao/testhosts.cfg next. Given a module filename, return its full Python name including enclosing packages. (based on existence of __init__.py files) A collection of utilities for file wrapping. Note: This is a work in progress. Bases: object Utility to locate and read data from a. Grabs a 2D array of variables relative to the current anchor. Each line of data is placed in a separate row. If the delimiter is set to ‘columns’, then the values contained in fieldstart and fieldend should be the column number instead of the field number. Grabs an array of variables relative to the current anchor. Setting the delimiter to ‘columns’ elicits some special behavior from this method. Normally, the extraction process wraps around at the end of a line and continues grabbing each field at the start of a newline. When the delimiter is set to columns, the paramters (rowstart, fieldstart, rowend, fieldend) demark a box, and all values in that box are retrieved. Note that standard whitespace is the secondary delimiter in this case. Searches for a key relative to the current anchor and then grabs a field from that line. You can do the same thing with a call to mark_anchor and transfer_var. This function just combines them for convenience. Returns a whole line, relative to current anchor. Grabs a single variable relative to the current anchor. — If the delimiter is a set of chars (e.g., ”, ”) — fieldend - IGNORED — If the delimiter is “columns” — Bases: object Utility to generate an input file from a template. Substitution of values is supported. Data is located with a simple API. Replace the contents of a row with the newline character. Use the template file to generate the input. Set the name of the template file to be used The template file is also read into memory when this method is called. Changes the values of a 2D array in the template relative to the current anchor. This method is specialized for 2D arrays, where each row of the array is on its own line. Changes the values of an array in the template relative to the current anchor. This should generally be used for one-dimensional or free form arrays. Changes a single variable in the template relative to the current anchor. row - number of lines offset from anchor line (0 is anchor line). This can be negative. field - which word in line to replace, as denoted by delimiter(s) Bases: pyparsing.TokenConverter Converter for PyParsing that is used to turn a token into a float. Converter to make token into a float. Bases: pyparsing.TokenConverter Converter for PyParsing that is used to turn a token into Python inf. Converter to make token into Python inf. Bases: pyparsing.TokenConverter Converter for PyParsing that is used to turn a token into an int. Converter to make token into an integer. Bases: pyparsing.TokenConverter Converter for PyParsing that is used to turn a token into Python nan. Converter to make token into Python nan. Transfer a file from one place to another. If src_server or dst_server is None, then the os module is used for the source or destination respectively. Otherwise the respective object must support open(), stat(), and chmod(). After the copy has completed, permission bits from stat() are set via chmod(). Create ‘zip’ file filename of files in patterns. Returns (nfiles, nbytes). Note The code uses glob.glob() to process patterns. It does not check for the existence of any matches. Translate the newlines of filename to the local standard. Unpack ‘zip’ file filename. Returns (nfiles, nbytes). Pull a tarfile of a github repo and place it in the specified destination directory. ‘version’ can be a tag or a commit id. Downloads a distribution from the given package index(s) based on the given requirement string(s). Downloaded distributions are placed in the specified destination or the current directory if no destination is specified. If a distribution cannot be found in the given index(s), the Python Package Index will be searched as a last resort unless search_pypi is False. This does NOT install the distribution. Requirements may be supplied as strings or as Requirement objects. This is just a wrapper for the logging module. Messages can be routed to the console via enable_console(). If the file logger.cfg exists, it can be used to configure logging. See the Python documentation for logging.config for details. The example below is equivalent to calling enable_console(): [loggers] keys=root [handlers] keys=consoleHandler [formatters] keys=consoleFormatter [logger_root] level=DEBUG handlers=consoleHandler [handler_consoleHandler] class=StreamHandler level=DEBUG formatter=consoleFormatter args=(sys.stderr,) [formatter_consoleFormatter] format=%(levelname)s %(name)s: %(message)s Bases: object Pickle-able logger. Mostly a pass-through to a real logger. Log a critical message. Log a debug message. Log an error message. Log an exception. Log an information message. Log a message at a specified level. Change name reported in log. Log a warning message. Logging message level. Bases: object Can be useful when no logger has been supplied to a routine. It produces no output. Log a critical message. Log a debug message. Log an error message. Log an exception. Log an information message. Log a message at a specified level. Log a warning message. Return the named logger. Configure logging to receive log messages at the console. Stop receiving log messages at the console. Enable iteration tracing. Disable iteration tracing. A command line script (mkpseudo) points to this. It generates a source distribution package that’s empty aside from having a number of dependencies on other packages. usage: make_pseudopkg <pkg_name> <version> [-d <dest_dir>] [-l <links_url>] [-r req1] ... [-r req_n] If pkg_name contains dots, a namespace package will be built. Required dependencies are specified using the same notation used by setuptools/easy_install/distribute/pip. Note If your required dependencies use the “<” or “>” characters, you must put the entire requirement in quotes to avoid misinterpretation by the shell. Utilities for reading and writing Fortran namelists. Bases: object Data object that stores the value of a single card for a namelist. Bases: object Utility to ease the task of constructing a formatted output file. Add a comment in the namelist. Add every variable in an OpenMDAO container to the namelist. This can be used it your component has containers of variables. Add a new group to the namelist. Any variables added after this are added to this new group. Add a new variable to the namelist. Add an openmdao variable to the namelist. varpath: string varpath is the dotted path (e.g., comp1.container1.var1). Generates the input file. This should be called after all cards and groups are added to the namelist. Loads the current deck into an OpenMDAO component. Group id number to use for processing one single namelist group. Useful if extra processing is needed, or if multiple groups have the same name. Returns a tuple containing the following values: (empty_groups, unlisted_groups, unlinked_vars). These need to be examined after calling load_model to make sure you loaded every variable into your model. unlinked_vars: list containing all variable names that weren’t found in the component. Parses an existing namelist file and creates a deck of cards to hold the data. After this is executed, you need to call the load_model() method to extract the variables from this data structure. Set the name of the file that will be generated or parsed. Sets the title for the namelist Note that a title is not required. Bases: pyparsing.TokenConverter Converter for PyParsing that is used to turn a token into a Boolean. Converter to make token into a bool. Routines to help out with obtaining debugging information find an unused IP port ref: note: use the port before it is taken by some other process! Parses the variable definition section of a Phoenix Integration ModelCenter component wrapper and generates an OpenMDAO component stub. Generates a dummy component given a Phoenix Integration Modelcenter script wrapper. The first section of this wrapper is parsed, and the appropriate variables and containers are placed in the new OpenMDAO component. infile - ModelCenter scriptwrapper. outfile - File containing new OpenMDAO component skeleton. compname - Name for new component. Support for generation, use, and storage of public/private key pairs. The pk_encrypt(), pk_decrypt(), pk_sign(), and pk_verify() functions provide a thin interface over Crypto.PublicKey.RSA methods for easier use and to work around some issues found with some keys read from ssh id_rsa files. Return public key from text representation. Return base64 text representation of public key key. Returns RSA key containing both public and private keys for the user identified in user_host. This can be an expensive operation, so we avoid generating a new key pair whenever possible. If ~/.ssh/id_rsa exists and is private, that key is returned. Note To avoid unnecessary key generation, the public/private key pair for the current user is stored in the private file ~/.openmdao/keys. On Windows this requires the pywin32 extension. Also, the public key is stored in ssh form in ~/.openmdao/id_rsa.pub. Return True if path is accessible only by ‘owner’. Note On Windows this requires the pywin32 extension. Make path accessible only by ‘owner’. Note On Windows this requires the pywin32 extension. Return encrypted decrypted by private_key as a string. Return list of chunks of data encrypted by public_key. Return signature for hashed using private_key. Verify hashed based on signature and public_key. Return dictionary of public keys, indexed by user, read from filename. The file must be in ssh format, and only RSA keys are processed. If the file is not private, then no keys are returned. Write allowed_users to filename in ssh format. The file will be made private if supported on this platform. Bases: subprocess.CalledProcessError subprocess.CalledProcessError plus errormsg attribute. Bases: subprocess.Popen A slight modification to subprocess.Popen. If args is a string then the shell argument is set True. Updates a copy of os.environ with env, and opens files for any stream which is a basestring. Closes files that were implicitly opened. Return error message for return_code. The error messages are derived from the operating system definitions, some programs don’t necessarily return exit codes conforming to these definitions. Stop child process. If timeout is specified then wait() will be called to wait for the process to terminate. Polls for command completion or timeout. Closes any files implicitly opened. Returns (return_code, error_msg). Run command with arguments. Returns (return_code, error_msg). Run command with arguments. If non-zero return_code, raises CalledProcessError. Bases: object Wrapper of standard Python file object. Supports reading/writing int and float arrays in various formats. Close underlying file. Returns next float. Returns floats as a numpy array of shape. Returns next integer. Returns integers as a numpy array of shape. Returns value of next recordmark. Returns record length for count floats. Returns record length for count ints. Writes array as text. Writes a float. Writes a float array. Writes an integer. Writes an integer array. Writes recordmark. Utilities for the OpenMDAO test process. Determine that code raises err_type with err_msg. Determine that code raises exception with msg. Determine that the relative error between actual and desired is within tolerance. If desired is zero then use absolute error. Return path to the OpenMDAO python command Returns the the absolute path of an inaccessible directory. Files cannot be created in it, it can’t be os.chdir() to, etc. Not supported on Windows.. Bases: object Pool of worker threads; grows as necessary. Cleanup resources (worker threads). Get a worker queue from the pool. Work requests should be of the form: (callable, *args, **kwargs, reply_queue) Work replies are of the form: (queue, retval, exc, traceback) Return singleton instance. Release a worker queue back to the pool.
http://openmdao.org/releases/0.2.5/docs/srcdocs/packages/openmdao.util.html
CC-MAIN-2019-43
refinedweb
3,543
60.82
: Hi, I need to write the Z-test algorithm in the C language. Does anyone know : if there is available some code or pseudo-code on the web? I need it quite : urgently so any help will be appreciated. Thanks. HI Q Modnar - just look in any intro stats book. The formula is very simple. Best wishes, Kent. Of course, to code a Z-test you need to compute the _cumulative_ density function. The intro stats books on my book shelf only give the formula for the normal density function, which isn't much use because you can't integrate it. There is no simple exact formula for the CDF, although there are some pretty simple approximations (books with titles like "statistics with BASIC" usually give some). I posted the following program in C a few months ago. jan --------------------------------------------------------------- # include <math.h> # define PI 3.14159265 # define PREC 0.00005 double cdf_norm(double x); main(int argc, char *argv[]) { double x, y; x = atof(argv[1]); y = cdf_norm(x); printf("P(0.00 < Z < %.2f) = %.2f\%\n", x, 100 * y); } double cdf_norm(double x) { int i; double a, b, c, term, sum; a = b = 1; c = sum = term = x; if (fabs(x) > 8) return 0.5 * x / fabs(x); for (i = 1; fabs(term) > PREC; i++) { a += 2; b *= -2 * i; c *= x * x; term = c / (a * b); sum += term; } return sum / sqrt(2 * PI); }
https://groups.google.com/g/sci.math.num-analysis/c/YnS3q8Evhaw/m/gCbbDBlpFWIJ
CC-MAIN-2021-43
refinedweb
235
74.9
Describe Your Data Charlie Heinemann Microsoft Corporation July 20, 1999 I must admit that over the past year I've had mixed feelings concerning this whole validation thing. The well-formedness part I get. A few simple rules, when followed, allow a parser to understand where a tag begins and where it ends, what's a comment and what's text. The validation part, though, is a bit trickier. It does provide some obvious benefits, allowing you to describe your data and to define relationships within that data -- but document type definitions (DTDs) and XML schemas can be a bit foreign, and their purpose in many situations easily questioned. On top of all this, sorting out whether to use DTDs or schemas (not to mention which flavor of schemas) can be difficult. The point of what follows is to clear up some of the questions that arise once you've moved past well-formedness and on to validation. Why Describe My Data? I say describe, rather than validate, because the term "validate" implies that the function of a DTD or schema is to validate the structure of your data. Validation is one function of a DTD or schema; it isn't, however, the only one. DTDs and schemas can also define data types and relationships within your data, something that can be useful even if validation seems unnecessary, so I prefer to say "describe" when it comes to explaining the function of DTDs and schemas. Which leads me back to my original question of why describe? Because data authors across the Web need a way to understand the structure of the data that can be processed by your application. Data authors need something that tells them how you expect the data to look both when they receive it and when they send it back to you. For an example of how validation can be used within applications, check out my article on Internet Explorer 5 support for XML validation. Why else? Because describing your data in more detail can give the consumers of that data the information that will greatly assist in processing it. For example, by providing data type and ID/IDREF information, you can relieve the data consumer of having to do type conversions (check out my data-typing article for more details), and can increase their performance when navigating to related nodes (see my article on ID/IDREF navigation). What Is a DTD? DTDs are a method of providing "a grammar for a class of documents." This is the method of data description described within the World Wide Web Consortium (W3C) XML 1.0 Specification. Rather than go into the details (which can be found within the XML 1.0 spec), I'll give you a brief rundown of the relevant facts concerning DTDs: - DTDs describe XML Documents - DTDs can be used to validate XML documents and define ID/IDREF relationships - DTDs employ a funky syntax where angle brackets, exclamation points, white space, parentheses, question marks, and asterisks are used to define which elements and attributes can go where and the contents they can contain. - DTDs are supported within the Internet Explorer 5 parser. The following is an example of a DTD: The following XML element would be a valid instance according to the above DTD: What is an XML Schema? An XML schema is used to describe XML elements and attributes (as opposed to XML documents). It basically consists of attribute and element type declarations that describe content models for XML elements and attributes within an instance document. It serves much of the same purpose as a DTD. However, its functionality extends beyond that of a DTD. The MSXML parser, released with Internet Explorer 5 and as a re-distributable shortly after, contains support for the XML schemas described within XML-Data Reduced (a subset of the XML-Data proposal to the W3C): How Are XML Schemas Different from DTDs? The big question becomes "When do I use XML schemas, and when do I opt for DTDs?" As when choosing any technology over another, you have to decide which gives you the most value both in the short term and over the long haul. To make such a decision, you need to know where the two technologies differ. Instance Syntax XML schemas are XML documents. Unlike DTDs which have their own peculiar syntax, XML schemas are written in XML. This provides the user with three benefits. First, you don't have to know two syntaxes to author a well-formed XML schema. Granted, you do have to learn the grammar rules for XML-Data Reduced. But, you don't have to worry about two sets of well-formedness rules. Second, tools support can take advantage of the common syntax between XML documents and schemas to provide support for each. Just as it is easier for you to know only one syntax; it's easier for you to build in support for one rather than two syntaxes. The support built into the parser for navigating XML documents, for instance, can also be used to navigate schemas. Unfortunately, DTDs cannot be navigated in the same fashion. Third,being XML documents, XML schemas can be extended. You can add elements and attributes to XML schemas just as you would any XML document. As long as the elements and attributes are of a different namespace, they are legal within a schema. Data Typing DTDs allow you to type content only as a string. XML schemas allow you to type content as ints, floats, dates, Booleans, or a number of other simple data types. In the example above, the B element described by the schema contains an integer, while the B element described by the DTD contains a string. If you were then to write an application to process the contents of that element and required that the value of that element be an integer, you would have to convert it to an integer in the DTD case, whereas you could access that value directly as an integer in the schema case. Open Content Model XML schemas allow for an open content model, meaning that you can extend XML documents while not violating the validity constraints. Take the following XML: The following DTD and schema could be used to describe this instance: DTD: <!ELEMENT name (#PCDATA)> <!ELEMENT quantity (#PCDATA)> <!ELEMENT price (#PCDATA)> <!ELEMENT item (name,quantity,price)> schema: <ElementType name="name"/> <ElementType name="quantity" dt:> <ElementType name="price" dt: <ElementType name="item" model="open"> <element type="name"/> <element type="quantity"/> <element type="price"/> </ElementType> If you validate the above XML using the schema, you can add element children to the item element and it will still be valid, provided the elements added are valid within the context of their own namespace: If you were to validate this element using the above DTD, you would get a validation error because "myItem:time" is not defined within the DTD. Namespace Integration XML schemas integrate namespaces, allowing you to associate particular nodes in an instance document with type declarations within a schema. The only way to associate XML nodes to a DTD is through the DOCTYPE declaration. This is limiting, because only one DTD can be used per instance document. Multiple XML schemas may be used to described a single XML document, because the XML schema doesn't describe the XML document itself, but XML elements within it. Who's Doing What with Schemas? Biztalk at is supporting XML schemas based on the XML-Data Reduced syntax. The eventual goal is to migrate to the W3C XML Schema standard, but they are progressing toward that by supporting the XML schemas that can be parsed by the MSXML parser. For more information on BizTalk, see XML: The Buzz on BizTalk by Robert Hess. In addition to the support within the MSXML parser, tools vendors, such as Extensibility are coming out with tools that support XML-Data Reduced schemas. An example is XML Authority 1.0, which allows you to start working with XML schemas. The gist of the schema activity is that you can begin to seriously consider using XML schemas today. The tools support is here, and will continue to grow, and industry support is strong. The benefits of schema and the ability to increase those benefits through extensibility make them a good choice when describing your XML documents. Charlie Heinemann is a program manager for Microsoft's XML team. Coming from Texas, he knows how to think big.
https://msdn.microsoft.com/en-us/library/ms950791.aspx
CC-MAIN-2015-22
refinedweb
1,415
60.24
Tips 'N Tricks "Java Tip 122: Beware of Java typesafe enumerations" Vladimir Roubtsov Enums should be encouraged, not discouraged Vladimir, It's true that you can't rely on Java enums to safely restrict the number of instances per-enum value to 1 when multiple classloaders are present. But you can still implement equals() and hashcode() with value semantics (rather than reference). Although that won't help speed, I find the pattern is still a better solution than using integer constants. At the very least, the enum idiom makes code more readable and helps encapsulate conditional code on the enum value. The enum idiom should be encouraged rather than discouraged. Edoardo Edoardo, Your suggestion for fixing the typesafe pattern when multiple classloaders are present will not work as you describe. An important point that I tried to make in the article was that multiple classloaders do not simply mess with the number of instances; they mess with the class identity itself. Consider the case of fixing equals() with value semantics, as you suggest. To accomplish that, you must cast the Object parameter passed into equals() to Enum and retrieve some identifying value. However, if Enum is loaded twice, as simulated in my Main class, you are in fact dealing with two different classes, Enum#1 and Enum#2 (although with the same stringified name Enum ). this will be of type Enum#1 , and the method parameter will be of type Enum#2 . Because of that, the cast will never succeed. In fact, you need to guard the cast with an instance-of check more or less like this: public boolean equals (Object obj) { if (obj instanceof Enum) return ((Enum) obj).getID() == this.getID(); // assuming getID() returns a primitive type of some kind else return false; } This instance-of check will always fail because the Enum operand will always mean Enum#1 . Download the sources for my tip ( ) and switch EnumConsumer.validate() to use equals() , instead of == . Then provide an implementation of equals() in the Enum class that would make both validate() calls in Main.main() work -- you will find that impossible. In the second validate() call, this and obj are instances of two different classes -- albeit with the same name. Your suggestion to use hashCode() will indeed work around the need to cast, because that method is available on the root Object class. However, this suggestion is not viable for a different reason: if two objects are identical or equal, their hash codes are the same, but the reverse is not guaranteed to be true. Thus, inferring that two objects are the same just because their hash codes are equal is incorrect in principle. I don't discourage this pattern's use simply because of speed issues: in some situations it cannot be made to work at all, regardless of whether you use == or equals() . Vladimir Roubtsov Vladimir, It's true that with multiple classloaders, we effectively have multiple definitions of the Enum class that can't be casted on each other. I still maintain that enums prove useful for factoring conditional behavior on the enum value. I suggest the following: - Use the enum pattern with regular value object implementations of equals()and hashcode(), if you don't anticipate comparing enum instances loaded by multiple classloaders. - If you need to compare enums loaded by multiple classloaders, implement value object equals()and hashcode()with reflection. It will work. You say: In some situations it cannot be made to work at all, regardless of whether you use ==or equals(). I disagree. It can work, but it will cost you speed. I always favor design over speed and revert to hacks -- like using integer constants instead of enums -- in critical code sections. Edoardo Edoardo, You are entitled to your own opinion, of course, but let me make some final comments. Let's again consider the original motivation for the typesafe enum construct. To me, the construct adds two advantages over the traditional integer sets: - All comparisons are typesafe, which is ensured at compile time. - If identity operator is okay to use, the comparisons remain very fast. Now consider what happens when we use your fix suggestion: equals()takes in a generic Object. Thus, the type safety is gone. The compiler will never catch that e.equals(SomeOtherEnum.TRUE)should be e.equals(Enum.TRUE). How is that better than using plain integers? - In fact, any suggestion to use a value-based comparison to fix the issues I raise would feature the same problem -- no type safety -- since a value-based comparison is essentially the same as comparing integer values in the first place! - In one breath, you say you favor design over speed and in another, you suggest using a class with equals()patched to use reflection. That appears incongruous to me, as reflection is a low-level, non-object-oriented feature of Java. It reduces compile-time checks and creates maintenance problems. - Even if we decide it's okay to use reflection in equals(), consider the ramifications: Remember that the two classes on both sides of equals()are still distinct in the case of multiple classloaders. Thus, to use reflection, you must call Class.getField()(you can't cache this, because a Fieldis specific to a Class), followed by Field.setAccessible()(unless you make the field public), followed by Field.get(). I disagree that the resulting monstrosity is better and more maintainable then an integer enumeration. - While speed might not always prove as important as good design, speed certainly matters to many people. Consider this example: the tag enumeration in javax.swing.text.html.HTML.Tagis used if, for example, you write your own HTML parser based on javax.swing.text.html.parser.ParserDelegator. When you process the callback events, you must compare against various HTML.Tagtags. In JDK 1.3, there are 74 such tag values. Imagine the worst case scenario: since you can't use a switch anymore, you have to use a chain of if/else if equals()comparisons. By the time you get to the last branch, not only will more than 70 methods be called, the methods all use slow reflection. The result: one slow parser. My point: a typesafe enum class that uses equals() is slow and not typesafe -- so it loses the two advantages we started out with. Vladimir Roubtsov Does J2EE violate fundamental Java concepts? Vladimir, The use of multiple classloaders breaking primitive language patterns brings the Java language to a crisis. Is J2EE (Java 2 Platform, Enterprise Edition) and multiple application classloaders so important that now fundamental language concepts are invalidated? Multiple application classloaders are generally evil as this article clearly substantiates. With them, you no longer can rely on any Singleton or instance identity patterns. Every expert Java book written today discusses the importance of instance identity for enumeration types. Now we have a whole fleet of application programmers who code to this pattern. Is it any wonder that Java applications on J2EE are failing mysteriously and, in fact, are more prone to runtime failure? Should the J2EE patterns invisibly break the primitive language patterns experts have been advocating for years? Lane Sharman Lane, In all fairness, I must say that the multiple classloader situation I created in my article is not always likely to occur. Only a complicated set of circumstances would cause that situation to happen -- but when it does, I'd rather not be in charge of tracking it down. When writing application code, I would rather know that a certain problem can never happen, instead of almost never. When designing a class, you cannot always easily predict all contexts in which it will be used later. Perhaps we should lobby for introduction of a simple C/C++-like enum feature into Java? Vladimir Roubtsov Humphrey Sheil Don't forget one last EJB advantage Humphrey, Another major advantage of EJB: non-Java components can call EJB (Enterprise JavaBean) components. Bharat Nagwani Bharat, I didn't list this as an advantage of EJB per se, as it is more an advantage of the Java platform, as opposed to EJB technology. However, you are correct. A major advantage of the J2EE (Java 2 Platform, Enterprise Edition)/Java platform is that by using CORBA orbs, JNI (Java Native Interface), and so on, you can achieve almost any level of integration. However, with integration, you must consider specific architectural issues, such as performance and scalability (and with JNI - security and stability). Humphrey Sheil Taylor Cowan Are Java extensions truly beneficial? Taylor, I really liked your article, which, by coincidence, came a few days after I discovered this facility myself. However, I think you might be encouraging overuse of Java extensions. For one thing, a stylesheet shouldn't depend on a particular processor. The same stylesheet should be useful when processed by Xalan-Java or Xalan C++. Once you stick Java code in there, you must guarantee Xalan-Java use. Secondly, a neat XSL (Extensible Stylesheet Language) solution to the problem might exist. A quick rush to use a procedural-based language will only create unreadable and fragile code. I agree that the declarative nature of XSLT (Extensible Stylesheet Language Transformations) can be daunting to us procedural-types, but it is very powerful once you get used to it. To conclude, this extension facility should be treated like JNI (Java Native Inteface): use with caution only in cases of extreme need. Shimon Crown Shimon, I agree in part, however, I do not believe that using extensions necessarily creates unreadable or fragile code. We're not even inlining Java, we're simply calling a function. One example that I had wanted to use was a tokenizing extension; since Xalan already provided one, I took to writing an original extension. The XSLT code, which tokenizes a string into a set of nodes would be infinitely more unreadable and fragile than just simply using Java's tokenizing ability. Just an extension library's existence adds strength to the idea that extensions are necessary from time to time. Ideally, these extensions will be wrapped back into the XSLT specification. XSLT by nature is unreadable by itself; it just couldn't be worse. XSLT doesn't even have if/then statements. (Yes, I know you can simulate them, but so what? They're still missing.) The current spec doesn't include regular expressions either. You say: Once you stick Java code in there, you must guarantee Xalan-Java use. Yes, you're right, but I'm a Java developer. I use XSLT as a utility in my Java applications. XSLT to the Java developer is secondary. We develop in Java because Java will run anywhere. Because Java will run anywhere, so will Xalan. Your JNI comment really gets to the point of our differences: I see XSLT and Java as similar to JSP (JavaServer Pages) and Java. A stylesheet with a lot of HTML content resembles a JSP page. I can replace my JSPs with XSLT. I'm talking J2EE (Java 2 Platform, Enterprise Edition), Java, and Web servers. That's my perspective as a developer, so in that sense, XSLT purity is not my goal. One thing to keep in mind is the stylesheet's purpose. Normally, my stylesheets run on the server within a J2EE application server. They play a part in a larger Java environment and will never, ever be used anywhere else. Perhaps I am encouraging overuse of Java extensions, but I'm really just trying to share a bit of joy I experienced recently. I agree that you can do almost anything with XSLT. However, I solved a bad problem in minutes once I figured out extensions. Taylor Cowan "UI design with Tiles and Struts" Prakash Malani How do you incorporate a visual mode with Struts and Tiles? Prakish, With this approach, I wonder how a graphical designer can make a design for the pages you describe. This question comes up often in our projects: designers use software like Dreamweaver to create HTML and want (and need) to work visually (WYSIWYG mode). Design becomes a big challenge when the visual mode is an additional requirement for frameworks like Struts and Tiles. Perhaps we just need an update for Dreamweaver. Marcel Offermans Marcel, I have run into this problem again and again. Depending on your circumstances and project, the following options might be appropriate: - Encourage the UI designers to think in terms of view components, i.e., Tiles. This takes some learning and time. The first cut of the pages might feature duplication and heavy cohesion between components. Developers can component-ize, i.e., Tile-ize, the application. Now it will be hard to work in a WYSIWYG environment. A simple, yet highly effective solution, sets up a live environment for the UI designer, so they can observe real-time changes. - I have worked with many developers who are artistic and proficient in UI design and implementation. Once the initial cut is made to create the view components, the developers end up doing all the work from then on. In certain circumstances, they might seek help from the original UI designers. - I haven't played with any fancy tools. However, I imagine that tools like UltraDev and others should make things much easier in the future. Prakash Malani "Exceptional Practices" Part 1. Use exceptions effectively in your programs Part 2. Use exception chaining to preserve debugging information Part 3. Use message catalogs for easy localization Brian Goetz More exceptional practices Brian, Thanks for a tremendous series on exception handling. As a lead developer, I have asked all my junior developers to read this series. Too often, exception-handling by even experienced developers is ignored or misused. I have some comments about Part 3: You should have stressed more the use of properties files to hold the message catalog, as this approach truly separates the text from the Java code -- as opposed to the messages being in a ListResourceBundle. Secondly, I have discovered two extreme views with exceptions: One extreme is the developer who maintains an almost flat exception hierarchy and then has many messages in the message catalog. While this means fewer classes to develop, often the client code starts having to peek at the message key to determine the appropriate action to take. On the other extreme is the developer who wants an extremely large hierarchy and feels that the type (class) of the message itself should indicate all message meaning. This tends to lead to an explosion of exception classes. There is a middle ground that features enough of a hierarchy where client code does not have to use the message key to take action -- of course, you can't always know how an API will be used. Bill Siggelkow Bill, I tried to avoid the topic of where the messages live -- you could use ListResourceBundle or PropertyResourceBundle -- the effect is the same. While ListResourceBundle s are still classes, they contain nothing but resource strings, so the ListResourceBundle source files can be owned by the document writer -- and that's the important thing. Actually, the two exception views are pretty much orthogonal to the question of whether or not you have to peek at the message text/key. Whether you have a flat hierarchy or a deeply branching hierarchy, the real question is whether you have enough information embedded in the type to identify the exception, which basically translates to "How lazy were you about making enough exception classes?" One technique I've used, which some dislike, is to group related exception classes as static subclasses: public class Holder { public static class FooException extends Exception { } public static class BarException extends Exception { } } You now have the ability to put related code in the same file. Brian Goetz
https://www.javaworld.com/article/2073972/letters-to-the-editor.amp.html
CC-MAIN-2018-22
refinedweb
2,623
54.52
maybe its just me again, but can't you protected override void Render(HtmlTextWriter writer) { TextWriter myTextWriter = new StringWriter(); HtmlTextWriter myWriter = new HtmlTextWriter(myTextWriter); base.Render( myWriter); string html =myWriter.ToString(); html = html.Replace("foo", "bar"); writer.Write(html); } Anders -- you had it (almost, see below). The thing I missed in your code is that you're creating a "local" HtmlTextWriter, rendering to that, then grabbing the HTML out of it. The only change is what I think Erik was hinting at above: myWriter.ToString() just returns the class name. I did this: myWriter.InnerWriter.ToString() This gave me the underlying HTML. I have a control adapter that does some manipulation on EPiServer properties. It's declared like this: public class EPiServerContentFilter : PropertyDataControlAdapter I override the Render method, and do some stuff, like this: This works fine, with one exception: dynamic content does not execute. When I just call this -- -- it works fine. I had this same problem when extending the Property control. I posted about that, but then figured out that I needed to do was call "EnsurePropertyControlsCreated()" which is a method on the base Property class. However, I can't get at this method inside the adapter. The adapter as a "Control" property and a "PropertyDataControl" property, neither of which can be cast into "Property." Ted Nyberg had a good post about manually parsing a control with dynamic content. The trick to that seemed to be calling "SetupControl" on the PropertyLongString class. I tried it, but it still didn't work. So, how do you get dynamic content to execute when you're manipulating the XHTML property via an adapter?
https://world.episerver.com/forum/developer-forum/Developer-to-developer/Thread-Container/2010/9/How-do-you-get-dynamic-content-to-execute-when-youre-manipulating-the-XHTML-property-via-an-adapter/
CC-MAIN-2020-34
refinedweb
272
59.19
Currently, this is what I h ave for my list implementation. It's supposed to be a circular linked list queue with a single variable (rear) identifying the end of the queue. I keep getting null pointer exceptions when my programs hits the "dequeue" so I assume it's somewhere in there. Again, I'm not sure how these are implemented in terms of coding, and if additional information (i.e. what the interface it's extending looks like etc) let me know. public class CircLinkedUnbndQueue implements UnboundedQueueInterface { protected LLObjectNode rear; // reference to the rear of this queue public CircLinkedUnbndQueue() { rear = null; } public void enqueue(Object element) // Adds element to the rear of this queue. { LLObjectNode newNode = new LLObjectNode(element); if (rear == null) rear = newNode; else rear.setLink(newNode); rear = newNode; } public Object dequeue() // Throws QueueUnderflowException if this queue is empty; // otherwise, removes front element from this queue and returns it. { Object element; element = rear.getInfo(); rear = rear.getLink(); if (rear == null) rear = null; return element; } public boolean isEmpty() // Returns true if this queue is empty; otherwise, returns false. { if (rear == null) return true; else return false; } }
http://www.dreamincode.net/forums/topic/203844-circular-linked-list-queue-issue/
CC-MAIN-2016-40
refinedweb
187
56.55
Kubernetes SDKs from the Pulumiverse Posted on. new k8s.apiextensions.CustomResource('marvin-the-martian', { apiVersion: 'looneytunes.com/v1990', kind: 'LooneyTune', metadata: { name: 'Marvin the Martian', planet: 'mars' }, spec: { armor: { style: 'hoplite', colors: ['green', 'red'], }, phrases: [ 'Wheres the kaboom', 'Oh, drat these computers. They’re so naughty and so complex. I could pinch them', 'I do so enjoy observing the flora and fauna of that tiny planet', 'Please, sir. Do not interrupt my chain of thought. I am a busy Martian', 'Brace yourself for immediate disintegration' ] }, }); While this is a huge unblocker for working with Kubernetes, we can do better. crd2pulumi Back in August, 2020, we published crd2pulumi. crd2pulumi allows you to generate a Pulumi SDK for any Kubernetes Custom Resource. Now, when you want to start using cert-manager, Ambassador, Linkerd, or any other project within the Kubernetes and Cloud Native space; you can download the CRD YAML and run crd2pulumi, which will generate the SDK for whatever supported language you wish. Neat? I’ve been using this approach for the past year and it’s the easiest way to provide that rich interface to developers, but the repetition can become a little frustrating. For every new project that I was working on, I’d need to build out the same automation and there’s also the maintenance burden of ensuring the SDK is up to date with the latest version deployed to the cluster. Further more, these SDKs never change; if you generate a SDK for cert-manager and someone else does, supporting the same version - there’s no difference in the code that is generated. Do we really need everyone to do this themselves? Perhaps some further tooling is required. Kubernetes SDKs at Pulumiverse I like solving problems like this. It’s not technically challenging, the hard work is already done by the team supporting crd2pulumi. The problem we need to solve here is a quality of life / developer experience issue. How can we make this easier for developers to consume? For us to make this easier, we have two issues: - How do we automate the generation of the SDKs, ensuring that the SDKs are always up to date? - Where do we publish them? Automation GitHub Actions plays such a large role in our developer world these days, especially for open source projects. So it was an easy choice to make that any automation I build out for this will be built on top of GitHub Actions. Knowing that we want the SDKs to be up to date, we also know that this automation must run on a regular cadence. As such, the beginnnig of our GitHub Action can take some shape: name: Build SDKs on: schedule: - cron: "10 3,15 * * *" Here, we have our GitHub Action scheduled to run every day at 3:10am and 3:10pm. Next, we need to know what to build. We don’t want to build EVERY SDK that the project is aware of, so we need some code to look up GitHub Releases for some new tags. For this, we can drop to Python to do some quick lookups. from atoma import parse_atom_bytes from datetime import datetime, timedelta, timezone from json import dumps from requests import get as httpget from typing import List, Mapping, Optional from yaml import safe_load, YAMLError import sys def has_been_updated(updated_at: Optional[datetime] = None) -> bool: if updated_at == None: return False hours_since: int = int(sys.argv[1]) if len(sys.argv) > 1 else 12 return (datetime.now(timezone.utc) - updated_at) < timedelta(hours=hours_since) def get_new_tags(repository: str) -> List[str]: response = httpget(f'https://{repository}/tags.atom') feed = parse_atom_bytes(response.content) return list(map(lambda entry: entry.title.value, filter(lambda entry: has_been_updated(entry.updated), feed.entries))) with open("./sdks.yaml", "r") as stream: try: sdks: Mapping[str, str] = safe_load(stream) except YAMLError as exc: print(f"Error parsing SDK Yaml: {exc}") sdks_to_build: List[str] = [] for name, repository in sdks.items(): for tag in get_new_tags(repository): sdks_to_build.append( f'{name}|{repository}|{tag}') print(dumps(sdks_to_build)) This Python will loop through our configured SDKs, loaded from a YAML document, and reach out to the atom feed that GitHub provides for every public repository. This allows us to quickly loop over the tags/releases and find any published within the last window. We don’t need to do anything else besides print this to the terminal and allow GitHub Actions to store the output for use in our build matrix. jobs: find_new_tags: runs-on: ubuntu-latest steps: - ... random boring steps - id: find_new_tags run: echo "::set-output name=sdk_versions::$(poetry run python ./bin/find-new-tags.py ${{ github.event.inputs.since_hours }})" outputs: sdk_versions: ${{ steps.find_new_tags.outputs.sdk_versions }} build_nodejs_sdk: needs: find_new_tags runs-on: ubuntu-latest strategy: matrix: version: ${{ fromJson(needs.find_new_tags.outputs.sdk_versions) }} Once our Python script has run and the output stored, we’re using a helper function from GitHub called fromJson to use the output as a dynamic input to their matrix build system. We’ll get exactly one job for every new tag that we’ve found. The job to generate the SDK and publish it to npm looks like this: - name: crd2pulumi-generate-sdk run: | IFS='|' read -r -a build_parts <<< "${{ matrix.version }}" mkdir -p ./_output curl -fsSL -o crd.yaml{build_parts[1]}@${build_parts[2]} crd2pulumi --nodejsName ${build_parts[0]} --nodejsPath ./_output ./crd.yaml --force # Fix Package Name sed -ie "s#@pulumi/${build_parts[0]}#@pulumiverse/${build_parts[0]}#g" ./_output/package.json # Fix Package Version sed -ie "s#\"version\": \"\"#\"version\": \"${build_parts[2]}\"#g" ./_output/package.json - run: npm publish --access=public working-directory: ./_output env: NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }} Like I said earlier, the hard work was already done; we just needed to glue some automation together to improve the DX. Publishing Now that we have this system in place to regularly check and build new SDKs, we needed a home for them to live. Recently, some amazing Pulumi community members, worked together on a new GitHub initiative, with Pulumi support, called Pulumiverse. Pulumiverse aims to be a community lead and governed initiative to provide new abstractions and libraries for consumers of Pulumi to use and make their lives easier. This is the perfect place for these SDKs to live. You can checkout the repository today. SDKs Currently, the automation publishes NodeJS SDKs (JavaScript and TypeScript) to npm, but plans to add Python will be executed very soon. We’re also looking to publish packages for dotNet, but research is still underway. I don’t expect to publish packages for Go … sadly, because Go modules use GitHub for fetching source code - we’d need to publish generated code to the repository and I’m not entirely keen on that approach. I could possibly be persuaded, but I’d love some suggestions for alternate ways to resolve this. We currently provide SDKs for: - argocd - cert-managaer - crossplane - knative - redpanda Need more? No problem! Open a pull request adding a SINGLE LINE to sdks.yaml and the SDK will be available shortly. Nice. Prior Art While writing this article, I discovered an old repository within the Pulumi organization that actually did something similar to this last year 😮 It’s not automated like this new initiative, but it was great to find that previous attempts to improve the developer experience for Kubernetes teams using Pulumi. Mad props to Paul Stack and Albert Zhong. So that’s it! Thanks for reading. We hope you find this useful and we encourage you to join us and help by adding your favourite Kubernetes SDKs to the automation. We’ll see you in 2022, have a great New Year. 🎉
https://www.pulumi.com/blog/kubernetes-sdks-pulumiverse/
CC-MAIN-2022-05
refinedweb
1,259
55.95
Opened 6 years ago Closed 6 years ago Last modified 6 years ago #16790 closed Bug (fixed) GeoDjango tutorial error (selecting WorldBorders via admin crashes) Description First time django contributor; I may be doing something wrong, but here it goes... Note that the crash does not occur with the 1.3.0 release (I'm running trunk, 1.4.0 alpha). Got to the following point in the tutorial: "Finally, browse to, and log in with the admin user created after running syncdb. Browse to any of the WorldBorders entries" Clicking on "WorldBorders" causes the crash: "'super' object has no attribute '_media'" The following seems to be the problem: the first thing that django.contrib.gis.admin.options.GeoModelAdmin._media() does is to invoke... media = super(GeoModelAdmin, self)._media() The problem is that there was a recent changeset (16594) in the parent class (ModelAdmin). In that changeset the following changed: old: def _media(self): new: @property def media(self): I tested a patch (which I'll submit shortly) where GeoModelAdmin gets "media" from its parent as a property, instead of getting it via a method call. This seems to fix the problem. media = super(GeoModelAdmin, self).media Note that I ran the GeoDjangoTestSuiteRunner tests and although there were some errors, none of them seem to correspond to this case; however, I'm not positive about this. Added test stub.
https://code.djangoproject.com/ticket/16790
CC-MAIN-2017-26
refinedweb
229
55.03
:01:43 PM. OK, I modified Dave's script thus: <%params = {Dave}; xml.rpc ("127.0.0.1", 1234, "radio.helloWorld", @params, protocol:SOAP, soapAction:"/examples")%> Note the change in the port number (so I can capture the messages) and in the protocol. And now have a helloWorld.txt in *both* the H:Program FilesRadio UserLandWeb Services and H:Program FilesRadio UserLandWeb Servicesexamples. But I still get a Macro error as the name "helloWorld" hasn't been defined. So I can't see what SOAP responses are generated, but here's the request (some white space added for readability): Dave Note the lack of a namespace on the body. Unfortunately, this makes it difficult for implementations like Apache SOAP to interoperate for reasons I detail here. The good news is that the situation gets a bit better in Axis. Dave has written an excellent article. It tells how easy it is to create web services with Radio 8. And he is right, it truly is easy. I wish that was the whole story and I could be done with it. Unfortunately, I have one question. Without the answer to that question, I can't access the web services using the SOAP stack I have participated in developing - Apache Axis. There may be other questions too, like what are the parameter names? These are the types of problems we faced last April when a bunch of soapbuilders got together. SOAP stack A would interop well with SOAP stack A. SOAP stack B would interop well with SOAP stack B. SOAP stack A's messages were 100% SOAP compliant. So were SOAP stack B's. But SOAP stack A didn't interop well with SOAP stack B because they both were based on different assumptions. I don't have a secret decoder ring. I can't read your mind. On the other hand, I'm sure I can capture the wire messages that are sent back and forth by Radio and reverse engineer them successfully. And I undoubtedly will find that the assumptions you made were simple and reasonable. Unfortunately, there are several dozen SOAP implementations, each with different, simple (to them), and reasonable (to them) assumptions. The solution is simple. Documentation. Preferably in an XML format. One with lots of room for human readable content. This is the second cable that we need to hoist as a part of this bootstrap
http://radio.weblogs.com/0101679/2002/02/04.html
crawl-002
refinedweb
400
76.01
On 1/31/06, Peter N. Lundblad <peter@famlundblad.se> wrote: > More than one attribute with the same name in a start or empty-element tag > is not well-formed XML, so the parser has to error out in that case. Right. Hopefully expat should be catching that instead of relying upon me to detect that. =) > OTOH, there's no reason to not break early, but that's a minor > optimization. Yes, it could do so. All of the XML code is going to have to be tweaked to better support namespaces in the near-term. When I get around to that, I'll add in a break. -- justin --------------------------------------------------------------------- To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org For additional commands, e-mail: dev-help@subversion.tigris.org Received on Tue Jan 31 21:43:49 2006 This is an archived mail posted to the Subversion Dev mailing list.
https://svn.haxx.se/dev/archive-2006-01/1100.shtml
CC-MAIN-2019-39
refinedweb
149
75.5
March 14, 2017 Approaching Marathon Match Task, Pt. 1 The following guest post is from our 2017 Topcoder Open blogger, gorbunov. Once you have decided to participate in a Marathon Match, are waiting for the start of a new one, or joined an ongoing one; you have a task to solve. Usually the tasks provided during MMs belong to the Nondeterministic Polynomial (NP) class. That said, no exact solution can solve this task with the given size of the input data at all! All humans can do is to approximate the solution. Sometimes these approximations can be quite good, and even prove to fit within 1-2% out of the ideal solution. Before You Start Before you start, you may want to register for the MM newsletter from Topcoder. This way you will not accidentally miss a competition. It also may be tricky if you are not familiar with the website interface. Open the community portal, login, hover over the username in the top-right corner, choose Settings, Email. Doing this step will save you from the frustration of missing a good MM. The problem statement Once you read it, you can start developing your solution. Do not hurry to start coding straight away as the tasks for MMs are chosen in such a way that a good solution have to be prepared on paper first. If you want a trivial solution to start playing with the visualizer, it is often packaged with the contestant’s distribution. You can use, modify, and even submit it (if you really want to). In case it is not there, write a simple one, possibly scoring 0 points, modify the visualizer to allow a manual play, in case it is feasible for a particular task. Common Techniques There are several common methods you absolutely have to be aware of if you want to do well in a MM. The one that pops up most frequently and is thus the most important is probably Simulated Annealing (SA) metaheuristic. Actually there are a plenty of so-called metaheuristics, or meta-algorithms that can be used to solve the NP tasks. The others include Genetic Algorithms + other Evolutionary algorithms, Ant Colony optimization, Hill Climbing… Thus one might ask, why is the Annealing technique is so good? A short answer is that if implemented properly: - it has a very fast evaluation of neighbor states; - transition to the neighbor state is relatively fast compared to the other methods; - no need to store many different states, a single one is enough; - following from that, no need to process entire population on an each iteration, thus an enormous speed gain. A long answer Let us look at how SA is implemented instead. I will use the C++ language here instead of the pseudocode to be more concrete. Note that your code during the match must be written on your own! The basic procedure may look like this: #include <random> void simulated_annealing() { double skip_time = get_time() - start_time; double used_time = skip_time; std::uniform_real_distribution<double> type_dist(0, 1); std::mt19937 rnd; while (used_time < timeout) { double temperature = (1.0 - (used_time - skip_time) / (timeout – skip_time)) * (max_temperature - min_temperature) + min_temperature; for (int iteration = 0; iteration < 1000; iteration++) { double type = type_dist(rnd); // prob_change_1 + prob_change_2 + prob_change_3 == 1.0 if (type < prob_change_1) { state_change_1 sd1 = random_state_change_1(); if (accept(sd1.delta, temperature)) { sd1.apply(); } } else if (type < prob_change_1 + prob_change_2) { state_change_2 sd2 = random_state_change_2(); if (accept(sd2.delta, temperature)) { sd2.apply(); } } else { state_change_3 sd3 = random_state_change_3(); if (accept(sd3.delta, temperature)) { sd3.apply(); } } } } used_time = get_time() - start_time; } As can be seen from the code, in general SA walks through the so-called state space; meaning all the possible solutions of the task space. At each step, it either greedily accepts a better solution by slightly modifying the current one, or does a backtrack by accepting a generally worse solution. This worse solution acceptance allows avoiding local minimum, or a solution that “looks optimal”, but is not indeed. While time goes, the probability of accepting a bad solution decreases, thus better solutions are more probable with the time. A few questions may arise after looking at this code. First of all, why do we use some get_time function here instead of the all the way good and standard one, std::chrono::system_clock::now()? That is a good question and all I know is that Topcoder’s servers currently return a garbage out of the system_clock::now(). As a side note, we must use Wall clock here, because that is how Topcoder servers will measure your program’s execution time. Here is a possible implementation of the get_time routine. It works only on the GNU toolchain, or the CygWin for Windows. const double ticks_per_sec = 2500000000; inline double get_time() { uint32_t lo, hi; asm volatile ("rdtsc" : "=a" (lo), "=d" (hi)); return (((uint64_t)hi << 32) | lo) / ticks_per_sec; } The next interesting point is the temperature. It is a feature of the SA heuristic and its main source of the power. As it can be seen from the formula, the temperature decreases with the time. The min_temperature and max_temperature are selected experimentally. Usually that means that you just run a couple of tests with different values and select the values that provide better results. This is of course easy to say, but mastering this allows you to master the SA in general. Another question you may have. Why do we use a separate object to store just a state change, instead of modifying the state in place, re-calculating the total score, and subtracting from it the previous score? That is purely an optimization. However, this optimization is usually so important that it may cost you too much if you do not implement it. The author of this article faced this himself several times. The question why do we need a 3 different state change functions is an easy one. They are needed to reduce the amount of steps needed to go from a given state to a completely different one. In this case there would be just one function, everything would work, but it would converge to good results much slower. Last Notes We have discussed the very basics of using common metaheuristic in a MM called Simulated Annealing. To master it, you need a practice, reading other contestants’ source code, and participating in the post-match forum discussions. gorbunov Guest Blogger
https://www.topcoder.com/approaching-marathon-match-task-pt-1/
CC-MAIN-2019-43
refinedweb
1,047
62.78
Hello all this is my first time here. So I'm doing this program for school and we are supposed to follow the below UML. The program below works the way it is however it uses the int pkg instead of the char that it is supposed to. So the user has to type 1 2 or 3 instead of the A B or C like they should. So if anyone knows how to fix this or has any suggestions they would be much appreciated. Also if I did anything wrong with my first post I apologize in advance. InternetCharges - pkg: char - hours: double + InternetCharges(p: char, h: double) +setPkg(p:char):void +setHours(h: double): void +getPkg():char +getHours():double +getCharges():double public class InternetCharges { private int pkg; private double hours; public InternetCharges(int p, double h) { pkg = p; hours = h; } public void setPkg(char p) { pkg = p; } public void setHours(double h) { hours = h; } public int getPkg() { return pkg; } public double getHours() { return hours; } public double getCharges() { double charges = 0; if (pkg == 1) charges = 9.95; else if (pkg == 2) charges = 14.95; else if (pkg == 3) charges = 19.95; else if (pkg == 3) charges = (hours * 0); if (pkg == 3) charges = charges + (hours * 0); else if ((hours > 10) && (pkg == 1)) charges = charges + ((hours - 10) * 2); else if ((hours > 20) && (pkg == 2)) charges = charges + (hours - 20); return charges; } } import java.util.Scanner; public class InternetChargesDemo { public static void main(String[] args) { Scanner keyboard = new Scanner(System.in); int pkg; double hours; System.out.println("What package have you purchased? A=1 B=2 and C=3"); pkg = keyboard.nextInt(); System.out.println("How many hours have you used? "); hours = keyboard.nextDouble(); InternetCharges IC = new InternetCharges(pkg, hours); System.out.printf("Your total charges for this month are: $%,.2f\n" , IC.getCharges()); } }
http://www.javaprogrammingforums.com/whats-wrong-my-code/25111-trying-figure-out-how-use-char-problem.html
CC-MAIN-2015-35
refinedweb
302
72.56
Hi, I have a library item in my FLA that uses ActionScript Linkage. I'm linking this item to an existing class. When I write the path in the Class input window I can click the pencil icon and open the class in Flash profesional. I export the swc in a swc folder. But in Flash Builder the class that the library item links to wont recognize any named display object instances in the linked library item. And when I compile and test the swf and add the class to the diplay list I see nothing. But if I just use ActionScript Linkage with the library item and set the Path field so that it puts the linkage Class in the default package, I can create an instances of this linked library item. So the swc and Flash builder are working, but just not when I link the class to an existing class. Any suggestions? cheers. Are you using getDefinitionByName() to obtain a reference to the object? If so I've found if I don't have a reference to an object in the main SWF I cannot find the class either. I ended up having to create a fake reference to each item I wanted in my Main.as which was a growing pain. e.g. package { import flash.display.Sprite; import flash.utils.getDefinitionByName; public class Main extends Sprite { public function Main() { // useless SWC references _initReferences(); _loadALibraryItem(); } private function _initReferences():void { var a:SomeLibraryID; var b:SomeOtherLibraryID; var c:YouGetTheIdea; } private function _loadALibraryItem():void { // get a class reference, init via class to the correct type (e.g. Sprite) var itemClass = getDefinitionByName("SomeLibraryID") as Class; var item:Sprite = Sprite(new itemClass()); addChild(item); } } } The general idea was Flash Builder did not compile the library items even though they had a class name because there were no references to any of them. Once I made a reference in the Main SWF I could access them in any other class, not just Main. This is similar to the OS codepage dictating which font outlines are embedded via Flash Pro (Flash Builder can get around this) rather than the built-in font manager. You tell Arial to embed Korean and it simply doesn't, just latin. Sometimes Flash/Builder makes decisions on its own that it really shouldn't, like deciding library items don't exist unless you make a bunch of useless dead references just for the compilers sake. I feel like this is back to c/obj-c with a .h file so I can predefine variables before I define them..... Thanks for your reply. I think my issue is different. In Flash Builder I am able to reference and instantiate classes from the swc that was created with Flash Pro. But I can't get Flash Builder to recognize a link between an existing class and an ActionScript Linkage class in the swc. In the Flash Pro file I can link the library item to the existing class, I can verify it with the green check box, and I can edit it with the pencil, but in Flash Builder the existing class can't access any of the objects in the Flash Pro swc linkage class. any ideas? Are you creating a class and then are applying it as the base class for a library item? It sounds like you're doing this and then when you instantiate the class the library items display objects aren't present. Hi, I'm not using it as a Base Class, I'm using it as the Class. This of course works when working in a Flash Profesional Project in Flash Builder, but I'm working in an ActionScript Project in Flash Builder and I'm using a swc for all my assets but I can't get this feature to work right. cheers. I'm using Flash Builder and Flash Pro. I'm using Flash pro to create a swc for my ActionScript Project in Flash Builder. When I use ActionScript Linkage in the swc and give the class a unique name and have it exported to the default package I can instantiate it in my Flash Builder project. But if I try to attach or link the ActionScript Linkage class in my swc to a specific and existing class in my codebase, Flash Builder instantiates this class from codebase with out any of the art from the swc. So it appears that my Flash Builder ActionScript project is not linking the class in the swc with the class in the codebase. I've had no such issues so it sounds like Flash Builder has more than one class invading each others namespace. I do the same. I use Flash Builder 4.6 and I use Flash Pro CS5.5 to create library elements, exported for actionscript with a linkage ID. I export my Flash Pro library to a SWC and import that into Flash Builder. I then have no problems instantiating any library object from the SWC. I'll just make a quick example. A Flash file containing only a circle (Sprite) in the library with a linkage ID. I'll export to SWC. I'll make a new ActionScript FB project and import the SWC. I'll then instantiate it and add it to the display list and it will show up. Example Flash Pro SWC w/ FB Project loading library element. That said, you'll have to be a bit more clear on how this "other" class is loading the SWC. Where is the class located that's loading it? Is it another Flash Pro document with the SWC being loaded and used? How exactly are you loading the SWC into "another class" without loading the SWC into Flash Builder? A class cannot load a SWC. Hi, I too can do as you describe. Like I said, I can instantiate a class from the swc. But I can not link a class in the swc to a class in my code base. This is possible when working with a Flash Professional Project in Flash Builder, but it doesn't work when using an ActionScript Project in Flash Builder. In the Flash Pro file I can write out the file path in the Class field: "com.package.ClassName". Then if I click the green check box it says that it can fine the class. When I click the pencil icon it will open up the class. But when I export the swc and then in Flash Builder try to instantiate this class it's not linked to the class in the swc. In your example you are just instantiating a class that is not linked to an external class in your code base. Try linking MyCircle to a class in your code base, and then instantiate it. I'd be interested in what happens. does that make sense? cheers. I think so. You can redownload the same link: I made a new AS3 doc in Flash Pro, set the base class to an external file (com.example.Main). I kept the circle in the library. I exported it as a SWC. I imported that SWC in FB. I instantiated the SWC itself (com.example.Main) as the MovieClip it is. I ran a custom method from the class (HelloWorld()). I also grabbed the circle from the library and put it on the screen as well. I mentioned namespace above because the com.example.Main is what separates it from the default package so FB can correctly access the right class. Hi, Looking at your FLA you aren't linking MyCircle to an external class. Your example is not attempting to do what I'm trying to do. Try linking MyCircle to an external class that is in the same shared code base as your Flash Builder project. In the FLA Publish settings set a source path to Flex.src. Then create a package in the src package called "test". Then create a class in the test package called MyCircleClass (make sure to extend Sprite or MovieClip) Then in the FLA link MyCircle to MyCircleClass. So the Class path in the properties panel looks like this: test.MyCircleClass. Not the Base Class path, but the Class path. Then click the green check box, it should find the class. Then click the pencil, it too should find the class. Export a swc. Then go into ExampleLoadSWC and instantiate MyCircleClass. When I do this, MyCircleClass would be empty. cheers. Ultimately it's the same concept. A flash document IS a MovieClip so setting its base class is the same. Also as long as your linkage in the library uses the class you share in Flash Builder but has a different class name for the library item itself it will be easy for FB to differentiate. I moved the com.example.Main into the Flex src folder and changed it to com.example.MyCircle. I made this the base class of the "MyCircle" library element. I changed the library elements class name to RedCircle. FB can clearly see com.example.MyCircle now as it's in src. I instantiate RedCircle and add to the display list and I see it. I invoke the HelloWorld() method from inside com.example.MyCircle, which is the base class of RedCircle and therefore inherited and you see it traced. Perhaps, using the example for context, you did something like make the "Class" (not the BASE Class) com.example.MyCircle for the library element. Then when you try to instantiate com.example.MyCircle, of course FB will first look in its own class paths before the SWC to get the class. That's the point of having a Base Class and a Class separately. Hi, Okay we are getting closer. So it sounds like what you are saying is that what I want to do is not possible with a Pure ActionScript project in Flash builder using a swc. I don't want to use a base class. I want to be able to access objects in the library item. Using a Base class wont work for that. The method that you are using wont allow for this. I of course can not access any properties in RedCircle from within MyCircle. When I use a Flash Project in Flash Builder I can link an item in the library to a class in the code base, then in this class I can access all the objects in the library item. This is what I want to do, but can't when using a swc and an ActionScript project in Flash Builder. I want to have a class in my code base that extends an Abstract Class, then I want a class in the swc to link to this child class. Then I want to be able to access all of the swc class objects while of course inheriting everything from the Abstract Class. The only way I've gotten this to work is to have the Library item's base class be the Abstract Class, export the swc. Then using my external code base class I extend the swc class, and with this class I can access everything in the swc class and inherent everything from the Abstract Class, but code hinting doesn't work, I get lots of "?". However it exports fine. I guess this is the only solution? So in short, it appears that when using an ActionScript Project in Flash builder and a swc, you can not link, or merge, a class in the swc with a class in the code base. Flash Builder defaults to the codebase class and ignores the ActionScript Link in the swc. But this is possible when using a Flash Project in Flash Builder and no swc. That sucks : ( Sorry there isn't a better solution but you have to expect issues if you share common classes and tell flash builder to compile using a class directly in its source path. It will always go there first and hit other linked paths, and SWCs later. I do agree it's an issue, but it's a fair issue. Most of the time people do this is strictly code completion. I wish there was a code completion class path setting.. If that's all you need please mark it as answered and good luck! Well thanks for trying : P My solution isn't so bad, just doesn't give code hinting and shows those annoying "?" when ever I reference an object that's inside the ActionScript Linked item. cheers. They always accept suggestions and one of my favorite things about FlashBuilder and Flash Develop is excellent code completion. Maybe you should suggest a code completion library path setting be added? I'd vote for it if you post a link Thanks Sinious, Is there a link you can give me? Any suggestions on how I should phrase it? cheers. Here's the link to the basic suggestion form: Hi DJ, Did you ever come across a better solution for this? Well, as they say, composition of inheritance. So now I use an interface for my "views" (library linked items). So I have a class that extends the linked library item and implements the interface. And I have a Component View wrapper/manager class that either instantiates this view object or is passed the view object. The component view holds the functionality and listens to the view object. It's probably a better solution anyway. Now I can more easily switch between Flash display list objects and something like Starling. Cheers. Thank you for the speedy response!
http://forums.adobe.com/message/4626796
CC-MAIN-2013-48
refinedweb
2,274
81.73
There was a post about a new url traversal implemented in turbogears on top of cherrypy (with ideas being borrowed from Nevow): Here's how to do it in CherryPy 3 (and without requiring mixin classes!): class NevowStyleDispatcher(cherrypy.dispatch.Dispatcher): """Dispatcher which walks a tree of objects to find a handler. The tree is rooted at cherrypy.request.app.root, and). In addition, this class extends the builtin CherryPy Dispatcher by allowing any node in the tree to define a "locateChild" method. This will be passed a "segments" tuple of the remaining path components (starting with the name of the desired child), and should return (child node, remaining segments tuple). If a locateChild method does not exist or returns None, then the normal CP lookup will occur. """ def find_handler(self, path): "" 'virtual path' components: parts of the URL which are dynamic, and were not used when looking up the handler. These virtual path components are passed to the handler as positional arguments. """ request = cherrypy.request app = request.app root = app.root # Get config for the root object/path. curpath = "" nodeconf = {} if hasattr(root, "_cp_config"): nodeconf.update(root._cp_config) if "/" in app.config: nodeconf.update(app.config["/"]) object_trail = [['root', root, nodeconf, curpath]] node = root names = [x for x in path.strip('/').split('/') if x] + ['index'] traversal_names = names[:] while True: name = traversal_names.pop(0) # map to legal Python identifiers (replace '.' with '_') objname = name.replace('.', '_') nodeconf = {} nextnode = None # Here's where we differ from the superclass. We allow # each node to define a "locateChild" method, which can # return any object as the next child in the chain. if hasattr(node, "locateChild"): segments = curpath.split("/") nextnode, traversal_names = node.locateChild(segments) # If locateChild did not exist or did not locate a child, # default back to the normal CP attribute lookup. if nextnode is None: nextnode = getattr(node, objname, None) node = nextnode if node is not None: # Get _cp_config attached to this node. if hasattr(node, "_cp_config"): nodeconf.update(node._cp_config) # Mix in values from app.config for this path. curpath = "/".join((curpath, name)) if curpath in app.config: nodeconf.update(app.config[curpath]) object_trail.append([name, node, nodeconf, curpath]) if not traversal_names: break def set_conf(): """Collapse all object_trail config into cherrypy.request.config.""" base = cherrypy.config.copy() # Note that we merge the config from each node # even if that node was None. for name, obj, conf, curpath in object_trail: base.update(conf) if 'tools.staticdir.dir' in conf: base['tools.staticdir.section'] = curpath return base # Try successive objects (reverse order) num_candidates = len(object_trail) - 1 for i in xrange(num_candidates, -1, -1): name, candidate, nodeconf, curpath = object_trail[i] if candidate is None: continue # Try a "default" method on the current leaf. if hasattr(candidate, "default"): defhandler = candidate.default if getattr(defhandler, 'exposed', False): # Insert any extra _cp_config from the default handler. conf = getattr(defhandler, "_cp_config", {}) object_trail.insert(i+1, ["default", defhandler, conf, curpath]) request.config = set_conf() # See request.is_index = path.endswith("/") return defhandler, names[i:-1] # Uncomment the next line to restrict positional params to "default". # if i < num_candidates - 2: continue # Try the current leaf. if getattr(candidate, 'exposed', False): request.config = set_conf() if i == num_candidates: # We found the extra ".index". Mark request so tools # can redirect if path_info has no trailing slash. request.is_index = True else: # We're not at an 'index' handler. Mark request so tools # can redirect if path_info has NO trailing slash. # Note that this also includes handlers which take # positional parameters (virtual paths). request.is_index = False return candidate, names[i:-1] # We didn't find anything request.config = set_conf() return None, []
http://tools.cherrypy.org/wiki/NevowDispatcher
CC-MAIN-2017-04
refinedweb
593
52.15
Steps to create Invalidation using lambda function are: - Create a Lambda Function and write a code. - Create an API Gateway / API from which we are going to create Invalidation of CDN Step:1 Creating a Lambda Function: Just give a name of the function and use the Runtime of python 3.8 and put rest all things as it is. Now Click on Create Button and Your Function is created. When you go inside a function you will find this window. Paste this code inside the code area. from future import print_function import boto3 import time def lambda_handler(event, context): client = boto3.client(‘cloudfront’) path = “/*” invalidation = client.create_invalidation(DistributionId=”Put your CDN ID”, InvalidationBatch={ ‘Paths’: { ‘Quantity’: 1, ‘Items’: [path] }, ‘CallerReference’: str(time.time()) }) You will get your CDN ID from Cloudfront Dashboard like this Now attach the Cloudfront full access policy to our lambda policy like this. We have completed step 1. Step:2 Create an API for that you have to use AWS API Gateway Service. Create HTTP API. The Integration, section adds lambda and selects your lambda function in my case it is an invalidation name of the function. And rest all thing keep as it is and create an API. We have successfully completed all the steps. Now come to the lambda function and reload our lambda function page and you will get our API Gateway in the trigger section like this Now Click on API Gateway you will get something page like this: Click on API Endpoint and execute this URL in the browser and here your CDN invalidation is created. You can check it like this: So here we Created a CND Invalidation using AWS Lambda Function. Thank you for reading this blog, hope you learned some new thing. Still, if you have any query you can refer to my video Still, you have any doubt you can contact me on LinkedIn Discussion (1) We can invoke Lambda function through awscli instead of API Gateway for trigger create invalidation Full solution of ci/cd dev.to/vumdao/ci-cd-for-cdn-invali...
https://dev.to/aws-builders/cdn-invalidation-using-aws-lambda-function-2pg4
CC-MAIN-2021-31
refinedweb
347
63.29
. Issue: When I add the "Google.Cloud.BigQuery.V2 1.0.0-beta11" NuGet package, I am not able to access a table and the process just keeps spinning with "Running". I can choose to "Abort" and try again, but no luck. How to reproduce: Add package "Google.Cloud.BigQuery.V2" with 'Show Pre-release packages' checked. Follow instructions here () to authenticate with your gcloud. After command `gcloud auth application-default login` you should create a user environment variable called GOOGLE_APPLICATION_CREDENTIALS that points to the application_default_credentials.json generated by that gcloud statement. Add this code to the Console Workbook cell: // Good from here #r "Google.Cloud.BigQuery.V2" using Google.Cloud.BigQuery.V2; BigQueryClient client = BigQueryClient.Create("your-project-name"); // Until this point, where it just hangs BigQueryTable table = client.GetTable("bigquery-public-data", "samples", "shakespeare"); Console.WriteLine(table); Hit ctrl-enter to execute that cell What to expect: An output string of `bigquery-public-data.samples.shakespeare` What you get: Spinning "Running" in the cell that was executed. Never returns anything. Notes: -I put this exact same code into a netcoreapp1.1 Console application with the same NuGet package and the code executed on first build. -Is there a way I can find a more detailed log than AppData\Local\Xamarin\Inspector\logs? It doesn't give me any information on why it is hanging? -Is Xamarin Workbooks running target netcoreapp1.1? -Let me know if this bug report needs more/less detail. A "Console" workbook targets desktop .NET 4.6.1, not .NET Core. It might be worth trying our latest 1.3 alpha, which has better NuGet support . There are no more detailed logs, but we can investigate the use of this package and see what we find. Thanks for reporting this! This appears to be fixed in the 1.3 release candidate that we released yesterday. Please try it and reopen the issue if it is not fixed for you. Created attachment 24461 [details] Screenshot demonstrating correct output Attaching a screenshot demonstrating the fix as well.
https://xamarin.github.io/bugzilla-archives/56/56985/bug.html
CC-MAIN-2019-39
refinedweb
340
61.83
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. I want to delete a row in a class. unlink metho.sach Hi all ! I want when I delete a object in class danh.sach.nv then object in class 'huy.hd' has fields which relate methor many2one , will delete( all fields) I wrote it above: But it did not work. Can anything help me? my code .py class danh_sach_huy_hd(osv, Model): def unlink(self, cr, uid, ids, context=None): if context is None: context = {} """Allows to delete sales order lines in draft,cancel states""" for rec in self.browse(cr, uid, ids, context=context): if rec.ma_nv != '': raise osv.except_osv(_('Invalid Action!'),('Cannot delete a sales order line which is in state \'%s\'.') %(rec.state,)) return super(danh_sach_huy_hd, self).unlink(cr, uid, ids, context=context) _columns = { 'ma_nv':fields.many2one('danh.sach.nv', 'Mã NV'), 'ten_nv':fields.related('ma_nv', 'ten_nv', type ='char', string = 'Tên nhân viên', size = 30), 'ngay_sinh':fields.related('ma_nv', 'ngay_sinh', type ='char', string = 'Ngày sinh', size = 30), 'ten_bp':fields.many2one('bo.phan', 'Bộ phận'), 'ten_cd':fields.many2one( 'chuc.danh','Chức danh'), 'ly_do_huy': fields.char('Lý do hủy hợp đồng', size = 100), 'ngay_huy':fields.date('Ngày hủy'), } Can you help me? About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
https://www.odoo.com/forum/help-1/question/i-want-to-delete-a-row-in-a-class-unlink-metho-sach-60150
CC-MAIN-2018-17
refinedweb
249
70.8
- Python if-elif-else statement is used to write a conditional flow code. - The statements are in the order of if…elif…else. - The ‘elif’ is short for ‘else if’ statement. It’s shortened to reduce excessive indentation. - The else and elif statements are optional. - There can be multiple elif statements. - We can have only one else block for an if statement. - The if, else, and elif are reserved keywords in Python. - Python doesn’t have switch-case statements like other languages. The if-else conditions are sufficient to implement the control flow in the Python scripts. - We can have nested if-else blocks in Python. - Python ternary operation allows us to write an if-else statement in a single line. It’s useful when the if-else condition is simple and we want to reduce the code length. Python if else Syntax The general syntax if if-else statement in Python is: if condition: # code elif condition: # code ... # multiple elif statements else: # default code - Every if-elif statement is followed by a condition. - If the condition evaluates as True, then the code in that block gets executed. - Once the code in any of the block is executed, the control flow moves out of the if-elif-else block. - If none of the conditions are True, then the else block code is executed. Python if-elif-else Example Let’s say we have a function that accepts a country name and return its capital. We can implement this logic using if-else conditions. def get_capital(country): if country == 'India': return 'New Delhi' elif country == 'France': return 'Paris' elif country == 'UK': return 'London' else: return None Earlier, we mentioned that elif statement is optional. Let’s look at another simple example where we don’t have elif statement. def is_positive(num): if num >= 0: return 'Positive' else: return 'Negative' Even the else block is optional. Let’s look at another example where we have only if condition. def process_string(s): if type(s) is not str: print('not string') return # code to process the input string In case you are wondering what is with empty return statement, it will return None to the caller. Python if-else in One Line Let’s say we have a simple if-else condition like this: x = 10 if x > 0: is_positive = True else: is_positive = False We can use Python ternary operation to move the complete if-else block in a single line. The syntax of ternary operation is: value_true if condition else value_false Let’s rewrite the above if-else block in one line. is_positive = True if x > 0 else False Nested if-else Conditions We can have multiple nested if-else conditions. Please be careful with the indentation, otherwise the result might be unexpected. Let’s look at a long example with multiple if-else-elif conditions and nested to create an intelligent number processing script. # accepting user input x = input('Please enter an integer:\n') # convert str to int x = int(x) print(f'You entered {x}') if x > 0: print("It's a positive number") if x % 2 == 0: print("It's also an even number") if x >= 10: print("The number has multiple digits") else: print("It's an odd number") elif x == 0: print("Lovely choice, 0 is the master of all digits.") else: print("It's a negative number") if x % 3 == 0: print("This number is divided by 3") if x % 2 == 0: print("And it's an even number") else: print("And it's an odd number") Here are the sample output from multiple iterations of this code. Please enter an integer: 10 You entered 10 It's a positive number It's also an even number The number has multiple digits Please enter an integer: 0 You entered 0 Lovely choice, 0 is the master of all digits. Please enter an integer: -10 You entered -10 It's a negative number And it's an even number Conclusion Python if-else condition allows us to write conditional logic in our program. The syntax is simple and easy to use. We can use ternary operation to convert simple if-else condition into a single line. Please be careful with the indentation when you have multiple nested if-else conditions.
https://www.askpython.com/python/python-if-else-elif-statement
CC-MAIN-2020-16
refinedweb
713
62.17
I have a code that takes a condition C as an input, and computes the solution to my problem as an 'allowed area' A on the (x,y) space. This area is made of several 'tubes', which are defined by 2 lines that can never cross. The final result I'm looking for must satisfy k conditions {C1, .., Ck}, and is therefore an intersection S between k areas {A1, .. , Ak}. Here is an example with 2 conditions (A1: green, 3 tubes. A2: purple, 1 tube); the solution S is in red. How can I find S when I'm dealing with 4 areas of around 10 tubes each? (The final plot is awful!) I would need to be able to plot it, and to find the mean coordinate and the variance of the points in S (variance of each coordinate). [If there is an efficient way of knowing whether a point P belongs to S or not, I’ll just use a Monte Carlo method]. Ideally, I’d also like to be able to implement “forbidden tubes” that I would remove from S [it might be a bit more complicated than intersecting S with the outside of my forbidden area, since two tubes from the same area can cross (even if the lines defining a tube never cross)]. sets Problem solved with Shapely! I defined each tube as a Polygon, and an area A is a MultiPolygon object built as the union of its tubes. The intersection method then computes the solution I was looking for (the overlap between all areas). The whole thing is almost instantaneous. I didn't know shapely was so good with large objects [around 2000 points per tube, 10 tubes per area, 4 areas]. Thank you for your help! :) Edit: A working example. import matplotlib.pyplot as plt import shapely from shapely.geometry import Polygon from descartes import PolygonPatch import numpy as np def create_tube(a,height): x_tube_up = np.linspace(-4,4,300) y_tube_up = a*x_tube_up**2 + height x_tube_down = np.flipud(x_tube_up) #flip for correct definition of polygon y_tube_down = np.flipud(y_tube_up - 2) points_x = list(x_tube_up) + list(x_tube_down) points_y = list(y_tube_up) + list(y_tube_down) return Polygon([(points_x[i], points_y[i]) for i in range(600)]) def plot_coords(ax, ob): x, y = ob.xy ax.plot(x, y, '+', color='grey') area_1 = Polygon() #First area, a MultiPolygon object for h in [-5, 0, 5]: area_1 = area_1.union(create_tube(2, h)) area_2 = Polygon() for h in [8, 13, 18]: area_2 = area_2.union(create_tube(-1, h)) solution = area_1.intersection(area_2) #What I was looking for ########## PLOT ########## fig = plt.figure() ax = fig.add_subplot(111) for tube in area_1: plot_coords(ax, tube.exterior) patch = PolygonPatch(tube, facecolor='g', edgecolor='g', alpha=0.25) ax.add_patch(patch) for tube in area_2: plot_coords(ax, tube.exterior) patch = PolygonPatch(tube, facecolor='m', edgecolor='m', alpha=0.25) ax.add_patch(patch) for tube in solution: plot_coords(ax, tube.exterior) patch = PolygonPatch(tube, facecolor='r', edgecolor='r') ax.add_patch(patch) plt.show() And the plot :
https://codedump.io/share/1Kpd5d4Sowm0/1/area-intersection-in-python
CC-MAIN-2017-26
refinedweb
498
67.55
Tensorflow Placeholders in Python Tensorflow placeholder() as the name suggests creates a placeholder for a tensor that will be fed later. In simple words, it allocates a block of memory for future use which allows us to build our operation graphs without needing the data which is the scenario in most of the machine learning tasks. We can later use feed_dict to feed the data into the tensor. Structure of TensorFlow placeholders: x = tf.placeholder(dtype, shape = None, name = None) - dtype – The type of elements to be fed in the tensor. - shape – The shape of the tensor to be fed (optional). By default, the placeholder() has a shape with no constraints, which allows us to feed tensors of any shapes. - name – The name of the tensor which is optional but as a good practice we can provide names for each tensor. Let us see examples of some simple tensors with a placeholder(). TensorFlow program to round the elements of lists import tensorflow as tf x = tf.placeholder(dtype="float", shape=(2, 5), name='placeholder1') y = tf.round(x) with tf.Session() as session: output=session.run(y,feed_dict={x:[[2.33,24.24,6.56,7.87,0.55], [8.24,5.52,75.24,13.95,48.26]]}) print(output) Let’s break down the above code which rounds the elements to the nearest integer value. Here we first import the tensorflow as tf then create a placeholder x of dtype float, shape (2, 5), as we later want to pass two lists of 5 float elements each. Also, we have given the name to this placeholder() as ph1. Then we have an operation y to run it in a session that rounds the elements of the placeholder x. Note we have not assigned any values to x yet. We create a session object and run the operation y which requires the values of x and we provide these values through feed_dict argument. The following output is received showing the elements rounded to the nearest integer as our output. Output: [[ 2. 24. 7. 8. 1.] [ 8. 6. 75. 14. 48.]] Some more examples. TensorFlow program to perform Matrix Multiplication. import tensorflow as tf import random matA = tf.placeholder(dtype = 'int32', shape = (3, 3), name = 'MatrixA') matB = tf.placeholder(dtype = 'int32', shape = (3, 3), name = 'MatrixB') mat_mul = tf.matmul(matA, matB) with tf.Session() as session: output=session.run(mat_mul,{matA:np.random.randint(0,5,size=(3,3)), matB:np.random.randint(5,10,size=(3,3))}) print(output) Here we perform matrix multiplication of two matrices A and B using two placeholders matA and matB. To do this, we have used random to create two matrices of size 3X3 with random integer values and hence have mentioned the shape as (3, 3) for both the placeholders. Output: [[14 18 14] [30 34 30] [23 25 24]] TensorFlow program to concat two strings. import tensorflow as tf str1 = tf.placeholder(dtype = 'string', shape = None, name='String1') str2 = tf.placeholder(dtype = 'string', shape = None, name='String2') str_concat = str1 +" - "+str2 with tf.Session() as session: output = session.run(str_concat, {str1: str(input()), str2: str(input())}) print(output) This code concatenates two strings taken from the user. Output: CodeSpeedy Coding Solution & Software Development b'CodeSpeedy - Coding Solution & Software Development' Note here we have mentioned the shape of both the placeholders as None. This becomes useful in most of the machine learning tasks as most of the time we are unaware of the number of rows but let’s assume we know the number of features. In such cases, we can use None. x = tf.placeholder(dtype = 'float', shape = (None,5), name = 'xyz') By doing this, we can feed a matrix with 5 columns and any number of rows. Also, you can read the below-related blogs, Python: How to create tensors with known values Basics of TensorFlow with examples
https://www.codespeedy.com/tensorflow-placeholders-in-python/
CC-MAIN-2020-50
refinedweb
647
57.16
Read analog input So far, you've learned about digital signals. In this example, let's try analog signals. You'll turn a potentiometer to change the input. Your board reads its value and shows them on the serial monitor. What you need - SwiftIO Feather (or SwiftIO board) - Breadboard - Potentiometer - Jumper wires Circuit Let's build the circuit now. - Place the potentiometer onto the breadboard. - Connect the first leg on the left to pin GND. - Connect the second leg to pin A0. - Connect the third leg goes to pin 3V3. Example code You can find the example code at the bottom left corner of IDE: / GettingStarted / ReadAnalogInput. For the code, you will use the AnalogIn class. // Read the input voltage on a specified analog pin. The value you get will be a decimal between 0.0 and 3.3. // Import the library to enable the relevant classes and functions. import SwiftIO // Import the board library to use the Id of the specific board. import MadBoard // Initialize the pin A0 as a analog input pin. let pin = AnalogIn(Id.A0) // Read the input voltage every second. while true { // Declare a constant to store the value you read from the analog pin. let value = pin.readVoltage() // Print the value and you can see it in the serial monitor. print(value) // Wait a second and then continue to read. sleep(ms: 1000) } Background Analog input You have known that the digital signal has determined values, and converter has different precision, and the resolution describes the possible values it can measure. Our boards have a 12-bit resolution, which means there are 4096 (0-4095) values in total. The values from 0 to 4095 are known as raw values. Let's see the working process in detail. When the board reads from the analog pin, it will first get a raw value between 0 and 4095,. Potentiometer The potentiometer is one kind of variable resistor. You could adjust its resistance by rotating it clockwise or anticlockwise. The resistance between ① and ③ is its maximum value. The wiper divides it into two parts. As the wiper moves, the resistance of the two parts will change accordingly. Code analysis import SwiftIO import MadBoard Import the SwiftIO library to use the related functionalities and the MadBoard library to use the corresponding pin id. If you use the SwiftIO board, you need to import the SwiftIOBoard instead. let pin = AnalogIn(Id.A0) The potentiometer connects to pin A0, so you initialize it before using the pin. let value = pin.readVoltage() The method .readVoltage() returns directly the voltage value. The return value is a floating-point number between 0V-3.3V. print(value) Print the result directly to the serial port. You can view the results on the serial monitor. Reference AnalogIn - read the voltage from an analog pin. init(_:)- initialize an analog input pin. You need to tell the id of a specified pin to initialize it. readVoltage()- read the input voltage from a pin. It will return a float between 0 and 3.3. MadBoard - find the corresponding pin id of your board.
https://docs.madmachine.io/tutorials/general/getting-started/read-analog-input
CC-MAIN-2022-21
refinedweb
518
69.68
Author: Amal G Jose 310 Posts Docker is now a paid software How to check the computer model details from Windows command line ? How to delete or remove a disk from a Linux system without reboot ? How to check the version of Nginx using command line ? Table is marked as crashed and should be repaired – MySQL Error How to add a new disk to a Linux server without downtime or reboot ? Nodetool Repair failed with error – Cassandra Validation failed The wait is over – KairosDB 1.3.0 got released How to update the UID and GID of elasticsearch ? docker, kubernetes, linux, python Malicious Docker Images – New Attack – Best practices to Follow Pipe grep equivalent command in Windows How to check the version of Flask in python ? CentOS Alternative – Rocky Linux is Ready How to manually delete a Kubernetes namespace stuck in terminating state ? How to delete a kubernetes pod which is stuck in terminating state ? How to disable automatic unattended updates in Ubuntu using command line ? Nginx Error: 413 Request Entity Too Large How to fix – Nginx not accepting headers with underscore ? What are Web Services ? Fundamentals of Web Services . kubectl error: You must be logged in to the server (Unauthorized) – how to fix kubeadm certs – unknown command “certs” for “kubeadm” How to renew the certificates in a Kubernetes Cluster ? Difference between kubectl create and kubectl apply commands ? How to check the Operating System details in a Raspberry Pi ? How to connect to multiple Kubernetes clusters using kubectl ? How to configure Kubernetes port forward bind to 0.0.0.0 instead of default 127.0.0.1 ? Kubernetes Cheat Sheet – Important kubectl commands
https://amalgjose.com/author/amalgjose/
CC-MAIN-2021-43
refinedweb
273
66.44
I'm setting up Spiceworks for the first time and about to perform a network scan of my Windows network. It is on an Active Directory domain. I'm going to give it a domain admin account to scan with just to get things moving, but I would prefer to use the principle of least privilege to craft a dedicated Spiceworks account. I've hunted around for some documentation, but can't seem to find a definitive document that shows all of the privileges that Spiceworks needs to have in order to perform remote scans on windows machines. This post was interesting and gave me some insights into the topic. This official document merely says that the account needs to have admin access to the device. However, I'd be willing to bet that you could pare down the privileges to a smaller subset. Or are the minimum privs needed so broad that it really doesn't matter? Thanks for your time! 8 Replies Jan 7, 2011 at 4:37 UTC I'm pretty sure that it needs full admin rights as it does everything from reading WMI to pulling hardware and network stats aswell as querying your AD structure. Because it needs access to both local machines aswell as servers and AD it is easiest to just give it full admin rights, I dread to think how many individual permissions would be needed if you were to try and delegate only the permissions needed! Jan 7, 2011 at 9:45 UTC Most of the information collected through the remote scans is through a WMI connection. This page has a table that lists where a lot of the device information is retrieved: http:/ Jan 7, 2011 at 2:33 UTC Thanks for the replies, guys! Good information to know. I suspected that the laundry list of privs necessary for Spiceworks to behave properly would be frightening. However, I'm a little surprised that I can't seem to find anyone who's at least attempted it. Perhaps it's time to turn permissions auditing on and see what gets denied for Spiceworks scans and then gradually pick through adding privs here and there. Or not. =) One idea that I had was to create a local service account on all of my domain PCs via a simple script. Then use that account for Spiceworks. That way it only has local admin privs and not the Big Kahuna domain admin account. As for it's need to see AD information, perhaps just read permissions on those. Anyone else like to weigh in on this? Sep 10, 2011 at 12:04 UTC Hey Gang... first off, if it's bad form to be jumping a thread that's been inactive for 8 months, I apologize in advance. Today, however, I am also facing the exact same situation. I am about to install SpiceWorks 5.1 on a new virtual machine, and this will be my first hands-on trial of SpiceWorks. Like Wesley, I too would like to know the Least Privilege that's needed. I have created a domain account as the SpiceWorks service account, but I would rather not leave it as a Domain Admin if I can avoid it. On the other hand, if nobody has ever really tried this or worked out the kinks, I'm sure it is just easier and more reliable to leave it as a Domain Admin. Nov 19, 2012 at 4:39 UTC Ressurecting this old thread in case someone has been able to find a definitive answer to this? Perhaps Domain Admin privileges are necessary for the S'works network scan, but if not, I'd like to give it the least necessary. thx Jun 5, 2015 at 2:04 UTC 1st Post Yeah, I sure would like to know as well. I get that Spiceworks wants to read a bunch of stuff, so it shouldn't need any write permissions, right? Nov 26, 2015 at 12:25 UTC So it's been nearly 5 years and still no answer to this question? Or has this been answered somewhere else? Feb 18, 2016 at 5:27 UTC Power Users, LLC is an IT service provider. Security admin here! The principal of least privilege is a hard one to achieve, especially when there is a lack of documentation that denots the privileges required! I hope this helps.. - Right click WMI Control > Properties > Security Tab > Security Button > Advanced > Add - Select a principal > scanner.spiceworks - Type: Allow - Applies to: This namespace and subnamespaces -.
https://community.spiceworks.com/topic/123898-list-of-domain-privileges-that-are-needed-for-spiceworks-network-scan
CC-MAIN-2017-22
refinedweb
757
70.84
How to plot a heart December 31st, 2011 | Categories: just for fun, mathematica, matlab | Tags: Equation of Pasta Actually this is a way better link. It has equations for all sorts of pastas The MATLAB version above is missing assignment of the “x” variable. This command for “x” seems to work well: >> x=[-2:.001:2]; Thanks Scott, I’ve updated the main text. Sorry for being late to reply. I tried it in MathStudio (previously known as TimeSpace Mathematics) for Android. FINALLY we have a very good math application for android. It’s too pricey though since it’s $20 dollars compared to $10 dollars for the iPhone. I hope the price goes down though. Here’s the screenshots of the heart! Thanks for that Silver. MathStudio is a great piece of software–I’ve bought it several times now! First for Ye Olde Windows Mobile, then for iPad and finally for Android. Very cool! I took the liberty of animating it a bit :) Export[Environment[“userprofile”] “\\heart.gif”, Table[ Plot[Sqrt[Cos[d*x]]*Cos[153*d*x] + Sqrt[Abs[d*x]] – 0.7*(4 – x*x)^0.01, {x, -2, 2}, PlotStyle -> Red], {d, 1.3, 1.8, 0.1} ] // {#, Reverse[#]} & // Flatten ] Here is the python version for those interested import numpy as np import matplotlib.pyplot as plt x=np.r_[-2:2:0.001] y=(np.sqrt(np.cos(x))*np.cos(200*x)+np.sqrt(np.abs(x))-0.7)*np.power((4-x*x),0.01) plt.plot(y) plt.savefig(‘Heart’) I am wondering how to plot a 3D heart by mesh function in MATLAB… Hi Qing Try the code at There are two pieces of code, A star and a heart. plot(x,real(y)) not warning ;) regards ending with plot(x,y,’r’) could be more better
http://www.walkingrandomly.com/?p=4030
CC-MAIN-2019-43
refinedweb
307
67.96
Most of you reading this are probably already familiar and using axios for making HTTP requests from your application to an API, whether it is your own API or an external one. Today I want to introduce you to 4rest, the new npm package I built, on top of axios, which is designed to help you set up all of your app's functions for making HTTP requests to API, easily, quickly, and make them as organized as possible by splitting them to services based on API's data models. Let's see a basic usage example: 1) First of all you create a Forest Instance with your API base URL and other relevant configuration. import forest from "4rest"; export const instance = forest.create({ axiosSettings: { baseURL: "" } }); 2) Then we create a Forest Service using the instance we just made. import { instance } from "./forestInstance"; import { UserWithId, User } from "./types"; export const userService = instance.createService<UserWithId, User>("user"); That's It! Just by doing these 2 simple steps we got our self a total of 9 different functions for making calls to an API with types for request payload and response data on them, including: getAll getById deleteAll deleteById patch patchById put putById Let's see a few examples of using the service methods in our app: - GET // GET async function getUsers() { const users: User[] = (await userService.getAll()).data; } // GET async function getUserById(id: string) { const user: User = (await userService.getById(id)).data; } // POST async function createUser(newUser: User) { const userCreated: User = (await userService.post(newUser)).data; } - PATCH // PATCH async function updateUser(partialUser: Partial<User>) { const updatedUser: User = (await userService.patch(partialUser)).data; } // PATCH async function updateUserById(id: ObjectId, partialUser: Partial<User>) { const updatedUser: User = (await userService.patchById(id, partialUser)).data; } This is only the most basic example of how you use 4rest but there are a lot more options to configure service and instance, which let you do much more than shown in this article like zod payload and response data type validations, methods routes configuration, custom service with custom methods, requests onSuccess and onError handling and more. See more: 4rest Full docs and source code on Github Hope you will find this package useful. I would like to hear your feedback on the package and suggestions for future improvement 😀 Discussion (8) I like the CRUD generation feature. Nice work. I also made one, but completely typescript oriented , that does not hide the routes behind functions : it's named zodios. I might steal your idea for CRUD auto generation. Hey, thank you for the nice feedback, I checked out your package as well and really liked it's dx and features. My plan was to do a react-query based plugin for 4rest as well. You really inspired me with zodios. Maybe we can even collaborate on some features :) Hello, quick update. I like both your features of named enpoints and crud generattion. so i added them both to zodios. You can check it in these tests and documentation Hi, I have seen those features you have added and you really did a very good job. Few things I would like to recommand you to do as well are changing the names of the getUser, deleteUser and maybe even updateUser to getUserById and etc or at least give the user the option to choose its own default alias extensions which will be used for every asCrudApi function. Second of all, let the user an option to valdiate the id he sends out to the API as param, for example to validate it represents ObjectId string or such. Also, think about maybe splitting the api instance to services based on the DataModel of the api similirally to how you have done with the asCrudApi function but to make it actually diffrrent services in order for the user to get diffrenet autocomplete when calling api based on the service he wants and not just by url. Lastly, give the user to make his own scheme validation decisions based on the method he is calling to, for example: change only the getUsers validation to be of interface including the following keys: resultof type User array scheme and amountof type number. Thank you for the feedback, asCrudApia helper monster that tries to fit everyone needs. So maybe, i'll instead add guidance in the documentation on how to write your own helpers like asCrudApi. dev.todirectory in examples. it's called asApiand can be used to code split your deplarations by models. asCrudApi. i would also suggest the user to create is own helper if his use cases are too complex or even not use one and use zodios already generic api declaration for this use case. Super cool, very useful and unbelievably easy 👌 😀 👍 Nice 👍🏼, looks very useful! Seems like a really handy tool! Great job, keep it up 🙌
https://dev.to/liorvainer/ive-made-a-restful-http-client-which-will-make-your-life-much-easier-1olm
CC-MAIN-2022-27
refinedweb
804
59.43
Let us Practice some C++ Programs While writing C++ programs, one thing remember you must choose proper data type, otherwise the result may not be accurate. For example if in the program you are doing any division operation then there is very much chance that result is in fraction, but if your data type of the variable to store the result is "int" then the fractional part will be lost. Therefore the data type must be of float. Now let us practice programs. Q2. Write a program to accept three numbers then find their average. Q3. Write a program to accept principal, rate of interest and time then find the simple interest Q4. Write a program to accept marks obtained in five different subjects out of 100 then find total and percentage of marks. Program-2 #include <iostream.h> void main(){ int a, b, c; float d; cout<<"Enter the first No:"; cin>>a; cout<<"Enter the second No:"; cin>>b; cout<<"Enter the third No:"; cin>>c; d=(a+b+c)/3.0; cout<<"Average="<<d; }Note:Notice that the data type of result variable here variable "d" is float. Another point to be noticed that the sum is divided by 3.0 this means that the sum would be converted into float data type by the compiler itself then it would be divided by 3.0, so if the result has got any fractional part it will be stored in "d". If the sum is divided by 3 then, first the result would be generated in integer then it would be converted in float as the left hand side of the assignment operator is of type float; so the result would be only integral part of a real number ( or a zero at the fractional part of the answer.) Another point to be noted that students often do mistake in writing the line d=(a+b+c)/3.0; they write it as d=a+b+c/3.0 here the answer becomes wrong because only the value of "c" is divided by 3 and is added with the sum of "a" and "b". This line can be broken down into two line as below- Take another int variable "k" k=a+b+c; d=k/3.0; cout<<"Average="<<d; Program-3 #include <iostream.h> void main(){ float p, r, t,si; cout<<"Enter Principal:"; cin>>p; cout<<"Enter Rate of Interest:"; cin>>r; cout<<"Enter Time in years:"; cin>>t; si=p*r*t/100; cout<<"Interest="<<si; } Program-4 #include <iostream.h> void main(){ float phy, chem, eng, math, cs, total, percent; cout<<"Enter marks of english:"; cin>>eng; cout<<"Enter marks of physics:"; cin>>phy; cout<<"Enter marks of chemistry:"; cin>>chem; cout<<"Enter marks of maths:"; cin>>math; cout<<"Enter marks of computer sc:"; cin>>cs; total=eng + phy + chem + math + cs; percent = total/500*100; cout<<"Total="<<total<<"\n"; cout<<"Percentage="<<percent; }Note: In the program-4 the line "percent=total/500*100" we have written 500 still the result is correct because here all data type are float. In this program "\n" is used to display Total and Percentage in two different lines we can use "endl" instead of "\n", endl is used without quotes. Write and run the programs and see the outputs. Tweet
https://www.mcqtoday.com/CPP/home1/CppProgramsPractice-1.html
CC-MAIN-2022-21
refinedweb
554
67.28
Overview This script can be used to parse date and time. Open a blank file and name it for example dateParser.py Copy and paste the code below (and make sure you understand what it does) into the file. dateParser.py from datetime import datetime now = datetime.now() mm = str(now.month) dd = str(now.day) yyyy = str(now.year) hour = str(now.hour) mi = str(now.minute) ss = str(now.second) print mm + "/" + dd + "/" + yyyy + " " + hour + ":" + mi + ":" + ss Now save and exit the file and run it by: $ python dateParser.py Time.sleep In Python you can use time.sleep() to suspend execution for the given number of seconds. The seconds are being given between the parenthesis. # How to sleep for 5 seconds in python: import time time.sleep(5) # How to sleep for 0.5 seconds in python: import time time.sleep(0.5) How to get the current date and time I found this date and time script on this excellent website:") The result::
https://www.pythonforbeginners.com/code-snippets-source-code/date-and-time-script
CC-MAIN-2020-16
refinedweb
167
88.84
As you might have guessed, this article focuses on lock-free queues. Queues can be different. They can differ in the number of producers and consumers (single/multi producer — single/multi consumer, 4 variants). They can be bounded, on the basis of a pre-allocated buffer, and unbounded, on the basis of a list. They can either support priorities, or not. They can be lock-free, wait-free or lock-based, with the strict (fair) and not strict (unfair) conformance to FIFO, etc. These types of queues are described in detail in the article by Dmitry Vyukov. As a rule, the more specialized requirements to a queue are, the more efficient its algorithm is. In this article, we will consider the most common version of queues – multi-producer/multi-consumer unbounded concurrent queues without the support of priorities. I guess the queue is the favorite data structure for researchers. On the one hand, it’s really simple; on the other hand, it is not as easy as the stack, as it has two ends, not just one. Since there are two ends, interesting problems arise, such as: how to manage them in a multithreaded environment? The number of publications with different variations of the queue algorithm is off the charts, so it impossible to cover all of them. I’ll dwell briefly on the most popular of them, and start with the classic queue. Classic Queue A classic queue is a list (no matter if it’s a singly or doubly linked list) with two ends – the head and the tail. We read from the head and write to the tail. A Naive Standard Queue This is a copy/paste from the first article of the series. struct Node { Node * m_pNext ; }; class queue { Node * m_pHead ; Node * m_pTail ; public: queue(): m_pHead( NULL ), m_pTail( NULL ) {} void enqueue( Node * p ) { p->m_pNext = nullptr; if ( m_pTail ) m_pTail->m_pNext = p; else m_pHead = p ; m_pTail = p ; } Node * dequeue() { if ( !m_pHead ) return nullptr ; Node * p = m_pHead ; m_pHead = p->m_pNext ; if ( !m_pHead ) m_pTail = nullptr ; return p ; } }; Don’t look for it, there’s no concurrency, it’s just an illustration of how simple the subject for conversation is. The article will show us what happens to simple algorithms if they’re adapted to the concurrent environment. The algorithm of Michael & Scott is considered to be the classic (1996) algorithm of a lock-free queue. The code from the libcds library is provided in case it has a short form of the implementation of the algorithm under consideration. To see the full code – refer to the cds::intrusive::MSQueue class. Comments are in the code. I tried to make them not so boring. bool enqueue( value_type& val ) { /* Implementation detail: node_type and value_type in my implementation - are not the same and require converting from one type to the other one. For simplicity, we can assume that node_traits::to_node_ptr - is just static_cast<node_type *>( &val ) */ node_type * pNew = node_traits::to_node_ptr( val ) ; typename gc::Guard guard; // A guard, for example, Hazard Pointer // Back-off strategy (of the template-argument class) back_off bkoff; node_type * t; // As always in lock-free, we’ll deal with it, till we make the right thing... while ( true ) { /* Protect m_pTail, as we’ll read its fields do not want to get into a situation of reading from the deleted memory */ t = guard.protect( m_pTail, node_to_value() ); node_type * pNext = t->m_pNext.load( memory_model::memory_order_acquire); /* An interesting detail: the algorithm assumes that m_pTail can point not to the Tail, and hopes that further calls will set up the Tail correctly. A typical example of the multithreaded mutual help */ if ( pNext != nullptr ) { // Oops! It’s necessary to clear (literally) the Tail //after the next thread m_pTail.compare_exchange_weak( t, pNext, std::memory_order_release, std::memory_order_relaxed); /* Need to start all over again, even if CAS is not successful CAS is not successful, which means that m_pTail has been changed before we read it. */ continue ; } node_type * tmp = nullptr; if ( t->m_pNext.compare_exchange_strong( tmp, pNew, std::memory_order_release, std::memory_order_relaxed )) { // Have successfully added a new element to the queue. break ; } /* We’ve failed — the CAS didn’t work. This means that someone has got there before us. Concurrency has been detected, so, not to aggravate it, let’s back off for a very brief moment in time and call the back_off functor */ bkoff(); } /* Generally, we can use the counter of elements...if we want to. As can be seen, this counter is not very accurate: The element has already been added, and we change the counter just now. Such counter can only speak of the order of the number of elements, but we cannot use it as the sign of emptiness of a queue */ ++m_ItemCounter ; /* Finally, trying to change the m_pTail tail. We are not interested whether we’ll succeed or not, if not, they’ll clean up after us see the 'Oops!!' above and below, in dequeue */ m_pTail.compare_exchange_strong( t, pNew, std::memory_order_acq_rel, std::memory_order_relaxed ); /* This algorithm always returns true. Other ones, like bounded queues, can return false, when a queue is full. For consistency of the interface, enqueue always returns a sign of success признак успеха */ return true; } value_type * dequeue() { node_type * pNext; back_off bkoff; // We need 2 Hazard Pointers for dequeue typename gc::template GuardArray<2> guards; node_type * h; // Keep trying till we can execute it… while ( true ) { // Read and guard our Head m_pHead h = guards.protect( 0, m_pHead, node_to_value() ); // and the element that follows the Head pNext = guards.protect( 1, h->m_pNext, node_to_value() ); // Check: is what we have just read // remains bounded?.. if ( m_pHead.load(std::memory_order_relaxed) != h ) { // no, - someone has managed to spoil everything... // Start all over again continue; } /* The sign that the queue is empty. Note that unlike the tail, the H=head is always at the right place */ if ( pNext == nullptr ) return nullptr; // the queue is empty /* We read the tail here, but we don’t have to protect it with the Hazard Pointer, as we are not interested in the content it points to (fields of the structure) */ node_type * t = m_pTail.load(std::memory_order_acquire); if ( h == t ) { /* Oops! The tail is not in place: there is the Head, the element following it, and the Tail points to the Head. I guess we should help them here... */ m_pTail.compare_exchange_strong( t, pNext, std::memory_order_release, std::memory_order_relaxed); // After helping them, we have to start over. // Therefore, the CAS result is not important for us continue; } // The most important thing is to link a new Head // That is, we move down the list if ( m_pHead.compare_exchange_strong( h, pNext, std::memory_order_release, std::memory_order_relaxed )) { // Success! Terminate our infinite loop break; } /* Failed… which means that someone interfered. Not to disturb others, let’s step back for a moment */ bkoff() ; } // Change the not very useful counter of elements, // see the comment in enqueue --m_ItemCounter; // It’s the call of 'remove the h element' functor dispose_node( h ); /* !!! Here’s an interesting thing! We return the element that follows the [ex] Head Note that pNext is still in the queue — It’s our new Head! */ return pNext; } As you can see, the queue is represented by a singly linked list from head to tail. What’s the important point of this algorithm? It’s to be able to control two pointers – to a head and to a tail – by using the normal (not the double) CAS. This is achieved by the fact that the queue is never empty. Look at the code. Are there any checks of the head/tail for nullptr? You won’t find them. To provide physical (but not logical) non-emptiness in the queue constructor, one dummy element is added to it, which is the head and the tail. dequeue returns an element that becomes the new dummy element (the new head), and the former dummy element (the former head) is removed: This should be taken into account when designing an intrusive queue – the returned pointer is still a part of the queue, and we will be able to remove it only during the next dequeue. Secondly, the algorithm assumes that the tail can point not to the last element. Each time we read the tail, we check whether it has the next m_pNext element. If this pointer is not nullptr, the tail is not in place and we should move it forward. But there’s another pitfall here: it may happen that the tail will point to the element before the head (the intersection of the head and the tail). To avoid this, we implicitly check m_pTail->m_pNext in the dequeue method: we have read the head, and the m_pHead->m_pNext element that follows the head, have made sure that pNext != nullptr. Then, we see that the head is equal to the tail. Thus, there is something after the tail, as there’s pNext and we should move the tail forward. This is a typical case when threads help each other, which is very common in the lock-free programming. In 2000, a small optimization of the algorithm was offered. It was noted that the MSQueue algorithm in the dequeue method the tail was read at each iteration of the loop, which was unnecessary: we should read the tail (to verify that it is really a tail and points to the last element) only when the successful update of the head took place. Thus, it can be expected to reduce the pressure on m_pTail for certain types of load. This optimization is presented in libcds as the cds::intrusive::MoirQueue class. Baskets queue An interesting variation of MSQueue was introduced in 2007. Nir Shavit, a fairly well-known researcher in the world of lock-free, and his associates took a different approach to the optimization of the classic lock-free queue of Michael & Scott. He presented the queue as a set of logical baskets. During some short period of time, each of them was available for inserting a new element. When the interval passes, a new basket is created. Each basket is an unordered set of elements. It would seem that this definition violates the basic feature of the queue – FIFO; that is to say that the queue becomes unfair. FIFO is observed for baskets, but not for elements in them. If the interval of the basket availability for the insertion is quite short, we can ignore the disorder of items in it. How to determine the duration of this interval? Actually, it is not necessary to determine it, — say the authors of the Baskets Queue. Let’s take a look at the MSQueue queue. In the enqueue operation, when the concurrency is high and the CAS of changing the tail did not work, that is, where the back-off is called in MSQueue, we cannot determine the order, in which the items will be added to the queue, as they’re added concurrently. That is the logical basket. Turns out, the abstraction of logical baskets is a kind of the back-off strategy. I do not like to read miles of code in review articles, so I won’t provide the code here. The example with MSQueue has already shown us that the lock-free code is really verbose. Those wanting to look at the implementation, refer to the cds::intrusive::BasketQueue class in the libcds library, cds/intrusive/basket_queue.h file. Meanwhile, to explain the algorithm, I’ll provide another picture borrowed from the work of Nir Shavit & Co: - A, B, C threads want to insert items to the queue. They see that the tail is in the right place (remember that the tail in MSQueue can point not to the last element in the queue) and try to change it concurrently. - A thread is the winner, as it has inserted a new item. B and C threads are losers – their CAS with the tail is unsuccessful. Therefore, both of them begin to insert their items to the basket, using the read previously, old value of the tail. - B thread has managed to be the first one to perform the insertion. At the same time, D thread also invokes enqueue and adds its item successfully by changing the tail. - C thread also successfully completes the insertion. Just look, it added the item into the middle of the queue! During the insertion, it uses the old pointer to the tail that it has read when entering the operation, before it performed the unsuccessful CAS. It should be noted that during such insertion an item will be added before the head of the queue. For example, look at the item before C at the picture Nr.4 above: while C thread is in enqueue, another thread can delete this item before C. To prevent this situation, it is proposed to apply the logical deletion, which is to mark the deleted elements with the special deleted flag. Since it is required that the flag and the pointer to the item could be read atomically, we will store the flag in the least significant bit of the pointer to the pNext item. This is acceptable, as memory allocation in modern systems is 4-bytes aligned, so the least significant 2 bits of the pointer will always be zeros. Thus, we have invented the marked pointers approach, widely applied in lock-free data structures. We will come across this approach more than once in the future. Applying the logical deletion, that is, setting the pNext least significant bit to the value of 1 with the help of CAS, we will exclude the possibility to insert an item before the head. The insertion is carried out by CAS as well, and the deleted item contains 1 in the least significant bit. Thus, CAS will be unsuccessful (of course, when inserting the item, we do not take the whole marked pointer, but only its most significant bits that contain the address; we assume that the least significant bit is equal to zero). The last improvement introduced by BasketQueue refers to the physical deletion of items. It has been observed that changing the head at each successful call of dequeue can be unfavorable, as CAS is also called, and, as you know, it’s quite heavy. Therefore, we will change the head only when there are several logically deleted elements (by default, there are three of them in the implementation of libcds). We can also change it when the queue becomes empty. Thus, we can say that the head changes in hops in BasketQueue. All these optimizations are designed to improve the capacity of the classical lock-free queue in a situation of high concurrency. Optimistic Approach In 2004, Nir Shavit and Edya Ladan Mozes introduced another approach to optimizations in MSQueue, that they called optimistic. They have noticed that the dequeue operation in the algorithm of Michael and Scott requires just one CAS, while enqueue requires two (see the picture above). The second CAS in enqueue can significantly affect performance even at low load, as CAS is a quite heavy operation in modern processors. Is it possible to somehow get rid of it? Let’s consider how two CASs appeared in MSQueue::enqueue. The first CAS links the new item to the tail – changes pTail->pNext. The second one moves the tail forward. Can we change the pNext field by an atomic record, and not by CAS? Yes, we can, if the direction of our singly linked list would be different, not from the head to the tail, but vice versa. We could use the atomic store (pNew->pNext = pTail) for the pNew->pNext task, and then change pTail by CAS. But if we change the direction, how to perform dequeue then? There will be no pHead->pNext link anymore, as the list direction has changed. The authors of the optimistic queue suggested using a doubly linked list. But there’s one problem: an efficient algorithm of a doubly linked lock-free list for CAS is not yet known. There are known algorithms for DCAS, but there’s no DCAS implementation in the hardware. We know the MCAS emulation algorithm (CAS for M unbounded memory cells) for CAS, but it’s inefficient (requires 2M + 1 CAS) and represents rather the theoretical interest. The authors came up with the following solution: the link in the list from the tail to the head (next is the kind of link we do not need for the queue, but it will allow us to get rid of the first CAS in enqueue) will always be consistent. As for the reversed direction, from the head to the tail, the most important link – prev – can be not really consistent, meaning that its violation is permissible. Finding such violation, we can always restore the correct list, following the next references. How to detect such violation? Actually, it’s really simple: pHead->prev->next != pHead. If this inequality is found in dequeue, the fix_list auxiliary procedure is invoked: void fix_list( node_type * pTail, node_type * pHead ) { // pTail and pHead are already protected by Hazard Pointers node_type * pCurNode; node_type * pCurNodeNext; typename gc::template GuardArray<2> guards; pCurNode = pTail; while ( pCurNode != pHead ) { // Till we reach the head pCurNodeNext = guards.protect(0, pCurNode->m_pNext, node_to_value() ); if ( pHead != m_pHead.load(std::memory_order_relaxed) ) break; pCurNodeNext->m_pPrev.store( pCurNode, std::memory_order_release ); guards.assign( 1, node_traits::to_value_ptr( pCurNode = pCurNodeNext )); } } [taken from the cds::intrusive::OptimisticQueue class of the libcds library] fix_list searches the queue from the tail to the head, using the obviously correct pNext references, and corrects pPrev. The violation of the list from the head to the tail (prev pointers) is possible rather due to delays, not due to heavy load. Delay is a displacement or an interrupt of a thread by the operating system. Take a look at the following code for OptimisticQueue::enqueue: bool enqueue( value_type& val ) { node_type * pNew = node_traits::to_node_ptr( val ); typename gc::template GuardArray<2> guards; back_off bkoff; guards.assign( 1, &val ); node_type * pTail = guards.protect( 0, m_pTail, node_to_value()); while( true ) { // Form a direct list – from the tail to the head pNew->m_pNext.store( pTail, std::memory_order_release ); // Trying to change the tail if ( m_pTail.compare_exchange_strong( pTail, pNew, std::memory_order_release, std::memory_order_relaxed )) { /* Form a reversed list – from the head to the tail. The operating system can interrupt (displace) us here. As a result, pTail can be displaced (dequeue) from the queue (but we should not be afraid of this, as pTail is guarded by the Hazard Pointer; thus, it’s non-removable) */ pTail->m_pPrev.store( pNew, std::memory_order_release ); break ; // Enqueue done! } /* CAS is unsuccessful — pTail has changed (keep in mind the signature of CAS in C++11: the first element is passed by the reference!) Guard the new pTail with the Hazard Pointer */ guards.assign( 0, node_traits::to_value_ptr( pTail )); // High contention – let’s step back bkoff(); } return true; } It turns out that we are optimistic: we have built the pPrev list (the most important one for us), hoping for success. If we find a mismatch between the direct and the reversed lists, we’ll have to spend time on conforming them (run fix_list). So, what’s the bottom line? Both enqueue and dequeue have one CAS each. The price paid for this is running fix_list when the violation of the list is detected. Large or small is the price — the experiment will tell us. You can find the code in the cds/intrusive.optimistic_queue.h file, the cds::intrusive::OptimisticQueue class of the libcds library. Wait-Free Queue To cover the subject of the classic queue in full, we should mention the wait-free queue algorithm. Wait-free is the most strict requirement among others. It says that the execution time of the algorithm must be finite and predictable. In practice, wait-free algorithms are often (surprise!) much inferior in performance to their less strict brothers, lock-free and obstruction-free. But they surpass the latter in number and code complexity. The structure of many wait-free algorithms is pretty standard: instead of performing an operation (enqueue/dequeue in our case), they declare it at first – store the operation descriptor with arguments in some publicly accessible shared storage– and then start helping concurrent threads. They browse descriptors in the storage and try to perform what is written in them. As a result, several threads perform the same work at high load, and only one of them will be the winner. The complexity of the implementation of such algorithms in C ++ is mainly how to implement this storage and how to get rid of the memory allocation for descriptors. The libcds library has no implementation of the wait-free queue, as the authors of the queue provide in their research quite disappointing data on its performance. Test Results In this article, I decided to provide test results of the provided above algorithms. Tests are synthetic, the test machine is the dual-processor Debian Linux, Intel Dual Xeon X5670 2.93 GHz, 6 cores per processor + hyperthreading, a total of 24 logical processors. At the time of the test, the machine was almost free — idle at 90%. The compiler is GCC 4.8.2, the optimization is -O3 -march=native -mtune=native. The tested queues are from the cds::container namespace. Thus, they are not intrusive, which means that memory allocation is performed for each element. We’ll compare them to the standard implementations of std::queue<T, std::deque<T>> and std::queue<T, std::list<T>> to the synchronization by mutex. T type is a structure of two integers. All lock-free queues are based on the Hazard Pointer. Endurance Test This test is quite specific. There are 10 million enqueue/dequeue pairs of operations. In the first part, the test performs 10 million enqueue. 75% of operations are enqueue, the remaining 25% are dequeue (i.e. there are 10 million enqueue and 2.5 million dequeue in the first part). In the second part, dequeue is performed 7.5 million times, till the queue becomes empty. The idea in the design of this test was as follows: minimize the impact of the cache allocator, if, of course, the allocator has cache. Absolute data is the time of the test: What can we say? The first thing that catches your eye is that the locked std::queue<T, std::deque<T>> turned out to be the fastest one. How come? I think that the whole thing is in working with memory: std::deque allocates memory in blocks of N elements, and not for each element. This suggests that we should eliminate the impact of the allocator in tests, as it brings quite long delays. Besides, it usually has mutex(es). Well, libcds has intrusive versions of all containers that do not allocate memory for their elements; you should try to test them. As for our lock-free queues, it is clear that all the considered optimizations that we looked at with regard to MSQueue have borne fruit, although they’re not that great. Producer/Consumer Test The test is quite practical. There are N producers and N consumers of the queue. Total performed 10 million write operations, and 10 million read operations, respectively. The number of threads in charts is the sum of the number of threads of consumers and producers. Absolute data is the time of the test: Lock-free queues behave more decently here. The winner is OpimisticQueue, which means that all the assumptions laid in its structure proved to be correct. This test is closer to reality, as there is no mass concentration of elements in the queue. That’s why, I think, internal optimizations of the allocator come into play. Well, it’s all right. In the end, the queue is made not for a huge concentration, but to buffer bursts of activity, and the normal state for a queue is the absence of elements in it. Bonus About the Stack Since we’re talking about tests… Between this and the previous article about lock-free stacks, I implemented the elimination back-off for the Treiber's stack. The implementation itself, or rather the path from the description of the approach/[pseudo-] code to the finished product in C++, deserves a separate article (that is most likely will never be written, as there would be too much code in it). In fact, the result was what the authors of the elimination back-off wrote, but if you look at the code – it’s completely different. So far it’s only available in the libcds repository. I will also provide the results of the synthetic tests. The test machine is the same. Producer/Consumer test: some threads write to the stack (push), while others read (pop). There’s the same number of consumers and producers, their total number is the number of threads – 10 million (that is, 10 million push and 10 million pop). For standard stacks, the synchronization is performed by the mutex. The absolute time of the test: I think the chart speaks for itself. What provided such performance growth of the elimination back-off? Seemingly due to the fact that push/pop annihilate each other. But if we look at the internal statistics (all classes of containers in libcds have their own internal statistics, disabled by default), we’ll see that only 10-15 thousand push/pop annihilated of 10 million (together with 64 threads), which is about 0.1%, while the total number of attempts (meaning the number of entries to the elimination back-off) is about 35 thousand! It turns out that the main advantage of the elimination back-off is that some threads fall asleep when the load is too high (in the provided examples, the sleep of a passive thread in the elimination back-off lasts about 5 milliseconds), thus, automatically reducing the overall load on the stack. Summary We have reviewed the classic lock-free queue, represented as a list of elements. Such queue is characterized by the presence of two points of concurrency – the head and the tail. We have considered the classic algorithm of Michael and Scott, as well as some of its improvements. I hope that the considered optimizations have been interesting for you and can be useful in everyday life. According to the test results, it can be concluded that despite the fact that queues are lock-free, the magic CAS did not give any special performance gain. Therefore, it is necessary to look for some other approaches to abandon the bottlenecks (the head and the tail) and somehow parallel the work with queues. That’s exactly what we are going to talk about in the next article.
https://kukuruku.co/hub/cpp/lock-free-data-structures-yet-another-treatise
CC-MAIN-2022-33
refinedweb
4,418
62.58
#include "config.h" #include <stddef.h> #include <stdbool.h> #include "mutt/lib.h" #include "config/lib.h" #include "core/lib.h" #include "debug/lib.h" #include "lib.h" #include "mutt_menu.h" Go to the source code of this file. Dialog dialog.c. Find the parent Dialog of a Window. Dialog Windows will be owned by a MuttWindow of type WT_ALL_DIALOGS. Definition at line 46 of file dialog.c. Display a Window to the user. The Dialog Windows are kept in a stack. The topmost is visible to the user, whilst the others are hidden. When a Window is pushed, the old Window is marked as not visible. Definition at line 66 of file dialog.c. Hide a Window from the user. The topmost (visible) Window is removed from the stack and the next Window is marked as visible. Definition at line 98 of file dialog.c. Listen for config changes affecting a Dialog - Implements observer_t. Definition at line 131 of file dialog.c. Create a simple index Dialog. Definition at line 165 of file dialog.c. Destroy a simple index Dialog. Definition at line 209 of file dialog.c.
https://neomutt.org/code/dialog_8c.html
CC-MAIN-2021-10
refinedweb
189
64.88
regex remove 3 characters before and after / looking for a simple regular expression for this, This is the input: This is a sample text eiwen34EDJ/VUFercsFIR/GRSnrr Output: eiwen34 ercs nrr I want to remove 3 characters before and after '/'. i.e. In terms of my example, EDJ/VUF and FIR/GRS should be removed. I was searched for many questions in here but didn't find the solution for it, please help me. 2 answers - answered 2017-11-14 23:52 KevBot You can use: /[a-z]{3}\/[a-z]{3}/gi This checks for 3 characters before and after the slash and replaces with a space: var string = 'eiwen34EDJ/VUFercsFIR/GRSnrr'; var newString = string.replace(/[a-z]{3}\/[a-z]{3}/gi, ' '); console.log(newString); - answered 2017-11-15 01:11 Allan You can also use the following regex replacement, where you don't need to put it in case insensitive. /[A-z]{3}\/[A-z]{3}/g? - Need a little twitch in my regex to get all the numbers How to fix the pattern to get the all listed numbers. Currently what I'm getting with the way I've tried is given below: import re content=''' (555) 567-4523, 555 567 4523, (555) 567 4523, 555-567-4623 +91-8071805024, +91-11-2352280 ''' print(re.findall(r"\+?.\d+.-?\(?\d+.\)?-?\s?\w*.-?\s?\w*.",content)) Result I'm getting: ['(555) 567-4523,', ' 555 567 4523, (', ' 567 4523, 555-567-', '+91-8071805024, +91-', '11-2352280'] Result I'm after: (555) 567-4523, 555 567 4523, (555) 567 4523, 555-567-4623 +91-8071805024, +91-11-2352280 - Regex and negative look ahead I am trying to create some regex patterns that match a website domain. The rules are as below : For France, the URL pattern must have /fr-fr (followed by anything else) after the domain name, ie For Germany, the URL pattern must have /de-de (followed by anything else) after the domain name, ie And for all other countries, the URL pattern can be the root domain (ie) OR anything EXCEPT fr-fr and de-de after the domain name I have these Regex patterns for France and Germany which work fine : https?://.*?/(?i)FR-FR.\* and https?://.*?/(?i)DE-DE.\* However, I am struggling to get a Regex pattern that will match the root domain and other domains (such as with anything after it) but EXCLUDE /fr-fr.* and /de-de.* I have tried a negative lookahead, such as this (for example, NOT france) : https?://.*?/(?!fr-fr).\* But this does not seem to work, and matches against URLs that it should not. Maybe I am missing something obvious. Any help very much appreciated. - Pretty URL internally redirect to ugly URL Refer to that link: how to configure apache for xampp on mac osx lion to use mod_rewrite My question: This helped me a lot, but what if link looks like this: How to change it to this: and that the second rule redirect pretty url back. RewriteCond %{THE_REQUEST} ^[A-Z]{3,}\s/+index.php\?([^&=]+)=([^&\ ]+)&([^&=]+)=([^&\ ]+)(\ |$) [NC] RewriteRule ^ /%2/%4/? [R=302,L]. I think this is it. RewriteRule ^(.+)/?$ /category.php?category=$1&id=$2 [QSA,NC,L] is not working. I have to rewrite ^(.+)/?$, but I don't know how. My code: Options +FollowSymLinks -MultiViews RewriteEngine On RewriteBase / # Redirect /category.php?category=something to /something RewriteCond %{THE_REQUEST} ^[A-Z]{3,}\s/+index\.php\?([^&=]+)=([^&\ ]+) (\ |$) [NC] RewriteRule ^ /%2? [R=302,L] # Internally forward /something to /category.php?category=something RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^([^/.]+)/?$ /index.php?page=$1 [QSA,NC,L] # Redirect /category.php?category=something&id=ID to /something/ID RewriteCond %{THE_REQUEST} ^[A-Z]{3,}\s/+index\.php\?([^&=]+)=([^&\ ]+)&([^&=]+)=([^&\ ]+)(\ |$) [NC] RewriteRule ^ /%2\/%4/? [R=302,L] # Internally forward /something/ID to /category.php?category=something&id=ID RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^([^/.]+)/?$ /index.php?page=$1 [QSA,NC,L] -. - Multi-character warnings with only one character involved I get this error when trying to run this function: void lcd_print(uint8_t* s) { uint8_t lim = 0, i = 0; lim = strlen((char*)s); for(i=0; i<lim; i++) { if (s[i] == '°') { lcd_putch(cg_0); } else { lcd_putch(*(s+i)); } } } I got the [-Wmultichar] warning at the line when compare with '°'. But on the following code is working without warnings: for(i=0; i<strlen(buff); i++) { if (buff[i]=='°') { lcd_putch(cg_0); } else if (buff[i]=='+') { lcd_putch(cg_1); i = i+2; } else { lcd_putch(buff[i]); } } and buff is declared as a char. EDIT: also declaring s as a char* does not change the outcome. What am I missing under my nose? - Fortran: Error "end of file" while read character in namelist I am writing a fortran appication, and i get a problem. When i define a namelist as following: CHARACTER(100) :: INPUT_DIR, OUTPUT_DIR, ROOT_DIR NAMELIST /IODIR/ INPUT_DIR, OUTPUT_DIR and then I read IODIR from file as: READ(FUNIT,IODIR, ERR=99) the data in file is: &IODIR INPUT_DIR="Input", OUTPUT_DIR="Output" / But it get error "End of file". It seems like the length of variables is longer than their deffined in file. I don't know how to set delimiter for the character variable, or read an unknown character in name list. I use GNU fortran to build. I need some helps! Thanks you! - Characters on Pycharm are displayed incorrectly Here is what I see when I launch PyCharm: Incorrect characters image As you can see the line numbers are replaced with 0's in places and other characters from what it actually should be. Any help would be appreciated.
http://quabr.com/47297199/regex-remove-3-characters-before-and-after
CC-MAIN-2017-47
refinedweb
920
62.98
The Preprocessorsuggest change Before the C compiler starts compiling a source code file, the file is processed in a preprocessing phase. This phase can be done by a separate program or be completely integrated in one executable. In any case, it is invoked automatically by the compiler before compilation proper begins. The preprocessing phase converts your source code into another source code or translation unit by applying textual replacements. You can think of it as a “modified” or “expanded” source code. That expanded source may exist as a real file in the file system, or it may only be stored in memory for a short time before being processed further. Preprocessor commands start with the pound sign (”#”). There are several preprocessor commands; two of the most important are: - Defines: `#define` is mainly used to define constants. For instance, #define BIGNUM 1000000. Because `#define` just does advanced search and replace, you can also declare macros. For instance: #define ISTRUE(stm) do{stm = stm ? 1 : 0;}while(0) // in the function: a = x; ISTRUE(a); becomes: // in the function: a = x; do { a = a ? 1 : 0; } while(0); At first approximation, this effect is roughly the same as with inline functions, but the preprocessor doesn't provide type checking for `#define` macros. This is well known to be error-prone and their use necessitates great caution. Also note here, that the preprocessor would also replace comments with a blanks as explained below. - Includes: #include. - Logic operations: #if defined A || defined B variable = another_variable + 1; #else variable = another_variable * 2; #endif will be changed to: variable = another_variable + 1; if A or B were defined somewhere in the project before. If this is not the case, of course the preprocessor will do this: variable = another_variable * 2; This is often used for code, that runs on different systems or compiles on different compilers. Since there are global defines, that are compiler/system specific you can test on those defines and always let the compiler just use the code he will compile for sure. The Preprocessor replaces all comments in the source file by single spaces. Comments are indicated by // up to the end of the line, or a combination of opening /* and closing */ comment brackets.
https://essential-c.programming-books.io/the-preprocessor-41d9024326ff4f25a0d71fda62b23435
CC-MAIN-2021-25
refinedweb
369
53.21
CodeGuru Forums > Visual C++ & C++ Programming > C++ and WinAPI > Borland C++ Builder, Play Sound PDA Click to See Complete Forum and Search --> : Borland C++ Builder, Play Sound past September 6th, 2008, 11:32 AM Hello there, I am writing a code for a game. simply, I need to play a "wav" sound whenever the player click an object on the screen. I am using Borland C++ Builder 5. I have tried this by including: #include <MPlayer.hpp> TMediaPlayer *Mp; in my header file and Mp->filename = "sound27.wav"; Mp->open(); Mp->play(); in my main code; I can compile and build the project. but whenever i play the game, when it reaches to the point that it should play the sound, it hangs !! appreciate your help. what should i do to solve this problem.. thanks alot : ) Jehjoa September 7th, 2008, 03:49 AM I've never used Borland's compiler or the TMediaPlayer class, but it looks like you're trying to use a class without instantiating an object of that class. At the moment, you have a pointer to a TMediaPlayer object, not the object itself. What's worse, since you haven't initialized the pointer, it points to random data. That means that, when you do Mp->filename = "sound27.wav"; it writes that string to a random location in memory, thereby causing the crash/hang up. Moral of the story: Always initialize your pointers! In this case, if you had done Mp = 0; before doing Mp->filename = "sound27.wav"; it would have thrown an access violation exception, which would've at least given you an idea where to look for the mistake. Now, how to solve the mistake? You can do one of 2 things: Create the object dynamically. Remember that I said that your pointer isn't initialized? The Mp variable was meant to point to an object of type TMediaPlayer, but you never created that object! However, you also didn't set the pointer to 0, which means its current value is the value that happened to be in memory when the bytes used to store the pointer were allocated, causing your crash as explained above. So you need to create a TMediaPlayer object, and have Mp point to it, like so: Mp = new TMediaPlayer;. When you're done with the object, destroy it by using: delete Mp;. The TMediaPlayer object is now destroyed, but the pointer still points to where it used to be (which is now random data again)! So set Mp to 0 to prevent the same kind of crashes from happening again. Create the object statically on the stack. This is probably the best do to for you, since it seems you're relatively new to C++ programming. Whay you do is change the TMediaPlayer *Mp; line in your header to TMediaPlayer Mp; (without the *). Mp is now the actual object itself, not a pointer to it. Use the object by writing: Mp.filename = "sound27.wav"; Mp.open(); Mp.play(); Notice I replaced the ->'s with .'s!! I strongly advise you to read through some C++ texts again... The mistake you made is really a beginners mistake (although it happens to everyone who doesn't pay close attention). Even I could spot and explain it, and I don't have the language under my belt completely, either. ;) eero_p September 8th, 2008, 04:45 AM Haven't used the TMediaPlayer -component, but the description of method says: C++ __fastcall Open(); Use Open to open a multimedia device. The multimedia device type must be specified in the DeviceType property before a device can be opened. past September 8th, 2008, 11:00 AM Dear Jejoha and eero I am a beginner in C++, and I dont have any experience in it, I need to use it for a project. Thanks alot for your replies. I tried what you proposed but it gives me error. I tried to create the object dynamically as you proposed, it gives me the error as follow when compiling: "could not find a match for TMediaPlayer::TMediaPlayer()'. then I replaced "Mp = new TMediaPlayer;" with "Mp = new TMediaPlayer(this);" then I run it agian and this time it worked but when I am playing the game, when it reaches to where that it should play the sound, the program stops and gives the below error: "Project raised exception class EWin32Error with message "Win32 Error. Code:1400 invalid window handle'. Process stopped. Use step or Run to continue." this error appears on the line: Mp->Play(); ... Then I tried to create the object staticlly,it gives me following errors while compiling: - [C++ Error] ExperimentThread.h(23): E2459 VCL style classes must be constructed using operator new ==> on "TMediaPlayer Mp;" - [C++ Error] ExperimentThread.cpp(31): E2279 Cannot find default constructor to initialize member 'ExperimentThread::Mp' ==>at brgining of my main code I also included: Mp->DeviceType = dtWaveAudio; as a device to play the wav. before play thanks for you help .. appreciate it ! eero_p September 9th, 2008, 02:57 AM Try with: TMediaPlayer* Mp = new TMediaPlayer(this->Handle); codeguru.com
http://forums.codeguru.com/archive/index.php/t-460609.html
crawl-003
refinedweb
845
72.16
In a previous post, I talked about HttpHandlers – an underused but incredibly useful feature of ASP.NET. Today I want to talk about HttpModules, which are probably more common than HttpHandlers, but could still stand to be advertised a bit more. HttpModules are incredibly easy to explain, so this will hopefully be a short-ish post. Simply put, HttpModules are portable versions of the global.asax. So, in your HttpModule you’ll see things like BeginRequest, OnError, AuthenticateRequest, etc. Actually, since HttpModules implement IHttpModule, you actually only get Init (and Dispose if you have any cleanup to do). The Init method passes in the HttpApplication which lets you hook into all of those events. For example, I have an ErrorModule that I use on most projects: using System; using System.Web; using log4net; namespace Fuel.Web { public class ErrorModule : IHttpModule { #region IHttpModule Members public void Init(HttpApplication application) { application.Error += new EventHandler(application_Error); } public void Dispose() { } #endregion public void application_Error(object sender, EventArgs e) { //handle error } } } Now, the code in my error handler is pretty simple: HttpContext ctx = HttpContext.Current; //get the inner most exception Exception exception; for (exception = ctx.Server.GetLastError(); exception.InnerException != null; exception = exception.InnerException) { } if (exception is HttpException && ((HttpException)exception).GetHttpCode() == 404) { logger.Warn(“A 404 occurred”, exception); } else { logger.Error(“ErrorModule caught an unhandled exception”, exception); } I’m just using a log4net logger to log the exception, if it’s a 404 I’m just logging it as a warning. You can do this just as easily with a global.asax, but those things aren’t reusable across projects. That of course means that you’ll end up duplicating your code and making it hard to manage. With my ErrorModule class, I just put it in a DLL, drop it in my bin folder and add a couple lines to my web.config under <system.web>: <httpModules> <add name=”ErrorModule” type=”Fuel.Web.ErrorModule, Fuel.Web” /> </httpModules> And voila, I have a global error in place. In almost all cases, you should go with HttpModules over global.asax because they are simply more reusable. As another example, my localization stuff uses an HttpModule as the basis for adding a multilingual framework to any application. Simply drop the DLL in the bin and add the relevant line in your web.config and you’re on your way. Here’s the important code from that module: public void Init(HttpApplication context) { context.BeginRequest += new EventHandler(context_BeginRequest); } public void Dispose() {} private void context_BeginRequest(object sender, EventArgs e) { HttpRequest request = ((HttpApplication) sender).Request; HttpContext context = ((HttpApplication)sender).Context; string applicationPath = request.ApplicationPath; if(applicationPath == “/”) { applicationPath = string.Empty; } string requestPath = request.Url.AbsolutePath.Substring(applicationPath.Length); //just a function that parses the path for a culture and sets the CurrentCulture and CurrentUICulture LoadCulture(ref requestPath); context.RewritePath(applicationPath + requestPath); } If you are developing a shrink-wrap product, you don’t have a choice but to use HttpModules, because the last thing you want is to ship a global.asax which the user must use, overwriting the code in his own global.asax. The only time you want to use Global.asax is when using OutputCaching with the VaryByCustom property. As far as I know, the GetVaryByCustomString function _must_ be placed in the global.asax file. Anyways, switching from Global.asax to HttpModules is pretty straightforward. So I encourage you to look at where it makes sense (ie, where you see the potential for reuse across applications) and make it so. Karl. Great article and very clear explanation. We use Microsoft CRM and are trying to override the error trapping mechanism they have (probably in the global.asax). Trouble is this apporach is an alternative to the global.asax but how do we stop the existing error handling from running as well. I expect somehow we need to stop the module passing it on down the chain. Looking up more now…. This is a great article. Following is the MSDN link which provides overview of the entire ASP.Net application life cycle. It also provides the place where handlers are called.. Prasad Thanks for a good explanation of using httpModules. Do you have any samples of using them with Authentication? Sure you can do it using vb.net codiing Can we do this through vb.net coding ?. If we can then how to write the httphandlers in vb coding There is just one big difference that you should be aware of, Application_Start is called multiply times for HttpModule, but only once for Global.asax. This means that for singleton style stuff, it is easier to use the Global.asax I have a UnitOfWorkApplication that inherits from HttpApplication that handles this for me, and I just inherit from that in my Global.asax and I am done with it. Vikas 1 – Yes, an HttpModule seems like the right thing in your case. If you hook into the BeginRequest event, you can do whatever needs doing, and then the normal processing will occur. 2 – You can simply call base.Function(params), or in vb.net use MyBase.Function(), something like: public void override DoSomething(int value) { //put your overriding cod here; base.DoSomething(value); } 3 – Well, you can hook any button event to any button handler even if something else is already hooked into it, there are really two ways to do this, hook them directly into the same function: button1.Click += new ….ButtonClick; button2.Click += new …ButtonClick; OR button1.Click += new …. Button1Click; button2.click += new … Button2Click; and then call Button1Click from Button2click…I think the first approach is better. i want answers of questions please answer 1) I have a running site having 2000 pages and suddenly i required to call a certain single function on each page request. do i need to implement httpmodule for this if yes then how to call that function then do normal processing??? 2)i wanna override a base class functon and then after want to call the base class function what will be the way???? 3)i wanna call button 2 event handler on button1 event handler occasionaly i know all the answers but not sure kindly help me Karl: “As best as I know…There’s a single HttpApplication instance for a given site. This instance has a collection of all the modules as well as the facilities to determine the correct httphandler to use.” Karl, I don’t believe this is the case. The MSDN help says: “_.” In order to deal with multiple requests concurrently, ASP.NET would need to create several instances of this class. Take a look at this article for a bug that can occur if you assume there is only one instance of this class in your application Ken: Thanks for the comment. May I ask what website? This post was really useful for me. I actually found it on another website where someone cut and pasted it all but I found your comment at the bottom so now I can compliment the right person! The one thing that was really useful for me was the reference to HttpException.GetHttpCode(). I was writing my own Error Log code and wanted to catch all the exceptions on the Application.Error event, but I didn’t want to log 404 and 403 errors. I couldn’t figure out how to filter out the unwanted exceptions but now that I’ve got GetHttpCode everything’s great. Thanks again for posting this. It’s made my app that wee bit better. Ankit: A key point is that HttpModules fire for all requests, HttpHandlers only for requests matching the path=”” attribute of the configuration – and there can only be 1 match. As best as I know…There’s a single HttpApplication instance for a given site. This instance has a collection of all the modules as well as the facilities to determine the correct httphandler to use. A request comes in, the Application fires off a number of events which your HttpModules can hook into (like BeginRequest). As best I can tell they are fired in the order you put them in, but I wouldn’t rely on that. At some point, the correct HttpHandler is determine from the requested URL and ProcessRequest is called. Then a series of other events, such as EndRequest are fired. Hi Karl this is agreat article. Can you please describe the relation and sequence of HTTPHandler, HTTPModule, HttpApplication objects Thanks There’s a way to hook into those events, but as far as I know, it’s something of a nightmare. People have a really hard time doing it. It’s something like (which you place in your Init): HttpModuleCollection modules = application.Modules; SessionStateModule module = modules[“Session”] as SessionStateModule; if (module != null) { stateModule.Start += (new EventHandler (this.Session_Start)); stateModule.End += (new EventHandler(this.Session_End)); } The problem, as I understand it, is that there are multiple SessionStateModules loaded, and you need to find the right one for the start and end events. Also, I believe your own httpmodule fires multiple times and you need to find the right one of those too. I have no clue which the “right one” is though If you need Session_Start and/or Session_End events, then you still need Global.asax, isn’t that right? Vinny: As you can see from my example, the sender parameter to the function is actually an HttpApplication object. From it you can get access to the Response object (the same way I got access to the Request object) and redirect away Good alternative to the Global.asax, however I am trying to redirect user to a friendly error page after the error is handled. However, Response.Redirect is not an option from the HttpModule, how were you doing it? Prakash: this isn’t possible using HttpHandlers, mostly ‘cuz the close “event” is a client-side behaviour. Your best bet is to hook into the onbeforeunload javascript event (which is only supported in IE I think) and try to clean up there (possibly be opening up a very small window window.open(“terminate.aspx”) or something like it). It won’t be the most reliable, but I’m pretty sure it’s your only solution. Hi , I want Session to be end whenever the user directly closes the IE Browse X button. Dan: In your httpmodule, check to see if the requested page is GenericError.aspx, and if so, simply don’t redirect, something like: HttpRequest request = Context.Current.Request; if (string.Compare(request.LocalPath, “/GenericError.aspx”, true)) { Context.Current.Response.Write(“An unhandled error occurred”); Context.Current.Response.End(); } //should be safe to redirect to GenericError.aspx I use a similar HttpModule that handles unhandled exceptions from an .aspx.cs page by logging to a text file, logging to the event viewer’s app log, and sending out an email to the website developer and any interested parties. Finally, a user friendly page (GenericError.aspx) is Redirected To. My Page_Load event has no code, so when the page is unavailable, (for instance, the website is down), an unhandled event is generated and my HttpModule is called. Everything is executed as listed above, but the redirection to GenericError.aspx generates another unhandled exception (because the website is down), and thus we enter into a loop. I end up with 14000+ email messages, and so does everyone specified in a web.config list. The only fix I can see is to put a try/catch block in every Page’s Page_Load event. Is there some smarter way to handle this? use the “App_Code” assembly I think. Namespace.Class, App_Code Karl What is the proper syntax for the httpModules add element if you aren’t compiling your webapp into a DLL? I tried moving some of my global.asax code into a class but I’m uncertain what the proper syntax would be for, as an example, MyHttpModule.cs in the App_Code folder. Cameron: Most stand-alone libraries or whatnot require some type of configuration. It’s true that my example here is a little on the slim – so you’re only benefiting slightly. It’s generally easier to add a custom configuration section and add a few lines of code, than it is to have to rewrite somethign though. This is neat. And thanks for explaining it in a down-to-earth manner! My only question is we log our errors. If your error handler has something like ErrorClass.LogError(blah blah); then an HttpModule may not be much more portable than Global.asax because the developer is still going to have database connections and whatnot to configure. Good article. However, another useful and just as easy to implement solution for creating useful, reusable code at the application level is to create a base class (or hierarchy of them depending on your needs) to sit between your HttpApplication base class and your front end Global classes. Dan: I’m pretty sure you can safely put this code in the init function itself. It fires more or less around the same timeframe as the ApplicationStart for the global.asax – I can’t remember which fires first, but the app/site is at the same state in both cases. Great article. The error handling is very clear. What about tasks that we’re doing in Application_Start(…)? How can we port these to the IHttpModule? – Thanks, Dan ELMAH (a application-wide error logging module that is completely pluggable) is a pretty good example on HttpModules:
http://codebetter.com/karlseguin/2006/06/12/global-asax-use-httpmodules-instead/
CC-MAIN-2017-30
refinedweb
2,243
57.87
We are a USDA approved facility able to handle your birds from outside of the US. If you are planning to take a trip outside of the US please contact us to discuss the process of shipping, processing, and receiving the trophy shipment. Birds coming in from foreign countries have to be de-toxed per USDA guidelines. You CANNOT bring in birds through an approved facility and then take them to another establishment or keep them at your home. They have to go where the permit is for disposition/treating/mounting or returning the treated skins to you if desired. Failure to follow the set guidelines can result in seizure and/or fines. It's treated as seriously as smuggling any contraban into the country. Due to a recent outbreak, Mexico is considered to be affected with highly pathogenic avian influenza (HPAI). In response, the APHIS placed restrictions on the importation of avian (bird and poultry) products from Mexico; specifically, imported avian products must be accompanied by an import permit issued by Veterinary Services (VS). In most instances, the import permit will require that the product be mitigated for HPAI prior to importation. Bird trophies from Mexico either require processing to inactivate the HPAI virus or be consigned to an APHIS-approved establishment for processing (mounting). Bird trophies that are fully taxidermy finished (mounted) continue to be unrestricted. Game birds and waterfowl that are being imported as trophies must be sent to a taxidermy facility that has been approved by the U.S. Department of Agriculture's (USDA) Veterinary Services. A list of approved taxidermists in a particular state can be obtained from the Animal Products Staff, National Center for Import-Export telephone (301) 851-3300. Email: askNCIE.Products@aphis.usda.gov. For more information and guidelines see the Animal and Plant Inspection Service (APHIS) manual on Trophies. Bones, horns and hoofs that are imported as trophies may be imported without further restrictions if they are clean, dry and free of undried pieces of hide, flesh, or sinew. Many animals, game birds, products and byproducts from such animals and game birds are prohibited, or allowed only restricted, entry into the United States. Specific requirements vary according to the country of export. For more information about importations by country, please call the USDA, Animal and Plant Health Inspection Service(APHIS), the National Import-Export Center, tel. (301)851-3300, fax (301)734-8226. Since hours of service and availability of officers from the other agencies involved may vary from port to port, you are strongly urged to check with your anticipated port of arrival before importing a pet or other animal. This will assure expeditious processing and reduce the possibility of unnecessary delays. Game birds, deer, moose, elk, bison, wild sheep and goats from Canada are admissible into the U.S. at this time. Domestic sheep and goats are still prohibited without an import license. In order to recieve birds form Mexico, the approved establishment must be listed in VSPS as a "HPAI" approved facility. Only bird carcasses, skins, and capes will be processed on site. Bird carcases will be thawed on site and the hides/skins tanned. Trimmings will be incinerated. Area Offices and VS field personnel should note the current requirements for bird materials from Mexico:Bird and poultry products and by-products must be accompanied by an import permit. The import permit will specify HPAI mitigation measures.The permit application, Form VS 16-3, can be found at health/permits/.On the application, the hunter should be listed as the importer. Then, add that the application is “c/o” a contact person at the approved establishment, and give the approved establishment’s name and address. The hunter should sign the application. Unprocessed bird trophies may be consigned to approved establishments for processing (mounting) if accompanied by an import permit. Unprocessed hunter- harvested game bird meat for human consumption is prohibited. Website is Powered by: LiveEdit and Elite Media
http://flywaytaxidermy.com/pages/USDA/
CC-MAIN-2018-05
refinedweb
660
55.03
GuacGuac Guac is a package that provides monadic do-notation, inspired by Haskell, in Python. Monads provide "programmable semicolons" by which the behavior of programs can be changed. A common, useful monad is the list monad, which represents non-deterministic computations. The list monad makes it very easy to write backtracking searches. ExampleExample Here's an example that computes all the possible ways you can give $0.27 change with pennies, nickels, dimes, and quarters: from guac import * @monadic(ListMonad) def make_change(amount_still_owed, possible_coins): change = [] # Keep adding coins while we owe them money and there are still coins. while amount_still_owed > 0 and possible_coins: # "Nondeterministically" choose whether to give another coin of this value. # Aka, try both branches, and return both results. give_min_coin = yield [True, False] if give_min_coin: # Give coin min_coin = possible_coins[0] change.append(min_coin) amount_still_owed -= min_coin else: # Never give this coin value again (in this branch!) del possible_coins[0] # Did we charge them the right amount? yield guard(amount_still_owed == 0) # Lift the result back into the monad. yield lift(change) print(make_change(27, [1, 5, 10, 25])) Running this program will print a list of lists, each list containing a different set of numbers that add up to 27. You can imagine lots of cool ways this could be used, from unification to parsing! If you have ever used Python's asyncio package, this may feel familiar. That's because asyncio is actually a monad! Of course, they don't formalize it as such, but it could be implemented as one, and it uses coroutines in the exact same way. Unlike asyncio, which simply continues computation when a result is available, this library makes it possible to repeat computation from arbitrary yields in the coroutine. Building Your Own MonadsBuilding Your Own Monads Guac comes with a few simple monads, but it's super easy to implement your own monad. You don't need to worry about any of the coroutine logic---Guac handles that for you. You just have to implement two simple functions, lift and bind: class ListMonad(Monad): @staticmethod def lift(x): return [x] @staticmethod def bind(m, f): result = [] for elem in m: result += f(elem) return result Your definitions should ideally follow the monad laws, though the lack of types can make this a bit janky to reason about. Unspecialized Monadic ComputationsUnspecialized Monadic Computations You might have noticed that you use the @monadic decorated to turn a coroutine into a function that runs the monad. To specialize a compuation to a specific monad instance, you pass that instance as an argument to the decorator. Otherwise, you create an unspecialized monadic computation that will inherit its instance from the caller. Here's the implementation of the guard function used above: @monadic def guard(condition): if condition: yield unit() else: yield empty() Handy helper functions like unit and empty are defined by Guac. Some functions require a little bit more than a monad. For example, empty must be implemented in addition to lift and bind on your monad class to use these functions. UsageUsage RequirementsRequirements Guac requires an implementation of Python 3 that supports copy.deepcopy on generator functions. The most common distribution, CPython, is lacking this feature, but pypy implements it! InstallationInstallation If you already have the pypy distribution of Python 3, you can install this package with pip: pypy3 -m pip install guac If you don't yet have pypy, you can download and install it here. Alternatively, if you have Homebrew on macOS, you can run this: brew install pypy3
https://libraries.io/pypi/guac
CC-MAIN-2021-21
refinedweb
586
53.81
Problems to select hunt group in ACD CME 4.1 When I configure ACD in CME 4.1 the next message is display CME(config-app-param)# param service-name queue Jun 12 19:41:13.219: //-1//HIFS:/hifs_ifs_cb: hifs ifs file read succeeded. size=35485, url=flash:app-b-acd-aa-2.1.2.2.tcl Jun 12 19:41:13.219: //-1//HIFS:/hifs_free_idata: hifs_free_idata: 0x715DF5E8 Jun 12 19:41:13.219: //-1//HIFS:/hifs_hold_idata: hifs_hold_idata: 0x715DF5E8 Warning: parameter service-name has not been registered under aa namespace CME(config-app-param)# after I select dial by options menu the ephone-hunt groups are ignored, I hear the message the correspond to the voice mail number saying the the number does not exist. The option dial by extension works fine and also when I dial 0 option (operator) works fine I guess something wrong happend with the file app-b-acd-2.1.2.2.tcl because the result is the same even though I don't put service queue flash:app-b-acd-2.1.2.2.tcl param aa-hunt3 1003 param queue-len 5 param aa-hunt4 1004 param queue-manager-debugs 1 param number-of-hunt-grps 4 param aa-hunt2 1002
https://supportforums.cisco.com/discussion/10387026/problems-select-hunt-group-acd-cme-41
CC-MAIN-2017-17
refinedweb
209
56.55
CS::Animation::iBodySkeleton Struct Reference This class holds the physical description of the skeleton of an CS::Mesh::iAnimatedMesh. More... #include <imesh/bodymesh.h> Detailed Description This class holds the physical description of the skeleton of an CS::Mesh::iAnimatedMesh. For each relevant bone of the skeleton, one has to define an CS::Animation::iBodyBone that will hold the colliders, joint and properties of the bone. Subtrees of the skeleton are defined through the CS::Animation::iBodyChain object. Definition at line 122. Get an iterator over all bones in this body. Get an iterator over all body chains in this body. Return the name of the body skeleton. Get the skeleton factory associated with this body skeleton. Populate this body skeleton with default body chains. This method will try to create as less body chains as possible, covering all the bones that contains at least one collider. The name of the chains that are created are of the format 'default_' + root_bone_name. - Note: - This method should work well when called after PopulateDefaultColliders(). Populate this body skeleton body bone. Remove a body chain. The documentation for this struct was generated from the following file: - imesh/bodymesh.h Generated for Crystal Space 2.1 by doxygen 1.6.1
http://crystalspace3d.org/docs/online/api/structCS_1_1Animation_1_1iBodySkeleton.html
CC-MAIN-2015-22
refinedweb
206
60.82
OK, time for a new competion! The last one was perhaps finished and judged a little abruptly, but I blame force majeure. :) This time the task will be to write an AI for a well-known and simple board game -- connect four. You will each write your own AI and your programs will face each other in a virtual tournament until one program is the winner! I thought it'd be fun with a virtual tournament, and connect four is a relatively simple game to write AI for. The function prototype will be the following: The board argument will contain the playing field. The source code for this class will be provided below.The board argument will contain the playing field. The source code for this class will be provided below.Code: namespace Nickname{ int getMove( const Board& board, char player ); } The player argument will be either 'X' or 'O' depending on what character belongs to you. The function will return the column where you wish to move your next piece. Rules: - Don't take long time to determine your move, it's impossible to set any fixed time constraints, but the moves should be made "instantly". :) - The program who defeats the other programs in a tournament will be the winner. Participation As this contest requires a little more work than the others, please announce in this thread if you are willing to participate. Then, when you know the number of participants you can decide if it's worth the time and effort to write the program. Here's a list of interested persons The persons in bold have submitted: - Jeremy G - PJYelton - LuckY - jlou - goodvelo - vasanth - Perspective If you are on this list but won't submit anything, please let me know. You don't have to write a super-advanced algorithm. You could just do: DeadlineDeadlineCode: namespace ImLazy{ int getMove( const Board& board, char player ){ return std::rand()%Board::width; } } The deadline is November 10th. Post your submissions to p 0 4 p s t (a) efd.lth.se or by PM I'll confirm each submission I recieve.
http://cboard.cprogramming.com/contests-board/57475-ai-contest-connect-four-printable-thread.html
CC-MAIN-2014-42
refinedweb
351
71.44
Python module dependency analysis tool Project description.10 This is a minor feature release Features modulegraph.find_modules.find_needed_modules claimed to automaticly include subpackages for the “packages” argument as well, but that code didn’t work at all. Issue #9: The modulegraph script is deprecated, use “python -mmodulegraph” instead. Issue #10: Ensure that the result of “zipio.open” can be used in a with statement (that is, with zipio.open(...) as fp. No longer use “2to3” to support Python 3. Because of this modulegraph now supports Python 2.6 and later. Slightly improved HTML output, which makes it easier to manipulate the generated HTML using JavaScript. Patch by anatoly techtonik. Ensure modulegraph works with changes introduced after Python 3.3b1. Implement support for PEP 420 (“Implicit namespace packages”) in Python 3.3. modulegraph.util.imp_walk is deprecated and will be removed in the next release of this package. Bugfixes The module graph was incomplete, and generated incorrect warnings along the way, when a subpackage contained import statements for submodules. An example of this is sqlalchemy.util, the __init__.py file for this package contains imports of modules in that modules using the classic relative import syntax (that is import compat to import sqlalchemy.util.compat). Until this release modulegraph searched the wrong path to locate these modules (and hence failed to find them). 0.9.2 This is a bugfix release Bugfixes - The’ (modulegraph.find_modules.find_modules) 0.9.1 This is a bugfix release Bug fixes Fixed the name of nodes imports in packages where the first element of a dotted name can be found but the rest cannot. This used to create a MissingModule node for the dotted name in the global namespace instead of relative to the package. That is, given a package “pkg” with submodule “sub” if the “__init__.py” of “pkg” contains “import sub.nomod” we now create a MissingModule node for “pkg.sub.nomod” instead of “sub.nomod”. This fixes an issue with including the crcmod package in application bundles, first reported on the pythonmac-sig mailinglist by Brendan Simon. 0.9 This is a minor feature release Features: Documentation is now generated using sphinx and can be viewed at <>. The documention is very rough at this moment and in need of reorganisation and language cleanup. I’ve basiclly writting the current version by reading the code and documenting what it does, the order in which classes and methods are document is therefore not necessarily the most useful. The repository has moved to bitbucket Renamed modulegraph.modulegraph.AddPackagePath to addPackagePath, likewise ReplacePackage is now replacePackage. The old name is still available, but is deprecated and will be removed before the 1.0 release. modulegraph.modulegraph contains two node types that are unused and have unclear semantics: FlatPackage and ArchiveModule. These node types are deprecated and will be removed before 1.0 is released. Added a simple commandline tool (modulegraph) that will print information about the dependency graph of a script. Added a module (zipio) for dealing with paths that may refer to entries inside zipfiles (such as source paths referring to modules in zipped eggfiles). With this addition modulegraph.modulegraph.os_listdir is deprecated and it will be removed before the 1.0 release. Bug fixes: The __cmp__ method of a Node no longer causes an exception when the compared-to object is not a Node. Patch by Ivan Kozik. Issue #1: The initialiser for modulegraph.ModuleGraph caused an exception when an entry on the path (sys.path) doesn’t actually exist. Fix by “skurylo”, testcase by Ronald. The code no longer worked with python 2.5, this release fixes that. Due to the switch to mercurial setuptools will no longer include all required files. Fixed by adding a MANIFEST.in file The method for printing a .dot representation of a ModuleGraph works again. 0.8.1 This is a minor feature release Features: - from __future__ import absolute_import is now supported - Relative imports (from . import module) are now supported - Add support for namespace packages when those are installed using option --single-version-externally-managed (part of setuptools/distribute) 0.8 This is a minor feature release Features: Initial support for Python 3.x It is now possible to run the test suite using python setup.py test. (The actual test suite is still fairly minimal though) Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/modulegraph/0.10/
CC-MAIN-2018-22
refinedweb
746
59.4
rb-libsvm -- Ruby language bindings for LIBSVM This package provides a Ruby bindings to the LIBSVM library. SVM is a machine learning and classification algorithm, and LIBSVM is a popular free implementation of it, written by Chih-Chung Chang and Chih-Jen Lin, of National Taiwan University, Taipei. See the book "Programming Collective Intelligence," among others, for a usage example. There is a JRuby implementation of this gem named jrb-libsvm by Andreas Eger. Note: There exist some other Ruby bindings for LIBSVM. One is named Ruby SVM, written by Rudi Cilibrasi. The other, more actively developed one is libsvm-ruby-swig by Tom Zeng, which is built using SWIG. LIBSVM includes a number of command line tools for preprocessing training data and finding parameters. These tools are not included in this gem. You should install the original package if you need them. It is helpful to consult the README of the LIBSVM package for reference when configuring the training parameters. Currently this package includes libsvm version 3.20. Dependencies None. LIBSVM is bundled with the project. Just install and go! Installation For building this gem from source on OS X (which is the default packaging) you will need to have Xcode installed, and from within Xcode you need to install the command line tools. Those contain the compiler which is necessary for the native code, and similar tools. To install the gem run this command gem install rb-libsvm Usage This is a short example of how to use the gem. require 'libsvm' # This library is namespaced. problem = Libsvm::Problem.new parameter = Libsvm::SvmParameter.new parameter.cache_size = 1 # in megabytes parameter.eps = 0.001 parameter.c = 10 examples = [ [1,0,1], [-1,0,-1] ].map {|ary| Libsvm::Node.features(ary) } labels = [1, -1] problem.set_examples(labels, examples) model = Libsvm::Model.train(problem, parameter) pred = model.predict(Libsvm::Node.features(1, 1, 1)) puts "Example [1, 1, 1] - Predicted #{pred}" If you want to rely on Bundler for loading dependencies in a project, (i.e. use Bundler.require or use an environment that relies on it, like Rails), then you will need to specify rb-libsvm in the Gemfile like this: gem 'rb-libsvm', require: 'libsvm' This is because the loadable name ( libsvm) is different from the gem's name ( rb-libsvm). Author Written by C. Florian Ebeling. Contributors License This software can be freely used under the terms of the MIT license, see file MIT-LICENSE. This package includes the source of LIBSVM, which is free to use under the license in the file LIBSVM-LICENSE. Posts about using SVMs with Ruby
http://www.rubydoc.info/gems/rb-libsvm/1.4.0
CC-MAIN-2017-47
refinedweb
433
58.69
: MyUtils: with DataWeave. However, you can reference its variables. Below is a simple Java class that has a method and a variable. package org.mycompany.utils; public class MyClass { private String foo; public MyClass(String foo) { this.foo = foo; } public String getFoo() { return foo; } } The DataWeave example below first imports the MyClass class, then creates a new instance of the class and calls its instance variable foo. Note you cannot call the private method getFoo() with DataWeave. %dw 2.0 import java!org::mycompany::utils::MyClass output application/json --- { a: MyClass::new("myString").foo } The script produces the following output: { "a":"myString" }
https://docs.mulesoft.com/dataweave/2.1/dataweave-cookbook-java-methods
CC-MAIN-2022-21
refinedweb
102
61.53
Q. C program to check whether a character is vowel or consonant. Here you will find an algorithm and program in C programming language to check whether the given character is vowel or consonant. Explanation : In English, five alphabets A, E, I, O, and U are called as Vowels. In English, all alphabets other than vowels are Consonant. Algorithm to check whether a character is vowel or consonant START Step 1 - Input the alphabets. Step 2 - Check if the alphabets is (a, e, i, o, u) if alphabet is among these then it is vowel. Step 3 - If the alphabets is vowel than print Vowel otherwise print Consonant. STOP C Program to Check Vowel or Consonant #include <stdio.h> int main() { char c='P'; int lc, uc; // evaluates to 1 if variable c is lowercase lc = (c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u'); // evaluates to 1 if variable c is uppercase uc = (c == 'A' || c == 'E' || c == 'I' || c == 'O' || c == 'U'); // evaluates to 1 if c is either lowercase or uppercase if (lc || uc) printf("%c is a vowel.", c); else printf("%c is a consonant.", c); return 0; } Output P is a consonant.
https://letsfindcourse.com/c-coding-questions/c-program-to-check-vowel-or-consonant
CC-MAIN-2022-40
refinedweb
198
70.53
The python string title() method returns a copy of the string in which first characters of all the words are capitalized. The syntax of the method is: str.title(); Key Points : • Parametric values: The title() function doesn’t take any parameters. • Return Type: A title version of the string is returned by the title() function. In other terms, the first character of each phrase is capitalized (if the first character is a letter). • The new string returned is regarded as the title cased. Below is the python program to demonstrate the title() function: str = "the best learning resource for Online Education"; print (str.title()) When we run above the program, the outcome is as follows: The Best Learning Resource For Online Education We’ll be covering the following topics in this tutorial: The title() with apostrophes str = "I'm a developer, Let’s start coding" print(str.title()) When we run above the program, the outcome is as follows: I’M A Developer, Let’S Start Coding The string title() method also capitalizes the first apostrophic message. To solve this problem, regex may be used as follows: Using Regex to Title Case String import re def Tcase(s): return re.sub(r"[A-Za-z]+('[A-Za-z]+)?", lambda mo: mo.group(0)[0].upper() + mo.group(0)[1:].lower(), s) str = "I'm a developer, Let’s start coding" print(Tcase(str)) When we run above the program, the outcome is as follows: I’m A Developer, Let’S Start Coding Below are several other functions that we can use to work with string in Python 3
https://ecomputernotes.com/python/string_title
CC-MAIN-2022-21
refinedweb
268
60.04
KParts #include <mainwindow.h> Detailed Description A KPart-aware main window, whose user interface is described in XML. Inherit your main window from this class and don't forget to call setXMLFile() in the inherited constructor. It implements all internal interfaces in the case of a KMainWindow as host: the builder and servant interface (for menu merging). Definition at line 46 of file mainwindow.h. Constructor & Destructor Documentation Constructor, same signature as KMainWindow. Definition at line 61 of file mainwindow.cpp. - Deprecated: - , remove the name argument and use setObjectName instead Definition at line 68 of file mainwindow.cpp. Destructor. Definition at line 76 of file mainwindow.cpp. Member Function Documentation Definition at line 180 of file mainwindow.cpp. Create the GUI (by merging the host's and the active part's) You must call this in order to see any GUI being created. In a main window with multiple parts being shown (e.g. as in Konqueror) you need to connect this slot to the KPartManager::activePartChanged() signal - Parameters - Definition at line 81 of file mainwindow.cpp. Definition at line 140 of file mainwindow.cpp. Rebuilds the GUI after KEditToolbar changed the toolbar layout. - See Also - configureToolbars() KDE4: make this virtual. (For now we rely on the fact that it's called as a slot, so the metaobject finds it here). Definition at line 173 of file mainwindow.cpp. Called when the active part wants to change the statusbar message Reimplement if your mainwindow has a complex statusbar (with several items) Definition at line 135 of file mainwindow.cpp. The documentation for this class was generated from the following files: Documentation copyright © 1996-2013 The KDE developers. Generated on Wed Dec 4 2013 00:38:52 by doxygen 1.8.5 written by Dimitri van Heesch, © 1997-2006 KDE's Doxygen guidelines are available online.
http://api.kde.org/4.x-api/kdelibs-apidocs/kparts/html/classKParts_1_1MainWindow.html
CC-MAIN-2013-48
refinedweb
305
50.02
Hi guys I'm just about to start with the pi universe. This is my first post. I hope i am asking in a accepted way. I do have 15 years experience on linux, but not a lot in scripting/programming. I am not at all savvy in electrical engineering. I have a Pi Zero WH Rev. 1.1 and connected a RPi Motor Driver Board on top through the 40pin header. I do power it through a 12V Power Adapter from an old QNAP TS209 that goes into VIN and GND on the Motor Driver Board. The Motor Driver Board powers the piz zero (through the header) and two 12V DC motors. AFAIK the motor board can do that ( Onboard 5V regulator, provides power to Raspberry Pi There is a IR receiver on the motor board too, although i do not know which one exactly (TSOP etc.). I have installed Raspian Lite Buster from July (this might be a source of the problem, as I do not know how the development state is for IR, Pi Zero etc. on Debian Buster) I want to use my IR Remote to control the motors with python (but IMPORTANT:without LIRC) and i have succeeded to do so using the waveshare's example code for that HAT board. But it seems as if only about 10% of all key-presses (scancodes) are captured by the python code. The rest is ignored. Often i can't stop the motors after one of several key-presses has started the motor(s). If run my pre-configured ir-keytable -t I do receive a scancode for all key-presses using different remotes (NEC with Terratec Cinergy, RC-5 with Hauppauge remote, NEC with a Yamaha HiFi remote). But when i am running the motor.py code it only captures NEC type scancodes and only about 10% of the key-presses are executed through the code. This is the ir remote python script ( motor.py) from Waveshare demo code files i am using What I find strange is that they already take away some parts of the scancode and only using the last bits. But I think this is not the source of the problems i am facing. How can i test/modify the script in order to fix this? Code: Select all import RPi.GPIO as GPIO import time PIN = 18 PWMA1 = 6 PWMA2 = 13 PWMB1 = 20 PWMB2 = 21 D1 = 12 D2 = 26 PWM = 50 GPIO.setmode(GPIO.BCM) GPIO.setwarnings(False) GPIO.setup(PIN,GPIO.IN,GPIO.PUD_UP) GPIO.setup(PWMA1,GPIO.OUT) GPIO.setup(PWMA2,GPIO.OUT) GPIO.setup(PWMB1,GPIO.OUT) GPIO.setup(PWMB2,GPIO.OUT) GPIO.setup(D1,GPIO.OUT) GPIO.setup(D2,GPIO.OUT) p1 = GPIO.PWM(D1,500) p2 = GPIO.PWM(D2,500) p1.start(50) p2.start(50) def set_motor(A1,A2,B1,B2): GPIO.output(PWMA1,A1) GPIO.output(PWMA2,A2) GPIO.output(PWMB1,B1) GPIO.output(PWMB2,B2) def forward(): GPIO.output(PWMA1,1) GPIO.output(PWMA2,0) GPIO.output(PWMB1,1) GPIO.output(PWMB2,0) def stop(): set_motor(0,0,0,0) def reverse(): set_motor(0,1,0,1) def left(): set_motor(1,0,0,0) def right(): set_motor(0,0,1,0) def getkey(): if GPIO.input(PIN) == 0: count = 0 while GPIO.input(PIN) == 0 and count < 200: #9ms count += 1 time.sleep(0.00006) count = 0 while GPIO.input(PIN) == 1 and count < 80: #4.5ms count += 1 time.sleep(0.00006) idx = 0 cnt = 0 data = [0,0,0,0] for i in range(0,32): count = 0 while GPIO.input(PIN) == 0 and count < 15: #0.56ms count += 1 time.sleep(0.00006) count = 0 while GPIO.input(PIN) == 1 and count < 40: #0: 0.56ms count += 1 #1: 1.69ms time.sleep(0.00006) if count > 8: data[idx] |= 1<<cnt if cnt == 7: cnt = 0 idx += 1 else: cnt += 1 if data[0]+data[1] == 0xFF and data[2]+data[3] == 0xFF: #check return data[2] print('IRM Test Start ...') stop() try: while True: key = getkey() if(key != None): print("Get the key: 0x%02x" %key) if key == 0x18: forward() print("forward") if key == 0x08: left() print("left") if key == 0x1c: stop() print("stop") if key == 0x5a: right() print("right") if key == 0x52: reverse() print("reverse") if key == 0x15: if(PWM + 10 < 101): PWM = PWM + 10 p1.ChangeDutyCycle(PWM) p2.ChangeDutyCycle(PWM) print(PWM) if key == 0x07: if(PWM - 10 > -1): PWM = PWM - 10 p1.ChangeDutyCycle(PWM) p2.ChangeDutyCycle(PWM) print(PWM) except KeyboardInterrupt: GPIO.cleanup(); Important side information: I have manually recompiled WebIOPi (otherwise buster issues again) and can control all python functions as used in the IR script without any delay or problems. With WebIOPi it just works.
https://www.raspberrypi.org/forums/viewtopic.php?f=32&t=249572
CC-MAIN-2019-39
refinedweb
792
66.44
Hi there, is it possible to get the UID for a database query. I tried : import nfc #oeffent den NFC-Reader ueber URAT clf = nfc.Contactless #liest die NFC-Karte aus und uebergibt den Wert in eine Variable tag= clf.connect( for data in tag: test = str(data[0]) print test But the only result is: " TypeError 'Type4Tag' object is not iterable" Thx 4 help Question information - Language: - English Edit question - Status: - Answered - For: - nfcpy Edit question - Assignee: - No assignee Edit question - Last query: - 2015-05-29 - Last reply: - 2015-05-31 The tag identifier is available as "tag.uid" for Type 1, 2, and 4 Tags, and "tag.idm" for a Type 3 Tag. In the next release nfcpy 0.10 it will be accessible as "tag.identifier". Regarding your code example, it is only possible to iterate memory on a Type 1 and 2 Tag as you tried. Type 3 and 4 Tags doe not have a flat memory structure that would allow such mapping.
https://answers.launchpad.net/nfcpy/+question/267549
CC-MAIN-2020-16
refinedweb
167
64.41
You are here: Home ‣ Dive Into Python 3 ‣ Difficulty level: ♦♦♦♢♢ ❝ East is East, and West is West, and never the twain shall meet. ❞ — Rudyard Kipling Generators are really just a special case of iterators. A function that yields values is a nice, compact way of building an iterator without building an iterator. Let me show you what I mean by that. Remember the Fibonacci generator? Here it is as a built-from-scratch iterator: [download fibonacci2.py] PapayaWhip, and it doesn’t inherit from any other class. Class names are usually capitalized, EachWordLikeThis, but this is only a convention, not a requirement.. __init__()Method This example shows the initialization of the Fib class using the __init__ method. class Fib: '''iterator that yields numbers in the Fibonacci sequence''' ① def __init__(self, max): ② docstrings too, just like modules and functions. _. ⁂' Fibclass (defined in the fibonacci2module) and assigning the newly created instance to the variable fib. You are passing one parameter, 100, which will end up as the max argument in Fib’s __init__()method. Fibclass. __class__, which is the object’s class. Java programmers may be familiar with the Classclass, which contains methods like getName()and getSuperclass()to get metadata information about an object. In Python, this kind of metadata is available through attributes, but the idea is the same. On to the next line: class Fib: def __init__(self, max): self.max = max ① __init__()method as an argument. self.max is “global” to the instance. That means that you can access it from other methods. class Fib: def __init__(self, max): self.max = max ① . . . def __next__(self): fib = self.a if fib > self.max: ② __init__()method… _ Now you’re ready to learn how to build an iterator. An iterator is just a class that defines an __iter__() method. [download fibonacci2.py] fibneeds to be a class, not a function. Fib(max)is really creating an instance of this class and calling its __init__()method with max. The __init__()method saves the maximum value as an instance variable so other methods can refer to it later. _. __next__()method is called whenever someone calls next()on an iterator of an instance of a class. That will make more sense in a minute. _: forloop calls Fib(1000), as shown. This returns an instance of the Fibclass. Call this fib_inst. forloop calls iter(fib_inst), which returns an iterator object. Call this fib_iter. In this case, fib_iter == fib_inst, because the __iter__()method returns self, but the forloop doesn’t know (or care) about Now it’s time for the finale. Let’s rewrite the plural rules generator as an iterator. [download plural6.py], 3) LazyRulesclass, open the pattern file but don’t read anything from it. (That comes later.) _' __class__attribute to access the class itself. And now back to our show. def __iter__(self): ① self.cache_index = 0 return self ② __iter__()method will be called every time someone — say, a forloop — calls iter(rules). __iter__()method must do is return an iterator. In this case, it returns self, which signals that this class defines a __next__()method which will take care of returning values throughout the iteration. def __next__(self): ① . . . pattern, search, replace = line.split(None, 3) funcs = build_match_and_apply_functions( ② pattern, search, replace) self.cache.append(funcs) ③ return funcs __next__()method gets called whenever someone — say, a forloop — calls next(rules). This method will only make sense if we start at the end and work backwards. So let’s do that. build_match_and_apply_functions()function hasn’t changed; it’s the same as it ever was. self.cache. Moving backwards… def __next__(self): . . . line = self.pattern_file.readline() ① if not line: ② self.pattern_file.close() raise StopIteration ③ . . . readline()method (note: singular, not the plural readlines()) reads exactly one line from an open file. Specifically, the next line. (File objects are iterators too! It’s iterators all the way down…). Putting it all together, here’s what happens when: LazyRulesclass, called rules, which opens the pattern file but does not read from it. plural()function again to pluralize a different word. The forloop in the plural()function will call iter(rules), which will reset the cache index but will not reset the open file object.. importis instantiating a single class and opening a file (but not reading from it). ☜ ☞ © 2001–9 Mark Pilgrim
http://docs.activestate.com/activepython/3.2/diveintopython3/html/iterators.html
CC-MAIN-2014-10
refinedweb
728
69.48
I sometimes need to include source code in a document or report in order to walk the reader through that code. Invariably, I end up referring to a particular section of code as "starting from the 27th line from the top, ending at the 33rd line, inclusive" or some such convolution. It would be handy to be able to quickly and easily add line numbers to the particular code snippet, paste it into my document, and refer to "lines 27-33" instead. The code in this little utility provides that functionality. You can copy a code snippet (or any text, in fact) into a RichTextBox in the application, press a button to add line numbers, and the formatted, line-numbered code comes out the other end in the wink of an eye. This has saved me a lot of time and effort, particularly as I'm currently writing a book that includes a lot of code examples. RichTextBox Although the utility is presented as a simple application to which you can add additional bits and pieces as required, the guts of the line numbering functionality reside in a simple class that contains only two methods. As a result, this article concentrates on this class rather than on any other standard application functionality (Cut-Copy-Paste, etc.). Moreover, I haven't added any file I/O operations to the app (Open, Save, etc.) as personally I don't need it – but you may want to do that. In addition, although the line numbering class is presented in the context of a Windows application, the relevant code can easily be used or adapted to provide similar functionality via a web application, web service or whatever. The screenshot above displays a lump of code prior to the addition of line numbers. The Options fields allow you to specify the following parameters:- The main text area contains the source text that is to be converted. You can paste your text into here. Converted text is placed back into this control. To convert the text, press the "Add Line Numbers" button. The contents of the main text area will be replaced with the converted, line-numbered text. You can then copy and paste the converted text to another application as required. The screenshot below shows the converted text: The main worker class contained in the downloads for this article is named LineNumberBuilder. It contains four properties, one public method, and one private method. The properties (accessed using the usual "getters" and "setters") hold the values you set in the main window Options fields, as well as the source text you have pasted into the RichTextBox. Here's a snapshot of the class (the getter & setter code has been removed from the listing): LineNumberBuilder 1: public class LineNumberBuilder 2: { 3: private string inputText; 4: private int lineNumberPaddingWidth; 5: private bool convertTabsToSpaces; 6: private int tabToSpacesWidth; 7: 8: 9: public LineNumberBuilder() 10: { 11: } 12: 13: public void ConvertText(){...} 14: 15: private string GetFormattedLineNumber( int lineNumber, int numberMaxWidth, string padding ){...} 16: } The table below shows how the private class variables map to the public class properties. inputText Text lineNumberPaddingWidth LineNumberPaddingWidth convertTabsToSpaces ConvertTabsToSpaces tabToSpacesWidth TabToSpacesWidth The public method, ConvertText(), does the actual conversion of your source text and adds the formatted line numbers. The private method, GetFormattedLineNumber(), which is called by ConvertText(), creates and formats the line number for each line in the text. ConvertText() GetFormattedLineNumber() Let's have a quick look at the methods in this class, starting with the GetFormattedLineNumber() method. We will see how the class is used by the application later in the article. This private method is called by the public ConvertText() method. Its job is to build the line number text for a specific line. It returns the formatted line number to the calling routine as a string. 1: private string GetFormattedLineNumber( int lineNumber, int maxNumberWidth, string padding ) 2: { 3: StringBuilder line = new StringBuilder(); 4: 5: line.Append( lineNumber.ToString().PadLeft( maxNumberWidth, ' ' ) ); 6: line.Append( ":" ); 7: line.Append( padding ); 8: 9: return line.ToString(); 10: } The method has three parameters: the first contains the current line number we want to format; the second contains the length of the longest line number string - that is, if there are 200 lines in the source text, the length of the longest line number string is 3; the third parameter is a string that contains the padding we want to add - this separates the line number and colon from the start of the text for a line. First of all, we create a local StringBuilder object, line, that is used to hold the line number text as it is constructed. StringBuilder line Line 5 appends the current line number to the line object, left-padding it with spaces as specified in the maxNumberWidth parameter. This ensures that the line number is right-justified. If you want it to be left-justified, change PadLeft to PadRight. maxNumberWidth PadLeft PadRight Line 6 appends the colon to the padded line number, and line 7 appends the additional padding - the user specifies the padding width using the "Line Number Padding" field in the main window Options. Line 9 of this method simply returns the formatted line number text as a string. This method processes the source text, inserting a line number at the beginning of each line. 1: public void ConvertText() 2: { 3: StringBuilder output = new StringBuilder(); 4: 5: try 6: { 7: char[] end_of_line = {(char)10}; 8: string[] lines = this.Text.Split( end_of_line ); 9: 10: int line_count = lines.GetUpperBound(0)+1; 11: int linenumber_max_width = line_count.ToString().Length; 12: string padding = new String( ' ', this.LineNumberPaddingWidth); 13: 14: for ( int i=0; i<line_count; i++ ) 15: { 16: output.Append( this.GetFormattedLineNumber( i+1, linenumber_max_width, padding ) ); 17: output.Append( lines[i] ); 18: output.Append( "\r\n" ); 19: } 20: 21: if ( this.ConvertTabsToSpaces ) 22: { 23: string spaces = new String( ' ', this.TabToSpacesWidth); 24: output = output.Replace( "\t", spaces ); 25: } 26: } 27: catch ( Exception e ) 28: { 29: output.Append( e.Message ); 30: } 31: 32: this.Text = output.ToString(); 33: } Line 3 creates a new StringBuilder instance that we use to store the converted text as we build the individual lines. Lines 7-8 take the source text (which is contained in the Text property of the class instance) and splits it into a string array – one array item for each line of text. The end_of_line variable is used to specify the character(s) that we want to split the text on. In this case, it uses the \n newline character (ASCII value 10). (As homework, you might want to allow the user to specify this delimiter, perhaps as a field/property in the main window Options fields.) end_of_line \n Line 10 simply counts the number of items in the string array. Line 11 converts line_count to a string and gets its length. This is used to correctly pad the line numbers so that unwanted, ugly indentation is avoided and the colons that separate the line number from the line text line up vertically in the correct manner. Line 12 creates a new string that holds the padding spaces that separate the line number from the start of the line text. line_count Lines 14-19 perform the main formatting action in a for loop, processing the source text line by line. for Within the for loop, line 16 appends the line number to the output object. It calls the private method GetFormattedLineNumber() which creates the line number and adds the trailing colon and any necessary padding. output Line 17 appends the source text line (contained in the referenced string array item lines[i]). lines[i] Lines 18 simply adds an end-of-line character to the line so that the original line breaks are preserved when the conversion is completed. If the user has elected to convert tabs to spaces, lines 21-25 take care of this. A new string is created containing the number of spaces specified by the user via the "Tab to Spaces Padding" field in the main window Options. Any tab characters contained in the output object are then replaced with the contents of the spaces string. spaces Finally, the converted text is placed in the Text property of the class instance (line 32). When the user presses the "Add Line Numbers" button in the main window, the button's click event makes a call to a routine named ExecuteLineNumbering(). Here's what this routine looks like: ExecuteLineNumbering() 1: private void ExecuteLineNumbering() 2: { 3: if ( this.SourceMemo.Text.Length > 0 ) 4: { 5: LineNumberBuilder builder = new LineNumberBuilder(); 6: 7: builder.Text = this.SourceMemo.Text; 8: builder.LineNumberPaddingWidth = Convert.ToInt32( this.eLineNumberPadding.Text ); 9: builder.ConvertTabsToSpaces = this.cbConvertToSpaces.Checked; 10: builder.TabToSpacesWidth = Convert.ToInt32( this.eTabPadding.Text ); 11: 12: builder.ConvertText(); 13: this.SourceMemo.Text = builder.Text; 14: } 15: } First of all, we check (in line 3) to see if there's any text in the SourceMemo RichTextBox. If there is, we create a new instance of the LineNumberBuilder class (builder). In lines 7-10, we then populate the properties of this object with the data contained in the main window fields, including the contents of the RichTextBox. (To aid code readability, and out of sheer idleness, no error-checking is provided to trap invalid field values – this is left for you to do as homework.) SourceMemo builder Finally, in line 12, we call the builder.ConvertText() method to number the lines. In line 13, the converted lines - now returned in our builder.Text property - are placed into the RichTextBox Text field, replacing the text that was originally pasted there. builder.ConvertText() builder.Text As you can see from the formatted code above, this handy little utility makes the job of explicating code in a book or a document very much easier to do. And it avoids having to pepper body text with code snippets which may be difficult to follow out of context. I certainly find it easier to follow code explanations when line numbers are referred to by an author. Despite using a RichTextBox, this application doesn't handle rich-text formatting. There are two reasons for this: firstly, the documents/books I write don't really need or use syntax-highlighted text; secondly, the code required to properly handle RTF is much more complex and involves getting one's head around the Rich Text Format documentation. This is way out of scope for this article. This might seem a cop-out but, in this utility, the aim is to demonstrate a simple technique in an uncluttered way. If you want to preserve all of the font characteristics of your text, you should investigate this further on your own. Finally, the demo project was built using Visual Studio .NET 2003. If you are using an earlier version of VS.NET, just use the source code as there is nothing in the LineNumberBuilder class that is specific to or dependent on VS.NET 2003. This is the first (and only) version of this application... unless it breaks, in which case it's v0.1.1 Alpha. This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here davidc912 wrote:Is there any way to make line number column not selectable? In other words, if I select some text and try to make copy, I would like only copy original text but not with line numbers. David C. "Adding line numbers to text" General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
https://www.codeproject.com/Articles/7526/Adding-line-numbers-to-text
CC-MAIN-2017-43
refinedweb
1,962
62.27
数据安全分类分级实施指南 重点 (Top highlight) Balance within the imbalance to balance what’s imbalanced — Amadou Jarou Bah 在不平衡中保持平衡以平衡不平衡— Amadou Jarou Bah Disclaimer: This is a comprehensive tutorial on handling imbalanced datasets. Whilst these approaches remain valid for multiclass classification, the main focus of this article will be on binary classification for simplicity. 免责声明:这是有关处理不平衡数据集的综合教程。 尽管这些方法对于多类分类仍然有效,但为简单起见,本文的主要重点将放在二进制分类上。 介绍 (Introduction) As any seasoned data scientist or statistician will be aware of, datasets are rarely distributed evenly across attributes of interest. Let’s imagine we are tasked with discovering fraudulent credit card transactions — naturally, the vast majority of these transactions will be legitimate, and only a very small proportion will be fraudulent. Similarly, if we are testing individuals for cancer, or for the presence of a virus (COVID-19 included), the positive rate will (hopefully) be only a small fraction of those tested. More examples include: 正如任何经验丰富的数据科学家或统计学家都会意识到的那样,数据集很少会在感兴趣的属性之间均匀分布。 想象一下,我们负有发现欺诈性信用卡交易的任务-自然,这些交易中的绝大多数都是合法的,只有很小一部分是欺诈性的。 同样,如果我们正在测试个人是否患有癌症或是否存在病毒(包括COVID-19),那么(希望)阳性率仅是所测试者的一小部分。 更多示例包括: - An e-commerce company predicting which users will buy items on their platform 一家电子商务公司预测哪些用户将在其平台上购买商品 - A manufacturing company analyzing produced materials for defects 一家制造公司分析所生产材料的缺陷 - Spam email filtering trying to differentiation ‘ham’ from ‘spam’ 垃圾邮件过滤试图区分“火腿”和“垃圾邮件” - Intrusion detection systems examining network traffic for malware signatures or atypical port activity 入侵检测系统检查网络流量中是否存在恶意软件签名或非典型端口活动 - Companies predicting churn rates amongst their customers 预测客户流失率的公司 - Number of clients who closed a specific account in a bank or financial organization 在银行或金融组织中关闭特定帐户的客户数量 - Prediction of telecommunications equipment failures 预测电信设备故障 - Detection of oil spills from satellite images 从卫星图像检测漏油 - Insurance risk modeling 保险风险建模 - Hardware fault detection 硬件故障检测 One has usually much fewer datapoints from the adverse class. This is unfortunate as we care a lot about avoiding misclassifying elements of this class. 通常,来自不利类的数据点少得多。 这很不幸,因为我们非常在意避免对此类元素进行错误分类。 In actual fact, it is pretty rare to have perfectly balanced data in classification tasks. Oftentimes the items we are interested in analyzing are inherently ‘rare’ events for the very reason that they are rare and hence difficult to predict. This presents a curious problem for aspiring data scientists since many data science programs do not properly address how to handle imbalanced datasets given their prevalence in industry. 实际上,在分类任务中拥有完全平衡的数据非常罕见。 通常,我们感兴趣的项目本质上是“稀有”事件,原因是它们很少见,因此难以预测。 对于有抱负的数据科学家而言,这是一个令人好奇的问题,因为鉴于其在行业中的普遍性,许多数据科学程序无法正确解决如何处理不平衡的数据集。 数据集什么时候变得“不平衡”? (When does a dataset become ‘imbalanced’?) The notion of an imbalanced dataset is a somewhat vague one. Generally, a dataset for binary classification with a 49–51 split between the two variables would not be considered imbalanced. However, if we have a dataset with a 90–10 split, it seems obvious to us that this is an imbalanced dataset. Clearly, the boundary for imbalanced data lies somewhere between these two extremes. 不平衡数据集的概念有些模糊。 通常,在两个变量之间划分为49-51的二进制分类数据集不会被认为是不平衡的。 但是,如果我们有一个90-10分割的数据集,对我们来说显然这是一个不平衡的数据集。 显然,不平衡数据的边界介于这两个极端之间。 In some sense, the term ‘imbalanced’ is a subjective one and it is left to the discretion of the data scientist. In general, a dataset is considered to be imbalanced when standard classification algorithms — which are inherently biased to the majority class (further details in a previous article) — return suboptimal solutions due to a bias in the majority class. A data scientist may look at a 45–55 split dataset and judge that this is close enough that measures do not need to be taken to correct for the imbalance. However, the more imbalanced the dataset becomes, the greater the need is to correct for this imbalance. 从某种意义上说,“不平衡”一词是主观的,由数据科学家自行决定。 通常,当标准分类算法(固有地偏向多数类(在上一篇文章中有更多详细信息))由于多数类的偏向而返回次优解时,则认为数据集不平衡。 数据科学家可以查看45–55的分割数据集,并判断该数据集足够接近,因此无需采取措施来纠正不平衡。 但是,数据集变得越不平衡,就越需要纠正这种不平衡。 In a concept-learning problem, the data set is said to present a class imbalance if it contains many more examples of one class than the other. 在概念学习问题中,如果数据集包含一个类别的实例多于另一个类别的实例,则称该数据集存在类别不平衡。 As a result, these classifiers tend to ignore small classes while concentrating on classifying the large ones accurately. 结果,这些分类器倾向于忽略小类别,而专注于准确地对大类别进行分类。 Imagine you are working for Netflix and are tasked with determining which customer churn rates (a customer ‘churning’ means they will stop using your services or using your products). 想象您正在为Netflix工作,并负责确定哪些客户流失率(客户“流失”意味着他们将停止使用您的服务或产品)。 In an ideal world (at least for the data scientist), our training and testing datasets would be close to fully balanced, having around 50% of the dataset containing individuals that will churn and 50% who will not. In this case, a 90% accuracy will more or less indicate a 90% accuracy on both the positively and negatively classed groups. Our errors will be evenly split across both groups. In addition, we have roughly the same number of points in both classes, which from the law of large numbers tells us reduces the overall variance in the class. This is great for us, accuracy is an informative metric in this situation and we can continue with our analysis unimpeded. 在理想的世界中(至少对于数据科学家而言),我们的训练和测试数据集将接近完全平衡,大约50%的数据集包含会搅动的人和50%不会搅动的人。 在这种情况下,90%的准确度将或多或少地表明在正面和负面分类组中都达到90%的准确度。 我们的错误将平均分配给两个组。 此外,两个类中的点数大致相同,这从大数定律可以看出,这减少了类中的总体方差。 这对我们来说非常好,在这种情况下,准确性是一个有用的指标,我们可以继续进行不受阻碍的分析。 As you may have suspected, most people that already pay for Netflix don't have a 50% chance of stopping their subscription every month. In fact, the percentage of people that will churn is rather small, closer to a 90–10 split. How does the presence of this dataset imbalance complicate matters? 您可能会怀疑,大多数已经为Netflix付款的人没有50%的机会每月停止订阅。 实际上,会流失的人数比例很小,接近90-10。 这个数据集的不平衡如何使问题复杂化? Assuming a 90–10 split, we now have a very different data story to tell. Giving this data to an algorithm without any further consideration will likely result in an accuracy close to 90%. This seems pretty good, right? It’s about the same as what we got previously. If you try putting this model into production your boss will probably not be so happy. 假设拆分为90-10,我们现在要讲一个非常不同的数据故事。 将此数据提供给算法而无需进一步考虑,可能会导致接近90%的精度。 这看起来还不错吧? 它与我们之前获得的内容大致相同。 如果您尝试将这种模型投入生产,您的老板可能不会很高兴。 Given the prevalence of the majority class (the 90% class), our algorithm will likely regress to a prediction of the majority class. The algorithm can pretty closely maximize its accuracy (our scoring metric of choice) by arbitrarily predicting that the majority class occurs every time. This is a trivial result and provides close to zero predictive power. 给定多数类别(90%类别)的患病率,我们的算法可能会回归到多数类别的预测。 通过任意预测每次都会出现多数类,该算法可以非常精确地最大程度地提高其准确性(我们的选择评分标准)。 这是微不足道的结果,并提供接近零的预测能力。 Predictive accuracy, a popular choice for evaluating the performance of a classifier, might not be appropriate when the data is imbalanced and/or the costs of different errors vary markedly. 当数据不平衡和/或不同错误的成本明显不同时,预测准确性是评估分类器性能的一种普遍选择,可能不合适。 Visually, this dataset might look something like this: 从视觉上看,该数据集可能看起来像这样: Machine learning algorithms by default assume that data is balanced. In classification, this corresponds to a comparative number of instances of each class. Classifiers learn better from a balanced distribution. It is up to the data scientist to correct for imbalances, which can be done in multiple ways. 默认情况下,机器学习算法假定数据是平衡的。 在分类中,这对应于每个类的比较实例数。 分类器从均衡的分布中学习得更好。 数据科学家可以纠正不平衡,这可以通过多种方式来完成。 不同类型的失衡 (Different Types of Imbalance) We have clearly shown that imbalanced datasets have some additional challenges to standard datasets. To further complicate matters, there are different types of imbalance that can occur in a dataset. 我们已经清楚地表明,不平衡的数据集对标准数据集还有一些其他挑战。 更复杂的是,数据集中可能会出现不同类型的失衡。 (1) Between-Class (1)课间 A between-class imbalance occurs when there is an imbalance in the number of data points contained within each class. An example of this is shown below: 当每个类中包含的数据点数量不平衡时,将发生类间不平衡。 下面是一个示例: An example of this would be a mammography dataset, which uses images known as mammograms to predict breast cancer. Consider the number of mammograms related to positive and negative cancer diagnoses: 这样的一个例子是乳腺X射线摄影数据集,它使用称为乳腺X线照片的图像来预测乳腺癌。 考虑与阳性和阴性癌症诊断相关的乳房X线照片数量: Note that given enough data samples in both classes the accuracy will improve as the sampling distribution is more representative of the data distribution, but by virtue of the law of large numbers, the majority class will have inherently better representation than the minority class. 请注意,如果两个类别中都有足够的数据样本,则精度会随着采样分布更能代表数据分布而提高,但是由于数量规律,多数类别在本质上要比少数类别更好。 (2) Within-Class (2)班内 A within-class imbalance occurs when the dataset has balanced between-class data but one of the classes is not representative in some regions. An example of this is shown below: 当数据集具有平衡的类间数据,但其中一个类在某些区域中不具有代表性时,会发生类内不平衡。 下面是一个示例: (3) Intrinsic and Extrinsic (3)内部和外部 An intrinsic imbalance is due to the nature of the dataset, while extrinsic imbalance is related to time, storage, and other factors that limit the dataset or the data analysis. Intrinsic characteristics are relatively simple and are what we commonly see, but extrinsic imbalance can exist separately and can also work to increase the imbalance of a dataset. 内在的不平衡归因于数据集的性质, 而外在的不平衡则与时间,存储以及其他限制数据集或数据分析的因素有关。 内部特征相对简单,这是我们通常看到的特征,但是外部不平衡可以单独存在,也可以用来增加数据集的不平衡。 For example, companies often use intrusion detection systems that analyze packets of data sent in and out of networks in order to detect malware of malicious activity. Depending on whether you analyze all data or just data sent through specific ports or specific devices, this will significantly influence the imbalance of the dataset (most network traffic is likely legitimate). Similarly, if log files or data packets related to suspected malicious behavior are commonly stored but normal log are not (or only a select few types are stored), then this can also influence the imbalance of the dataset. Similarly, if logs were only stored during a normal working day (say, 9–5 PM) instead of 24 hours, this will also affect the imbalance. 例如,公司经常使用入侵检测系统来分析进出网络的数据包,以检测恶意活动的恶意软件。 根据您是分析所有数据还是仅分析通过特定端口或特定设备发送的数据,这将严重影响数据集的不平衡(大多数网络流量可能是合法的)。 同样,如果通常存储与可疑恶意行为有关的日志文件或数据包,但不存储常规日志(或仅存储少数几种类型的日志),则这也可能会影响数据集的不平衡。 同样,如果日志仅在正常工作日(例如9-5 PM)而非24小时内存储,这也会影响不平衡。 不平衡的进一步复杂化 (Further Complication of Imbalance) There are a couple more difficulties increased by imbalanced datasets. Firstly, we have class overlapping. This is not always a problem, but can often arise in imbalanced learning problems and cause headaches. Class overlapping is illustrated in the below dataset. 不平衡的数据集会增加更多的困难。 首先,我们有班级重叠 。 这并不总是一个问题,但是经常会在学习不平衡的问题中出现并引起头痛。 下面的数据集说明了类重叠。 Class overlapping occurs in normal classification problems, so what is the additional issue here? Well, the class more represented in overlap regions tends to be better classified by methods based on global learning (on the full dataset). This is because the algorithm is able to get a more informed picture of the data distribution of the majority class. 在正常的分类问题中会发生类重叠,那么这里还有什么其他问题? 好吧,在重叠区域中表示更多的类倾向于通过基于全局学习的方法(在完整数据集上)更好地分类。 这是因为该算法能够获得多数类数据分布的更多信息。 In contrast, the class less represented in such regions tends to be better classified by local methods. If we take k-NN as an example, as the value of k increases, it becomes increasingly global and increasingly local. It can be shown that performance for low values of k has better performance on the minority dataset, and lower performance at high values of k. This shift in accuracy is not exhibited for the majority class because it is well-represented at all points. 相反,在此类区域中较少代表的类别倾向于通过本地方法更好地分类。 如果以k-NN为例,随着k值的增加,它变得越来越全球化,也越来越局部化。 可以证明,k值较低时的性能在少数数据集上具有较好的性能,而k值较高时的性能较低。 准确性的这种变化在大多数类别中都没有表现出来,因为它在所有方面都得到了很好的体现。 This suggests that local methods may be better suited for studying the minority class. One method to correct for this is the CBO Method. The CBO Method uses cluster-based resampling to identify ‘rare’ cases and resample them individually, so as to avoid the creation of small disjuncts in the learned hypothesis. This is a method of oversampling — a topic that we will discuss in detail in the following section. 这表明本地方法可能更适合于研究少数群体。 一种纠正此问题的方法是CBO方法 。 CBO方法使用基于聚类的重采样来识别“稀有”案例并分别对其进行重采样,以避免在学习的假设中产生小的歧义。 这是一种过采样的方法-我们将在下一节中详细讨论这个主题。 纠正数据集不平衡 (Correcting Dataset Imbalance) There are several techniques to control for dataset imbalance. There are two main types of techniques to handle imbalanced datasets: sampling methods, and cost-sensitive methods. 有几种控制数据集不平衡的技术。 处理不平衡数据集的技术主要有两种: 抽样方法和成本敏感方法 。 The simplest and most commonly used of these are sampling methods called oversampling and undersampling, which we will go into more detail on. 其中最简单,最常用的是称为过采样和欠采样的采样方法,我们将对其进行详细介绍。 Oversampling/Undersampling 过采样/欠采样 Simply stated, oversampling involves generating new data points for the minority class, and undersampling involves removing data points from the majority class. This acts to somewhat reduce the extent of the imbalance in the dataset. 简而言之,过采样涉及为少数类生成新的数据点,而欠采样涉及从多数类中删除数据点。 这在某种程度上减少了数据集中的不平衡程度。 What does undersampling look like? We continually remove like-samples in close proximity until both classes have the same number of data points. 欠采样是什么样的? 我们会不断删除附近的相似样本,直到两个类具有相同数量的数据点。 Is undersampling a good idea? Undersampling is recommended by many statistical researchers but is only good if enough data points are available on the undersampled class. Also, since the majority class will end up with the same number of points as the minority class, the statistical properties of the distributions will become ‘looser’ in a sense. However, we have not artificially distorted the data distribution with this method by adding in artificial data points. 采样不足是个好主意吗? 许多统计研究人员建议进行欠采样,但是只有在欠采样类别上有足够的数据点可用时,采样才是好的。 同样,由于多数类最终将获得与少数类相同的分数,因此从某种意义上说,分布的统计属性将变为“较弱”。 但是,我们没有通过添加人工数据点来使用这种方法人为地扭曲数据分布。 What does oversampling look like? In shot, the opposite of undersampling. We are artificially adding data points to our dataset to make the number of instances in each class balanced. 过采样看起来像什么? 在拍摄中,欠采样的情况与之相反。 我们正在人为地向数据集中添加数据点,以使每个类中的实例数量保持平衡。 How do we generate these samples? The most common way is to generate points that are close in dataspace proximity to existing samples or are ‘between’ two samples, as illustrated below. 我们如何生成这些样本? 最常见的方法是生成在数据空间中与现有样本接近或在两个样本“之间”的点,如下所示。 As you may have suspected, there are some downsides to adding false data points. Firstly, you risk overfitting, especially if one does this for points that are noise — you end up exacerbating this noise by adding reinforced measurements. In addition, adding these values randomly can also contribute additional noise to our model. 您可能已经怀疑过,添加错误的数据点有一些缺点。 首先,您可能会面临过度拟合的风险,特别是如果对噪声点进行过度拟合时,最终会通过添加增强的测量来加剧这种噪声。 此外,随机添加这些值也会给我们的模型带来额外的噪声。 SMOTE (Synthetic minority oversampling technique) SMOTE(合成少数群体过采样技术) Luckily for us, we don’t have to write an algorithm for randomly generating data points for the purpose of oversampling. Instead, we can use the SMOTE algorithm. 对我们来说幸运的是,我们不必编写用于过采样的随机生成数据点的算法。 相反,我们可以使用SMOTE算法。 How does SMOTE work? SMOTE generates new samples in between existing data points based on their local density and their borders with the other class. Not only does it perform oversampling, but can subsequently use cleaning techniques (undersampling, more on this shortly) to remove redundancy in the end. Below is an illustration for how SMOTE works when studying class data. SMOTE如何工作? SMOTE根据现有数据点的局部密度及其与其他类别的边界在新数据点之间生成新样本。 它不仅执行过采样,而且可以随后使用清除技术(欠采样,稍后对此进行更多介绍)最终消除冗余。 下面是学习班级数据时SMOTE如何工作的图示。 The algorithm for SMOTE is as follows. For each minority sample: SMOTE的算法如下。 对于每个少数族裔样本: – Find its k-nearest minority neighbours –寻找其k最近的少数族裔邻居 – Randomly select j of these neighbours –随机选择这些邻居中的j个 – Randomly generate synthetic samples along the lines joining the minority sample and its j selected neighbours (j depends on the amount of oversampling desired) –沿连接少数样本及其j个选定邻居的直线随机生成合成样本(j取决于所需的过采样量) Informed vs. Random Oversampling 知情vs.随机过采样 Using random oversampling (with replacement) of the minority class has the effect of making the decision region for the minority class very specific. In a decision tree, it would cause a new split and often lead to overfitting. SMOTE’s informed oversampling generalizes the decision region for the minority class. As a result, larger and less specific regions are learned, thus, paying attention to minority class samples without causing overfitting. 使用少数类的随机过采样 (替换)具有使少数类的决策区域非常具体的效果。 在决策树中,这将导致新的分裂并经常导致过度拟合。 SMOTE的明智超采样概括了少数群体的决策区域。 结果,学习了更大和更少的特定区域,因此,在不引起过度拟合的情况下注意少数类样本。 Drawbacks of SMOTE SMOTE的缺点 Overgeneralization. SMOTE’s procedure can be dangerous since it blindly generalizes the minority area without regard to the majority class. This strategy is particularly problematic in the case of highly skewed class distributions since, in such cases, the minority class is very sparse with respect to the majority class, thus resulting in a greater chance of class mixture. 过度概括。 SMOTE的程序可能很危险,因为它盲目地将少数民族地区泛化而无视多数阶级。 这种策略在阶级分布高度偏斜的情况下尤其成问题,因为在这种情况下,少数阶级相对于多数阶级而言非常稀疏,因此导致阶级混合的机会更大。 Inflexibility. The number of synthetic samples generated by SMOTE is fixed in advance, thus not allowing for any flexibility in the re-balancing rate. 僵硬。 SMOTE生成的合成样本的数量是预先固定的,因此再平衡速率不具有任何灵活性。 Another potential issue is that SMOTE might introduce the artificial minority class examples too deeply in the majority class space. This drawback can be resolved by hybridization: combining SMOTE with undersampling algorithms. One of the most famous of these is Tomek Links. Tomek Links are pairs of instances of opposite classes who are their own nearest neighbors. In other words, they are pairs of opposing instances that are very close together. 另一个潜在的问题是,SMOTE可能会在多数阶层的空间中过于深入地介绍人工少数群体的例子。 这个缺点可以通过杂交解决:将SMOTE与欠采样算法结合在一起。 其中最著名的就是Tomek Links 。 Tomek链接是一对相反类别的实例,它们是自己最近的邻居。 换句话说,它们是一对非常靠近的相对实例。 Tomek’s algorithm looks for such pairs and removes the majority instance of the pair. The idea is to clarify the border between the minority and majority classes, making the minority region(s) more distinct. Scikit-learn has no built-in modules for doing this, though there are some independent packages (e.g., TomekLink, imbalanced-learn). Tomek的算法会查找此类对,并删除该对的多数实例。 这样做的目的是弄清少数民族和多数阶级之间的界限,使少数民族地区更加鲜明。 尽管有一些独立的软件包(例如TomekLink , imbalanced -learn ),但Scikit-learn没有内置模块可以执行此操作。 Thus, Tomek’s algorithm is an undersampling technique that acts as a data cleaning method for SMOTE to regulate against redundancy. As you may have suspected, there are many additional undersampling techniques that can be combined with SMOTE to perform the same function. A comprehensive list of these functions can be found in the functions section of the imbalanced-learn documentation. 因此,Tomek的算法是一种欠采样技术,可作为SMOTE调节冗余的数据清洗方法。 您可能已经怀疑,还有许多其他的欠采样技术可以与SMOTE结合使用以执行相同的功能。 这些功能的全面列表可在不平衡学习文档的功能部分中找到。 An additional example is Edited Nearest Neighbors (ENN). ENN removes any example whose class label differs from the class of at least two of their neighbor. ENN removes more examples than the Tomek links does and also can remove examples from both classes. 另一个示例是“最近的邻居”(ENN)。 ENN删除任何其类别标签不同于其至少两个邻居的类别的示例。 与Tomek链接相比,ENN删除的示例更多,并且还可以从两个类中删除示例。 Other more nuanced versions of SMOTE include Borderline SMOTE, SVMSMOTE, and KMeansSMOTE, and more nuanced versions of the undersampling techniques applied in concert with SMOTE are Condensed Nearest Neighbor (CNN), Repeated Edited Nearest Neighbor, and Instance Hardness Threshold. SMOTE的其他细微差别版本包括Borderline SMOTE,SVMSMOTE和KMeansSMOTE,与SMOTE结合使用的欠采样技术的细微差别版本是压缩最近邻(CNN),重复编辑最近邻和实例硬度阈值。 成本敏感型学习 (Cost-Sensitive Learning) We have discussed sampling techniques and are now ready to discuss cost-sensitive learning. In many ways, the two approaches are analogous — the main difference being that in cost-sensitive learning we perform under- and over-sampling by altering the relative weighting of individual samples. 我们已经讨论了采样技术,现在准备讨论对成本敏感的学习。 在许多方面,这两种方法是相似的-主要区别在于在成本敏感型学习中,我们通过更改单个样本的相对权重来进行欠采样和过采样。 Upweighting. Upweighting is analogous to over-sampling and works by increasing the weight of one of the classes keeping the weight of the other class at one. 增重。 上权类似于过采样,其工作方式是增加一个类别的权重,将另一类别的权重保持为一个。 Down-weighting. Down-weighting is analogous to under-sampling and works by decreasing the weight of one of the classes keeping the weight of the other class at one. 减重。 减权类似于欠采样,它通过减小一个类别的权重而将另一类别的权重保持为一个来工作。 An example of how this can be performed using sklearn is via the sklearn.utils.class_weight function and applied to any sklearn classifier (and within keras). 如何使用sklearn执行此操作的示例是通过 sklearn.utils.class_weight函数并将其应用于任何sklearn分类器(以及在keras中)。 from sklearn.utils import class_weight class_weights = class_weight.compute_class_weight('balanced', np.unique(y_train), y_train) model.fit(X_train, y_train, class_weight=class_weights) In this case, we have set the instances to be ‘balanced’, meaning that we will treat these instances to have balanced weighting based on their relative number of points — this is what I would recommend unless you have a good reason for setting the values yourself. If you have three classes and wanted to weight one of them 10x larger and another 20x larger (because there are 10x and 20x fewer of these points in the dataset than the majority class), then we can rewrite this as: 在这种情况下,我们将实例设置为“平衡”,这意味着我们将根据它们的相对点数将这些实例视为具有均衡的权重-这是我的建议,除非您有充分的理由来设置值你自己 如果您有三个类别,并且想要将其中一个类别的权重放大10倍,将另一个类别的权重增大20倍(因为数据集中这些点的数量比多数类别少10倍和20倍),则可以将其重写为: class_weight = {0: 0.1, 1: 1., 2: 2.} Some authors claim that cost-sensitive learning is slightly more effective than random or directed over- or under-sampling, although all approaches are helpful, and directed oversampling, is close to cost-sensitive learning in efficacy. Personally, when I am working on a machine learning problem I will use cost-sensitive learning because it is much simpler to implement and communicate to individuals. However, there may be additional aspects of using sampling techniques that provide superior results of which I am not aware. 一些作者声称,成本敏感型学习比随机或有针对性的过度采样或欠采样略有效果,尽管所有方法都是有帮助的,有针对性的过度采样在效果上接近于成本敏感型学习。 就个人而言,当我处理机器学习问题时,我将使用成本敏感型学习,因为它易于实现并与个人进行交流 。 但是,使用采样技术可能存在其他方面,这些方面提供了我所不知道的优异结果。 评估指标 (Assessment Metrics) In this section, I outline several metrics that can be used to analyze the performance of a classifier trained to solve a binary classification problem. These include (1) the confusion matrix, (2) binary classification metrics, (3) the receiver operating characteristic curve, and (4) the precision-recall curve. 在本节中,我概述了几个可用于分析经过训练以解决二进制分类问题的分类器的性能的指标。 其中包括(1)混淆矩阵,(2)二进制分类指标,(3)接收器工作特性曲线和(4)精确调用曲线。 混淆矩阵 (Confusion Matrix) Despite what you may have garnered from its name, a confusion matrix is decidedly confusing. A confusion matrix is the most basic form of assessment of a binary classifier. Given the prediction outputs of our classifier and the true response variable, a confusion matrix tells us how many of our predictions are correct for each class, and how many are incorrect. The confusion matrix provides a simple visualization of the performance of a classifier based on these factors. 尽管您可能从它的名字中学到了什么,但是混乱矩阵显然令人困惑。 混淆矩阵是二进制分类器评估的最基本形式。 给定分类器的预测输出和真实的响应变量,混淆矩阵会告诉我们每个类别正确的预测有多少,不正确的预测有多少。 混淆矩阵基于这些因素提供了分类器性能的简单可视化。 Here is an example of a confusion matrix: 这是一个混淆矩阵的示例: Hopefully what this is showing is relatively clear. The TN cell tells us the number of true positives: the number of positive samples that I predicted were positive. 希望这显示的是相对清楚的。 TN细胞告诉我们真正的阳性数量:我预测的阳性样品数量为阳性。 The TP cell tells us the number of true negatives: the number of negative samples that I predicted were negative. TP单元告诉我们真实阴性的数量:我预测的阴性样品的数量为阴性。 The FP cell tells us the number of false positives: the number of negative samples that I predicted were positive. FP细胞告诉我们假阳性的数量:我预测的阴性样品的数量是阳性的。 The FN cell tells us the number of false negatives: the number of positive samples that I predicted were positive. FN细胞告诉我们假阴性的数量:我预测的阳性样品的数量为阳性。 These numbers are very important as they form the basis of the binary classification metrics discussed next. 这些数字非常重要,因为它们构成了下面讨论的二进制分类指标的基础。 二进制分类指标 (Binary Classification Metrics) There are a plethora of single-value metrics for binary classification. As such, only a few of the most commonly used ones and their different formulations are presented here, more details can be found on scoring metrics in the sklearn documentation and on their relation to confusion matrices and ROC curves (discussed in the next section) here. 二进制分类有很多单值指标。 因此,此处仅介绍一些最常用的方法及其不同的公式,有关更多详细信息,请参见sklearn文档中的评分指标以及它们与混淆矩阵和ROC曲线的关系(在下一节中讨论) 。 。 Arguably the most important five metrics for binary classification are: (1) precision, (2) recall, (3) F1 score, (4) accuracy, and (5) specificity. 可以说,二元分类最重要的五个指标是:(1)精度,(2)回忆,(3)F1得分,(4)准确性和(5)特异性。 Precision. Precision provides us with the answer to the question “Of all my positive predictions, what proportion of them are correct?”. If you have an algorithm that predicts all of the positive class correctly but also has a large portion of false positives, the precision will be small. It makes sense why this is called precision since it is a measure of how ‘precise’ our predictions are. 精确。 Precision为我们提供了以下问题的答案: “在我所有的积极预测中,有多少是正确的?” 。 如果您有一种算法可以正确预测所有肯定分类,但也有很大一部分误报,则精度会很小。 之所以将其称为“精度”是有道理的,因为它可以衡量我们的预测有多“精确”。 Recall. Recall provides us with the answer to a different question “Of all of the positive samples, what proportion did I predict correctly?”. Instead of false positives, we are now interested in false negatives. These are items that our algorithm missed, and are often the most egregious errors (e.g. failing to diagnose something with cancer that actually has cancer, failing to discover malware when it is present, or failing to spot a defective item). The name ‘recall’ also makes sense for this circumstance as we are seeing how many of the samples the algorithm was able to pick up on. 召回。 Recall为我们提供了一个不同问题的答案: “在所有阳性样本中,我正确预测的比例是多少?” 。 现在,我们对假阴性感兴趣了,而不是假阳性。 这些是我们的算法遗漏的项目,并且通常是最严重的错误(例如,未能诊断出确实患有癌症的癌症,无法发现恶意软件或存在缺陷的项目)。 在这种情况下,“召回”这个名称也很有意义,因为我们看到了该算法能够提取多少个样本。 It should be clear that these questions, whilst related, are substantially different to each other. It is possible to have a very high precision and simultaneously have a low recall, and vice versa. For example, if you predicted the majority class every time, you would have 100% recall on the majority class, but you would then get a lot of false positives from the minority class. 应当明确的是,这些问题虽然相关,但彼此之间却有很大不同。 可能有很高的精度,同时召回率也很低,反之亦然。 例如,如果您每次都预测多数派,则多数派将有100%的回忆率,但随后您将从少数派中得到很多误报。 One other important point to make is that precision and recall can be determined for each individual class. That is, we can talk about the precision of class A, or the precision of class B, and they will have different values — when doing this, we assume that the class we are interested in is the positive class, regardless of its numeric value. 另一个重要的观点是, 可以为每个单独的类确定精度和召回率 。 也就是说,我们可以谈论类A的精度或类B的精度,并且它们将具有不同的值-这样做时,我们假设我们感兴趣的类是正类,而不管其数值如何。 F1 Score. The F1 score is a single-value metric that combines precision and recall by using the harmonic mean (a fancy type of averaging). The β parameter is a strictly positive value that is used to describe the relative importance of recall to precision. A larger β value puts a higher emphasis on recall than precision, whilst a smaller value puts less emphasis. If the value is 1, precision and recall are treated with equal weighting. F1分数。 F1分数是一个单值指标,通过使用谐波均值(一种奇特的平均值)将精度和召回率结合在一起。 β参数是一个严格的正值,用于描述召回对精度的相对重要性。 β值较大时,对查全率的重视程度要高于精度,而β值较小时,对查全率的重视程度较低。 如果该值为1,则精度和召回率将以相等的权重处理。 What does a high F1 score mean? It suggests that both the precision and recall have high values — this is good and is what you would hope to see upon generating a well-functioning classification model on an imbalanced dataset. A low value indicates that either precision or recall is low, and maybe a call for concern. Good F1 scores are generally lower than good accuracies (in many situations, an F1 score of 0.5 would be considered pretty good, such as predicting breast cancer from mammograms). F1高分意味着什么? 它表明精度和查全率都具有很高的值-这很好,这是在不平衡数据集上生成功能良好的分类模型时希望看到的。 较低的值表示准确性或召回率较低,可能表示需要关注。 良好的F1分数通常低于良好的准确性(在许多情况下,F1分数0.5被认为是相当不错的,例如根据乳房X线照片预测乳腺癌)。 Specificity. Simply stated, specificity is the recall of negative values. It answers the question “Of all of my negative predictions, what proportion of them are correct?”. This may be important in situations where examining the relative proportion of false positives is necessary. 特异性。 简而言之,特异性就是召回负值。 它回答了一个问题: “在我所有的负面预测中,有多少比例是正确的?” 。 这在需要检查假阳性的相对比例的情况下可能很重要。 Macro, Micro, and Weighted Scores 宏观,微观和加权分数 This is where things get a little complicated. Anyone who has delved into these metrics on sklearn may have noticed that we can refer to the recall-macro or f1-weighted score. 这会使事情变得有些复杂。 认真研究了sklearn的这些指标的任何人都可能已经注意到,我们可以参考召回宏或f1加权得分。 A macro-F1 score is the average of F1 scores across each class. 宏观F1分数是每个课程中F1分数的平均值。 This is most useful if we have many classes and we are interested in the average F1 score for each class. If you only care about the F1 score for one class, you probably won’t need a macro-F1 score. 如果我们有很多班,并且我们对每个班的平均F1成绩感兴趣,这将是最有用的。 如果您只关心一个班级的F1分数,则可能不需要宏F1分数。 A micro-F1 score takes all of the true positives, false positives, and false negatives from all the classes and calculates the F1 score. 微型F1分数采用所有类别中的所有真实肯定,错误肯定和错误否定,并计算F1得分。 The micro-F1 score is pretty similar in utility to the macro-F1 score as it gives an aggregate performance of a classifier over multiple classes. That being said, they will give different results and understand the underlying difference in that result may be informative for a given application. 微型F1得分的效用与宏观F1得分非常相似,因为它提供了多个类别的分类器的综合性能。 话虽如此,他们将给出不同的结果,并了解该结果的根本差异可能对给定的应用程序有帮助。 A weighted-F1 score is the same as the macro-F1 score, but each of the class-specific F1 scores is scaled by the relative number of samples from that class. 加权F1分数与宏F1分数相同,但是每个类别特定的F1分数均根据该类别的样本的相对数量进行缩放。 In this case, N refers to the proportion of samples in the dataset belonging to a single class. For class A, where class A is the majority class, this might be equal to 0.8 (80%). The values for B and C might be 0.15 and 0.05, respectively. 在这种情况下, N是指数据集中属于单个类别的样本所占的比例。 对于A类,其中A类为多数类,这可能等于0.8(80%)。 B和C的值分别为0.15和0.05。 For a highly imbalanced dataset, a large weighted-F1 score might be somewhat misleading because it is overly influenced by the majority class. 对于高度不平衡的数据集,较大的F1加权分数可能会引起误导,因为它受到多数类别的过度影响。 Other Metrics 其他指标 Some other metrics that you may see around that can be informative for binary classification (and multiclass classification to some extent) are: 您可能会发现的一些其他指标可对二进制分类(在某种程度上,以及多类分类)有所帮助: Accuracy. If you are reading this, I would imagine you are already familiar with accuracy, but perhaps not so familiar with the others. Cast in the light of a metric for a confusion matrix, the accuracy can be described as the ratio of true predictions (positive and negative) to the sum of the total number of positive and negative samples. 准确性。 如果您正在阅读本文,我想您已经对准确性很熟悉,但对其他准确性可能不太了解。 根据混淆矩阵的度量标准,可以将准确度描述为真实预测(阳性和阴性)与阳性和阴性样本总数之和的比率。 G-Mean. A less common metric that is somewhat analogous to the F1 score is the G-Mean. This is often cast in two different formulations, the first being the precision-recall g-mean, and the second being the sensitivity-specificity g-mean. They can be used in a similar manner to the F1 score in terms of analyzing algorithmic performance. The precision-recall g-mean can also be referred to as the Fowlkes-Mallows Index. G均值。 G均值是一种不太常见的指标,与F1分数有些相似。 通常用两种不同的公式表示,第一种是精确调用g均值,第二种是敏感性特异性g均值。 就分析算法性能而言,它们可以与F1分数类似的方式使用。 精确调用g均值也可以称为Fowlkes-Mallows索引 。 There are many other metrics that can be used, but most have specialized use cases and offer little additional utility over the metrics described here. Other metrics the reader may be interested in viewing are balanced accuracy, Matthews correlation coefficient, markedness, and informedness. 可以使用许多其他指标,但是大多数指标都有专门的用例,并且与此处描述的指标相比,几乎没有其他用途。 读者可能感兴趣的其他指标是平衡的准确性 , 马修斯相关系数 , 标记性和信息灵通性 。 Receiver Operating Characteristic (ROC) Curve 接收器工作特性(ROC)曲线 An ROC curve is a two-dimensional graph to depicts trade-offs between benefits (true positives) and costs (false positives). It displays a relation between sensitivity and specificity for a given classifier (binary problems, parameterized classifier or a score classification). ROC曲线是一个二维图形,用于描述收益(真实肯定)和成本(错误真实)之间的权衡。 它显示了给定分类器(二进制问题,参数化分类器或分数分类)的敏感性和特异性之间的关系。 Here is an example of an ROC curve. 这是ROC曲线的示例。 There is a lot to unpack here. Firstly, the dotted line through the center corresponds to a classifier that acts as a ‘coin flip’. That is, it is correct roughly 50% of the time and is the worst possible classifier (we are just guessing). This acts as our baseline, against which we can compare all other classifiers — these classifiers should be closer to the top left corner of the plot since we want high true positive rates in all cases. 这里有很多要解压的东西。 首先,通过中心的虚线对应于充当“硬币翻转”的分类器。 也就是说,大约50%的时间是正确的,并且是最糟糕的分类器(我们只是在猜测)。 这是我们的基准,可以与所有其他分类器进行比较-这些分类器应更靠近图的左上角,因为在所有情况下我们都希望有较高的真实阳性率。 It should be noted that an ROC curve does not assess a group of classifiers. Rather, it examines a single classifier over a set of classification thresholds. 应该注意的是,ROC曲线不评估一组分类器。 而是,它在一组分类阈值上检查单个分类器 。 What does this mean? It means that for one point, I take my classifier and set the threshold to be 0.3 (30% propensity) and then assess the true positive and false positive rates. 这是什么意思? 这意味着,我将分类器的阈值设置为0.3(倾向性为30%),然后评估真实的阳性和假阳性率。 True Positive Rate: Percentage of true positives (to the sum of true positives and false negatives) generated by the combination of a specific classifier and classification threshold. 真实肯定率: 特定分类器和分类阈值的组合所生成的 真实肯定率 (相对于真实肯定率和错误否定率)。 False Positive Rate: Percentage of false positives (to the sum of false positives and true negatives) generated by the combination of a specific classifier and classification threshold. 误报率: 特定分类器和分类阈值的组合所产生的误报率(占误报率和真实否定值的总和)。 This gives me two numbers, which I can then plot on the curve. I then take another threshold, say 0.4, and repeat this process. After doing this for every threshold of interest (perhaps in 0.1, 0.01, or 0.001 increments), we have constructed an ROC curve for this classifier. 这给了我两个数字,然后可以在曲线上绘制它们。 然后,我将另一个阈值设为0.4,然后重复此过程。 在对每个感兴趣的阈值执行此操作后(可能以0.1、0.01或0.001为增量),我们为此分类器构建了ROC曲线。 What is the point of doing this? Depending on your application, you may be very averse to false positives as they may be very costly (e.g. launches of nuclear missiles) and thus would like a classifier that has a very low false-positive rate. Conversely, you may not care so much about having a highfalse positive rate as long as you get a high true positive rate (stopping most events of fraud may be worth it even if you have to check many more occurrences that are flagged by the algorithm as flawed). For the optimal balance between these two ratios (where false positives and false negatives are equally costly), we would take the classification threshold which results in the minimum diagonal distance from the top left corner. 这样做有什么意义? 根据您的应用,您可能会反对误报,因为误报的代价可能很高(例如,发射核导弹),因此希望分类器的误报率非常低。 相反,只要您获得很高的真实阳性率,您可能就不会太在意高假阳性率(即使必须检查该算法标记为的更多事件,停止大多数欺诈事件也是值得的)有缺陷的)。 为了在这两个比率之间实现最佳平衡(假阳性和假阴性的代价均相同),我们将采用分类阈值,以使距左上角的对角线距离最小。 Why does the top left corner correspond to the ideal classifier? The ideal point on the ROC curve would be (0,100), that is, all positive examples are classified correctly and no negative examples are misclassified as positive. In a perfect classifier, there would be no misclassification! 为什么左上角对应于理想分类器? ROC曲线上的理想点是(0,100) ,也就是说,所有正样本都正确分类,没有负样本被误分类为正样本。 在一个完美的分类器中,不会出现分类错误! Whilst a graph may not seem pretty useful in itself, it is helpful in comparing classifiers. One particular metric, the Area Under Curve (AUC) score, allows us to compare classifiers by comparing the total area underneath the line produced on the ROC curve.. 虽然图本身似乎不太有用,但它有助于比较分类器。 一种特殊的度量标准,即曲线下面积(AUC)得分,使我们可以通过比较ROC曲线上生成的线下的总面积来比较分类器。: . Precision-Recall (PR) Curves Precision-Recall (PR) Curves. Final Comments (Final Comments)! Newsletter (Newsletter) For updates on new blog posts and extra content, sign up for my newsletter. For updates on new blog posts and extra content, sign up for my newsletter. 翻译自: 数据安全分类分级实施指南
https://www.csdn.net/tags/MtjaAgzsMjE0NDMtYmxvZwO0O0OO0O0O.html
CC-MAIN-2021-25
refinedweb
5,295
51.28
Java as a Scripting Language? I came across a language comparison (which I wish I could still find) where the author presented a code sample in many different languages. The example he chose was computing the MD5 digest of a string. He showed a verbose Java version, some python etc. Finally php: md5("Hello World"); This example, the author asserted, showed that PHP coders can do as much with one line as a Java coder can do in 20. Of course any problem is easy in any language if there’s an available library call that’s just right. When I have to write a program to work with lines of text I’ll usually turn to Ruby or possibly Groovy (sorry Perl – not going there anymore). Scripting languages like these are usually geared toward text processing and their built in libraries make these jobs easy. I wouldn’t jump into Java because it’s such a pain to do this kind of thing. Ruby File.open("myfile.txt").each { |line| puts line if line =~ /blue/ } some standard Java to do the same thing import java.io.*; public class ProcessWords { public static void main(String [] args) throws IOException { BufferedReader input = new BufferedReader( new FileReader("myfile.txt")); String line; while ((line = input.readLine()) != null) if (line.indexOf("blue") != -1) System.out.println(line); input.close(); } } The Java code is obviously more verbose and uglier, matching many people’s opinion of Java in general. But is this because of the language or the API? Java’s APIs are all very general and give you complete control over everything you do. Ruby’s APIs make it easy to perform this common task. You could easily create a TextFile class in Java with a linesMatching method that returned Iterable<String> allowing you to iterate over lines that matched a regular expression. Now the task is easy: public class ProcessWords { public static void main(String [] args) throws IOException { for (String line : new TextFile("myfile.txt").linesMatching("blue")) { System.out.println(line); } } } The designers of Ant decided programmers like writing in XML. But I don’t. I’d rather write in Java than XML. Would ant build scripts be better expressed in Java? My hypothetical translation into Java. public class Ant implements ProjectDefinition { void targets(Project project) { project .add(new Target("clean") { void execute(Tasks tasks) { tasks.delete("build"); } }) .add(new Target("compile") { void execute(Tasks tasks) { tasks.mkdir("build/classes"); tasks.javac("src").destdir("build/classes"); } }); } } Both definitions are roughly the same size. The hypothetical Java version definitely has some cruft but is reasonably compact. Like Rake though, the Java version would allow you to use the power of a real language in your build script – conditionals loops, dynamic targets, variables. A Java version would give you instant IDE support, debugging, profiling on top of that. Both the line iteration and ant project definition examples show internal domain specific languages. The first domain is text processing. Scripting languages compete in this space, but I would argue it has a lot more to do with the libraries they provide than the language syntax. The second domain is build configurations. With the right APIs Java would do a very good job here too. Of course Java does have several strikes against it when you actually consider its use for scripting: - you have to compile all the files which turns some people off. You could easily write a ClassLoader to compile the code on the fly. - Java starts up slowly, which kills the performance of very quick scripts, if that matters. - Java has no meta programming. This is probably the biggest issue, although reflection and code generation can help here. (You could generate the ant task APIs for my build script example). I think more user friendly APIs could be written for Java for areas like text processing, XML parsing and creation, threading (even easier than java.util.concurrent), and file operations. Joda Time is a great example of a library that is cleary superior to Java’s in this respect. When I write an API (in any language) I try to think of what would make the user happiest when coding against it, not what necessarily matches the implementation. Chained method calls for example don’t help at all in the implementation of a class, but returning this from each method can help the callers in some cases and is easy enough to justify. In some cases providing a little internal DSL instead of a collection of getters/setters and unrelated methods makes the code a lot more readable and helps keep the focus on the caller instead of the implementation. From (Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.) ff aaa replied on Wed, 2008/09/03 - 5:36am nice, and i agree java can do pretty good with well designed helpers. i had a similar approach with SimpleFileReader class. Also a similar blog enrty. Ray Krueger replied on Wed, 2008/09/03 - 6:10am This is exactly why I like Groovy! All the power of the JVM and the classes and libraries that are already built for Java; supporting a language that's easy on the eyes and totally familiar to any Java geek. Your Ant example above is totally possible, but I'd definitely rather use Gant or Buildr personally. Ricky Clarkson replied on Wed, 2008/09/03 - 6:33am "This is exactly why I like Groovy! All the power of the JVM and the classes and libraries that are already built for Java; supporting a language that's easy on the eyes" And behaves completely differently. I know I keep saying this, but int i="hello"; is valid Groovy code, but the Groovy homepage claims Groovy supports static typing. No. Static things are checked without running the code. That's what static means. Dan Howard replied on Wed, 2008/09/03 - 12:46pm phil swenson replied on Wed, 2008/09/03 - 2:21pm I personally am a fan of using JRuby's Rake and using the AntWrap gem. I also have used Gant and it's very similar, and Gant itself is pretty nice.... I do find a lot of groovy-isms very irritating. For example it's not truly scripted, if you pull in java files to your build to leverage you have to have them compiled. Given that it's a build script, this is problematic. Groovy also gives the most heinous stack dumps I've ever seen. Every exception has about 100-200 lines of groovy cruft that has nothing to do with the error. Neither of these issues are easy to solved due to the nature of Groovy. Don't get me wrong, I like Groovy pretty well but just prefer Ruby. I find Ruby a more pure solution. Here's a snippet of a ruby rake file: @ant = Antwrap::AntProject.new(:ant_home=>'/usr/share/ant') @ant.mkdir(:dir => "classes") @ant.path(:id => "common.class.path"){ @ant.fileset(:dir => "#{common_lib_dir}"){ @ant.include(:name => "**/*.jar") } } desc "clean the classes directory" task :clean do @ant.delete(:dir => "classes") end desc "generate war file" task :war =>[:compile] do @ant.war(:update => false, :destfile=>"ei.war", :webxml=>"../web/WEB-INF/web.xml"){ @ant.fileset(:dir => web_dir) } end task :default => [:compile] desc "compile java classes" task :compile do puts "compiling java classes to [classes]..." @ant.javac(:srcdir => "../src", :destdir => "classes", :deprecation=>true, :debug=>true, :fork=>true){ @ant.classpath(:refid => "common.class.path") @ant.compilerarg(:value =>"-Xlint") } end Collin Fagan replied on Wed, 2008/09/03 - 2:58pm Alex Prayle replied on Wed, 2008/09/03 - 7:29pm A task commonly required by one developer may never be required by another. You just need to either find or build a more special purpose Java library to do whatever tasks you commonly do as concisely as you require: So for example, if text file processing is your thing, the above example could be implemented even more concisely in Java as: extractLines("myfile.txt", new ContainsCriteria("blue"), System.out); or even extractLinesContaining("myfile.txt", "blue"); A more general purpose API will generally require more code to use it. phil swenson replied on Wed, 2008/09/03 - 9:09pm in response to: Alex Prayle "A task commonly required by one developer may never be required by another. You just need to either find or build a more special purpose Java library to do whatever tasks you commonly do as concisely as you require" Yeah, you can do what everyone does and write a bunch of utility classes like they did for the Apache Commons. But I don't think Sun deserves a defense here. They always pick the most complex use cases and ignore the common ones. I don't mind handling the complex cases, but why do you have to deal with streams for opening a file? It's silly. Most other languages make it easy. There is a reason almost every project has classes like "XMLUtil, FileUtil, StringUtil, EmailUtil, etc." It's because Sun makes things much more complicated than they need to. Ray Krueger replied on Wed, 2008/09/03 - 9:43pm in response to: Ricky Clarkson You probably shouldn't keep sayin it then, because you're wrong :) int i="hello" This is not valid groovy code. You've declared i to be of type int, and therefore cannot assign a string to it. Just like Java :) If you want to drop the type declaration and make it valid to swap types you can "def" it without a type. def i = "hello" i = 10 println i That's valid groovy code, because I've explicitly stated that I don't care what i is, really all I care about is that it has a toString() for the println call. Ricky Clarkson replied on Thu, 2008/09/04 - 12:31am Artur Biesiadowski replied on Fri, 2008/09/05 - 8:14am in response to: Ricky Clarkson [quote=rickyclarkson]int i="hello"; is valid Groovy code. The compiler happily generates .class files. Yes, it throws an exception at runtime, but it should not get past the compiler if Groovy supports static typing.[/quote] I think that there is confusion about static versus strong typing. Plus even in java you can do int a = (Integer)(Object)"test"; which can be perfectly detected by compiler to be impossible and will fail only on runtime. Number of casts you have to do to fool the compiler is not a real difference between staticically/stronly types languages and their counterparts. Mike P(Okidoky) replied on Fri, 2008/09/05 - 12:16pm in response to: Ricky Clarkson And not only that, even if Groovy *knows* that a type is set, it still makes a function call for each operation, totally crippling performance. The Groovy boys should *really* fix this ! Without it, they can *not* expect to be taken seriously. Thomas Mueller replied on Sun, 2008/09/07 - 2:15pm in response to: ff aaa [quote]Would ant build scripts be better expressed in Java?[/quote] I have started the project PJMake, a Pure Java Make tool. I use this build tool in my H2 Database Engine. The target described in the article would look like this: Ricky Clarkson replied on Mon, 2008/09/08 - 3:56am "I think that there is confusion about static versus strong typing" Strong typing means whatever you want it to on the day. Static typing at least has a reasonably well-accepted definition. "Plus even in java you can do int a = (Integer)(Object)"test";" Every static type system disallows some valid programs, and allows some invalid programs (until you get to dependent typing, perhaps). Their quality doesn't make them more static or less static, just more useful or less useful. Haskell, for example, has no cast operator (though you can write one through 'unsafe' code), disallowing a huge set of invalid programs, but requiring the refactoring of some valid ones. Henry Story replied on Mon, 2008/09/08 - 6:17am Tim Boudreau had recently also suggesting replacing Ant with Java using a similar argument as the one given above. But I think what is missing here is something a lot more important. Replacing ant with java, with Groovy, with python etc. is all much of a muchness. You end up loosing something, which sadly Ant does not make very clearly visible. Networked data, or hyperdata. What would be a lot more interesting would be to replace ant with RDF, as I argue on a post of the same name. This is because most applications now rely on libraries distributed around the web. Doing that in Java too is possible, but it really is not designed for it. (Some very good RDF stores and libraries are written in Java on the other hand) Dominique De Vito replied on Mon, 2008/09/08 - 8:04am I rather prefer to script with Java than ANT, if possible. Because, following that way I can increase my own Java library that could be used also for my other Java projects. It could be interesting for the Java community to promote and support a Java-syntax-like language for scripting, may be a DSL, at least, a language that could be easily compiled into Java. I see the following advantages: - no more cumbersome XML (in fact the creator of ANT says he is not now happy to have introduced XML into ANT), - full advantages for calling Java APIs (easier than ANT) - easier to learn for Java programmers - it could be an experimental platform for introducing, for example, programming sugar that could be next introduced into the Java language itself - it could be also a good testbed for Java API in order to know if they are complete enough or if they provide short ways in order to program concisely. phil swenson replied on Mon, 2008/09/08 - 9:13am in response to: Dominique De Vito Dominique, what you describe is Groovy/Gant. I personally prefer Ruby/Rake as I described above, but Gant is a very nice way to go too. I don't think there is any reason to re-invent the wheel here. There are other more complex projects out there like Graven and Buildr as well, but I personally found them to be overkill. "- no more cumbersome XML (in fact the creator of ANT says he is not now happy to have introduced XML into ANT)" Yep, James Duncan Davidson, creator of ANT and Tomcat is now a huge ruby advocate. And you pure java guys are nuts IMO. I guess it's better than XML, but Java really sucks at this stuff. No closures, lousy file API, very verbose for a build file. Thomas Mueller replied on Mon, 2008/09/08 - 2:50pm in response to: Dominique De Vito Jose Maria Arranz replied on Thu, 2014/01/23 - 3:42am Six years later your idea is real: Take a look to code, don't ignore the hashbang (#!/usr/bin/env jproxysh): or this one:
http://java.dzone.com/news/java-a-scripting-language
CC-MAIN-2014-15
refinedweb
2,511
62.58
Using Fabric to Deploy CaltrainJS I wrote earlier about different options to deploy applications using Python, and one of the simplest tools is Fabric. While Fabric might not be enough for my work projects, it worked great on my Caltrain schedules application. My project is really simple, but once I started looking at running Javascript compression and checking tools I realized I needed to add some automation to the process. I considered three options: Makefiles, SCons and Fabric. In the end I chose Fabric because I didn’t need to do much in the build steps, but I needed a quick and painless way to upload my changes to a staging site for tests. I think Fabric handles the last step best. My build step is mostly copying files around. I have two Javascript files, one of which is produced by CaltrainPy. The format is pretty compressed already, so I just run sed on it to shorten AM to A, PM to P and take out - from certain fields. Previously I was doing this in the online application, but it makes sense to do this in the build step. I run the ShrinkSafe Javascript compression tool from the dojo project for the second file and combine the two scripts into a single file. I find it somewhat odd that a Javascript compression tool uses Java, but whatever works… The last mildly interesting part is the generation of a sitemap XML file, for which I just wrote a simple Python function. In the Fabric build step this becomes: def build(): "Build locally" # schedule.js is already compact, just minor tweak local("""sed -e 's/ AM"/ A"/g' -e 's/ PM"/ P"/g' -e 's/"-"/""/g' schedule.js >../stage/caltrain.js""") # our main script has a lot to compress local('java -jar custom_rhino.jar -w -fatal-warnings -c caltrain.js >>../stage/caltrain.js') sitemap() Deploying to staging is even simpler: def stage(): "Deploy into staging environment on server" put('../stage/index.html', 'staging/path/on/server/index.html') ... The one gotcha I run into with staging is that current version of Fabric does not seem to support wildcards. I first tried to put('*.html', '/path/on/server/') but that wouldn’t work. It seems like such a basic feature that I wouldn’t be surprised if it will be included soon..
http://www.heikkitoivonen.net/blog/2008/10/22/using-fabric-to-deploy-caltrainjs/
crawl-002
refinedweb
389
70.94
There used to be a time when developers would need to learn Swift/Objective C to build an iOS app, or Java if they wanted to build an Android app. We have now reached an exciting time where web developers can use their existing skills to build both websites and apps without having to learn a completely new language. React Native is a JavaScript library developed by Facebook. It was released back in 2013 and has helped shape apps like Skype, Bloomberg, Wix and many more. Not only can you use your existing knowledge of JavaScript but you can also use the same codebase to build for both iOS and Android. Building an app in React Native with this tutorial is a great starting point for your own app, and it could easily be improved upon by adding more screens, displaying errors on the front end and much more. You can get the project files you need at Github. Not quite what you're looking for? See our post full of different tutorials on how to build an app. 01. Get started To get started building your React Native project, you will need to make sure that you have Node.js installed. You can then install the create-react-native-app command line utility by opening a new a terminal window and running the following command: npm install -g create-react-native-app You can then create your new project by running the following command: create-react-native-app YourAppName You will then need to navigate to your folder via the command line and start the development server. cd myNewApp npm start You can then begin working on your app by opening the App.js file using a code editor. 02. Run your app Since you used create-react-native-app via the command line, you can use the Expo client app to test your application Since you used create-react-native-app via the command line to build your project, you can use the Expo client app to test your application. All you need to do is download the app from the iOS App Store or Google Play Store and then scan the QR code from inside the terminal. You will need to make sure your device is on the same Wi-Fi network as your computer. You can also use the iPhone or Android simulator, if you have Xcode or Android Studio installed. 03. Create a basic login We’re now going to create a fully functional login screen so users can login, register and sign out Let’s start by adding something very basic. To add some text to your application, you will need to type: <Text>Login Page<Text> Working with styles is very similar to CSS. If you wanted to add a style to the line of text that you just created, you would simply edit that line of text to: <Text style={styles.mainheader}> My Login App</Text> You can then add the text style under Stylesheet.create. mainheader: { color: ‘#000066’, textAlign: ‘center’, fontSize: 16 }, We are now going to create a fully functional login screen so that users can login, register for a new account, sign out and even reset their password. This is something you will see a lot in mobile apps, so it lays down a nice foundation for future projects. 04. Set up Firebase and NativeBase Firebase is what we will use for our user authentication. We will need to setup the Firebase config just underneath the import commands We are going to start by installing three more libraries. The first is called Firebase, which is what we will use for our user authentication, and the second is called NativeBase, which is a UI component library. The last one is called React Native Dialog Input, which enables us to display a dialogue box where users can enter text. Navigate to your project folder using the command line and enter the below: npm install firebase npm install native-base npm install --save react-native-dialog-input Make sure you import Firebase, NativeBase and React Native Dialog Input at the top of the App.js file. import * as firebase from ‘firebase’; import { Container, Content, Header, Form, Input, Item, Button, Label } from ‘native-base’ import DialogInput from ‘react-native-dialog-input’; Next, we will need to setup the Firebase config just underneath the import commands. You will need to go and setup an account with Firebase to get your various settings. You can do this by registering at Firebase and creating a new project. Remember that you will need to enable email and password authentication from the dashboard. var config = { apiKey: “<API_KEY>”, authDomain: “<PROJECT_ID>.firebaseapp.com”, databaseURL: “https://<DATABASE_NAME>.firebaseio. com”, projectId: “<PROJECT_ID>”, storageBucket: “<BUCKET>.appspot.com”, messagingSenderId: “<SENDER_ID>” }; firebase.initializeApp(config); 05. Build the container We will also create three buttons: one to login, one to sign up and the final button is for when a user wants to reset their password The next step is to remove the <View> section underneath render(), which was automatically placed there by React upon creating the project, and replace it with the following container to setup the login form. The form will contain a label and an input field for both an email address and password. We will also create three buttons: one to login, one to sign up and the final button is for when a user wants to reset their password. We will set a margin at the top of each button to 10 pixels and set the font colour to white. <Container style={styles.container}> <Text style={styles.mainheader}>Login Page</Text> <Form> <Item> <Label>Email</Label> <Input autocorrect={false}> </Item> <Item> <Label>Password</Label> <Input secureTextEntry={true}> </Item> <Button style={{marginTop:10}} primary full rounded> <Text style={{color: ‘white’}}>Login</Text> </Button> <Button style={{marginTop:10}} success full rounded> <Text style={{color: ‘white’}}>Sign Up</Text> </Button> <Button style={{marginTop:10}} warning full rounded> <Text style={{color: ‘white’}}>Forgot Password</ Text> </Button> </Form> </Container> 06. Set up the events Firstly, we need to set up a constructor to set up the default state. The email and password default values will be set to empty. We will also set the value of isDialogVisible to false: this is going to be used for our password reset dialog box later on. constructor(props) { super(props) this.state=({ email: ‘’, password: ‘’, isDialogVisible: false }) } We will now add onChangeText events to both of our text inputs, so that every time the user types something into the email or password fields, it will update the state of both email and password to that value. onChangeText={(email) => this.setState({ email })} onChangeText={(password) => this.setState({ password })} We also need to add onPress functions to our login, sign-up and forgotten password buttons. Each one will call a different function. The login button will call a function called loginUser, the sign-up button will call signUpUser and the forgotten password button will call forgotPassword. onPress={() => this.loginUser(this.state.email, this.state. password)}’ onPress={() => this.signUpUser(this.state.email, this.state. password)} onPress={() => this.forgotPassword()} 07. Make sign-up function It’s now time to begin building out our functions. We will begin with the sign-up function (signUpUser), which will attempt to create a new user inside Firebase; if it succeeds, then we will display an onscreen alert to inform the user that their account has been set up. However, if the user chooses a password that is less six characters in length, it will prompt them to enter something that is a minimum of six characters long. Finally, we need to add the catch error handler, so that if the sign-up attempt fails through Firebase, we will print the error message to the console. signUpUser = (email, password) => { try { if(this.state.password.length<6) { alert(“Please enter at least 6 characters”) return; } firebase.auth().createUserWithEmailAndPassword(ema il, password) alert(“Congratulations, your account has been setup”) } catch(error){ console.log(error.toString()) } } 07. Add login function If the user successfully signs in, it will display an alert to say the sign in was successfully, along with a sign out button Next, we will add the login (loginUser) function. This will try to log in the user with their email and password. If the user successfully signs in, it will display an alert to say that sign-in was successful, along with a sign out button. Once again, we will need to make sure we add a catch error handler in case there is an error with the login attempt. loginUser = (email, password) => { try { firebase.auth().signInWithEmailAndPassword(email, password).then((user) =>{ Alert.alert( ‘Signed In’, ‘You have signed in. Well done!’, [ {text: ‘Sign Out’, onPress: this.signOutUser}, ], { cancelable: false } ) }) } catch(error) { console.log(error.toString()) } } 08. Add sign out function It’s now on to the sign-out function, which ensures that the user is signed out once they click the sign out button on the alert. signOutUser = () => { firebase.auth().signOut().then(function (user){ }).catch(function(error) { console.log(error) }); } 09. Create forgot password function To finish off our project, we are going to build out a function that will enable the user to easily reset their password in case they’ve either forgotten it or want to change it for some other reason. First, we need to create the dialog box just outside of our <form> tags. <DialogInput isDialogVisible={this.state. isDialogVisible} title={“Forgot Password”} message={“Please input your email address”} hintInput ={“john@test.com”} submitInput={ (useremail) => {this. sendReset(useremail)} } closeDialog={ () => { this.setState({ isDialogVisible: this.state.isDialogVisible = false })}}> </DialogInput> We now need to make the dialog box appear, so we will create the forgotPassword function, which will change the state of isDialogVisible to true. forgotPassword = () => { this.setState({ isDialogVisible: this.state.isDialogVisible = true }) } The dialog box will prompt the user to enter their email address. If the user clicks the cancel button, then the box will close, as it changes the state of the isDialogVisible back to false. If the user clicks the submit button then it will call a function called sendReset along with the email address. Inside our sendReset, we will use the email address to create the Firebase sendPasswordResetEmail request. sendReset = (useremail) => { var auth = firebase.auth(); auth.sendPasswordResetEmail(useremail).then(function() { alert(“Password reset email has been sent”) }).catch(function(error) { console.log(error) }); } This article was originally published in issue 312 of net magazine. Buy issue 312 or subscribe here. Related articles:
https://www.creativebloq.com/features/build-cross-platform-apps-with-react-native
CC-MAIN-2019-30
refinedweb
1,746
60.95
bug in PIL 1.1.7 Image.split() It looks like there is a bug in image.py in PIL 1.1.7 ( 1.1.6 was OK) giving an error of: object has no attribute 'bands' Changing the lines 1494 (etc) to the below in "Image.py" seems to be a workaround def split(self): "Split image into bands" self.load() if self.im.bands == 1: ims = [self.copy()] else: ims = [] self.load() for i in range(self.im.bands): ims.append(self._new(self.im.getband(i))) return tuple(ims) this is fixed in the master branch, and the fix was backported to 1.1.7 in the debian/ubuntu packages. I've forked pil-117 into and pushed the fix, and have just now issued a pull request.
https://bitbucket.org/effbot/pil-2009-raclette/issue/12/bug-in-pil-117-imagesplit
CC-MAIN-2015-14
refinedweb
131
79.97
From No, your Virtual Machines don't shut down when you disconnect from rdp/ssh. However: There's always the possibility that your machine will restart, even if infrequently (e.g. failed hardware, Host OS maintenance, etc.). So... your essential programs should be set up to auto-start at boot time, without requiring you to rdp/ssh in to start them up again. I think this might be the download URL you're looking for - If you access the Java application like this: 10.0.0.1JavaApp.jar you are still running the application from the local machine. In order for it to be run on the server you would have a Java process running on that server and then on the local machine you would have another Java application that is the client for the server application. Nope! The only way to internally connect that different resources is via Azure Virtual Network. And as of today, Web Sites does not support Virtual Network. Cloud Services however do support Virtual Network, as do Virtual Machines. So if you move your deployment to Cloud Service (Web Role) you will be able to connect it to the VM via internal IP Address and never have to go "out". My believe (and my speculation) that we might eventually see in the future Web Sites being able to connect to Azure Virtual Network, at least for the Reserved Instances. Ok, I found a workaround to the issue. It seems that the issue only occurs when the virtual machine is on the same machine as the remote machine. After launching a virtual machine independently from the remote machine, it worked. You probably want to use Windows Management Instrumentation (WMI). In .NET, you access it through classes in the System.Management namespace. You can also connect to remote machine event logs using the System.Diagnostics.EventLog class. There are some good examples in the documentation. You might also get some relevant information using performance counters. See PerformanceCounter and related classes.. 1) You can simply backup your databases and restore them onto the VM's SQL Server 2) Yes. 3) Correct. 4) Telerik controls are client controls and should have no bearing on your server-side deployment platform 5) OS space on the VM is part of the price of the VM's hourly fee 6) What are you looking to do with virtual network? I had same problem. I'm just assuming but you probably forgot to to edit the mailer and add a 'user' parameter to each action. # app/mailers/user_mailer.rb def activation_needed_email(user) ... def activation_success_email(user) ... I found the solution to my problem. I am slightly embarassed by the solution. The solution was to adjust the size of the label template, which was done in "Advanced" under printer settings. After having done that I could enjoy my labels printed out in full size :) I never found out why there was a difference in the printer settings from the host machine to the virtual machine. I thought the settings would be the same. But never mind, now it works. First of all make sure in the setting of the virtual machine that it is in Bridged Adaptor type connectivity. Give a static ip to the virtual machine using sudo ifconfig eth0 10.0.0.100 netmask 255.255.255.0 the above is an example. You got what I mean. 3.Third, Try pinging the virtual machine with the base machine running the vbox to make sure they are connected. ping ipaddress-basemachine 4 . Now if everything is working fine from there then connect with this virtual machine with other base machine using Remote Desktop Viewer or any other similar application. Specify, the ip-address of the vbox and username, password. It will be able to connect with it now. 5 . If it still is not able to then try to check the firewall rules on both the virtual box machine and the base machine I ended up doing the following: Stop the instance, detach the root volume and attach it as another partition on another Windows instance. Once attached, go to disk management and set it online, mount the registry hive and make the proper changes. Navigate to the following location in the registry). If the server was placed in drain mode, then locate a REG_DWORD value named TSServerDrainMode and change the value data from 1 (Remote Desktop disabled) to 0 (Remote Desktop enabled). Unmount registry, set t Edit your Catlina.bat so that your -Xmx settings are less than your physical memory See Tomcat 7: How to set initial heap size correctly? You might find webHDFS REST API useful. I have tried it to write content from my local FS to the HDFS and it works fine. But being REST based it should work fine from local FS of a remote machine as well, provided both machines are connected. You're getting that error because the (localDB)v11.0 is already a user instance of SQL Server. When you were using the SQLEXPRESS SQL Server instance, well, it's not. So, just get rid of the User Instance=true on the connection string. This has been posted quite a while back and I guess that you have already found your solution...But for other people out there: I was trying to create a virtual directory in VS2012 for my web service and I was getting the same error as you: "unable to create the virtual directory. the url is mapped to a different folder on an iis" So I check the IIS to make sure that I hadn't already configured one with the name "mywebservice", and it wasn't configured, still I was getting the error. Solution: I created "mywebservice" virtual directory in the IIS, which was successful, then deleted it, and created it again, this time via VS2012 and voila! all happy now Which version of WP are you developing for? WP 8 requires Hyper-V to be enabled and installed. You can enable Hyper - V in you BIOS, if your processor supports it. Once you enable it in BIOS, you also have to enable it in Programs and Settings in the control panel. Here's an MSDN reference on the topic of Hyper-V and WP8: Arun Rana, That should be possible and AFAIK there isn't any restriction for SQL Server licence to be used in Azure, however take a look at License Mobility through Software Assurance on Windows Azure. If your licence is from MSDN, then it now absolutely legal use it on Azure. If you are unsure about the licence problem and other litigation, you may spin up a VM with SQL Server 2012 pre-installed and this will solve your licence trouble (but right now only available with the combo of SQL Server 2012 on Windows Server 2008 R2). PS / Personal Experience : SQL Server + some server component isn't performing well for a Medium instance. Either split the server roles or consider choosing a bigger instance size like xLarge or A6. I think the SendGrid can help you achieve what you are looking for. Here are the details about it: here is the documentation for the technicalities: I hope this helps you, let me know if you need anything else. Yes. Hurry... your question will be frozen soon by the mods here :) All the best buddy. Detailed answer here; If your trial goes off and you want to continue in using the Windows Azure, you have to start pay for it. This means, buy on of them subscriptions. After purchasing a subscription, in my case it was pay-as-you-go subscription, you have to (!!!) contact support by creating Support ticket and ask for migrating your servers from Trial subscription to your new subscription. Fortunately, the support works pretty fast and this whole transition had taken only about two hours. But still, this is silly solution from MS. I little bit offtopic, but have you consider using azure media services? it supports on demand and live streaming plus a set of options to use 3d party encoders, encryption, drm protection. Yes, your guest VM will be able to run OpenGL 1.x and newer. Install the VirtualBox Guest Additions via the "Devices" menu in the virtual machine's menu bar VirtualBox has a handy menu item named "Install guest additions", which mounts the Guest Additions ISO file inside your virtual machine This will enable 3D hardware acceleration in your guest machine (OpenGL and Direct3D 8/9). VirtualBox states that because 3D support is still experimental at this time, it is disabled by default and must be manually enabled in the VM settings The GC throws this exception when too much time is spent in garbage collection without collecting anything. I believe the default settings are 98% of CPU time being spent on GC with only 2% of heap being recovered. This is to prevent applications from running for an extended period of time while making no progress because the heap is too small. You can turn this off with the command line option -XX:-UseGCOverheadLimit What you are seeing is a little magic behind the scenes of the portal. When you deploy a Virtual Machine it does indeed end up in a Cloud Service container. This was recently changed to be more obvious in the portal as well, previously they were hiding this from you. When people would delete the VM or add other VMs to the same group (previously called something like "attach to") the cloud service would show up in the portal. People got confused on what this was and it just caused more questions than it was worth. In the portal now you are prompted for a cloud service. If you do the quick create when it asks for the DNS name (and shows the .cloudapp.net next to it) that's the Cloud Service name you are providing. When you do the create via the Gallery it is even more obvious on ste You need both the virtual machines to be under the same cloud service. Only then you get the option to load balance them. There is no way add existing VMs to the same network. there are operations in the Service management API (usable through powershell) to create a new VM. You can use that to create a fresh VM from your existing image and connect it to the same service as your first VM. Then you'llhave the necessary options enabled for load balancing.). There is no general advice which version to install. Visual Studio runs in a 32bit process, but your project can target 64bit because it will be debugged within another process. The main question is: How much RAM do you think your database will need? There aren't any drawbacks on installing 64bit software on 64bit machines, but 32bit software will be executed on a compatibility layer (WoW64 - windows on windows 64). Using 32 or 64 bit software on old hardware makes no difference, either. As long as the processor supports it! The problem is you haven't actually "deployed" anything. You just copied your rails app to a VM and ran rails s. A production rails applications is not deployed in this fashion. Consider using either nginx with unicorn or apache/nginx with passenger. Create an alias in SQL Server Configuration Manager and give it the name the replication component expects. The alias contains the cloupapp url and port. Now reconnect using the simple alias. Just did this for my cloudapp.net VM. Azure attached disks, just like the OS disk, is stored as a vhd, inside a blob in Azure Storage. This is durable storage: triple-replicated within the data center, and optionally geo-replicated to a neighboring data center. That said: If you delete something, it's instantly deleted. No going back. So... then the question is, how to deal with backups from an app-level perspective. To that end: You can make snapshots of the blob storing the vhd. A snapshot is basically a linked list of all the pages in use. In the event you make changes, then you start incurring additional storage, as additional page blobs are allocated. The neat thing is: you can take a snapshot, then in the future, copy the snapshot to its own vhd. Basically it's like rolling backups with little space used, in the event If in IIS you click on the virtual Directory and click convert to application. Visual Studio will no longer build or publish that virtuqal directory (now applciation). You do not need to convert your web site project to a web application project, or go through anything more complicated. Of note: I was having trouble with the CFIDE virtual directory, as this site is also using coldfusion for some other pre-built parts. Converting CFIDE from a virtual directory to an application has had no affect on Coldfusion, nor any files inside the native CFIDE directory. The VM sizes each have their own bandwidth limitations. | VM Size | Bandwidth | | ------------- |:-------------:| | Extra Small | 5 (Mbps) | | Small | 100 (Mbps) | | Medium | 200 (Mbps) | | Large | 400 (Mbps) | | Extra Large | 800 (Mbps) | I suspect you always have one copy of your mounted VHD and have ~150 instances hitting it. Increasing the VM size of the VM hosting the VHD would be a good test but an expensive solution. Longer term put the files in blob storage. That means modifying your scripts to access RESTful endpoints. It might be easiest to create 2-3 drives on 2-3 different VMs and write a script that ensures they have the same files. Your scripts could randomly hit one of the 2-3 mounted VHDs to spread out the load. H You problem is that (probably) your mysql is bind to 127.0.0.1 instead of 0.0.0.0. You should change bind in /etc/mysql/my.cnf to 0.0.0.0 bind-address = 0.0.0.0 And then restart mysql of course. When you use Windows Azure Virtual Machines and you take care of software installed on it (inc. SQL Server whatever version), you have to proceed as normally on a local ritualized environment. Meaning that you don't necessarily have to implement retry logic. Late answer, but for the benefit of anyone arriving here from a search engine, see answer from Craig Landis at this link: In summary, there was a change recently so that when you create an endpoint in the portal, it is automatically load balanced and regularly "probed" to check if the endpoint is alive. At the time of writing, the portal will still display the endpoint status as 'not load balanced'. You have to use the Azure CLI tool or PowerShell to recreate the endpoint without load balancing - it cannot be done from the portal. It is the load balancer that tries to probe the endpoint every 15 seconds, and can take the endpoint out of service if it doesn't Yes, the .class file is JVM bytecode. The JVM (invoked by java foo for foo.class) interprets (some JVMs JIT the bytecode into native machine code at runtime) the bytecode.
http://www.w3hello.com/questions/When-an-Active-Directory-User-Logs-in-to-Local-Windows-2012-Server-Launch-a-Virtual-Machine-Possible-
CC-MAIN-2018-17
refinedweb
2,532
63.39
Tracking memory leaks with Dowser. Have you got a recipe for dowser with Turbogears? I have a nasty memory leak I'd love to squash. Thanks, this is an interesting app that I hadn't known about. Rob, The dowser package consists of a single CherryPy Root object. You should be able to mount it on your Turbogears app like any other controller: cherrypy.tree.mount(dowser.Root(), '/dowser', config) I'm using the stable version of Turbogears which is based on CherryPy 2, but dowser seems to only work with CherryPy 3. from cherrypy import tools ImportError: cannot import name tools I got around that, but then there's quickstart. Do you think it'll be port-able to CP 2 without too much work? Thanks, Rob Rob, CP 2.3 support added in. You may have to tweak the staticfilter and/or url settings if you mount dowser somewhere other than /. Awesome, thanks, I'll give it a shot with TurboGears. What license is this under? I don't see any license in the code or svn tree. For Turbogears it was as simple as: import dowser Then in my Root controller: class Root(controllers.RootController): memleak = dowser.Root() Then point your browser at localhost:7878/memleak Hmmm. Now to figure out what all that output is telling me... Here's a minimal but complete example for anyone just trying to get started. import cherrypy import dowser class Root: @cherrypy.expose def index(self): return "hello, world." index.exposed = True if name == 'main': cherrypy.engine.autoreload.unsubscribe() cherrypy.config.update({'server.socket_port': 8088}) # cherrypy.tree.mount(dowser.Root(), '/dowser') # # hello, world. cherrypy.quickstart(Root(),'/')
http://www.aminus.org/blogs/index.php/2008/06/11/tracking-memory-leaks-with-dowser
CC-MAIN-2017-22
refinedweb
276
70.19
Originally posted by jaiganesh javab: Hi Friends, Here is the link for Java Interview Questions. I hope it will be useful for all. Visit the site:Java Interview Questions Regards, JaiJava Interview Questions Q: Can I have multiple main methods in the same class? A: No the program fails to compile. The compiler says that the main method is already defined in the class. Originally posted by Ankur Sharma: public class Sample{ public static void main(String args[]){ System.out.println(" Hello World in Original"); main(new int[] {1}); } public static void main(int a[]){ System.out.println(" Hello: "+a[0]); } } Correct this information. So that nobody may mislead Originally posted by Devesh H Rao: By multiple main's he meant PSVM's. The main that you have defined is not a PSVM.... it is a method which belongs to class Sample. PSVM is the short form for public static void main method which is always looked up by the JVM for the starting point to a program Originally posted by Srinivasa Raghavan: I do agree that PSVM is public static void main but not public static void main with String args. So going to the main discussion Can I have multiple main methods in the same class Yes can have, because it also a normal method & can be overriden with different args. For your answer the question should be like this Can JVM invoke a main method with an argument other than String[] ? A big No !
http://www.coderanch.com/t/31395/Jobs/careers/Java-Interview-Questions
CC-MAIN-2014-15
refinedweb
247
72.46
tipple Tipple is simple - so simple in fact, it has no dependencies. If you're working with REST and want an easy way to manage data fetching on the client side, this might just be the way to go. How does it work? There's two key parts to Tipple: - Request state managment - a fancy way of saying Tipple will manage the numerous states of your API calls so you don't have to. - Domain based integrity - because each request is tied to a domain (e.g. users, posts, comments), Tipple can force data to be re-fetched whenever domain(s) have been mutated. Getting started Install tipple I'm sure you've done this before npm i tipple Configure the context Tipple uses React's context to store the responses and integrity states of requests. You'll want to put the provider in the root of your project. import { TippleProvider } from 'tipple'; import { AppContent } from './AppContent'; export const App = () => ( <TippleProvider baseUrl=""> <AppContent /> </TippleProvider> ); Start requesting The useFetch hook will fetch the data you need on mount import { useFetch } from 'tipple'; interface User { id: number; name: string; } const MyComponent = () => { const [state, refetch] = useFetch<User[]>('/', { domains: ['users'] }); const { fetching, error, data } = state; if (fetching && data === undefined) { return <p>Fetching</p>; } if (error || data === undefined) { return <p>Something went wrong</p>; } return ( <> {data.map(user => ( <h2 key={user.id}>{user.name}</h2> ))} <button onClick={refetch}>Refetch</button> </> ); };
https://reactjsexample.com/a-lightweight-dependency-free-library-for-fetching-data-over-rest-in-react/
CC-MAIN-2021-21
refinedweb
235
62.17
This is the mail archive of the gdb@sourceware.org mailing list for the GDB project. 5.3 branch snapshots disabled 8-byte register values on a 32-bit machine Re: [ECOS] Can't connect to remote Evaluator 7T [Fwd: [rfa] `struct _bfd' -> `struct bfd'] [maint/gnats] need more categories [maint] drop a few things [maint] Drop sim/mips maintainership [maint] GDB needs more local maintainers [maint] The GDB maintenance process [PATCH] Re: regcache (Re: GDB respin) [Proposal] GDB honouring RPATH in binaries. [rfc] C++ namespaces Re: [RFC] File-I/O, target access to host file system via gdb remoteprotocol enhancement Re: [rfc] xfailed tests in gdb.c++/classes.exp Re: [Various] obsoleting the annotate level 2 interface Adding commandline parameters in a multi-arch port Adding file to gdb Adding MAC registers on H8300 Simulator [H8S/2600] An Annotated Proposal ARI `asection' and `sec_ptr' Arm Register Transmission arm-gdbserver support thread debug? arm-wince-pe: ARM simulator ARM7, remote GDB, Software Breakpoints automatic commands...? Backtrace not giving meaningful info Breakpoints in shared objects not working build broken when no msgfmt available? building GDB to debug code on arm-thumb cross-compiler c++ stabs+ direction needed Re: c++/1025: failures in gdb.c++/ovldbreak.exp Re: c++/544: gdb.c++/annota2.exp: annotate-quit test sometimes fails Can I get read access to /cvs/gdbadmin? Can you help me? Can't build gdb CVS sources for {sparc64,sparc}-linux, redefinition of structs in reg.h Re: Can't configure gdb-5.3 for sparc64-linux: Could not find a term library Can't connect to remote Evaluator 7T can't run arm-elf-gdb in Cygwin Re: can't run arm-elf-gdb in cygwin Clean up gdb.c++ tests for dwarf 1 Configuring gdb for arm Current snapshot d10v is a deprecated free zone deferred breakpoints Dummy frames broken on PowerPC DWARF 2 sections and padding ? Re: DWARF-2 and address sizes DWARF2 reader dwarf2_get_pc_bounds problem emulating single-step in gdbserver? frame_register_unwind(): "frame != NULL" assertion failure gdb (CVS) plus breakpoint at main oddity gdb + dynamic libs problem gdb 5.3 bug GDB 5.3 BUG Report for H8/300H and H8S gdb 5.3 doesn't find line numbers gdb 5.3 versus gdb 2003-02-23-cvs gdb 5.3 versus gdb HEAD%20030201 gdb 5.3 versus gdb HEAD%20030201 (draft #2) gdb 5.3 versus gdb HEAD%200302015 gdb detach GDB honouring RPATH in binaries. Re: GDB respin Re: GDB Speak: `inferior' rather than `target'? GDB's roles gdb-mi, gdb-5.3 or cvs? gdb.c++/annota2.exp: annotate-quit GDB/MI GDB/MI & CLI commands GDB/MI absolute path GDB/MI revisited GDB/MI stream separation gdb_suppress_entire_file must die gdbserver relocation? GNATS trouble? gnatsweb host guess damaged in CVS? How Do I see (Disabled) data? How long to leave a PR in feedback? How much should I cleanup? How to configure gdb on arm-linux (for CDB89712) How to debug a 64bits application? Re: How to define shared libray symbol file for GDB in cross debug en vironment? How to define shared libray symbol file for GDB in cross debug environment? How to know which processor I'm targetting at runtime how to obtain output with debug information in dwarf3 format Identifying a dummy frame using a frame id inferior calls / call dummies Re: Info about gnu dg/ux tools info threads CLI output is broken Insight assertions (arm-elf) Insight Snapshots.... Insight testsuite updated interps-20030202-branch created Is stub support for the 's' packet optional or required? local labels in functions LSYM/LBRAC/RBRAC order Re: MAC instructions on H8300S [H8S/2600] make me a linespec maintainer! moving watchpoint fns into target vector? Multi-Arch symbol read warning message Multiprocessor remote debugging Need some help with Running GDB on Strong Arm New Ada-aware GDB 5.3 sources available New categories: win32 and testsuite newbie gdb help... obsoleting annotate level 2 Re: obsoleting the annotate level 2 interface The perils of strcmp_iw PING: c++ stabs+ direction needed please call me Priority `high' is reserved Problems in the edge of functions Problems outputting a string Problems with "disassemble" in CVS Proposal to obsolete i[3456]86-*-dgux* a question about gdb and simulator Question on GDB debug trace synchronization question wrt clone & gdb Questions about GDB/MI random gdb.trace failures read_register() and write_register() on deprecate hit list Reference to .debug_loc regcache (Re: GDB respin) relative file paths? Re: relocation of shared libs not based at 0 Remote breakpoint problem remote debugging: reading code from the executable iso remote memory Re: remote debugging: reading code from the executable iso remotememory RFC: Variables in blocks of registers ser-pipe.c porting to MinGW Re: sh-elf disassembly broken (Was: Re: RFC: Moving disassembler_command to cli land and using newer disassembler code) Shared Lib error. solib-search-path not honoured after program start src/dejagnu/Makefile.am ssh CVS access from another machine Re: Status of Ada support in GDB Status of Ada support in GDB? String handling in GDB command language variables Summary of differences between FSF GDB and ST's Micro Connectversion Target child question Target independent extension target z8k-coff Time to enable the TUI? Todays CVS checkout fails on frv, ppceabi, mips, ... top level configure isn't working Tracepoints and not stopping the target Unreviewed PATCH: SH Simulator - MAC.L implementation and MAC.W correction using GDB to debug on ARM processor running thumb mode using the 'call' command Wall Patch & Repair Tape What has replaced fork to launch external commands? What to do with threads? Where to document supported versions of binutils?
http://www.sourceware.org/ml/gdb/2003-02/subjects.html
CC-MAIN-2019-47
refinedweb
942
56.05
Tk_RestackWindow - Change a window's position in the stacking order #include <tk.h> int Tk_RestackWindow(tkwin, aboveBelow, other) Token for window to restack. Indicates new position of tkwin relative to other; must be Above or Below. Tkwin will be repositioned just above or below this window. Must be a sibling of tkwin or a descendant of a sibling. If NULL then tkwin is restacked above or below all siblings. Tk_RestackWindow changes the stacking order of $widget relative to its siblings. If other is specified as NULL then $widget is repositioned at the top or bottom of its stacking order, depending on whether aboveBelow is Above or Below. If other has a non-NULL value then $widget is repositioned just above or below other. The aboveBelow argument must have one of the symbolic values Above or Below. Both of these values are defined by the include file <X11/Xlib.h>. above, below, obscure, stacking order
http://search.cpan.org/~srezic/Tk-804.031/pod/pTk/Restack.pod
CC-MAIN-2017-04
refinedweb
154
58.99
. Iconst int sensorPin = 0; // Photo-resistor pinconst int controlPin = 1; // Potentiometer pinconst int buzzerPin = 9; // Buzzer pinconst int rLedPin = 10; // Red LED pinconst int bLedPin = 11; // Blue LED pin// Always called at startupvoid setup(){ // Set the two LED pins as output pinMode(rLedPin, OUTPUT); pinMode(buzzerPin, OUTPUT);}// This loops forevervoid also use a pulsating source of light and detect the frequency on the sensor. This will just make it more harder to break. If you just care about the code, then jump to the end: } I. I had very limited idea about how mixed mode programming on .NET works. In mixed mode the binary can have both native and managed code. They are generally programmed in a special variant of the C++ language called C++/CLI and the sources needs to be compiled with /CLR switch. For some recent work I am doing I had to ramp up on Managed C++ usage and how the .NET runtime supports the mixed mode assemblies generated by it. I wrote up some notes for myself and later thought that it might be helpful for others trying to understand the inner workings. The initial foray of C++ into the managed world was via the managed extension for C++ or MC++. This is deprecated now and was originally released on VS 2003. This MC++ syntax turned out to be too confusing and wasn’t adopted well. The MC++ was soon replaced with C++/CLI. C++/CLI added limited extension over C++ and was more well designed so that the language feels more in sync with the general C++ language specification. The code looks like below. ref class CFoo { public: CFoo() { pI = new int; *pI = 42; str = L"Hello"; } void ShowFoo() { printf("%d\n", *pI); Console::WriteLine(str); } int *pI; String^ str; }; In this code we are defining a reference type class CFoo. This class uses both managed (str) and native (pI) data types and seamlessly calls into managed and native code. There is no special code required to be written by the developer for the interop. The managed type uses special handles denoted by ^ as in String^ and native pointers continue to use * as in int*. A nice comparison between C++/CLI and C# syntax is available at the end of. Junfeng also has a good post at The benefits of using mixed mode Seamless, static type-checked, implicit, interop between managed and native code is the biggest draw to C++/CLI. Calls from managed to native and vice versa are transparently handled and can be intermixed. E.g. managed --> unmanaged --> managed calls are transparently handled without the developer having to do anything special. This technology is called IJW (it just works). We will use the following code to understand the flow. #pragma managed void ManagedAgain(int n) { Console::WriteLine(L"Managed again {0}", n); } #pragma unmanaged void NativePrint(int n) { wprintf(L"Native Hello World %u\n\n", n); ManagedAgain(n); } #pragma managed void ManagedPrint(int n) { Console::WriteLine(L"Managed {0}", n); NativePrint(n); } The call flow goes from ManagedPrint --> NativePrint –> ManagedAgain Native to Managed For every managed method a managed and an unmanaged entry point is created by the C++ compiler. The unmanaged entry point is a thunk/call-forwarder, it sets up the right managed context and calls into the managed entry point. It is called the IJW thunk. When a native function calls into a managed function the compiler actually binds the call to the native forwarding entry point for the managed function. If we inspect the disassembly of the NativePrint we see the following code is generated to call into the ManagedAgain function 00D41084 mov ecx,dword ptr [n] // Store NativePrint argument n to ECX 00D41087 push ecx // Push n onto stack 00D41088 call ManagedAgain (0D4105Dh) // Call IJW Thunk Now at 0x0D4105D is the address for the native entry point. If forwards the call to the actual managed implementation ManagedAgain: 00D4105D jmp dword ptr [__mep@?ManagedAgain@@$$FYAXH@Z (0D4D000h)] Managed to Native In the case where a managed function calls into a native function standard P/Invoke is used. The compiler just defines a P/Invoke signature for the native function in MSIL .method assembly static pinvokeimpl(/* No map */) void modopt([mscorlib]System.Runtime.CompilerServices.CallConvCdecl) NativePrint(int32 A_0) native unmanaged preservesig { .custom instance void [mscorlib]System.Security.SuppressUnmanagedCodeSecurityAttribute::.ctor() = ( 01 00 00 00 ) // Embedded native code // Disassembly of native methods is not supported. // Managed TargetRVA = 0x00001070 } // end of method 'Global Functions'::NativePrint The managed to native call in IL looks as Manged IL: IL_0010: ldarg.0 IL_0011: call void modopt([mscorlib]System.Runtime.CompilerServices.CallConvCdecl) NativePrint(int32) The virtual machine (CLR) at runtime generates the correct thunk to get the managed code to P/Invoke into native code. It also takes care of other things like marshaling the managed argument to native and vice-versa. Managed to Managed While it would seem this should be easy, it was a bit more convoluted. Essentially the compiler always bound to native entry point for a given managed method. So a managed to managed call degenerated to managed -> native -> managed and hence resulted in suboptimal double P/Invoke. See This was fixed in later versions by using dynamic checks and ensuring managed calls always call into managed targets directly. However, in some cases managed to managed calls still degenerate to double P/Invoke. So an additional knob provided was the __clrcall calling convention keyword. This will stop the native entry point from being generated completely. The pitfall is that these methods are not callable from native code. So if I stick in a __clrcall infront of ManagedAgain I get the following build error while compiling NativePrint. Error 2 error C3642: 'void ManagedAgain(int)' : cannot call a function with __clrcall calling convention from native code <filename> If a C++ file is compiled with this flag, instead of mixed mode assembly (one that has both native and MSIL) a pure MSIL assembly is generated. So all methods are __clrcall and the Cpp code is compiled into MSIL code and NOT to native code. This comes with some benefits as in the assembly becomes a standard MSIL based assembly which is no different from another managed only assembly. Also it comes with some limitation. Native code cannot call into the managed codes in this assembly because there is no native entry point to call into. However, native data is supported and also the managed code can transparently call into other native code. Let's see a sample I moved all the unmanaged code to a separate /C++:CLI dll as void NativePrint(int n) { wprintf(L"Native Hello World %u\n\n", n); } Then I moved my managed C++ code to a new project and compiled it with /C++:PURE #include "stdafx.h" #include #include "..\Unmanaged\Unmanaged.h" using namespace System; void ManagedPrint(int n) { char str[30] = "some cool number"; // native data str[5] = 'f'; // modifying native data Console::WriteLine(L"Managed {0}", n); // call to BCL NativePrint(n); // call to my own native methods printf("%s %d\n\n", str, n); // CRT } int main(array ^args) { ManagedPrint(42); return 0; } The above builds and works fine. So even with C/++:PURE I was able to However, no native code can call into ManagedPrint. Also do note that even though Pure MSIL is generated, the code is unverifiable (think C# unsafe). So it doesn't get the added safety that the managed runtime provides (e.g. I can just do str[200] = 0 and not get any bounds check error) /CLR:safe compiler switch generates MSIL only assemblies whose IL is fully verifiable. The output is not different from anything generated from say C# or VB.NET compilers. This provides more security to the code but at the same time losses on several capabilities over and above the PURE variant So for /CLR:Safe we need to do the following [DllImport("Unmanaged.dll")] void NativePrint(int i); void ManagedPrint(int n) { //char str[3000] = "some cool number"; // will fail to compile with //str[5] = 'f'; // "this type is not verifiable" Console::WriteLine(L"Managed {0}", n); NativePrint(n); // Hand coded P/Invoke MSDN has some nice articles on people trying to migrate from /CLR to Long manage the :) These :) For some time now, my main box got a bit slow and was glitching all the time. After some investigation I found that some power profile imposed by our I T department enabled CPU parking on my machine. This effectively parks CPU on low load condition to save power. However, Windows Task Manager (Ctrl + Shift + Esc and then Performance tab) clearly shows this parking feature. 3 parked cores show flatlines You can also find out if your machine is behaving the same from Task Manager -> Performance Tab -> Resource Monitor -> CPU Tab. The CPU graphs on the right will show which cores if any are parked. To disable this you need to For detailed steps see the video Everything is back to normal once this is taken care off I. If the bug meets the fix bar and is critical enough, just go fix it. This is the high priority critical bug which rarely gives rise to any confusion. Something like the “GC crashes on GC_Sweep for application X”. A. All objects smaller than 16KB has a 8 byte object header. The 32-bits in the flag are used to store the following information. I Like Fast. The states an application goes through is documented in If app continues to run code, e.g. in another thread and modifies any application state then that state cannot be persisted (as there will be no subsequent Deactivated type event). Managed threads stopped at Freeze are re-started at this point.. This is the event that the application gets and it is required to re-build it’s state when the activation is from Tombstone or just re-use the state in memory when the activation is from Dormant stage... We use 2 generations on the WP7 referred to as Gen0 and Gen1. A collection could be any of the following 4 types The list above is in the order of increasing latency (or time they take to run). So from all of the above, the 3 key takeaways are). In principle both the desktop GC and the WP7 GC are similar in that they use mark-sweep generational GC. However, there are differences based on the fact that the WP7 GC targets a more constrained device. About. Even in ephemeral collection the GC needs to deterministically find all objects in Gen-0 which are not reachable. This means the following objects needs to survive a Gen-0 collection. G Now. If you are looking for information on the new Generational GC on Windows Phone Mango please visit Many The GC is NOT run in the following cases (I am explicitly calling these out because in various conferences and interactions I’ve heard folks thinking it might be) For folks migrating from NETCF 3.5 the list below gives you the changes Let me start by saying that using mm-dd-yyyy is just plain wrong. No really it just doesn’t make any sense to me. Neither does it make any sense to most people world-over if you go my the date-format map up at If one uses dd-mm-yyyy it makes sense because it’s in decreasing order of granularity (kind of LSB first). yyyy-mm-dd makes ever more sense because d:\MyStuff\Personal\Pictures>dir 2010* Volume in drive D is Data Volume Serial Number is 3657-F386 Directory of d:\MyStuff\Personal\Pictures 06/17/2010 01:06 PM <DIR> 2010_0501 06/17/2010 01:07 PM <DIR> 2010_0504 06/17/2010 01:16 PM <DIR> 2010_0508 06/17/2010 01:20 PM <DIR> 2010_0509 06/17/2010 01:24 PM <DIR> 2010_0515 06/17/2010 01:29 PM <DIR> 2010_0517 06/17/2010 01:30 PM <DIR> 2010_0523 06/17/2010 01:33 PM <DIR> 2010_0528 06/17/2010 01:37 PM <DIR> 2010_0529 06/17/2010 01:43 PM <DIR> 2010_0605 06/17/2010 01:47 PM <DIR> 2010_0606 06/21/2010 08:40 PM <DIR> 2010_0616 06/28/2010 10:33 PM <DIR> 2010_0619 0 File(s) 0 bytes 13 Dir(s) 55,925,829,632 bytes free But I just cannot fathom why would anyone use mm/dd/yyyy. In what way is that intuitive? Learn from the dev lead and PM of Windows Phone 7 Emulator on how it works and delivers the awesome performance. Some key points. Like user code the system is also capable of pitching the shared code when there is a low memory situation on the device.
http://blogs.msdn.com/b/abhinaba/
CC-MAIN-2013-20
refinedweb
2,126
59.74
json-kcqljson-kcql This is a small utility library allowing you to translate the shape of a JSON document. Let's say we have the following json (A description of a Pizza): { "ingredients": [ { "name": "pepperoni", "sugar": 12.0, "fat": 4.4 }, { "name": "onions", "sugar": 1.0, "fat": 0.4 } ], "vegetarian": false, "vegan": false, "calories": 98, "fieldName": "pepperoni" } using the library one can apply to types of queries: - to flatten it - to retain the structure while cherry-picking and/or rename fields The difference between the two is marked by the withstructure* keyword. If this is missing you will end up flattening the structure. This library is dependant on com.datamountaineer.kcql hence you still have to provide a 'from topic' Let's take a look at the flatten first. There are cases when you are receiving a nested json and you want to flatten the structure while being able to cherry pick the fields and rename them. Imagine we have the following JSON: { "name": "Rick", "address": { "street": { "name": "Rock St" }, "street2": { "name": "Sunset Boulevard" }, "city": "MtV", "state": "CA", "zip": "94041", "country": "USA" } } Applying this SQL like syntax SELECT name, address.street.*, address.street2.name as streetName2 FROM topic the projected new JSON is: { "name": "Rick", "name_1": "Rock St", "streetName2": "Sunset Boulevard" } There are scenarios where you might want to rename fields and maybe reorder them. By applying this SQL like syntax on the Pizza JSON SELECT name, ingredients.name as fieldName, ingredients.sugar as fieldSugar, ingredients.*, calories as cals FROM topic withstructure we end up projecting the first structure into this one: { "name": "pepperoni", "ingredients": [ { "fieldName": "pepperoni", "fieldSugar": 12.0, "fat": 4.4 }, { "fieldName": "onions", "fieldSugar": 1.0, "fat": 0.4 } ], "cals": 98 } Flatten rulesFlatten rules - you can't flatten a json containing array fields - when flattening and the column name has already been used it will get a index appended. For example if field name appears twice and you don't specifically rename the second instance (name as renamedName) the new json will end up containing: name and name_1 How to use itHow to use it import JsonKcql._ val json: JsonNode= ... json.kcql("SELECT name, address.street.name as streetName FROM topic") As simple as that! Query ExamplesQuery Examples You can find more examples in the unit tests, however here are a few used: - flattening //rename and only pick fields on first level SELECT calories as C ,vegan as V ,name as fieldName FROM topic //Cherry pick fields on different levels in the structure SELECT name, address.street.name as streetName FROM topic //Select and rename fields on nested level SELECT name, address.street.*, address.street2.name as streetName2 FROM topic - retaining the structure //you can select itself - obviousely no real gain on this SELECT * FROM topic withstructure //rename a field SELECT *, name as fieldName FROM topic withstructure //rename a complex field SELECT *, ingredients as stuff FROM topic withstructure //select a single field SELECT vegan FROM topic withstructure //rename and only select nested fields SELECT ingredients.name as fieldName, ingredients.sugar as fieldSugar, ingredients.* FROM topic withstructure Release NotesRelease Notes 0.1 (2017-03-15) - first release BuildingBuilding Requires gradle 3.4.1 to build. To build gradle compile To test gradle test You can also use the gradle wrapper ./gradlew build To view dependency trees gradle dependencies #
https://index.scala-lang.org/landoop/json-kcql/json-kcql/1.0.1?target=_2.11
CC-MAIN-2021-39
refinedweb
550
56.25
Here is a complete, working Python program. It probably makes absolutely no sense to you. Don’t worry about that; we’re going to dissect it line by line. But read through it first and see what, if anything, you can make of it. If you have not already done so, you can download this and other examples used in this book. def buildConnectionString(params): """Build a connection string from a dictionary of parameters. Returns string.""" return ";".join(["%s=%s" % (k, v) for k, v in params.items()]) if __name__ == "__main__": myParams = {"server":"mpilgrim", \ "database":"master", \ "uid":"sa", \ "pwd":"secret" \ } print buildConnectionString(myParams) Now run this program and see what happens.
http://www.faqs.org/docs/diveintopython/odbchelper_divein.html
CC-MAIN-2016-44
refinedweb
111
67.25
Originally posted by Naveen Zed: code given in phazam today is What will happen when you attempt to compile and run the following code? public class LovesBlind implements Runnable{ String sName=""; public static void main(String argv[]){ LovesBlind lb = new LovesBlind(); lb.sName="1"; lb.run(); LovesBlind lb2 = new LovesBlind(); lb2.sName="2"; new Thread(lb2).start(); } public void run(){ System.out.println("run"+sName); } } and the answars given are 1. Compile time error, the line that creates a new Thread is faulty. 2. Compilation and output of run1 and run2, but the order cannot be determined. 3. Compilation and output of run1 followed by run2 4. Compilation and output of run2 followed by run1 My problem is, Why the answar is not 2?Can't be main thread interrupted and the chance given to the other thread (started with lb2) ,before the run() method print run 1.Why there is no probability of printing run2 before run1?
http://www.coderanch.com/t/260094/java-programmer-SCJP/certification/thread-calling
CC-MAIN-2015-22
refinedweb
158
67.86
A Django REST API in a Single File2020-10-15 I previously covered writing a Django application in a single file, for both synchronous and asynchronous use cases. This post covers the angle of creating a REST API using Django in a single file. Undeniably, REST API’s are a very common use case for Django these days. Nearly 80% of this year’s Django community survey respondents said they use Django REST Framework (DRF). DRF is great for building API’s and provides many of the tools you’d want in a production-ready application. But for building a very small API, we can get by solely with tools built into Django itself. Without further ado, our example application is below. You can save it as app.py, and run it with python app.py runserver (tested with Django 3.1). An explanation follows after the code: import os import sys from dataclasses import dataclass from django.conf import settings from django.core.wsgi import get_wsgi_application from django.http import HttpResponseRedirect, Json MIDDLEWARE=["django.middleware.common.CommonMiddleware"], ) @dataclass class Character: name: str age: int def as_dict(self, id_): return { "id": id_, "name": self.name, "age": self.age, } characters = { 1: Character("Rick Sanchez", 70), 2: Character("Morty Smith", 14), } def index(request): return HttpResponseRedirect("/characters/") def characters_list(request): return JsonResponse( {"data": [character.as_dict(id_) for id_, character in characters.items()]} ) def characters_detail(request, character_id): try: character = characters[character_id] except KeyError: return JsonResponse( status=404, data={"error": f"Character with id {character_id!r} does not exist."}, ) return JsonResponse({"data": character.as_dict(character_id)}) urlpatterns = [ path("", index), path("characters/", characters_list), path("characters/<int:character_id>/", characters_detail), ] app = get_wsgi_application() if __name__ == "__main__": from django.core.management import execute_from_command_line execute_from_command_line(sys.argv) Neat, just 73 lines, or 63 not counting imports. The first thing we do, following imports, is to call settings.configure() with the minimum configuration to get Django running. I covered most of these settings in my first single-file app post which I won’t repeat too much here. One extra thing we’re using compared to that post is CommonMiddleware, one of Django’s many “included batteries”. In its default configuration it will redirect URL’s not ending with a slash (“/”) to those with one, useful for getting users to their intended content. Second, we define some static data for our API to serve, using dataclasses (new in Python 3.7). These are great for storing and serving a small amount of unchanging data. At some point we’d want to move to using a database, but for our purposes it is easier to avoid setting this up. (I’ve also shown my bad taste by making this a Rick and Morty character API.) Third, we define three views: indexredirects to the character list URL, as that’s our only data type in the API. If we expanded our API, we might want to show a “front page”. characters_listreturns a list of characters. If our list of characters grew large, we might want to paginate this to return only slices of characters at a time. characters_detailreturns the representation of a single character. This also has an error path for when we’re given an ID that doesn’t match. Fourth, we map URL’s to our views in the urlpatterns list. Fifth, we create the WSGI application object, which allows us to deploy the application. For example, if we’d saved this file as app.py, we could run it on a production server with gunicorn app:app. Sixth, we introduce manage.py style handling when the module is run as "__main__". This allows us to run the application with python app.py runserver locally. Trying It Out Here’s a sample of using that API with httpie, a neat command-line tool for making HTTP requests. First, hitting the index URL: $ http localhost:8000 HTTP/1.1 302 Found Content-Length: 0 Content-Type: text/html; charset=utf-8 Date: Thu, 15 Oct 2020 21:05:09 GMT Location: /characters/ Server: WSGIServer/0.2 CPython/3.8.5 This redirects us to /characters/ as expected. Fetching that, we see the JSON dat for both characters: $ http localhost:8000/characters/ HTTP/1.1 200 OK Content-Length: 101 Content-Type: application/json Date: Thu, 15 Oct 2020 21:05:15 GMT Server: WSGIServer/0.2 CPython/3.8.5 { "data": [ { "age": 70, "id": 1, "name": "Rick Sanchez" }, { "age": 14, "id": 2, "name": "Morty Smith" } ] } We might try fetching Morty’s page: $ http localhost:8000/characters/2 HTTP/1.1 301 Moved Permanently Content-Length: 0 Content-Type: text/html; charset=utf-8 Date: Thu, 15 Oct 2020 21:05:19 GMT Location: /characters/2/ Server: WSGIServer/0.2 CPython/3.8.5 Aha! We didn’t add the trailing slash, so our request has been redirected. Fetching the complete URL, we see Morty’s data: $ http localhost:8000/characters/2/ HTTP/1.1 200 OK Content-Length: 53 Content-Type: application/json Date: Thu, 15 Oct 2020 21:05:21 GMT Server: WSGIServer/0.2 CPython/3.8.5 { "data": { "age": 14, "id": 2, "name": "Morty Smith" } } Success! Tests These days I can’t write a blog post without mentioning testing. We can use Django’s built-in test framework to write some quick tests that cover all our endpoints. Save this code in a file called tests.py, in the same folder as the application: from django.test import SimpleTestCase class AppTests(SimpleTestCase): def test_index(self): response = self.client.get('/') self.assertEqual(response.status_code, 302) self.assertEqual(response['Location'], '/characters/') def test_list(self): response = self.client.get('/characters/') self.assertEqual(response.status_code, 200) self.assertEqual( response.json()['data'][0], {"id": 1, "name": "Rick Sanchez", "age": 70}, ) def test_detail(self): response = self.client.get('/characters/1/') self.assertEqual(response.status_code, 200) self.assertEqual( response.json()['data'], {"id": 1, "name": "Rick Sanchez", "age": 70}, ) These tests use Django’s test client to make requests and then assert on the response status code and content. Because we don’t have a database, the tests can inherit from the faster SimpleTestCase, rather than using TestCase which adds some database management. We can run these tests using app.py like so: $ python app.py test System check identified no issues (0 silenced). ... ---------------------------------------------------------------------- Ran 3 tests in 0.004s OK Great! Fin I hope this has helped you figure out a way into building small API’s with Django. If your API takes off, do check out Django REST Framework! For a longer tutorial that uses the database, check out Curtis Maloney’s post. —Adam Working on a Django project? Check out my book Speed Up Your Django Tests which covers loads of best practices so you can write faster, more accurate tests. One summary email a week, no spam, I pinky promise. Related posts: Tags: django
https://adamj.eu/tech/2020/10/15/a-single-file-rest-api-in-django/
CC-MAIN-2021-04
refinedweb
1,140
59.7
My rendering code is now ported to .Net Standard 2.0 assemblies for portability. Next step is to consume said assemblies in a Xamarin Android app. However I do not seem to be able to consume for example the System.Drawing.Graphics type in my Xamarin code, even though I have added a reference to the System.Drawing.Common package. Am I attempting the impossible? Must anything System.Drawing.Common be completely encapsulated in my .NET Standard assemblies? Or is there a viable workaround? Answers Xamarin use Mono instead of .Net, the difference between them should be clear. Monocode and .Net are almost identical, they can even be compatible with each other (.net library can be mono-referenced and used), but Monois a separate cross-platform library. You could refer to the document:. Not all the .Netcode could be used in Mono. System.Drawing.Graphicsdoes not exist in Xamarin, as it conflict with Android.Graphics. You could use Android.Graphicsin Xamarin.Androidinstead. @YorkGo - thanks for the tip, but Xamarin.Android is not portable code. The whole point (for me) of going to .Net Standard and accompanying System.Drawing.Common package is to have portable rendering code, for all platforms supported by .Net Standard. Note - this is imaging, not UI. Install the System.Drawing.Commonnuget package for your portable library should be work. @YorkGo once again appreciate the tip. That was my thinking too, but System.Drawing.Graphics is not resolved even with System.Drawing.Common package added. My conclusion is that it seems to not be possible to reference any symbols in the System.Drawing.Common assembly from my Xamarin code. And that's really what my question is about - should it be possible? Is there a workaround? Is what I am seeing a .Net Standard problem, or a Xamarin problem? Or am I making a user error? FYI: As far as I can tell the System.Drawing namespace also exists in the Mono.Android assembly. All System.Drawing classes I can reference (Color, Point, PointF, Rectangle, RectangleF, Size, SizeF) originate in the Mono.Android assembly. For anyone attempting to reproduce... my projects: .Net Standard 2.0 assembly, that references System.Drawing.Common package, and exposes System.Drawing.Graphics as a parameter in an abstract class. This assembly builds fine. Currently building for x86, not sure if it matters at this stage. Xamarin Android class library that references the above assembly, and System.Drawing.Common package, and subclasses the abstract class above: VS2017 class view for the Xamarin project shows no symbols in the System.Drawing.Common project reference. In the .Net Standard project, all System.Drawing symbols are visible in the class view. Additionally I see a NuGet error: NU1201 Project mochro.android is not compatible with netstandard2.0 (.NETStandard,Version=v2.0). Project mochro.android supports: monoandroid81 (MonoAndroid,Version=v8.1) I guess that confirms lack of compatibility. Found this doc page that sheds some light on the situation. Not sure yet to what degree it is relevant. After some research I concluded that this is due to a problem in the net core. System.Drawing.Common.dll should be present in my deployed app folder on Android, but it's not. Manually adding it seemingly eliminates the symptom (but exposes another symptom). The bug report is here: Solution seems to be targeted for .Net Standard 3.0. I finally got PlatformNotSupportedException from System.Drawing.Common.dll. Pretty clear sign. I have to write my own rendering primitives if I want portable code.
https://forums.xamarin.com/discussion/comment/363099/
CC-MAIN-2019-26
refinedweb
585
55.1
> > > On reflection, it seems fairly improbable to me that people would use > \if and friends interactively. They're certainly useful for scripting, > but would you really type commands that you know are going to be ignored? > I'm thinking the one use-case is where a person is debugging a non-interactive script, cuts and pastes it into an interactive script, and then scrolls through command history to fix the bits that didn't work. So, no, you wouldn't *type* them per se, but you'd want the session as if you had. The if-then barking might really be useful in that context. > > Therefore, I don't think we should stress out about fitting branching > activity into the prompts. That's just not the use-case. (Note: we > might well have to reconsider that if we get looping, but for now it's > not a problem.) Moreover, if someone is confused because they don't > realize they're inside a failed \if, it's unlikely that a subtle change in > the prompt would help them. So your more in-the-face approach of printing > messages seems good to me. > Glad you like the barking. I'm happy to let the prompt issue rest for now, we can always add it later. If we DID want it, however, I don't think it'll be hard to add a special prompt (Thinking %T or %Y because they both look like branches, heh), and it could print the if-state stack, maybe something like: \if true \if true \if false \if true With a prompt1 of '%T> ' Would then resolve to ttfi> for true, true, false, ignored. This is just idle musing, I'm perfectly happy to leave it out entirely. > This seems more or less reasonable, although: > > > # \endif > > active \endif, executing commands > > looks a bit weird. Maybe instead say "exited \if, executing commands"? > +1 > > BTW, what is your policy about nesting these things in include files? > My immediate inclination is that if we hit EOF with open \if state, > we should drop it and revert to the state in the surrounding file. > Otherwise things will be way too confusing. > That's how it works now if you have ON_ERROR_STOP off, plus an error message telling you about the imbalance. If you have ON_ERROR_STOP on, it's fatal. All \if-\endif pairs must be balanced within a file (well, within a MainLoop, but to the user it looks like a file). Every new file opened via \i or \ir starts a new if-stack. Because commands in an inactive branch are never executed, we don't have to worry about the state of the parent stack when we do a \i, because we know it's trues all the way down. We chose this not so much because if-endif needed it (we could have thrown it into the pset struct just as easily), but because of the issues that might come up with a \while loop: needing to remember previous GOTO points in a file (if the input even *is* a file...) is going to be hard enough, remembering them across files would be harder, and further complicated by the possibility that a file \included on one iteration might not be included on the next (or vice versa)...and like you said, way too confusing.
https://www.mail-archive.com/pgsql-hackers@postgresql.org/msg304155.html
CC-MAIN-2021-04
refinedweb
556
68.6