Hi. Personally I would prefer to give a function's definition before making calls to it, because it provides more understanding of how C++ works. As a member here told me, time-travel (moving back and forth) is not possible all the time in C++. The statements are executed in a sequence, so we should also try to write the code in a sequence to make it more understandable.

for(int j=0; j<45; j++) - The for loop is executed, then its value is incremented, then the incremented value is checked against the test expression. So, don't you think having the syntax this way, for(int j=0; j++; j<45), makes more sense? Please help me. Thanks a lot.

Code:

// table.cpp
// demonstrates simple function
#include <iostream>
using namespace std;

void starline();                // function declaration (prototype)

int main()
{
    starline();                 // call to function
    cout << "Data type   Range" << endl;
    starline();                 // call to function
    cout << "char        -128 to 127" << endl
         << "short       -32,768 to 32,767" << endl
         << "int         System dependent" << endl
         << "long        -2,147,483,648 to 2,147,483,647" << endl;
    starline();                 // call to function
    return 0;
}

//--------------------------------------------------------------
// starline()
// function definition
void starline()                 // function declarator
{
    for(int j=0; j<45; j++)     // function body
        cout << '*';
    cout << endl;
}
Source: https://cboard.cprogramming.com/cplusplus-programming/137640-function-function-call-loop.html
In a previous post we explored how you can greatly speed up certain types of long-running computations in R by parallelizing your code using the multicore package*. I also mentioned that there were a few other ways to speed up R code; the one I will be exploring in this post is using Rcpp to replace time-critical inner loops with C++.

In general, good C++ code almost always runs faster than equivalent R code. Higher-level language affordances like garbage collection, dynamic typing, and bounds checking can add a lot of computational overhead. Further, C/C++ compiles down to machine code, whereas R byte-code has to be interpreted.

On the other hand, I would hate to do all my statistics programming in a language like C++, precisely because of those higher-level language affordances I mentioned above. When development time (as opposed to execution time) is taken into account, programming in R is much faster for me (and makes me a very happy programmer). On occasion, though, there are certain sections of R code that I wish I could rewrite in C/C++. They may be simple computations that get called thousands or millions of times in a loop. If I could just write these time-critical snippets in C/C++ and not have to throw the proverbial baby out with the bath water (and rewrite everything in C), my code would run much faster.

Though there have been packages to make this sort of thing possible since the early 2000s, Rcpp (and the Rcpp family**) has made this even easier; now interfacing with R objects is seamless. To show an example of how you might use Rcpp, I've used the same example from my post "Parallel R (and air travel)". In this example, we use longitude and latitude info from all US airports to derive the average (mean) distance between every two US airports. The function I will be replacing with C++ is the function to compute the distance between two longitude/latitude pairs on a sphere (the haversine formula, which is just an approximation).
The R functions to do this look like this:

to.radians <- function(degrees){
  degrees * pi / 180
}

haversine <- function(lat1, long1, lat2, long2, unit="km"){
  radius <- 6378      # radius of Earth in kilometers
  delta.phi <- to.radians(lat2 - lat1)
  delta.lambda <- to.radians(long2 - long1)
  phi1 <- to.radians(lat1)
  phi2 <- to.radians(lat2)
  term1 <- sin(delta.phi/2) ^ 2
  term2 <- cos(phi1) * cos(phi2) * sin(delta.lambda/2) ^ 2
  the.terms <- term1 + term2
  delta.sigma <- 2 * atan2(sqrt(the.terms), sqrt(1-the.terms))
  distance <- radius * delta.sigma
  if(unit=="km") return(distance)
  if(unit=="miles") return(0.621371*distance)
}

While the C++ functions look like this:

#include <iostream>
#include <math.h>
#include <Rcpp.h>

// [[Rcpp::export]]
double to_radians_cpp(double degrees){
  return(degrees * 3.141593 / 180);
}

// [[Rcpp::export]]
double haversine_cpp(double lat1, double long1,
                     double lat2, double long2,
                     std::string unit="km"){
  int radius = 6378;
  double delta_phi = to_radians_cpp(lat2 - lat1);
  double delta_lambda = to_radians_cpp(long2 - long1);
  double phi1 = to_radians_cpp(lat1);
  double phi2 = to_radians_cpp(lat2);
  double term1 = pow(sin(delta_phi / 2), 2);
  double term2 = cos(phi1) * cos(phi2) * pow(sin(delta_lambda/2), 2);
  double the_terms = term1 + term2;
  double delta_sigma = 2 * atan2(sqrt(the_terms), sqrt(1-the_terms));
  double distance = radius * delta_sigma;
  /* if it is anything *but* km it is miles */
  if(unit != "km"){
    return(distance*0.621371);
  }
  return(distance);
}

Aside from the semicolons, the different assignment operator, and the type declarations, the two versions are almost identical. Next, we put the C++ code above in a C++ source file.
We will call it, and automatically compile and link to it from our driver R code, thusly***:

calc.distance.two.rows <- function(ind1, ind2, version=haversine){
  return(version(air.locs[ind1, 2], air.locs[ind1, 3],
                 air.locs[ind2, 2], air.locs[ind2, 3]))
}

air.locs <- read.csv("airportcodes.csv", stringsAsFactors=FALSE)
combos <- combn(1:nrow(air.locs), 2, simplify=FALSE)
num.of.comps <- length(combos)

mult.core <- function(version=haversine_cpp){
  the.sum <- sum(unlist(mclapply(combos,
                                 function(x){
                                   calc.distance.two.rows(x[1], x[2], version)
                                 },
                                 mc.cores=4)))
  result <- the.sum / num.of.comps
  return(result)
}

mult.core(version=haversine_cpp)

Comparing the R version against the C++ version over a range of sample sizes yielded a chart like this:

To run this to completion would have taken 4 hours but, if my math is correct, rewriting the distance function shaved off over 15 minutes from the completion time. It is not uncommon for Rcpp to speed up R code by *orders of magnitude*. In this link, Dirk Eddelbuettel demonstrates an 80-fold speed increase (albeit with a contrived example). So why did we not get an 80-fold increase? I could have (and will) rewrite more of this program in Rcpp to avoid some of the overhead of repeated calls to compiled C++. My point here was more to show that we can use Rcpp to speed up this program with very little work, almost for nothing. Again, aside from certain syntactical differences and the type declarations, the R and C++ functions are virtually the same. As you become more comfortable with it, and use it more within the same scripts, Rcpp will likely pay higher and higher dividends. The next time we revisit this contrived airport example, we will be profiling it, expanding the C++ portion and, eventually, using distributed computing to get it as fast as we can.
* the 'multicore' package is now deprecated in favor of 'parallel'
** Rcpp11 (for modern C++), RcppEigen (for use of the Eigen C++ linear algebra template library), RcppArmadillo (for use of the Armadillo C++ linear algebra template library), and a few others
*** this code is a little bit different than the code in the first airport distance blog post because I switched from using the 'multicore' package to the 'parallel' package
Source: http://www.r-bloggers.com/squeezing-more-speed-from-r-for-nothing-rcpp-style/
Have you ever been working with the Dictionary<TKey, TValue> object in .NET and just wanted to find some way in which you can do this:

var dictionary = new Dictionary<string, string> {
  { "hello", "world!" }
};
...
var something = dictionary.hello;

It'd be sweet, but it's not possible. The dictionary is just a bucket, and there isn't a way it can know at compile time about the objects which are within it. Damn, so you just have to go via the indexer of the dictionary. But really, using dot-notation could be really cool!

Well, with the .NET 4.0 framework we now have a built-in DLR, so can we use the dynamic features of C# 4 to do this? Well the answer is yes, yes you can do this, and it's really bloody easy; in fact you can do it in about 10 lines of code (if you leave out error checking and don't count curly braces :P).

First off you need to have a look at the DynamicObject class, which lives in the System.Dynamic namespace. There's a lot of different things you can do with the DynamicObject class, and things which you can change. For this we are going to work with TryGetMember; with this we just need to override the base implementation so we can add our own dot-notation handler!

So let's start with a class:

using System;
using System.Collections.Generic;
using System.Dynamic;

namespace AaronPowell.Dynamics.Collections
{
    public class DynamicDictionary<TValue> : DynamicObject
    {
        private IDictionary<string, TValue> dictionary;

        public DynamicDictionary(IDictionary<string, TValue> dictionary)
        {
            this.dictionary = dictionary;
        }
    }
}

Essentially this is just going to be a wrapper for our dynamic implementation of a dictionary. So we're actually making a class which has a private field that takes a dictionary instance through the constructor. Now we've got our object, we need to do some work to get it to handle our dot-notation interaction.
First we'll override the base implementation:

public override bool TryGetMember(GetMemberBinder binder, out object result)
{
    var key = binder.Name;
    if (dictionary.ContainsKey(key))
    {
        result = dictionary[key];
        return true;
    }
    throw new KeyNotFoundException(string.Format("Key \"{0}\" was not found in the given dictionary", key));
}

And you know what, we're actually done! Now all you have to do is:

var dictionary = new Dictionary<string, string> {
  { "hello", "world!" }
};
dynamic dynamicDictionary = new DynamicDictionary<string>(dictionary);
Console.WriteLine(dynamicDictionary.hello); // prints 'world!'

I'm going to be releasing the source for this shortly (well, an improved version), along with a few other nifty uses for dynamic. So keep watching this space for that ;).

While we were working on some sexy features for Umbraco 5 over the CodeGarden 10 retreat, we kept saying that we should look at using as many of the cool new .NET framework features as we can possibly get away with. To this extent we kept saying we needed to work out how to implement the dynamic keyword in some way. Well, that's where the idea for the above code came from; in fact we've got a similar piece of code which will be usable within the framework of Umbraco 5 and entity design. But the full info on that will belong to another post ;).

I've rolled the above code (with some improvements, mind you) into a new project that I've been working on for making working with dynamics in .NET a whole lot easier. You can check out my Dynamics Library and get dynamacising.
Source: http://www.aaron-powell.com/posts/2010-06-28-dynamic-dictionaries-with-csharp-4.html
The lambda expression was introduced for the first time in Java 8. Its main objective is to increase the expressive power of the language. But, before getting into lambdas, we first need to understand functional interfaces.

What is a Functional Interface?

If a Java interface contains one and only one abstract method, then it is termed a functional interface. This single method specifies the intended purpose of the interface. For example, the Runnable interface from the package java.lang is a functional interface because it contains only one method, i.e. run().

Example 1: Define a Functional Interface in Java

import java.lang.FunctionalInterface;

@FunctionalInterface
public interface MyInterface{
    // the single abstract method
    double getValue();
}

In the above example, the interface MyInterface has only one abstract method, getValue(). Hence, it is a functional interface.

Here, we have used the annotation @FunctionalInterface. The annotation makes the Java compiler verify that the interface is a functional interface, and hence does not allow it to have more than one abstract method. The annotation is not compulsory, though.

In Java 7, functional interfaces were considered Single Abstract Methods, or the SAM type. SAMs were commonly implemented with anonymous classes in Java 7.

Example 2: Implement a SAM with an anonymous class in Java

public class FunctionInterfaceTest {
    public static void main(String[] args) {
        // anonymous class
        new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("I just implemented the Runnable Functional Interface.");
            }
        }).start();
    }
}

Output:

I just implemented the Runnable Functional Interface.

Here, we can pass an anonymous class to a method. This helped to write programs with less code in Java 7. However, the syntax was still difficult and a lot of extra lines of code were required. Java 8 extended the power of SAMs by going a step further.
Since we know that a functional interface has just one method, there should be no need to specify the name of that method when passing it as an argument. Lambda expressions allow us to do exactly that.

Introduction to lambda expressions

A lambda expression is, essentially, an anonymous or unnamed method. The lambda expression does not execute on its own. Instead, it is used to implement a method defined by a functional interface.

How to define a lambda expression in Java?

Here is how we can define a lambda expression in Java:

(parameter list) -> lambda body

The new operator (->) is known as the arrow operator or the lambda operator. The syntax might not be clear at the moment. Let's explore some examples.

Suppose we have a method like this:

double getPiValue() {
    return 3.1415;
}

We can write this method using a lambda expression as:

() -> 3.1415

Here, the method does not have any parameters. Hence, the left side of the operator is an empty parameter list. The right side is the lambda body that specifies the action of the lambda expression. In this case, it returns the value 3.1415.

Types of Lambda Body

In Java, the lambda body is of two types.

1. A body with a single expression

() -> System.out.println("Lambdas are great");

This type of lambda body is known as the expression body.

2. A body that consists of a block of code

() -> {
    double pi = 3.1415;
    return pi;
};

This type of lambda body is known as a block body. The block body allows the lambda body to include multiple statements. These statements are enclosed inside braces, and you have to add a semicolon after the braces.

Note: For the block body, you can have a return statement if the body returns a value. However, the expression body does not require a return statement.

Example 3: Lambda Expression

Let's write a Java program that returns the value of Pi using a lambda expression. As mentioned earlier, a lambda expression is not executed on its own.
Rather, it forms the implementation of the abstract method defined by the functional interface. So, we need to define a functional interface first.

import java.lang.FunctionalInterface;

// this is a functional interface
@FunctionalInterface
interface MyInterface{
    // abstract method
    double getPiValue();
}

public class Main {
    public static void main( String[] args ) {
        // declare a reference to MyInterface
        MyInterface ref;

        // lambda expression
        ref = () -> 3.1415;

        System.out.println("Value of Pi = " + ref.getPiValue());
    }
}

Output:

Value of Pi = 3.1415

In the above example,

- We have created a functional interface named MyInterface. It contains a single abstract method named getPiValue().
- Inside the Main class, we have declared a reference to MyInterface. Note that we can declare a reference to an interface, but we cannot instantiate an interface. That is,

// it will throw an error
MyInterface ref = new MyInterface();

// it is valid
MyInterface ref;

- We then assigned a lambda expression to the reference: ref = () -> 3.1415;
- Finally, we called the method getPiValue() using the reference. When System.out.println("Value of Pi = " + ref.getPiValue()); executes, the lambda body runs and returns 3.1415.

Lambda Expressions with parameters

Till now we have created lambda expressions without any parameters. However, similar to methods, lambda expressions can also have parameters. For example,

(n) -> (n % 2) == 0

Here, the variable n inside the parentheses is a parameter passed to the lambda expression. The lambda body takes the parameter and checks if it is even or odd.
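To make the even/odd snippet concrete, here is a small runnable sketch; the NumberCheck interface and the class name are mine, not from the tutorial:

```java
@FunctionalInterface
interface NumberCheck {
    // single abstract method: test one int
    boolean check(int n);
}

public class EvenOddDemo {
    // the exact lambda from the text, assigned to a functional interface
    static final NumberCheck IS_EVEN = (n) -> (n % 2) == 0;

    public static boolean isEven(int n) {
        return IS_EVEN.check(n);
    }

    public static void main(String[] args) {
        System.out.println("4 is even? " + isEven(4)); // true
        System.out.println("7 is even? " + isEven(7)); // false
    }
}
```

As with getPiValue() earlier, the lambda does nothing until check() is invoked through the interface reference.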
Example 4: Using a lambda expression with parameters

@FunctionalInterface
interface MyInterface {
    // abstract method
    String reverse(String n);
}

public class Main {
    public static void main( String[] args ) {
        // declare a reference to MyInterface
        // assign a lambda expression to the reference
        MyInterface ref = (str) -> {
            String result = "";
            for (int i = str.length() - 1; i >= 0; i--)
                result += str.charAt(i);
            return result;
        };

        // call the method of the interface
        System.out.println("Lambda reversed = " + ref.reverse("Lambda"));
    }
}

Output:

Lambda reversed = adbmaL

Generic Functional Interface

Till now we have used functional interfaces that accept only one type of value. For example,

@FunctionalInterface
interface MyInterface {
    String reverseString(String n);
}

The above functional interface only accepts String and returns String. However, we can make the functional interface generic, so that any data type is accepted. If you are not sure about generics, visit Java Generics.

Example 5: Generic Functional Interface and Lambda Expressions

// GenericInterface.java
@FunctionalInterface
interface GenericInterface<T> {
    // generic method
    T func(T t);
}

// GenericLambda.java
public class Main {
    public static void main( String[] args ) {
        // declare a reference to GenericInterface
        // the GenericInterface operates on String data
        // assign a lambda expression to it
        GenericInterface<String> reverse = (str) -> {
            String result = "";
            for (int i = str.length() - 1; i >= 0; i--)
                result += str.charAt(i);
            return result;
        };

        System.out.println("Lambda reversed = " + reverse.func("Lambda"));

        // declare another reference to GenericInterface
        // the GenericInterface operates on Integer data
        // assign a lambda expression to it
        GenericInterface<Integer> factorial = (n) -> {
            int result = 1;
            for (int i = 1; i <= n; i++)
                result = i * result;
            return result;
        };

        System.out.println("factorial of 5 = " + factorial.func(5));
    }
}

Output:

Lambda reversed = adbmaL
factorial of 5 = 120

In the above example,
we have created a generic functional interface named GenericInterface. It contains a generic method named func(). Here, inside the Main class:

- GenericInterface<String> reverse creates a reference to the interface. The interface now operates on the String type of data.
- GenericInterface<Integer> factorial creates another reference to the interface. The interface, in this case, operates on the Integer type of data.

Lambda Expression and the Stream API

The new java.util.stream package has been added in JDK 8; it allows Java developers to perform operations like search, filter, map, and reduce on collections like Lists.

For example, suppose we have a stream of data (in our case, a List of String) where each string is a combination of a country name and a place in that country. Now, we can process this stream of data and retrieve only the places from Nepal. For this, we can perform bulk operations on the stream through the combination of the Stream API and lambda expressions.

Example 6: Demonstration of using lambdas with the Stream API

import java.util.ArrayList;
import java.util.List;

public class StreamMain {
    // create an object of list using ArrayList
    static List<String> places = new ArrayList<>();

    // preparing our data
    public static List<String> getPlaces(){
        // add places and country to the list
        places.add("Nepal, Kathmandu");
        places.add("Nepal, Pokhara");
        places.add("India, Delhi");
        places.add("USA, New York");
        places.add("Africa, Nigeria");
        return places;
    }

    public static void main( String[] args ) {
        List<String> myPlaces = getPlaces();
        System.out.println("Places from Nepal:");

        // Filter places from Nepal
        myPlaces.stream()
                .filter((p) -> p.startsWith("Nepal"))
                .map((p) -> p.toUpperCase())
                .sorted()
                .forEach((p) -> System.out.println(p));
    }
}

Output:

Places from Nepal:
NEPAL, KATHMANDU
NEPAL, POKHARA

In the above example, notice the statement

myPlaces.stream()
        .filter((p) -> p.startsWith("Nepal"))
        .map((p) -> p.toUpperCase())
        .sorted()
        .forEach((p) -> System.out.println(p));

Here, we are using
methods like filter(), map() and forEach() of the Stream API. These methods can take a lambda expression as input. We can also define our own expressions based on the syntax we learned above. This allows us to reduce the lines of code drastically, as we saw in the above example.
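The text lists reduce among the bulk operations, but Example 6 only demonstrates filter, map, sorted, and forEach. Here is a hedged sketch of reduce (the class, method, and sample data are mine, not from the tutorial) that folds a filtered stream down to a single number:

```java
import java.util.Arrays;
import java.util.List;

public class ReduceDemo {
    // Sum the lengths of the place strings that start with the given prefix.
    public static int totalLength(List<String> places, String prefix) {
        return places.stream()
                     .filter(p -> p.startsWith(prefix))
                     .map(String::length)
                     .reduce(0, Integer::sum); // identity 0, then accumulate
    }

    public static void main(String[] args) {
        List<String> places = Arrays.asList(
            "Nepal, Kathmandu", "Nepal, Pokhara", "India, Delhi");
        // "Nepal, Kathmandu" (16 chars) + "Nepal, Pokhara" (14 chars) = 30
        System.out.println(totalLength(places, "Nepal"));
    }
}
```

The two-argument reduce takes an identity value and an accumulator lambda (here the method reference Integer::sum), so the pipeline reads exactly like the filter/map chains above with one terminal fold at the end.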
Source: https://www.programiz.com/java-programming/lambda-expression
score:0

const App = () => {
  const array = [
    { title: 'news1', content: 'brabrabrabra1', color: 'red' },
    { title: 'news2', content: 'brabrabrabra2', color: 'blue' },
    { title: 'news3', content: 'brabrabrabra3', color: 'green' }
  ]

  return (
    <>
      {array.map((item, i) => (
        <InfoTitle style={{ backgroundColor: item.color }} key={i}>
          {item.title}
        </InfoTitle>
      ))}
    </>
  )
}

Source: stackoverflow.com
Source: https://www.appsloveworld.com/reactjs/200/301/is-there-any-way-i-can-apply-different-inline-style-when-i-mapping-through-array
28 December 2011 03:29 [Source: ICIS news]

SINGAPORE (ICIS)--PTT Global Chemical (PTTGC) restarted its 616,000 tonne/year paraxylene (PX) unit in Map Ta Phut and expects to reach on-spec production over the next few days, a company source said on Wednesday.

The source said PTTGC will lift the force majeure (FM) once the plant completely starts up and the legal department agrees. The PX unit was shut on 13 December following a leak at its facility, and an FM on supplies was declared late on 14 December.

PTTGC is a Thai aromatics maker and a key PX supplier. In addition, the Thai aromatics maker exports PX to the Chinese market on a monthly basis via northeast Asia-based traders.
Source: http://www.icis.com/Articles/2011/12/28/9519294/pttgc-restarts-map-ta-phut-px-unit-expects-on-spec-in-few-days.html
TRULY Understanding ViewState

MISUNDERSTANDING OF VIEWSTATE WILL LEAD TO...
- Leaking sensitive data
- ViewState attacks - aka the Jedi mind trick -- *waves hand* that plasma TV is for sale for $1.00
- Poor performance - even to the point of NO PERFORMANCE
- Poor scalability - how many users can you handle if each is posting 50k of data every request?
- Overall poor design
- Headache, nausea, dizziness, and irreversible frilling of the eyebrows.

WHAT DOES VIEWSTATE DO?
- Stores values per control by key name, like a Hashtable
- Tracks changes to a ViewState value's initial state
- Serializes and deserializes saved data into a hidden form field on the client
- Automatically restores ViewState data on postbacks

This is a list of ViewState's main jobs. Each of these jobs serves a very distinct purpose. Next we'll learn exactly how it fulfills those jobs.

WHAT DOESN'T VIEWSTATE DO?
- Automatically retain the state of class variables (private, protected, or public)
- Remember any state information across page loads (only postbacks) - that is, unless you customize how the data is persisted
- Remove the need to repopulate data on every request
- ViewState is not responsible for the population of values that are posted, such as by TextBox controls (although it does play an important role)
- Make you coffee

1. VIEWSTATE STORES VALUES

If you've ever used a hashtable, then you've got it. There's no rocket science here. ViewState has an indexer on it that accepts a string as the key and any object as the value. For example:

ViewState["Key1"] = 123.45M;       // store a decimal value
ViewState["Key2"] = "abc";         // store a string
ViewState["Key3"] = DateTime.Now;  // store a DateTime

Actually, "ViewState" is just a name. ViewState is a protected property defined on the System.Web.UI.Control class, from which all server controls, user controls, and pages derive. The type of the property is System.Web.UI.StateBag. Strictly speaking, the StateBag class has nothing to do with ASP.NET.
It happens to be defined in the System.Web assembly, but other than its dependency on the state formatter, also defined in System.Web.UI, there's no reason why the StateBag class couldn't live alongside ArrayList in the System.Collections namespace.

In practice, server controls utilize ViewState as the backing store for most, if not all, of their properties. This is true of almost all of Microsoft's built-in controls (i.e. Label, TextBox, Button). This is important! You must understand this about controls you are using. Read that sentence again. I mean it... here it is a 3rd time: SERVER CONTROLS UTILIZE VIEWSTATE AS THE BACKING STORE FOR MOST, IF NOT ALL, THEIR PROPERTIES.

Depending on your background, when you think of a traditional property, you might imagine something like this:

public string Text {
    get { return _text; }
    set { _text = value; }
}

What is important to know here is that this is NOT what most properties on ASP.NET controls look like. Instead, they use the ViewState StateBag, not a private instance variable, as their backing store:

public string Text {
    get { return (string)ViewState["Text"]; }
    set { ViewState["Text"] = value; }
}

It is also important to understand how DEFAULT VALUES are implemented using this technique. When you think of a property that has a default value, in the traditional sense, you might imagine something like the following:

public class MyClass {
    private string _text = "Default Value!";
    public string Text {
        get { return _text; }
        set { _text = value; }
    }
}

The default value is the default because it is what is returned by the property if no one ever sets it. How can we accomplish this when ViewState is being used as the private backing? Like this:

public string Text {
    get { return ViewState["Text"] == null ? "Default Value!" : (string)ViewState["Text"]; }
    set { ViewState["Text"] = value; }
}

2. VIEWSTATE TRACKS CHANGES!

Basically, tracking allows the StateBag to keep track of which of its items have been modified since its TrackViewState() method was called.
Consider:

stateBag["key"] = "abc";
stateBag.IsItemDirty("key"); // returns false

stateBag.TrackViewState();

stateBag["key"] = "abc";
stateBag.IsItemDirty("key"); // returns true

Notice that the second assignment marks the item dirty even though the value did not actually change. The StateBag does not compare the new value against the old one, although it could. That kind of comparison is not important for ViewState to do its job, so it doesn't. So that's tracking in a nutshell.

But you might wonder why StateBag would need this ability in the first place. Why on earth would anyone need to know only the changes since TrackViewState() was called? Why wouldn't they just utilize the entire collection of items? This one point seems to be at the core of all the confusion over ViewState. I have interviewed many professionals, sometimes with years and years of ASP.NET experience logged in their resumes, who have failed miserably to prove to me that they understand this point. Actually, I have never interviewed a single candidate who has!

First, to truly understand why tracking is needed, you will need to understand a little bit about how ASP.NET sets up declarative controls. Declarative controls are controls that are defined in your ASPX or ASCX form, for example:

<asp:Label id="lbl1" runat="server" Text="Hello" />

When ASP.NET instantiates a declarative control, it assigns the declared attribute values to the corresponding properties before view state tracking begins, so those initial values are never marked dirty. This little trick ASP.NET uses to populate properties allows it to easily detect the difference between a declaratively set value and a dynamically set value. If you don't yet realize why that is important, please keep reading.

3. SERIALIZATION AND DESERIALIZATION

Here is where it finally comes together... are you ready? When the page renders, each control saves its view state, and only the items that are marked dirty are serialized into the hidden form field sent to the client. Despite this smart optimization employed by ASP.NET, unnecessary data is still persisted into ViewState all the time due to misuse. I will get into examples that demonstrate these types of mistakes later on.

POP QUIZ

If you've read this far, congratulations, I am rewarding you with a pop quiz. Aren't I nice? Here it is. Let's say you have two nearly identical ASPX forms: Page1.aspx and Page2.aspx. Contained within each page is just a form tag and a label, like so:

<form id="form1" runat="server">
    <asp:Label id="lbl1" runat="server" />
</form>

They are identical except for one minor difference.
In Page1.aspx, we shall declare the label's text to be "abc": <asp:Label ...And on Page2.aspx, we shall declare the label's text to be something much longer (the preamble to the Constitution of the United States of Americ. The question is: Are the two sizes you noted the same, or are they different? Before we get to the answer, lets make it a little bit more involved. Lets say you also put a button next to the label (on each page): <asp:Button... Are the encoded ViewState values the same, or different? The correct answer to the first part the question is THEY ARE THE SAME!. The correct answer to the second part is again, THEY ARE THE SAME! LosFormatter for ASP.NET 1.x or the ObjectStateFormatter for ASP.NET 2.0. (v1.x) or the ObjectStateFormatter (v2.0). 4. AUTOMATICALLY RESTORES DATA previous request. next postback. Brilliant! Whew. Now you are an expert on ViewState management. IMPROPER USE OF VIEWSTATE. CASES OF MISUSE - Forcing a Default - Persisting static data - Persisting cheap data - Initializing child controls programmatically - Initializing dynamically created controls programmatically This is one of the most common misuses, and it is also the easiest to fix. The fixed code is also usually more compact than the wrong code. Yes, doing things the right way can lead to less code. Imagine that.: <abc:JoesControl. Well, since we all know which sex rules this world, we can assume Jane ends up getting Joe to fix his control. Much to Jane's delight, this is what Joe comes up with: public class JoesControl : WebControl { public string Text { get { return this.ViewState["Text"] == null ? Session["SomeSessionKey"] : this.ViewState["Text"] as string; } set { this.ViewState["Text"] = value; } } }! 2. Persisting static data: (ShoppingCart.aspx) <asp:Label (ShoppingCart.aspx.cs) protected override void OnLoad(EventArgs args) { this.lblUserName.Text = CurrentUser.Name; base.OnLoad(e); }. <asp:Label: <%= CurrentUser.Name %> control in ASP.NET: The LITERAL! 
<asp:Literal

No span tag here.

3. Persisting cheap data

A common instance of this mistake is when populating a dropdown list of U.S. States. Unless you are writing a web application that you plan on warping back in time to December 7, 1787, the list of U.S. states is not going to change on you.

Our proverbial programmer Joe decides he will populate his dropdown list from a USSTATES table in a database. The eCommerce site is already using a database, so it's trivial for him to add the table and query it.

4. Initializing child controls programmatically

Let's say Joe would like to display the current date and time in a label declared on the form.

<asp:Label

protected override void OnInit(EventArgs args) {
    this.lblDate.Text = DateTime.Now.ToString("MM/dd/yyyy HH:mm:ss");
    base.OnInit(args);
}

Even though Joe is setting the label text in the earliest event possible:

<asp:Label

private void cmdRemoveDate_Click(object sender, EventArgs args) {
    this.lblDate.Text = "--/--/---- --:--:--";
}

...the change to it would be persisted in ViewState. Like I said... ASP.NET does not provide an easy way to accomplish this task. For you ASP.NET 2.0 developers out there, you do have the $ sign syntax, which allows you to use expression builders to declare values that actually come from a dynamic source (ie, resources, declared connection strings). There's no expression builder for "just run this code" so I don't think that helps you either (UPDATE: Unless you use my custom CodeExpressionBuilder!).

The root of the problem is simply that we need to be able to assign the Text property of the label BEFORE it begins tracking its ViewState:

public class DateTimeLabel : Label {
    public DateTimeLabel() {
        this.Text = DateTime.Now.ToString("MM/dd/yyyy HH:mm:ss");
    }
}

5. Initializing dynamically created controls programmatically

This is the same problem as before, but since you are in more control of the situation, it is much easier to solve. Let's say Joe has written a custom control that at some point is dynamically creating a Label.
public class JoesCustomControl : Control {
    protected override void CreateChildControls() {
        Label l = new Label();
        this.Controls.Add(l);
        l.Text = "Joe's label!";
    }
}

That means, my friends:

public class JoesCustomControl : Control {
    protected override void CreateChildControls() {
        Label l = new Label();
        l.Text = "Joe's label!";
        this.Controls.Add(l);
    }
}

Subtle. Instead of initializing the label's text after adding it to the control collection, Joe initializes it before it is added. This ensures without a doubt that the Label is not tracking ViewState when it is initialized:

public class JoesCustomControl : Control {
https://weblogs.asp.net/infinitiesloop/Truly-Understanding-Viewstate
This is made possible with Mark's new and improved super-fast (and I'll add, actually functional!) graphics library :) I've uploaded the code to the app store here, but in the meantime here's the heart of the functionality (notice it's still copied line for line from the Processing demo book, since all the functions are ported!): for(int i = 0; i <= width; i += 20) { for(int j = 0; j <= width; j += 20) { float size = dist(mouseX, mouseY, i, j); size = size/max_distance * 15; ellipse(i, j, size, size); } } 12 comments: Thanks, its an interesting algorithm :) Good job! Question... Is there an easier way to download the new touchshield core? I manually D/L'd the individual files from the github, but couldn't find a way to download the whole enchilada -- I could only get the older cores. Thanks, Sasha Sasha, Your totally right, pulling the cores manually from github is a pain. They do have the "download as zip" feature, which makes grabbing all cores slightly easier. You also bring up a good point, the Arduino package should be synced up tighter with the new cores. So that's why I'll work with Omar to bang out another package. Lets say, in about a day? Hope that helps. Sasha meanwhile I have uploaded all the new cores here in order to can work with them: I can't seem to get that new core to work, am I not including the SubPGraphics library? OLD SubPGraphics.cpp void background(uint8_t redValue, uint8_t greenValue, uint8_t blueValue) { setbcolor(redValue, greenValue, blueValue); fillback(); } NEW SubPGraphics.cpp void background(uint8_t redValue, uint8_t greenValue, uint8_t blueValue) { setbcolor(redValue, greenValue, blueValue); fillback(); } by the way, nice graphic Matt! I think you just inspired my next project. Mike you have to include this line: #include "SubPGraphics.h" Download the new core files from this url: And then replace all the files in the slide with that ones. Add the line #include "SubPGraphics.h" to your programas Tell me if it works please. Isidro. 
Thanks all. Hope I didn't come across as demanding -- re-reading my post, I seemed rather terse... Meant to add that the processing demo is cool. I tried the same thing with the 'circles that change size and stroke-color based on mouse velocity' sketch as well...can't remember the name off the top of my head. I thought I posted a second comment as well... I, too, noticed the need to 'include' SubPGraphics. I was also wondering if Mark S could give a hint as to how to use the Hershey fonts. When I tried 'including' some of the hershey files and calling some functions I could not compile. Thanks again for posting the new files, all the great work and I look forward to a new Arduino package. Also, here is a video of my Arduino/Touchshield clock... Thanks, Sasha @Sasha - nope, not at all ... I'm a total beginner when it comes to github personally, so I honestly don't know the first thing when it comes to pulling code from it (cough Chris help cough)... And speaking of ... whoa! That video was officially ridiculously cool... where is the time coming from?! Can I write a blog article about it or maybe do you have a blog that I can link to? Were you thinking of maybe possibly sharing your code (and maybe making it the official first non-Mike, non-Chris, non-Matt "app" for the slide on the app store?!?!!) Ha I guess I can always hope... :-) jaja i also love that example!! It is possible to download the code from anywhere? Try using HersheyFonts like this: char cadena1[]="HELLO"; void setup() { background(0,0,0); stroke(40, 153, 224); HersheyDrawCString(4,60,80, cadena1, 30, 0, 1); } All of them are NUMBERS not CHARACTHERS because they are short ints ok? Tell me if it works with than change. Isidro. Matt -- The latest version of my clock is using a Real-time-clock chip for time keeping duties, a light sensor and an accelerometer(thanks to sparkfun). An earlier version I made was faking the time via an ~800ms delay. 
I'll totally find the older code(clean it up to prevent embarrassment) and send it to you for the app store(I'll even send the current code that has all the serial/i2c stuff, though it won't be as useful without the peripheral chips). I'd be flattered if you blogged about it ^_^. I'll write up a description of what is going on and send it as well. i'll see if I can find any other goodies.. I had a fun program that draws a grid of squares whose colors are based on touch input...(have you noticed that if you touch two places at once it seems to average the x,y values? Poorman's multi-touch...) Thanks, Sasha Here is a link to a few programs... I'll add some more and a write up of the 'proper' clock project... - Sasha Sasha thanks for that code! The clock is fantastic :)
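Coming back to the bubbles sketch at the top of the post: the distance-to-size mapping is easy to sanity-check off the hardware. Here's a quick port to plain Python (the 200-pixel canvas and the diagonal as max_distance are my assumptions, mirroring the Processing demo):

```python
from math import dist  # Python 3.8+

WIDTH = 200
MAX_DISTANCE = dist((0, 0), (WIDTH, WIDTH))   # canvas diagonal

def bubble_size(mouse, point, scale=15):
    """Bubble diameter grows with the point's distance from the mouse."""
    return dist(mouse, point) / MAX_DISTANCE * scale

# With the "mouse" parked at the origin, sizes grow toward the far corner:
sizes = [round(bubble_size((0, 0), (i, j)), 2)
         for i in range(0, WIDTH + 1, 100)
         for j in range(0, WIDTH + 1, 100)]
print(sizes)   # 0.0 at the mouse position, up to 15.0 at the opposite corner
```

Every size stays inside the 0-15 range the ellipse() call expects, so the scaling on the TouchShield should behave the same way.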
http://antipastohw.blogspot.com/2009/02/touchshield-processing-bubbles.html
On Thu, Feb 25, 2010 at 8:27 PM, Edward Cherlin <echerlin at gmail.com> wrote: > On Thu, Feb 25, 2010 at 19:33, kirby urner <kirby.urner at gmail.com> wrote: >> I briefly blogged about our meeting last night. >> >> > > Thank you. > >> I posted Ed Cherlin's Chinese + Arabic sig > > Japanese/Urdu in language, but you are correct as to writing system. > UNICODE ======= Got it, was wondering about that. Those unicode strings showed up fine in the chat window when I cut and pasted them, with independent confirmation from a distant reader. In contrast, my recent unicode tests to the Math Forum show Drexel's system remembers the entities (the surd and phi symbols in this case) but doesn't display them. (fyi) << hi Carl >> > I am a great admirer of the tribe and the languages, although I also > enjoy the UnLambda language. > Yeah, I'm not eager to feud with the Great Lambda tribe, find them somewhat intimidating. I get the sense they've been waiting for a place in the sun a long time, especially when it comes to math teaching. All these so-called imperative / procedural languages (Python among them) just getting in the way, hogging the road. >From that point of view, my OOish type stuff is like a bad dream, because I'm aiming to perpetuate what's supposed to be shriveling on the vine around now. Rex Page really doesn't want people using for-loops to explain Sigma notation (the greek letter thing), says it takes really advanced mathematics to intelligently discuss for-loops, so we just shouldn't do it at the pre-college level. Maria D., somewhat new to this debate, courageously invited those present at the meeting last night to dive in on these issues, but we seemed too preoccupied getting a handle on this form of synchronous communication (the fancy control panel, the division of roles...). Rex was at our meeting last night as well. 
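To make the Sigma-notation debate concrete, here's the kind of thing
being argued over -- a toy sketch of my own, not anything Rex endorsed:

```python
from functools import reduce

# Sigma from k=1 to 10 of k^2, written three ways:

# 1. The imperative for-loop Rex objects to
total = 0
for k in range(1, 11):
    total += k ** 2

# 2. A generator expression fed to sum() -- closer to the notation itself
sigma = sum(k ** 2 for k in range(1, 11))

# 3. A fold, for the Great Lambda tribe
folded = reduce(lambda acc, k: acc + k ** 2, range(1, 11), 0)

print(total, sigma, folded)   # 385 385 385
```

Same answer all three ways, of course -- the fight is over which one
belongs in front of a ninth grader.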
NUDGING PHILOSOPHERS ===================== I've recently gone banging on doors in the philosophy department suggesting long-simmering debates of this nature should be getting a hearing in those chambers. Lets get these ostensibly articulate, widely-read individuals to take a big picture view and play air traffic controller more successfully. We need to stop squabbling and give a next generation a better ride. Better to bring these debates to a head, under expert managers, than to let them simmer and fester decade after decade. That just leads to nothing happening. In recruiting for this cause, I'm hearkening back to when philosophy was still proudly a source of overview, was at the apex of the trivium / quadrivium (co-piloting with theology), its avatars confidant they could render their diplomatic services across disciplines where needed. I'm overtly nostalgic for those liberal arts values, associated with the Italian Renaissance in some degree. In my readings, the great bugaboo we all need to counter is "over-specialization". Whereas narrowing one's focus is usually considered a "good thing" (how one becomes "a professional"), I was signaled to read 'Operating Manual for Spaceship Earth' at the Woodrow Wilson School of Public Affairs (where I enrolled in some courses). Now I'm seeing a big need for "glue languages" performing a role analogous to Python's in a computerized ecosystem (Python plays well with others, talks to lots of different software). Philosophers should consider it a part of their job description to come up with better glue languages (ones humans might use). Douglas Hofstadter has done valuable of work in this direction, with 'Godel Escher Bach' etc. And what about Wolfram? >> The Great Lambda worshipers are a tribe to our north, inherit through >> LISP, LOGO and Scheme, although that middle language donated its >> turtle to the OO camp, also working in Ruby right? > > And in Turtle Art, written in Python, on Sugar. 
> We have this sort of punk grunge techno-anarchist coffee shop called Duke's Landing on SE Belmont that's like a HQS for XO activity. Michael D. is always upgrading his, is now running that Xtra Ordinary system on one of them. I contributed an XO to the mix just the other night in fact, in the context of one of our 'Vegans not Pigs' events (bands get together and play, then eat vegan food cooked on the premises and marvel at how affordable that is, only $2 a plate -- but then Evelyn fries up her chicken as an option, for extra $). (event poster) (XO at Duke's) (event writeup) My journals and Photostream are full of pro-XO propaganda. I've done some of the better pictures I think. I love that they go to the trouble at the XO website to talk about the skull-and-bones motif, i.e. the O could be a skull, the X some crossed bones. They're not saying that's right, but at least they address the issue. The bias in this conversation is that a skull-and-bones would be bad, but then part of kid culture is to romanticize pirates so I really don't see a need to get defensive about that particular connotation. That would be my spin in case anyone asks. >> >> >> >> Re: "Great Lambda worshipers": talking about the functional >> programming camp, not wanting to pollute thinking like a mathematician >> with the "mutable variables" of the computer scientists. > > There are many kinds of mathematician. I worked a bit with Ken Iverson > of APL and J fame on how to add combinatory logic abstraction to J and > other languages, and published a paper on it. J supports several > flavors of Computer Science and math, including FP, OO, and > traditional APL. > Yes, Kenneth helped me fix a couple typos on my beginner-level web page called 'Jiving in J'. Way back to the beginning of this edu-sig archive, you'll find me bringing up J and APL, just as you do. I admire them both. I brought up J over on math-thinking-l as well, but didn't get any takers. 
I don't get the feeling anyone frequenting that list is actually familiar with J, even if calling themselves a functional programmer. """ One needs more than one paradigm to know what "paradigm" means, so I would at least advocate for a minimum of two paradigms in any subject claiming to teach about paradigms. Whether K-12 should is an open question, however, Kuhn's Structure of Scientific Revolutions has been around long enough to have made these concepts rather universal. So I'm all for OO and FP as a minimal combo, though others will think of others. I like the J language, and wonder if that's embraced as functional programming by anyone. Inherits from APL. """ [ ] >> Python has "little lambda" (a token lambda). >> >> We seemed pretty much in agreement during this meeting that Computer >> Science is going away as a high school discipline, leaving only >> Mathematics (cite: death of CS AP test, only a pale shadow of its >> former self). > > I am working on Third Grade CS, in Turtle Art and Smalltalk. > Yeah, we're not all in the same namespace here. The politics in Oregon is to eliminate CS with an eye towards freeing CS teachers to finally teach something for credit (high school math) not just an elective, for which too many people, especially young women, just don't have the time. Per my Great Bugaboo above (over-specialization), my bias is to question specialization getting in too early. At our meeting in London hosted by Shuttleworth Foundation (Alan Kay present), a South African math teacher, really good at her job, kept saying "I teach students, not subjects" by which she meant whole individual human beings were her focus, not academic turfdoms that, at the end of the day, all need to co-exist inside each individual mind. Maybe we should just have one subject through age 15 called "language games for children" that's all-encompassing, includes outdoor sports, other activities. Or call it "philosophy" if you like. 
Then we specialize later, become computer scientists, mathematicians, pirates, clowns or whatever. But we start out with something more integrated and whole. >> The only question seems to be whether folding CS into Math means >> keeping some programming, or going with the New Zealand unplugged >> route (CS on paper). >> >> I'm not sure even New Zealand is going the NZ unplugged route. Nat >> Torkington has a say: >> >> >> >> Lets see how students "vote with their feet" on that one, i.e. it's >> not entirely up to the teachers. > > Amen, brothers and sisters. That's my greatest hope for giving > children Internet access on XOs, that we will be able to hear from > them, and they will be able to hear from each other in numbers for the > first time. > Per blog, I went to the wrong virtual Elluminate session at first, owing to some last minute changes. The session I sat in on was more about generic collaboration tools and their role in distance education these days, the big difference they're making. One of the participants made the good point that these technologies were changing interpersonal dynamics, as you can't so easily shout down, dominate, monopolize discussions in these tools, even those designed to bring people together synchronously. She said in the actual classroom she'd always been the shy wallflower type, never got a word in edgewise, but since moving to these alternative Internet tools, she'd found her voice, her ability to get into the game and stay there. Loud guys with strong opinions were no longer part of her obstacle course, praise Allah. I bring that up to underline your hopes, for children getting to speak up for themselves more. If our cyber-environment serves children better, then probably more adults will feel better served as well. Many hitherto marginalized and/or semi-voiceless players have new reasons for hope, given the spread of collaborative technologies. Kir. > >
https://mail.python.org/pipermail/edu-sig/2010-February/009838.html
In this tutorial you’ll learn advanced Python web automation techniques: using Selenium with a “headless” browser, exporting the scraped data to CSV files, and wrapping your scraping code in a Python class. Motivation: Tracking Listening Habits Suppose that you have been listening to music on bandcamp for a while now, and you find yourself wishing you could remember a song you heard a few months back. Sure, you could dig through your browser history and check each song, but that might be a pain… All you remember is that you heard the song a few months ago and that it was in the electronic genre. “Wouldn’t it be great,” you think to yourself, “if I had a record of my listening history? I could just look up the electronic songs from two months ago, and I’d surely find it.” Today, you will build a basic Python class, called BandLeader that connects to bandcamp.com, streams music from the “discovery” section of the front page, and keeps track of your listening history. The listening history will be saved to disk in a CSV file. You can then explore that CSV file in your favorite spreadsheet application or even with Python. If you have had some experience with web scraping in Python, you are familiar with making HTTP requests and using Pythonic APIs to navigate the DOM. You will do more of the same today, except with one difference. Today you will use a full-fledged browser running in headless mode to do the HTTP requests for you. A headless browser is just a regular web browser, except that it contains no visible UI element. Just like you’d expect, it can do more than make requests: it can also render HTML (though you cannot see it), keep session information, and even perform asynchronous network communications by running JavaScript code. If you want to automate the modern web, headless browsers are essential. 
Free Bonus: Click here to download a "Python + Selenium" project skeleton with full source code that you can use as a foundation for your own Python web scraping and automation apps. Setup Your first step, before writing a single line of Python, is to install a Selenium supported WebDriver for your favorite web browser. In what follows, you will be working with Firefox, but Chrome could easily work too. Assuming that the path ~/.local/bin is in your execution PATH, here’s how you would install the Firefox WebDriver, called geckodriver, on a Linux machine: $ wget $ tar xvfz geckodriver-v0.19.1-linux64.tar.gz $ mv geckodriver ~/.local/bin Next, you install the selenium package, using pip or whatever you like. If you made a virtual environment for this project, you just type: $ pip install selenium Note: If you ever feel lost during the course of this tutorial, the full code demo can be found on GitHub. Now it’s time for a test drive. Test Driving a Headless Browser To test that everything is working, you decide to try out a basic web search via DuckDuckGo. You fire up your preferred Python interpreter and type the following: >>> from selenium.webdriver import Firefox >>> from selenium.webdriver.firefox.options import Options >>> opts = Options() >>> opts.set_headless() >>> assert opts.headless # Operating in headless mode >>> browser = Firefox(options=opts) >>> browser.get('') So far, you have created a headless Firefox browser and navigated to. You made an Options instance and used it to activate headless mode when you passed it to the Firefox constructor. This is akin to typing firefox -headless at the command line. Now that a page is loaded, you can query the DOM using methods defined on your newly minted browser object. But how do you know what to query? The best way is to open your web browser and use its developer tools to inspect the contents of the page. Right now, you want to get ahold of the search form so you can submit a query. 
By inspecting DuckDuckGo's home page, you find that the search form <input> element has an id attribute you can use to locate it. You found the search form, used the send_keys method to fill it out, and then the submit method to perform your search for "Real Python". You can check out the top result:

>>> results = browser.find_elements_by_class_name('result')
>>> print(results[0].text)
Real Python - Real Python
Get Real Python and get your hands dirty quickly so you spend more time making real applications. Real Python teaches Python and web development from the ground up ...

Everything seems to be working. In order to prevent invisible headless browser instances from piling up on your machine, you close the browser object before exiting your Python session:

>>> browser.close()
>>> quit()

Groovin' on Tunes

You've tested that you can drive a headless browser using Python. Now you can put it to use:

- You want to play music.
- You want to browse and explore music.
- You want information about what music is playing.

To start, you navigate to and start to poke around in your browser's developer tools. You discover a big shiny play button towards the bottom of the screen with a class attribute that contains the value "playbutton". You check that it works:

>>> opts = Options()
>>> opts.set_headless()
>>> browser = Firefox(options=opts)
>>> browser.get('')
>>> browser.find_element_by_class_name('playbutton').click()

You should hear music! Leave it playing and move back to your web browser. Just to the side of the play button is the discovery section. Again, you inspect this section and find that each of the currently visible available tracks has a class value of "discover-item", and that each item seems to be clickable. In Python, you check this out:

>>> tracks = browser.find_elements_by_class_name('discover-item')
>>> len(tracks)   # 8
>>> tracks[3].click()

A new track should be playing! This is the first step to exploring bandcamp using Python!
You spend a few minutes clicking on various tracks in your Python environment but soon grow tired of the meagre library of eight songs.

Exploring the Catalogue

Looking back at your browser, you see the buttons for exploring all of the tracks featured in bandcamp's music discovery section. By now, this feels familiar: each button has a class value of "item-page". The very last button is the "next" button that will display the next eight tracks in the catalogue. You go to work:

>>> next_button = [e for e in browser.find_elements_by_class_name('item-page') if e.text.lower().find('next') > -1][0]
>>> next_button.click()

Great! Now you want to look at the new tracks, so you think, "I'll just repopulate my tracks variable like I did a few minutes ago." But this is where things start to get tricky.

First, bandcamp designed their site for humans to enjoy using, not for Python scripts to access programmatically. When you call next_button.click(), the real web browser responds by executing some JavaScript code. If you try it out in your browser, you see that some time elapses as the catalogue of songs scrolls with a smooth animation effect. If you try to repopulate your tracks variable before the animation finishes, you may not get all the tracks, and you may get some that you don't want.

What's the solution? You can just sleep for a second, or, if you are just running all this in a Python shell, you probably won't even notice. After all, it takes time for you to type too.

Another slight kink is something that can only be discovered through experimentation. You try to run the same code again:

>>> tracks = browser.find_elements_by_class_name('discover-item')
>>> assert(len(tracks) == 8)
AssertionError
...

But you notice something strange. len(tracks) is not equal to 8 even though only the next batch of 8 should be displayed. Digging a little further, you find that your list contains some tracks that were displayed before.
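Sleeping for a fixed second works, but a sturdier habit is to poll: re-check a condition until it holds or a timeout expires. Here's a browser-free sketch of that pattern (Selenium packages the same idea as WebDriverWait, if you'd rather not roll your own; the counter-based "animation" below is just a stand-in):

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Returns the truthy value, or raises TimeoutError. This is the same idea
    Selenium ships as WebDriverWait(driver, timeout).until(...).
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError('condition not met within {} seconds'.format(timeout))
        time.sleep(interval)

# Example: a "condition" that only becomes true on the third poll,
# mimicking a page animation that takes a moment to finish.
state = {'calls': 0}

def animation_done():
    state['calls'] += 1
    return state['calls'] >= 3

print(wait_until(animation_done, timeout=2.0, interval=0.01))  # True
```

In the scraper, the condition would be something like "the track list has exactly eight visible items," so you wait for the page to settle instead of guessing how long the animation takes.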
To get only the tracks that are actually visible in the browser, you need to filter the results a little. After trying a few things, you decide to keep a track only if its x coordinate on the page falls within the bounding box of the containing element. The catalogue's container has a class value of "discover-results". Here's how you proceed:

>>> discover_section = browser.find_element_by_class_name('discover-results')
>>> left_x = discover_section.location['x']
>>> right_x = left_x + discover_section.size['width']
>>> discover_items = browser.find_elements_by_class_name('discover-item')
>>> tracks = [t for t in discover_items
...           if t.location['x'] >= left_x and t.location['x'] < right_x]
>>> assert len(tracks) == 8

Building a Class

If you are growing weary of retyping the same commands over and over again in your Python environment, you should dump some of it into a module. A basic class for your bandcamp manipulation should do the following:

- Initialize a headless browser and navigate to bandcamp
- Keep a list of available tracks
- Support finding more tracks
- Play, pause, and skip tracks

Here's the basic code, all in one go:

from selenium.webdriver import Firefox
from selenium.webdriver.firefox.options import Options
from time import sleep, ctime
from collections import namedtuple
from threading import Thread
from os.path import isfile
import csv

BANDCAMP_FRONTPAGE=''

class BandLeader():
    def __init__(self):
        # Create a headless browser
        opts = Options()
        opts.set_headless()
        self.browser = Firefox(options=opts)
        self.browser.get(BANDCAMP_FRONTPAGE)

        # Track list related state
        self._current_track_number = 1
        self.track_list = []
        self.tracks()

    def tracks(self):
        '''
        Query the page to populate a list of available tracks.
        '''
        # Sleep to give the browser time to render and finish any animations
        sleep(1)

        # Get the container for the visible track list
        discover_section = self.browser.find_element_by_class_name('discover-results')
        left_x = discover_section.location['x']
        right_x = left_x + discover_section.size['width']

        # Filter the items in the list to include only those we can click
        discover_items = self.browser.find_elements_by_class_name('discover-item')
        self.track_list = [t for t in discover_items
                           if t.location['x'] >= left_x and t.location['x'] < right_x]

        # Print the available tracks to the screen
        for (i, track) in enumerate(self.track_list):
            print('[{}]'.format(i + 1))
            lines = track.text.split('\n')
            print('Album  : {}'.format(lines[0]))
            print('Artist : {}'.format(lines[1]))
            if len(lines) > 2:
                print('Genre  : {}'.format(lines[2]))

    def catalogue_pages(self):
        '''
        Print the available pages in the catalogue that are presently accessible.
        '''
        print('PAGES')
        for e in self.browser.find_elements_by_class_name('item-page'):
            print(e.text)
        print('')

    def more_tracks(self, page='next'):
        '''
        Advances the catalogue and repopulates the track list. We can pass in
        a number to advance to any of the available pages.
        '''
        next_btn = [e for e in self.browser.find_elements_by_class_name('item-page')
                    if e.text.lower().strip() == str(page)]
        if next_btn:
            next_btn[0].click()
            self.tracks()

    def play(self, track=None):
        '''
        Plays a track. If no track number is supplied, the presently
        selected track will play.
        '''
        if track is None:
            self.browser.find_element_by_class_name('playbutton').click()
        elif type(track) is int and 1 <= track <= len(self.track_list):
            self._current_track_number = track
            self.track_list[self._current_track_number - 1].click()

    def play_next(self):
        '''
        Plays the next available track
        '''
        if self._current_track_number < len(self.track_list):
            self.play(self._current_track_number + 1)
        else:
            self.more_tracks()
            self.play(1)

    def pause(self):
        '''
        Pauses the playback
        '''
        self.play()

Pretty neat. You can import this into your Python environment and run bandcamp programmatically! But wait, didn't you start this whole thing because you wanted to keep track of information about your listening history?

Collecting Structured Data

Your final task is to keep track of the songs that you actually listened to. How might you do this?
What does it mean to actually listen to something anyway? If you are perusing the catalogue, stopping for a few seconds on each song, do each of those songs count? Probably not. You are going to allow some 'exploration' time to factor in to your data collection.

Your goals are now to:

- Collect structured information about the currently playing track
- Keep a "database" of tracks
- Save and restore that "database" to and from disk

You decide to use a namedtuple to store the information that you track. Named tuples are good for representing bundles of attributes with no functionality tied to them, a bit like a database record:

TrackRec = namedtuple('TrackRec', [
    'title',
    'artist',
    'artist_url',
    'album',
    'album_url',
    'timestamp'   # When you played it
])

In order to collect this information, you add a method to the BandLeader class. Checking back in with the browser's developer tools, you find the right HTML elements and attributes to select all the information you need. Also, you only want to get information about the currently playing track if music is actually playing at the time. Luckily, the page player adds a "playing" class to the play button whenever music is playing and removes it when the music stops.
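Before wiring TrackRec into the class, it's worth poking at what a namedtuple gives you for free (the sample values below are made up):

```python
from collections import namedtuple
from time import ctime

TrackRec = namedtuple('TrackRec', [
    'title', 'artist', 'artist_url', 'album', 'album_url', 'timestamp'
])

rec = TrackRec('Song', 'Artist', 'https://artist.example', 'Album',
               'https://album.example', ctime())

# Fields read like attributes, but the record is still an ordinary tuple:
print(rec.title, rec.album)    # Song Album
print(list(rec)[:2])           # ['Song', 'Artist']

# _make and _fields give you free (de)serialization hooks:
row = list(rec)                # e.g. a CSV row
restored = TrackRec._make(row)
print(restored == rec)         # True
print(TrackRec._fields[:3])    # ('title', 'artist', 'artist_url')
```

That tuple-ness is exactly what makes the CSV persistence later in the tutorial so painless.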
With these considerations in mind, you write a couple of methods:

def is_playing(self):
    '''
    Returns `True` if a track is presently playing
    '''
    playbtn = self.browser.find_element_by_class_name('playbutton')
    return playbtn.get_attribute('class').find('playing') > -1

def currently_playing(self):
    '''
    Returns the record for the currently playing track,
    or None if nothing is playing
    '''
    try:
        if self.is_playing():
            title = self.browser.find_element_by_class_name('title').text

            album_detail = self.browser.find_element_by_css_selector('.detail-album > a')
            album_title = album_detail.text
            album_url = album_detail.get_attribute('href').split('?')[0]

            artist_detail = self.browser.find_element_by_css_selector('.detail-artist > a')
            artist = artist_detail.text
            artist_url = artist_detail.get_attribute('href').split('?')[0]

            return TrackRec(title, artist, artist_url,
                            album_title, album_url, ctime())
    except Exception as e:
        print('there was an error: {}'.format(e))

    return None

For good measure, you also modify the play() method to keep track of the currently playing track:

def play(self, track=None):
    '''
    Plays a track and records it. If no track number is supplied,
    the presently selected track will play.
    '''
    if track is None:
        self.browser.find_element_by_class_name('playbutton').click()
    elif type(track) is int and 1 <= track <= len(self.track_list):
        self._current_track_number = track
        self.track_list[self._current_track_number - 1].click()

    sleep(0.5)
    if self.is_playing():
        self._current_track_record = self.currently_playing()

Next, you've got to keep a database of some kind. Though it may not scale well in the long run, you can go far with a simple list. You add self.database = [] to BandLeader's __init__() method. Because you want to allow for time to pass before entering a TrackRec object into the database, you decide to use Python's threading tools to run a separate thread that maintains the database in the background.

You'll supply a _maintain() method to BandLeader instances that will run in a separate thread. The new method will periodically check the value of self._current_track_record and add it to the database if it is new.
You will start the thread when the class is instantiated by adding some code to __init__():

# The new init
def __init__(self):
    # Create a headless browser
    opts = Options()
    opts.set_headless()
    self.browser = Firefox(options=opts)
    self.browser.get(BANDCAMP_FRONTPAGE)

    # Track list related state
    self._current_track_number = 1
    self.track_list = []
    self.tracks()

    # State for the database
    self.database = []
    self._current_track_record = None

    # The database maintenance thread
    self.thread = Thread(target=self._maintain)
    self.thread.daemon = True  # Kills the thread when the main process dies
    self.thread.start()

def _maintain(self):
    while True:
        self._update_db()
        sleep(20)  # Check every 20 seconds

def _update_db(self):
    try:
        check = (self._current_track_record is not None
                 and (len(self.database) == 0
                      or self.database[-1] != self._current_track_record)
                 and self.is_playing())
        if check:
            self.database.append(self._current_track_record)
    except Exception as e:
        print('error while updating the db: {}'.format(e))

If you’ve never worked with multithreaded programming in Python, you should read up on it! For your present purpose, you can think of a thread as a loop that runs in the background of the main Python process (the one you interact with directly). Every twenty seconds, the loop checks a few things to see if the database needs to be updated, and if it does, appends a new record. Pretty cool. The very last step is saving the database and restoring from saved states. Using the csv package, you can ensure your database resides in a highly portable format and remains usable even if you abandon your wonderful BandLeader class! The __init__() method should be yet again altered, this time to accept a file path where you’d like to save the database. You’d like to load this database if it is available, and you’d like to save it periodically, whenever it is updated.
The updates look like this:

def __init__(self, csvpath=None):
    self.database_path = csvpath
    self.database = []

    # Load database from disk if possible
    if self.database_path is not None and isfile(self.database_path):
        with open(self.database_path, newline='') as dbfile:
            dbreader = csv.reader(dbfile)
            next(dbreader)  # To ignore the header line
            self.database = [TrackRec._make(rec) for rec in dbreader]

    # .... The rest of the __init__ method is unchanged ....

# A new save_db() method
def save_db(self):
    with open(self.database_path, 'w', newline='') as dbfile:
        dbwriter = csv.writer(dbfile)
        dbwriter.writerow(list(TrackRec._fields))
        for entry in self.database:
            dbwriter.writerow(list(entry))

# Finally, add a call to save_db() to your database maintenance method
def _update_db(self):
    try:
        check = (self._current_track_record is not None
                 and (len(self.database) == 0
                      or self.database[-1] != self._current_track_record)
                 and self.is_playing())
        if check:
            self.database.append(self._current_track_record)
            self.save_db()
    except Exception as e:
        print('error while updating the db: {}'.format(e))

Voilà! You can listen to music and keep a record of what you hear! Amazing. Something interesting about the above is that using a namedtuple really begins to pay off. When converting to and from CSV format, you take advantage of the ordering of the rows in the CSV file to fill in the rows in the TrackRec objects. Likewise, you can create the header row of the CSV file by referencing the TrackRec._fields attribute. This is one of the reasons using a tuple ends up making sense for columnar data.
- Maybe you’d like to query songs by date or title or artist and build playlists that way.

You have learned that Python can do everything that a web browser can do, and a bit more. You could easily write scripts to control virtual browser instances that run in the cloud. You could create bots that interact with real users or mindlessly fill out forms! Go forth and automate!
https://realpython.com/modern-web-automation-with-python-and-selenium/
CC-MAIN-2020-16
refinedweb
3,012
58.28
Containers the hard way

A container is a set of Linux operating system primitives that together provide the illusion of an isolated machine. A process or a set of processes can shed their environment or namespaces and live in new namespaces of their own, separate from the host's default namespaces. Container management systems like Docker make it incredibly easy to manage containers on your machine. But how are these containers constructed? At the most basic level, it is just a sequence of Linux system calls (involving namespaces and cgroups, mainly), while also leveraging other existing Linux technologies for the container file system, networking, etc.

Gocker explanation

Gocker and how it works are explained at the Linux system call level on the Unixism blog. If you are interested in that level of detail, please read it.

Why Gocker?

When I came across bocker, which is a Docker-like container management system written in Bash shell script, I found two problems with it:

- Bocker uses various Linux utilities. While you get the point, command line utilities are opaque, and you don't get to understand what they are doing at the Linux system call level. Also, a single command can sometimes issue more than one pertinent system call.
- Bocker's last commit is more than 5 years ago, and it does not work anymore. Docker Hub API changes seem to have broken it.

Gocker, on the other hand, is pure Go source code, which allows you to see exactly what goes on at the Linux system call level. This should give you a way better understanding of how containers actually work. Don't get me wrong here. Bocker is still a fantastic and very creatively written tool.
If you want to understand how containers work, you should still take a look at it and I'm confident you'll learn a thing or two from it, just like I did.

Capabilities

- Run a command in a container: gocker run <image[:tag]> </path/to/command>
- List running containers: gocker ps
- Execute a command in a running container: gocker exec <container-id> </path/to/command>
- List locally available images: gocker images
- Remove a locally available image: gocker rmi <image-id>

Other capabilities

- Gocker uses the Overlay file system for container file systems.
- Gocker containers get their own isolated namespaces:
  - File system (via chroot)
  - PID
  - IPC
  - UTS (hostname)
  - Mount
  - Network
- Cgroups are created to limit the resources available to containers.

An example Gocker session

➜ sudo ./gocker images
2020/06/12 08:32:23 Cmd args: [./gocker images]
IMAGE                TAG                  ID
centos               latest               470671670cac
redis                latest               c349430fd524
ubuntu               18.04                c3c304cb4f22
ubuntu               latest               1d622ef86b13

➜ sudo ./gocker run alpine /bin/sh
2020/06/12 08:33:33 Cmd args: [./gocker run alpine /bin/sh]
2020/06/12 08:33:33 New container ID: 7bfe9b0f1c2e
2020/06/12 08:33:33 Downloading metadata for alpine:latest, please wait...
2020/06/12 08:33:36 imageHash: a24bb4013296
2020/06/12 08:33:36 Checking if image exists under another name...
2020/06/12 08:33:36 Image doesn't exist. Downloading...
2020/06/12 08:33:38 Successfully downloaded alpine 2020/06/12 08:33:38 Uncompressing layer to: /var/lib/gocker/images/a24bb4013296/fe8bebfdf212/fs 2020/06/12 08:33:38 Image to overlay mount: a24bb4013296 2020/06/12 08:33:38 Cmd args: [/proc/self/exe setup-netns 7bfe9b0f1c2e] 2020/06/12 08:33:38 Cmd args: [/proc/self/exe setup-veth 7bfe9b0f1c2e] 2020/06/12 08:33:38 Cmd args: [/proc/self/exe child-mode --img=a24bb4013296 7bfe9b0f1c2e /bin/sh] / # ifconfig) veth1_7bfe9b Link encap:Ethernet HWaddr 02:42:6E:E8:FC:06 inet addr:172.29.41.13 Bcast:172.29.255.255 Mask:255.255.0.0 inet6 addr: fe80::42:6eff:fee8:fc06/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:22 errors:0 dropped:0 overruns:0 frame:0 TX packets:7 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2328 (2.2 KiB) TX bytes:586 (586.0 B) / # ps aux PID USER TIME COMMAND 1 root 0:00 /proc/self/exe child-mode --img=a24bb4013296 7bfe9b0f1c2e /bin/sh 7 root 0:00 /bin/sh 9 root 0:00 ps aux / #. >>> exit() / # exit 2020/06/12 08:34:34 Container done. ➜ sudo ./gocker run ubuntu /bin/bash 2020/06/12 08:35:13 Cmd args: [./gocker run ubuntu /bin/bash] 2020/06/12 08:35:13 New container ID: c7eb7bab7e4c 2020/06/12 08:35:13 Image already exists. Not downloading. 
2020/06/12 08:35:13 Image to overlay mount: 1d622ef86b13 2020/06/12 08:35:13 Cmd args: [/proc/self/exe setup-netns c7eb7bab7e4c] 2020/06/12 08:35:13 Cmd args: [/proc/self/exe setup-veth c7eb7bab7e4c] 2020/06/12 08:35:13 Cmd args: [/proc/self/exe child-mode --img=1d622ef86b13 c7eb7bab7e4c /bin/bash] [email protected]:/# [On another terminal] ➜ sudo ./gocker ps [sudo] password for shuveb: 2020/06/12 08:36:19 Cmd args: [./gocker ps] CONTAINER ID IMAGE COMMAND c7eb7bab7e4c ubuntu:latest /usr/bin/bash ➜ sudo ./gocker exec c7eb7bab7e4c /bin/bash 2020/06/12 08:37:15 Cmd args: [./gocker exec c7eb7bab7e4c /bin/bash] [email protected]:/# ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 1153100 6132 ? Sl 03:05 0:00 /proc/self/exe child-mode --img=1d622ef86b13 root 8 0.0 0.0 4116 3236 ? S+ 03:05 0:00 /bin/bash root 11 0.0 0.0 4116 3376 ? S 03:07 0:00 /bin/bash root 14 0.0 0.0 5888 2956 ? R+ 03:07 0:00 ps aux [email protected]:/# Gocker limitations Here are some limitations I'd love to fix in a future release: - Gocker does not currently support exposing container ports on the host. Whenever Docker containers need to expose ports on the host, Docker uses the program docker-proxyas a proxy to get that done. Gocker needs a similar proxy developed. While Gocker containers can access the internet today, the ability to expose ports on the host will be a great feature to have (mainly to learn how that's done). - Gocker does not do error handling well. Should something go wrong especially when attempting to run a container, Gocker might not cleanly unmount some file systems. Containers accessing internet When you run Gocker for the first time, a new bridge, gocker0 is created. Since all container network interfaces are connected to this bridge, they can talk to each other without you having to do anything. For containers to be able to reach the internet though, you need to enable packet forwarding on the host. 
For this, a convenience script enable_internet.sh has been provided. You might need to change it to reflect the name of your internet-connected interface before you run it. There are instructions in the script. After you run this, Gocker containers should be able to reach the internet and install packages, etc.

External Go libraries used

- GoContainerRegistry for downloading container images from a container registry, the default being Docker Hub.
- PFlag for handling command line flags.
- Netlink to configure Linux network interfaces without having to get bogged down by Netlink socket programming.
- Unix. Because Unix :)

Disclaimer

Gocker runs as root. Use at your own risk. This is my first Go program beyond a reasonable number of lines, and I'm sure there are better ways to write Go programs, and there might still be a lot of bugs lingering in here. Here are some things Gocker does to your system so you know:

- It creates the gocker0 bridge if it does not exist.
- It blindly assumes that the IP address range 172.29.*.* is available and uses it.
- It creates various namespaces and cgroups.
- It mounts overlay file systems.

To this end, the safest way to run Gocker might be in a virtual machine.

Distributions

I developed Gocker on my day-to-day Arch Linux based computer. I also tested Gocker on an Ubuntu 20.04 virtual machine. It works great.

Building and running

Once you clone the repo, assuming you have Go installed on your machine, change into the Gocker directory and use the following command to retrieve dependencies:

go mod download

Then, to build gocker, run the following command:

go build -o gocker .
https://golangexample.com/gocker-a-mini-docker-written-in-go/
CC-MAIN-2021-39
refinedweb
1,346
64.61
Hope you found the last tutorial of some use. I know I did. This will be a real quick and easy tutorial. It won't get too much more complicated at this point.

#include <conio.h>
#include <time.h>
#include <stdlib.h>
#include <AL/al.h>
#include <AL/alc.h>
#include <AL/alu.h>
#include <AL/alut.h>

// Buffers hold sound data.
ALuint Buffer;

// Sources are points of emitting sound.
ALuint Source;

// Position of the source sound.
ALfloat SourcePos[] = { 0.0, 0.0, 0.0 };

// Velocity of the source sound.
ALfloat SourceVel[] = { 0.0, 0.0, 0.1 };

// Position of the listener.
ALfloat ListenerPos[] = { 0.0, 0.0, 0.0 };

// Velocity of the listener.
ALfloat ListenerVel[] = { 0.0, 0.0, 0.0 };

// Orientation of the listener. (First 3 elements are "at", last 3 are "up".)
ALfloat ListenerOri[] = { 0.0, 0.0, -1.0, 0.0, 1.0, 0.0 };

There is only one change in the code since the last tutorial in this first section. It is that we altered the source's velocity. Its 'z' field is now 0.1.

ALboolean LoadALData()
{
    // Variables to load into.
    ALenum format;
    ALsizei size;
    ALvoid* data;
    ALsizei freq;
    ALboolean loop;

    // Load wav data into a buffer.
    alGenBuffers(1, &Buffer);
    if (alGetError() != AL_NO_ERROR)
        return AL_FALSE;

    alutLoadWAVFile("wavdata/Footsteps.wav", &format, &data, &size, &freq, &loop);
    alBufferData(Buffer, format, data, size, freq);
    alutUnloadWAV(format, data, size, freq);

    // Bind the buffer with the source.
    alGenSources(1, &Source);
    if (alGetError() != AL_NO_ERROR)
        return AL_FALSE;

    alSourcei (Source, AL_BUFFER,   Buffer);
    alSourcef (Source, AL_PITCH,    1.0);
    alSourcef (Source, AL_GAIN,     1.0);
    alSourcefv(Source, AL_POSITION, SourcePos);
    alSourcefv(Source, AL_VELOCITY, SourceVel);
    alSourcei (Source, AL_LOOPING,  AL_TRUE);

    // Do an error check and return.
    if (alGetError() != AL_NO_ERROR)
        return AL_FALSE;

    return AL_TRUE;
}

Two changes in this section. First we are loading the file "Footsteps.wav". We are also explicitly setting the source's 'AL_LOOPING' value to 'AL_TRUE'. What this means is that when the source is prompted to play it will continue to play until stopped. It will play over again after the sound clip has ended.

void SetListenerValues()
{
    alListenerfv(AL_POSITION,    ListenerPos);
    alListenerfv(AL_VELOCITY,    ListenerVel);
    alListenerfv(AL_ORIENTATION, ListenerOri);
}

void KillALData()
{
    alDeleteBuffers(1, &Buffer);
    alDeleteSources(1, &Source);
    alutExit();
}

Nothing has changed here.

int main(int argc, char *argv[])
{
    // Initialize OpenAL and clear the error bit.
    alutInit(NULL, 0);
    alGetError();

    // Load the wav data.
    if (LoadALData() == AL_FALSE)
        return 0;

    SetListenerValues();

    // Setup an exit procedure.
    atexit(KillALData);

    // Begin the source playing.
    alSourcePlay(Source);

    // Loop
    ALint time = 0;
    ALint elapse = 0;

    while (!kbhit())
    {
        elapse += clock() - time;
        time += elapse;

        if (elapse > 50)
        {
            elapse = 0;

            SourcePos[0] += SourceVel[0];
            SourcePos[1] += SourceVel[1];
            SourcePos[2] += SourceVel[2];

            alSourcefv(Source, AL_POSITION, SourcePos);
        }
    }
    return 0;
}
Instead of playing and stopping the audio sample, it will slowly get quieter as the source's position grows more distant. We do this by slowly incrementing the position by its velocity over time. The time is sampled by checking the system clock, which gives us a tick count. It shouldn't be necessary to change this, but if the audio clip fades too fast you might want to change 50 to some higher number. Pressing any key will end the loop.
http://devmaster.net/p/2889/openal-lesson-2-looping-and-fadeaway
CC-MAIN-2016-07
refinedweb
377
69.58
This patch introduces Cryptomatte output passes to Cycles. There are two modes: "Accurate" mode, which is CPU only, tracks all IDs encountered in a pixel before it sorts them by coverage and strips any that exceed the number of selected Cryptomatte layers. Since that relies on dynamic memory allocations, it is CPU only. The other mode keeps track of no more IDs than the requested number of layers. When the number of IDs for a given pixel exceeds the number of layers, it is possible that an ID with a higher coverage gets rejected in favor of an ID with lower coverage, if the latter was sampled first. It would be a good improvement in the future to allow GPU renders too to track any number of IDs per pixel, regardless of the number of Cryptomatte layers. In order to support a user-selectable number of Cryptomatte layers, Cycles passes are now not only distinguished by type but also by name. That way, any number of passes of type PASS_CRYPTOMATTE can be added to a render. Going forward, this could become useful for other types too, should we introduce light path expressions or custom AOVs.

Inline review comments:

- string_startswith()?
- Indentation.
- std::min -> this is what util_algorithm.h is for.
- Include statements are supposed to be sorted alphabetically.
- See render/buffers.h above.
- I'm sure this madness of ifdef is easily avoidable.
- First you go away from string to char* when calling this function, then you go away from string to char* here in the pass? What's the point? Just compare string to string.
- util_algorithm.h

I think I found a way to write id passes using atomics. I benchmarked the Victor scene at 50% size, all Cryptomatte options enabled. Compiled with Visual Studio 2015, running on 2x Xeon E5-2660v2, that is 40 threads:

- Accurate mode: 8m21s
- Stochastic mode: 8m18s
- Cryptomatte off: 8m10s

So STL appears to be a measurable overhead, but it is below 1% of total render time for this scene. A custom linked list from a memory pool still sounds good though, since that could also be a way of getting the accurate mode for GPUs.

In D3538#81848, @Stefan Werner (swerner) wrote:
I think I found a way to write id passes using atomics.

That seems like it would work, as long as there are enough free ID slots. If there are not enough, it's random which object ID gets skipped, but I guess that is acceptable for now.

Without resorting to dynamic memory allocations inside the kernel or grossly over-allocating memory, there will always be a chance of randomly skipped data.

In D3538#81903, @Stefan Werner (swerner) wrote:
Without resorting to dynamic memory allocations inside the kernel or grossly over-allocating memory, there will always be a chance of randomly skipped data.

I mean random in that the results could be different between renders, so the result is not exactly repeatable. A solution could be to write the minimum sample number next to each ID and prioritize lower samples, but I'm not sure it's worth it in practice.

Addressed some of Sergey's comments.

Some more inline comments:

- Since CAS always returns the old value, we could actually detect the case that another thread already allocated this slot for the same ID here and get rid of the "--slot;" part.
- For consistency, I'd prefer to also test for slot ID == ID_NONE here.
- Could we move 2 * kernel_data.film.cryptomatte_depth into the macro? It's always the same and it would make this part easier to read.
- I think this change can be removed?
- Why is the sorting part of the kernel? I'd prefer to just move it to buffers.cpp, that would simplify the code a lot. Also, sample is uninitialized if work_index >= total_work_size.
- The code here could be deduplicated by having a wrapper function (or maybe a function pointer?) that calls either flatten_buffer or sort_buffer.

Looks good to me except for one detail.

- This should check .x instead of .y I guess? In that case, the assert needs to be changed to .y == 0.0f.
- Narrowing from int. Is it really needed to be unsigned short?
- Move to a function, call from the if() statement above.
- This is quite scary. Need update? Is being updated? A function pointer to a function which updates the ray?
- Clearly, only scale y and w. Without comment ;)
- The convention is to:
- There is pair in ccl namespace, see util_map.h.
- Either use int for pixel_index or do a cast of tile.w to `size_t` to make the math happen in size_t. Currently it's happening in int. Why is this index an int?
- Can we typedef unordered_map<float, float> for clarity? It will shorten so much code!
- Make a named enum (ideally), and do not use use_ for not-a-boolean.
- Just int.
- Can we have this file more cycles-styled?

Some more updates, addressing Sergey's comments.

Moving it to buffers.cpp excludes it from GPU acceleration. Putting it in buffers.cpp isn't straightforward. RenderBuffers is agnostic of total sample count, and sorting every time get_pass_rect is called is wasteful. RenderTile knows about total sample count but is just a structure of data without methods. Turning it into its own RenderTile::Task (similar to denoising) seems like overkill to me.
https://developer.blender.org/D3538
CC-MAIN-2020-45
refinedweb
885
76.22
Hello! I'm new to Java ME programming and I ran into problems with LWUIT. It is maybe not an LWUIT-specific problem, but with LWUIT it is reproduced perfectly (both on the simulator and on the device, an Asha 310). In portrait mode the VKB is working as expected. Also, if you open the VKB in portrait mode and then rotate the phone, it is also working until you close the VKB and then try to edit another field. The screen flickers, but the VKB does not appear and the blinking cursor does not appear in the text field. The following code is the minimal test case where the problem appears. It is really frustrating, as the only workaround seems to be to lock the midlet to portrait mode, which is not a good solution. Any ideas why this is happening and how to fix it?

Code:
package com.test;

import com.sun.lwuit.Form;
import com.sun.lwuit.TextField;

public class FolderForm extends Form {
    FolderForm() {
        super("my form");
        addComponent(new TextField());
    }
}
http://developer.nokia.com/Community/Discussion/showthread.php/240070-LWUIT-VKB-not-working-in-landscape-orientation
CC-MAIN-2013-48
refinedweb
166
65.62
Re: Mathlink

- To: mathgroup at smc.vnet.net
- Subject: [mg64294] Re: Mathlink
- From: "Jens-Peer Kuska" <kuska at informatik.uni-leipzig.de>
- Date: Fri, 10 Feb 2006 02:13:31 -0500 (EST)
- Organization: Uni Leipzig
- References: <dsetls$jgp$1@smc.vnet.net>
- Sender: owner-wri-mathgroup at wolfram.com

Hi,

AFAIK MathLink will work with the Borland C/C++ (free Personal edition), Visual Studio (the free Visual Studio Express Edition), and Open Watcom C/C++ compilers. But it will not work with Cygwin, and you should try one of the compilers above -- none costs money for personal use.

Regards
Jens

"M. Prabhakar Rao" <mrao1 at umbc.edu> wrote in the message news:dsetls$jgp$1 at smc.vnet.net...
| Hello:
|
| I have a question regarding MathLink. The basic idea is to be able to
| access a function written in C via Mathematica. I followed the procedure
| illustrated in the Mathematica Book. I wrote a template file for the
| function and used mprep to generate corresponding .c file. However, when
| I compile this .c file to generate the executable, I get the following
| error:
|
| In fuunction 'int _MLMain(char**, char**, char*):
| error: '_fstrncpy' undeclared (first use this function)
| error: (Each undeclared identifier is reported only once for each function
| it appears in.)
|
| When I look at the .c file generated by mprep, the function _fstrncpy is
| declared as follows:
| #if WIN32_MATHLINK && !defined(_fstrncpy)
| # define _fstrncpy
| #endif
|
| System Information:
| I am running Mathematica on a laptop under Windows XP. Since I do not have
| Microsoft Visual Studio, I am using cygwin to write all my code in C.
|
| Please help.
|
| Best regards,
|
| Prabhakar Rao
| --
| M. Prabhakar Rao, Ph.D.
| Department of Mechanical Engineering
| University of Maryland Baltimore County
http://forums.wolfram.com/mathgroup/archive/2006/Feb/msg00223.html
CC-MAIN-2020-40
refinedweb
289
50.94
Tim,

> ...to check it and it still occurs. I'm starting to think this has something
> to do with how JUnit uses class loaders. Any ideas?

I think this is because JUnit creates a new class loader for every run (see BaseTestRunner.loadSuiteClass() and BaseTestRunner.getLoader() for more details). So, I think you should somehow unload all libraries you load after the suite has completed.

Hope this helps.

With best regards,
Oleg.

Hi all,

I was trying to use JUnit to test an SAP JCO wrapper I had written. I began to notice a problem with re-running the test from the JUnit GUI. I could run the test successfully the first time, but the second run would error with...

java.lang.ExceptionInInitializerError: JCO.classInitialize(): Could not load middleware layer 'com.sap.mw.jco.rfc.MiddlewareRFC'
Native Library C:\Dev\externals\sapjco-ntintel-2.0.8\sapjcorfc.dll already loaded in another classloader
    at com.sap.mw.jco.JCO.(Unknown Source)
    at SAPTest.testSAP(SAPTest.java:29)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

...so I wrote a simplified JUnit test...

import com.sap.mw.jco.JCO;
import junit.framework.TestCase;

public class SAPTest extends TestCase {
    private String client = "750";
    private String lang = "en";
    private String host = "xyz";
    private String sysnr = "03";
    private String userid = "fred";
    private String password = "super111";

    public void testSAP() {
        // Get a client and connect
        JCO.Client jclient = JCO.createClient(client, userid, password, lang, host, sysnr);
    }
}

...to check it and it still occurs. I'm starting to think this has something to do with how JUnit uses class loaders. Any ideas? I guess I'm off to start trolling through the JUnit source. :-)

Thanks,
Tim.
https://sourceforge.net/p/junit/mailman/message/3339183/
CC-MAIN-2017-30
refinedweb
334
53.68
Description

Entity.writeTo(OutputStream out, int mode) wraps the output stream in a Base64OutputStream if the transfer encoding is BASE64. Later the wrapper stream gets closed and (despite a comment that says otherwise) the inner stream gets closed, too.

The attached patch mime4j-base64.patch would resolve the problem, but you might prefer another solution.

I think content encoding / decoding streams MUST NOT close the underlying stream EVER, as the encoded content may be just a part of a larger message.

Oleg

If there are no use cases where the underlying stream should be closed as well, you could simply remove the statement out.close() from Base64OutputStream.close() (line 393) to resolve the issue.

Slightly different take at fixing the problem. Encoding / decoding streams no longer close the underlying stream. They become unreadable / non-writable instead. Here's the patch; it includes test cases. Please review.

Oleg

I think this is a good idea and your patch looks fine to me. But maybe the same concept should also be applied to QuotedPrintableOutputStream? Although it is an inner class of CodecUtil, it can be accessed the same way as the base64 version through wrapQuotedPrintable(OutputStream, boolean). Maybe it should even become a public class for consistency reasons.

One more thing that you might want to consider is the general contract of InputStream / OutputStream. Although it is more slack than the one of Reader / Writer, it seems to indicate that an IOException should be thrown in read() / write() once the stream has been closed. Closing an already closed stream should have no effect. For example, see InputStream.read(byte[]): "Throws: IOException - If the first byte cannot be read for any reason other than the end of the file, if the input stream has been closed, or if some other I/O error occurs."

My bad. I fixed this issue in the old Base64OutputStream implementation and I forgot about it when I changed the implementation. I thought "tests pass, so it must be ok". Mea culpa: I didn't write a unit test the first time I fixed it. The patch proposed by Oleg seems fine; Markus's hint about the IOException may be appropriate, but the class is for internal use ATM, and it should never happen that those methods are called when the stream is closed in mime4j, so we can improve this in a future release.

Before this gets postponed, please review my mime4j-close-codec-ioex.patch. It is based on Oleg's patch with the following changes:

- encoders / decoders throw IOException from read / write once closed
- makes QuotedPrintableOutputStream a public class
- QuotedPrintableOutputStream.write(int) no longer throws an UnsupportedOperationException
- unit tests for QuotedPrintableOutputStream
- QuotedPrintableEncoder becomes a package private class

Patch applied. The only change from the proposed patch is the "closed = true" when "len < 0" (in Base64OutputStream), because len == -1 is the way we are notified of a stream end and writing more data would result in corrupted output, so it seems safer to consider the stream closed after that call. I also changed some really trivial code formatting things.

Adding "closed = true" would not have been necessary because this branch only gets executed if len < 0. len < 0 would normally be an illegal argument for this method and should be punished by throwing an IllegalArgumentException. In case of this implementation it is used by close() to indicate EOF. And close() already takes care of setting closed to true in its finally block.

@Markus: as we don't actually throw the IllegalArgumentException, I prefer to deal with a possible len == -1 by updating the closed variable. Of course the whole thing could be refactored to do that code in another method and not "abuse" the "standard" write method. Do you see any harm with the change I made?

@Stefano: Sorry, I did not mean to imply that it does any harm. Thanks for applying the patch.

Closing all issues fixed previously, after a brief review of each.

See attachment Base64BugDemo.java for a simple test that shows the problem.
https://issues.apache.org/jira/browse/MIME4J-79?focusedCommentId=12633918&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2015-32
refinedweb
677
64.51
What Is a Programming Language?

Programming languages, like spoken languages, are ways of communicating ideas. When two people who know the same language talk, they’re able to understand each other because they both know the rules that formalize how to translate sounds into meaning and vice versa. Computers don’t understand human languages. Worse, the languages they do understand—their instruction sets—don’t mesh very well with how most humans speak. Imagine for a moment a French person talking to a Chinese person. Neither of them understands the other’s native language, but if they speak a common second language, such as English, they can still talk to each other. This is more or less the situation when it comes to programming languages.

The English skills of the speakers in that last example also have direct parallels in terms of programming languages. When you have one person speaking and another person listening, comprehension depends on two things:

- The speaker’s ability to translate ideas into the language
- The listener’s ability to translate the spoken words into ideas

The speaker’s ability is equivalent to the programmer’s skill, and the listener’s ability is equivalent to the compiler’s efficiency. A programming language is a compromise. Translating a language such as English directly into a machine language is very difficult for a machine. Similarly, "speaking" a machine language well is very difficult for a human. A programming language is one that a human can speak reasonably well, and that a computer can translate into a language that it understands.

Language, Framework, and Runtime

Most languages are very small; for example, C contains only about 20 keywords. They control the flow of a program, but do little else. Writing a useful program using just the C language is almost impossible. Fortunately, the designers recognized this problem.
The C language specification also defines some functions that must be available for C programs to call. The majority of these functions, known together as the C Standard Library, are themselves written in C, although a few primitive ones may need to be written in some language that’s yet more primitive. Most languages have some kind of equivalent to the C Standard Library. Java, for example, has an enormous catalogue of classes in the java.* namespace that must be present for an implementation of the language to be called "Java." Twenty years ago, Common Lisp had a similarly large collection of functions. In some cases, the language is most commonly used with a collection of libraries, usually referred to as a framework, that are specified separately from the language itself. Sometimes the line between language and framework is blurry. It’s clear that C++ and the Microsoft Foundation Classes are separate entities, for example, but for Objective-C the distinction is less clear. The Objective-C specification defines very little by way of a standard library. (Although, because Objective-C is a proper superset of C, the C Standard Library is always there.) Objective-C is almost always used in conjunction with the OpenStep API; these days, the common implementations are Apple’s Cocoa and GNUstep. When you start adding libraries to a language, you lose some of the clarity of what makes the language unique. Writing an OpenGL program in Java has a lot more in common with writing an OpenGL program in C than it does with writing a Swing application in Java. Another source of confusion is the runtime system. Languages such as Smalltalk and Lisp were traditionally run in a virtual machine (although most current Lisp implementations are compiled into native machine code). This requirement can lead people to perceive a given language as slow. Interpreted code is almost always slower than compiled code. This doesn’t have anything to do with the language, but it is important. 
Code run in a Lisp interpreter will be much slower than compiled Lisp code. When judging a language, it’s important to differentiate between characteristics of a language and characteristics of an implementation of the language. Early implementations of Java were very slow; the just-in-time compiler was little better than an interpreter. This led to people calling Java an "interpreted language." The GNU project’s Java compiler destroyed this myth, although other design decisions in the Java language still prevent it from being run as fast as some other languages.
http://www.informit.com/articles/article.aspx?p=661370
The DIALOGEX resource-definition statement specifies a dialog box. The statement defines the position and dimensions of the dialog box on the screen as well as the dialog box style. It also defines the following:

nameID DIALOGEX x, y, width, height [, helpID]
[optional-statements]
{control-statements}

nameID
Unique name or a unique 16-bit unsigned integer value that identifies the dialog box.

x
Location on the screen of the left side of the dialog box, in dialog units.

y
Location on the screen of the top of the dialog box, in dialog units.

width
Width of the dialog box, in dialog units.

height
Height of the dialog box, in dialog units.

helpID
Numeric expression indicating the ID used to identify the dialog box during WM_HELP processing.

optional-statements
Options for the dialog box. This can be zero or more of the following statements.

control-statements
The body of the DIALOGEX resource is made up of any number of control statements. There are four families of control statements: generic, static, button, and edit. For more information, see Remarks.

Certain attributes are also supported for backward compatibility. For more information, see Common Resource Attributes.

The valid operations that may be contained in any of the numeric expressions in the statements of DIALOGEX are as follows:

The body of the resource is made up of generic, static, button, and edit control statements. While each of these families of statements uses a different syntax for defining specific features of its controls, they all share a common syntax for defining the position, size, extended styles, help identification number, and control-specific data. For more information, see Common Control Parameters.

CONTROL controlText, id, className, style

controlText
Window text for the control. For more information, see Common Control Parameters.

id
Control identifier. For more information, see Common Control Parameters.

className
Name of the class. This may be either a string enclosed in double quotation marks (") or one of the following predefined system classes: BUTTON, STATIC, EDIT, LISTBOX, SCROLLBAR, or COMBOBOX.

style
Window styles (explicit WS_*, BS_*, SS_*, ES_*, LBS_*, SBS_*, and CBS_* style values defined in Winuser.H can be used by adding #include "winuser.h" to the .rc file).

staticClass controlText, id
staticClass is one of LTEXT, RTEXT, or CTEXT.

buttonClass controlText, id
buttonClass is one of AUTO3STATE, AUTOCHECKBOX, AUTORADIOBUTTON, CHECKBOX, PUSHBOX, PUSHBUTTON, RADIOBUTTON, STATE3, or USERBUTTON.

editClass id
editClass is one of EDITTEXT, BEDIT, HEDIT, or IEDIT.

Build date: 5/7/2009
http://msdn.microsoft.com/en-us/library/aa381002(VS.85).aspx
Python - Remove duplicates from a String

While working with strings, there are many instances where it is necessary to remove duplicates from a given string. In Python, there are many ways to achieve this.

Method 1: Remove duplicate characters without order

In the below example, a function called MyFunction is created which takes a string as an argument and converts it into a set called MySet. As elements of a set are unordered and duplicate elements are not allowed, all duplicate characters are removed, in no particular order. After that, the join method is used to combine all elements of MySet with an empty string.

def MyFunction(str):
    MySet = set(str)
    NewString = "".join(MySet)
    return NewString

MyString = "Hello Python"
print(MyFunction(MyString))

The output of the above code will be similar to the following (the exact order varies from run to run, because sets are unordered):

HtPy nloeh

Method 2: Remove duplicate characters with order

In this example, the inbuilt module collections is imported into the current script to use its OrderedDict class. The fromkeys() method of OrderedDict keeps only the first occurrence of each character, so duplicate characters are removed from the given string while the original order is preserved.

import collections as ct

def MyFunction(str):
    NewString = "".join(ct.OrderedDict.fromkeys(str))
    return NewString

MyString = "Hello Python"
print(MyFunction(MyString))

The output of the above code will be:

Helo Pythn
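A note on modern Python (an addition beyond the two methods above): since Python 3.7, plain dicts preserve insertion order, so dict.fromkeys() gives the same ordered de-duplication without importing collections. The function name below is just illustrative:

```python
def remove_duplicates(text):
    # dict.fromkeys() keeps only the first occurrence of each character,
    # and dicts preserve insertion order in Python 3.7+
    return "".join(dict.fromkeys(text))

print(remove_duplicates("Hello Python"))  # Helo Pythn
```

This produces the same result as the OrderedDict version, with one fewer import.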
https://www.alphacodingskills.com/python/pages/python-remove-duplicate-characters-from-a-string.php
by Gil

Hello, in this article, I'll try to explain what Photon-pump is, and write an easy example so you can start using it for your own projects.

Photon-pump is a client for Event Store we developed at made.com; it's the little brother to atomic puppy (which is another Event Store client). It's async first and works using TCP, so it's also faster (atomicpuppy uses HTTP).

I won't talk about event sourcing since it's been talked about in previous posts, so this will be just a very simple and silly example of event sourcing.

So, let's say we have a game. For a game to happen we need players, so we need to create them. We're going to pretend that we have an application that creates players, which will later create an event and place it in the appropriate stream of Event Store.

This is the example of the "player created" event; it's a JSON blob:

{"name": "Gil"}

Now, we also need to pick a stream, which is just a string representing the "bucket" where the event will be put. We'll use "adventure", which is the name of our imaginary game. Not very creative, but it's better than "game". An event will also have a type, which is like a sub-category inside the stream.

This is what the event looks like:

Event(
    stream="adventure",
    type="player_created",
    data=json.dumps({"name": "Gil"})
)

So how would we add this event into Event Store using Photon-pump in a single Python script?

writer.py

import asyncio
import photonpump

async def write_event(conn):
    await conn.publish_event(
        'adventure',
        'player_created',
        body={'name': 'Gil'}
    )

async def run():
    async with photonpump.connect('localhost') as conn:
        await write_event(conn)

if __name__ == '__main__':
    event_loop = asyncio.get_event_loop()
    event_loop.run_until_complete(run())

So, line by line, we have an async function called write_event which will, as the name states, write the event into Event Store, using a Photon-pump connection passed in as an argument.
Next, we have the run function, which will simply create the connection and pass it to write_event. Finally, the ugly if __name__… block both creates the event_loop and runs it synchronously.

Now if you have your Event Store running locally (if you don't, change the host in the script), go to this url: and you should see the new event there.

Now that we have an event there, let's move on to the second part: reading the events from Python, and doing something with them. For this post, we'll just stick with a simple print.

Since we wrote the event in the adventure stream, we want to read the events from that stream in a separate script. Here is all the code we need:

reader.py

import asyncio
import photonpump

async def read_an_event(conn):
    for event_record in await conn.get('adventure'):
        print(event_record.event.type, event_record.event.json())

async def run():
    async with photonpump.connect('localhost') as conn:
        await read_an_event(conn)

if __name__ == '__main__':
    event_loop = asyncio.get_event_loop()
    event_loop.run_until_complete(run())

Ignoring run and if __name__…, the read_an_event function uses the get method from Photon-pump to collect all the events, using it like an iterator and printing each of the events. We get event_records, and each contains the event, so we can print out the type and the data.

This was just a very simple example that I came up with, but if you want to make it more like the real world, how about trying to follow the previous posts about CQRS, using Photon-pump to store and read the events. Stay tuned for the next part, where we will talk about subscriptions.

BONUS: If you want to replicate this code, you will need Python 3.6+ (remember to install Photon-pump: pip install photon-pump) and Docker or Event Store installed on your machine. Simply start Event Store in Docker (docker run -p 1113:1113 -p 2113:2113 eventstore/eventstore) and run those Python scripts (writer.py and reader.py) in sequence to see it work.

tags: python - open-source - photonpump
https://io.made.com/blog/2018-08-24-how-to-use-photon-pump.html
Strip trailing spaces on file save.

Currently I've got CTRL+SHIFT+T mapped to trim, but would really like a way to do this automatically.

This should be already implemented - the save action should remove trailing space from all lines in the current file. Can you please verify one more time whether it works for you.

Closing as resolved due to inactivity and no verification from reporter. Thanks.

Sorry for the delay. The save action does NOT strip trailing spaces on save. I tested with spaces on new lines and existing lines, and nothing was stripped off the ends of the lines.

Can you provide the exact build number and sample file with marked position of cursor while saving? Thanks

Build: 200905282243

File:

class This
def this
print "this" [cursor]
end
end

(trailing spaces after every line)

I save. No change. I close and reopen. No change. I have to run the strip command then save before it works.

Netbeans (6.8 M2) doesn't remove trailing spaces in the line of the cursor (only before the cursor). These whitespaces are also stored if I close and reopen the document, set the cursor into another line and save the document again.

This works fine in my dev build.

Just to make sure that we all understand how this feature works. The trailing spaces are removed (1) only from lines that you modified before saving and (2) they are not removed on the line where the cursor is.

#1 - needed for VCS controlled files
#2 - the editor must not change the caret position when saving files

We had bug reports requesting both #1 and #2, so it is highly unlikely that the current behavior will be changed.

E.g. Eclipse, Zend Studio and Kate (KDE Advanced Editor) remove the trailing spaces on ALL lines (including the current cursor line) when saving. It would be nice if NetBeans could have the same behavior as the other major IDEs.

Well, I'm sure there will be people objecting at least as loudly as you are supporting this cause. We could probably support both styles and add an option that would control which one is used.

Isn't this duplicate of issue 157561? I think so.

*** This bug has been marked as a duplicate of bug 157561 ***
https://netbeans.org/bugzilla/show_bug.cgi?id=166451
Office plant monitor - openhardware.io

So nice!

Very nice indeed! Do you have an idea of the battery life on this sensor? Why not change to 1MHz to allow the voltage to go down to 1.9 or 1.8 V?

@Nca78 from the description: Expected duration more than 2 years for the CR2032 and about 3.5 with the two AA (power consumption less than 5uA during sleep time; measured using a uCurrent Gold).

@Nca78 Like @mfalkvidd wrote, the expected life duration is greater than 2 years. But it's just theory. My first node (on 2*AA) was started several weeks ago and the battery level did not change, so I am confident. I measured the power consumption when working at 1 MHz and there was no difference between 1 and 8 MHz. As I have to decrease the transmission speed at 1 MHz, I got some issues. The main one was probably with the signing feature.

@carlierd you cannot compare the behavior of AA batteries and CR2032. Button cells have a high internal resistance; you can't draw much current from them or the voltage drops very quickly. The more current you draw, the more you lose in heat because of this internal resistance. The solution is to use a big capacitor to provide the current; then you lose energy because the capacitor has a current leak, but usually it's better than what you lose to internal resistance. Maybe your 100uF is doing that job if data sending is quick enough, but at 8MHz, with the voltage drop when sending, you might already be very close to brown-out level (or crash level if you have disabled brown-out); you should check the voltage just after sending, not before, to get an idea. Look at the difference in voltage level in the second "pulse mode" graph on page 2, which I believe uses a pulse current draw similar to your board's when you transmit: after only 50mAh of capacity used, the voltage is down to 2.4V...

Interesting!
I will check the result in a few weeks. There are several capacitors to help stability; maybe it will be enough. If it's not the case I will replace the CR2032 with 2*AAA.

I wonder whether a higher capacity coincell, like maybe a CR2450 (620mAh) or a CR2477 (1000mAh capacity), might give you the pulse current you need without needing a capacitor, thus avoiding leakage losses? Plus maybe you wouldn't need to change it as often.

Sure, 2450/77 would be better. The 2477 is a bit expensive. But I'm not sure those coincells would be better without a capacitor (I mean in the long term), looking at coincell datasheets, which are always given for a load of x kohms. Leakage current for a common-quality ceramic capacitor is in the nA range (hopefully), so I think it's better to have it buffering to help the coincell (still for the long term, or for the moment it won't be strong enough). If alkaline, there would be no need. That's IMHO.

Stop wondering. I tried with 2477 cells and it was not much better than 2032 without capacitor.

@scalz: I missed your reply! Interesting white paper!
It used to sell for around $15, but I just noticed that you can buy it for less than $5 from a number of ebay sellers, such as: Historically, one problem with PCB probes has been that over time water intrudes into the PCB and throws off the calibration. Not sure if there's a solution for that problem, though it seems like one should exist. [Edit: I see that the original author of the Chirp does have some suggestions now regarding ways to waterproof it: ] BTW, the Chirp is open source: There's also this, which I'm not familiar with: You might be inbterested in this thread: Anyhow, the probe you referenced looks a lot like this one: As an aside, people seem happy with the Vegetronix probe, up until its PCB suffers water intrusion. The asking price is rather high though. If I knew how to, I'd make one of those and simply waterproof it better. @NeverDie The person behind 'Chirp' actually left a comment in your last link (zerocharactersleft). He posts an interesting link back to his own studies too : @tomtastic said in Office plant monitor: @NeverDie The person behind 'Chirp' actually left a comment in your last link (zerocharactersleft). He posts an interesting link back to his own studies too : Thanks for the link. For the best results, running the square wave at 80Mhz or faster seems to be important. That's what vegetronix does. Now, the good news is that the clock on an ESP8266 can supply that frequency.. However, all this analog circuitry is beyond my purview, so I inevitably hit a wall with that. Is that something you (or someone reading this) knows how to do? For instance, I don't know whether the chirp guy's circuit that you linked to works as-is at 80Mhz, or whether it requires modification. If the latter, that's where I get stuck not knowing what to do. It turns out that the circuit the Chirp guy described in the link that you provided is, in fact, what is built into Chirp: None of his calculations make reference to frequency, so maybe (?) 
you could hook up the circuit to 80Mhz and it would "just work." I suppose the switching speed of those transistors might be a factor. I have no insight into that, but it would probably be easy for someone to test. IIRC, the higher frequency makes a capacitive soil moisture sensor much less influenced by soil characteristics other than moisture, so it's really necessary to have that in order to make a good probe. [Edit: What you don't want is a soil moisture probe that requires frequent manual re-calibration. Ordinary conductive probes all have that as an inherent problem. Hence the hunt for a worthwhile capacitive soil moisture sensor.] ] Completely unrelated, but I just noticed this: So with that, you could just use a regular humidity sensor inside it to judge soil moisture. Pretty cool, yes? Not sure how big it is, but maybe you could even fit your entire sensor node inside it. @NeverDie said in Office plant monitor: Completely unrelated, but I just noticed this: So with that, you could just use a regular humidity sensor inside it to judge soil moisture. Pretty cool, yes? Not sure how big it is, but maybe you could even fit your entire sensor node inside it. One of the local shops I buy electronic parts from has those. Unfortunately it's too small to house an entire sensor. This thread also has a lot of good background information and links about capacitive soil moisture sensors: It might be that one could more or less whack an 80Mhz frequency oscillation chip into the Chirp circuit: i agree with @NeverDie It's better to use high freq for a reliable soil moisture, regarding soils, calibration etc. and some studies argues even more than 200Mhz. but with 80-100 it's nice. I have one design at this freq, non corrosive design, and not same as chirp though, but can't really help you (mine is not open yet, and busy on others projects). 
Regarding the chirp design, I think it may need an opamp for better results regarding the 1M resistor etc, and some more tuning if going high freq. Trying to get this working but getting alot of errors. New to this so bear with me. In file included from C:\...\Arduino\libraries\Moisture_sensor\Moisture_sensor.ino:55:0: C:\...\Arduino\libraries\MySensors/MyTransportRFM69.h:31:1: error: expected class-name before '{' token { ^ C:\...\Arduino\libraries\MySensors/MyTransportRFM69.h:33:36: error: 'RFM69_FREQUENCY'); ^ C:\...\Arduino\libraries\MySensors/MyTransportRFM69.h:33:71: error: 'RFM69_NETWORKID'); ^ In file included from C:\...\Arduino\libraries\Moisture_sensor\Moisture_sensor.ino:56:0: C:\...\Arduino\libraries\MySensors/MySigningAtsha204Soft.h:55:1: error: expected class-name before '{' token { ^ C:\...\Arduino\libraries\MySensors/MySigningAtsha204Soft.h:62:25: error: 'MY_RANDOMSEED_PIN' was not declared in this scope uint8_t randomseedPin=MY_RANDOMSEED_PIN); ^ Moisture_sensor:83: error: call to 'MySigningAtsha204Soft::MySigningAtsha204Soft(bool, uint8_t)' uses the default argument for parameter 2, which is not yet defined MySigningAtsha204Soft signer; ^ Moisture_sensor:84: error: 'MyHwATMega328' does not name a type MyHwATMega328 hw; ^ Moisture_sensor:85: error: call to 'MyTransportRFM69::MyTransportRFM69(uint8_t, uint8_t, uint8_t, uint8_t, bool, uint8_t)' uses the default argument for parameter 1, which is not yet defined MyTransportRFM69 transport; ^ Moisture_sensor:85: error: call to 'MyTransportRFM69::MyTransportRFM69(uint8_t, uint8_t, uint8_t, uint8_t, bool, uint8_t)' uses the default argument for parameter 2, which is not yet defined Moisture_sensor:86: error: 'MySensor' does not name a type MySensor node(transport, hw, signer); ^ C:\...\Arduino\libraries\Moisture_sensor\Moisture_sensor.ino: In function 'void setup()': Moisture_sensor:112: error: 'node' was not declared in this scope node.begin(); ^ 
C:\...\Arduino\libraries\Moisture_sensor\Moisture_sensor.ino: In function 'void loop()':
Moisture_sensor:163: error: 'node' was not declared in this scope
 node.send(msgMoisture.set((moistureLevel + oldMoistureLevel) / 2.0 / 10.23, 1));
 ^
C:\...\Arduino\libraries\Moisture_sensor\Moisture_sensor.ino: In function 'int readMoisture()':
Moisture_sensor:200: error: 'node' was not declared in this scope
 node.sleep(STABILIZATION_TIME);
 ^
exit status 1
call to 'MySigningAtsha204Soft::MySigningAtsha204Soft(bool, uint8_t)' uses the default argument for parameter 2, which is not yet defined

I have looked through the code but can't seem to see what is wrong with it. Please help

@simbic did you modify the sketch in any way? If so, could you please post it? My guess would be an incomplete #define or using incompatible versions. Which version of MySensors are you using?

I have assembled a copy of @carlierd 's office plant monitor since it looked kinda neat. When I first tried to upload @carlierd 's Moisture_sensor.ino it complained that several libraries were missing. So I started google-fu around to find them. What was missing was:

MyTransportRFM69.h
MyTransport.h
MySigningAtsha204Soft.h
MySigning.h
MyMessage.h

as well as drivers, that I placed in the folder /utility/:

ATSHA204.h
RFM69.h
sha256.h

The code is as follows. I have only pressed "include MySensors library" since it complained that it was missing.

 *
 * Code and idea from mfalkvidd ().
 *
 */

/**************************************************************************************/
/* Moisture sensor.
                                                                   */
/*                                                                                    */
/* Version     : 1.2.5                                                                */
/* Date        : 11/04/2016                                                           */
/* Modified by : David Carlier                                                        */
/**************************************************************************************/
/*                              ---------------                                       */
/*                          RST |             | A5                                    */
/*                           RX |             | A4                                    */
/*                           TX |   ARDUINO   | A3                                    */
/*    RFM69 (DIO0) --------- D2 |     UNO     | A2                                    */
/*                           D3 |             | A1 --------- Moisture probe           */
/*           Power --------- D4 | ATMEGA 328p | A0 --------- Moisture probe           */
/*             +3v --------- VCC |            | GND --------- GND                     */
/*             GND --------- GND |  8MHz int. | REF                                   */
/*                          OSC |             | VCC --------- +3v                     */
/*                          OSC |             | D13 --------- RFM69 (SCK)             */
/*                           D5 |             | D12 --------- RFM69 (MISO)            */
/*                           D6 |             | D11 --------- RFM69 (MOSI)            */
/*                           D7 |             | D10 --------- RFM69 (NSS)             */
/*             LED --------- D8 |             | D9                                    */
/*                              ---------------                                       */
/*                                                                                    */
/* +3v = 2*AA                                                                         */
/*                                                                                    */
/**************************************************************************************/

#include <SPI.h>
#include <MySensors.h>
#include <MyTransportRFM69.h>
#include <MySigningAtsha204Soft.h>

//Define functions
#define round(x) ((x)>=0?(long)((x)+0.5):(long)((x)-0.5))
#define N_ELEMENTS(array) (sizeof(array)/sizeof((array)[0]))

//Constants for MySensors
#define SKETCH_NAME "Moisture Sensor"
#define SKETCH_VERSION "1.2.6"
#define CHILD_ID_MOISTURE 0
#define CHILD_ID_VOLTAGE 1
#define LED_PIN 8
#define THRESHOLD 1.1           // Only make a new reading with reverse polarity if the change is larger than 10%
#define STABILIZATION_TIME 1000 // Let the sensor stabilize before reading
//#define BATTERY_FULL 3143     // 2xAA usually gives 3.143V when full
#define BATTERY_FULL 3100       // CR2032 usually gives 3.1V when full
#define BATTERY_ZERO 2340       // 2.34V limit for 328p at 8MHz
#define SLEEP_TIME 7200000      // Sleep time between reads (in milliseconds) (close to 2 hours)

const int SENSOR_ANALOG_PINS[] = {A0, A1};

//Variables
byte direction = 0;
int oldMoistureLevel = -1;

//Construct MySensors library
MySigningAtsha204Soft signer;
MyHwATMega328 hw;
MyTransportRFM69 transport;
MySensor node(transport, hw, signer);
MyMessage
msgMoisture(CHILD_ID_MOISTURE, V_HUM);
MyMessage msgVolt(CHILD_ID_VOLTAGE, V_VOLTAGE);

/**************************************************************************************/
/* Initialization                                                                     */
/**************************************************************************************/
void setup()
{
  //Get time (for setup duration)
  #ifdef DEBUG
  unsigned long startTime = millis();
  #endif

  //Setup LED pin
  pinMode(LED_PIN, OUTPUT);
  blinkLedFastly(3);

  //Set moisture sensor pins
  for (int i = 0; i < N_ELEMENTS(SENSOR_ANALOG_PINS); i++)
  {
    pinMode(SENSOR_ANALOG_PINS[i], OUTPUT);
    digitalWrite(SENSOR_ANALOG_PINS[i], LOW);
  }

  //Start MySensors and send the sketch version information to the gateway
  node.begin();
  node.sendSketchInfo(SKETCH_NAME, SKETCH_VERSION);

  //Register all sensors
  node.present(CHILD_ID_MOISTURE, S_HUM);
  node.present(CHILD_ID_VOLTAGE, S_MULTIMETER);

  //Setup done !
  blinkLedFastly(3);

  //Print setup debug
  #ifdef DEBUG
  int duration = millis() - startTime;
  Serial.print("[Setup duration: ");
  Serial.print(duration, DEC);
  Serial.println(" ms]");
  #endif
}

/**************************************************************************************/
/* Main loop                                                                          */
/**************************************************************************************/
void loop()
{
  //Get time (for a complete loop)
  #ifdef DEBUG
  unsigned long startTime = millis();
  #endif

  //Get moisture level
  int moistureLevel = readMoisture();

  //Send rolling average of 2 samples to get rid of the "ripple" produced by different
  //resistance in the internal pull-up resistors
  //See for more information

  //Check if it was first reading, save current value as old
  if (oldMoistureLevel == -1)
  {
    oldMoistureLevel = moistureLevel;
  }

  //Verify if current measurement is not too far from the previous one
  if (moistureLevel > (oldMoistureLevel * THRESHOLD) || moistureLevel < (oldMoistureLevel / THRESHOLD))
  {
    //The change was large, so it was probably not caused by the difference in internal pull-ups.
    //Measure again, this time with reversed polarity.
    moistureLevel = readMoisture();
  }

  //Store current moisture level
  oldMoistureLevel = moistureLevel;

  //Report data to the gateway
  long voltage = getVoltage();
  node.send(msgMoisture.set((moistureLevel + oldMoistureLevel) / 2.0 / 10.23, 1));
  node.send(msgVolt.set(voltage / 1000.0, 2));
  int batteryPcnt = round((voltage - BATTERY_ZERO) * 100.0 / (BATTERY_FULL - BATTERY_ZERO));
  if (batteryPcnt > 100) {batteryPcnt = 100;}
  node.sendBatteryLevel(batteryPcnt);

  //Print debug
  #ifdef DEBUG
  Serial.print((moistureLevel + oldMoistureLevel) / 2.0 / 10.23);
  Serial.print("%");
  Serial.print(" ");
  Serial.print(voltage / 1000.0);
  Serial.print("v");
  Serial.print(" ");
  Serial.print(batteryPcnt);
  Serial.print("%");
  int duration = millis() - startTime;
  Serial.print(" ");
  Serial.print("[");
  Serial.print(duration, DEC);
  Serial.println(" ms]");
  Serial.flush();
  #endif

  //Sleep until next measurement
  blinkLedFastly(1);
  node.sleep(SLEEP_TIME);
}

/**************************************************************************************/
/* Allows to get moisture.                                                            */
/**************************************************************************************/
int readMoisture()
{
  //Power on the sensor and read once to let the ADC capacitor start charging
  pinMode(SENSOR_ANALOG_PINS[direction], INPUT_PULLUP);
  analogRead(SENSOR_ANALOG_PINS[direction]);

  //Stabilize and read the value
  node.sleep(STABILIZATION_TIME);
  int moistureLevel = (1023 - analogRead(SENSOR_ANALOG_PINS[direction]));

  //Turn off the sensor to conserve battery and minimize corrosion
  pinMode(SENSOR_ANALOG_PINS[direction], OUTPUT);
  digitalWrite(SENSOR_ANALOG_PINS[direction], LOW);

  //Make direction alternate between 0 and 1 to reverse polarity which reduces corrosion
  direction = (direction + 1) % 2;
  return moistureLevel;
}

/**************************************************************************************/
/* Allows to fastly blink the LED.
                                                                   */
/**************************************************************************************/
void blinkLedFastly(byte loop)
{
  byte delayOn = 150;
  byte delayOff = 150;
  for (int i = 0; i < loop; i++)
  {
    blinkLed(LED_PIN, delayOn);
    delay(delayOff);
  }
}

/**************************************************************************************/
/* Allows to blink a LED.                                                             */
/**************************************************************************************/
void blinkLed(byte pinToBlink, int delayInMs)
{
  digitalWrite(pinToBlink,HIGH);
  delay(delayInMs);
  digitalWrite(pinToBlink,LOW);
}

/**************************************************************************************/
/* Allows to get the real Vcc (return value in mV).                                   */
/*                                                                                    */
/**************************************************************************************/
long getVoltage()
{
  ADMUX = (0<<REFS1) | (1<<REFS0) | (0<<ADLAR) | (1<<MUX3) | (1<<MUX2) | (1<<MUX1) | (0<<MUX0);
  delay(50); // Let mux settle a little to get a more stable A/D conversion

  //Start a conversion
  ADCSRA |= _BV( ADSC );

  //Wait for it to complete
  while (bit_is_set(ADCSRA, ADSC));

  //Compute and return the value (standard 328p internal-reference formula:
  //1125300 = 1.1V bandgap * 1023 * 1000)
  return 1125300L / ADC;
}

I have just installed the Arduino IDE, and am using the latest MySensors library, 2.1.1. Another thing it complains about is:

Invalid library found in C:\...\Arduino\libraries\Moisture_sensor: C:\...\Arduino\libraries\Moisture_sensor

which is weird since I have just copy+pasted it from here: as well as built the sensor from there. If you can help, I would be very glad, as I just hoped it would be smooth sailing using MySensors. Build, upload and let it work its magic. Thanks in advance! Simon

@simbic that sketch is made for MySensors 1.x so it won't work with MySensors 2.x. You either need to convert it (guide:) or use/create one for 2.x, for example

I see now that they have expanded their selection of waterproof sensor shells: Perhaps one of those might be big enough?
Or, if not, perhaps it could be joined onto a larger waterproof cavity, and just the sensor itself goes into it? Unfortunately for me, the parts appear to be metric. However, maybe a search would yield some kind of metric-to-imperial union, at which point I could then leverage cheap local parts from Home Depot or the like, to fabricate a larger cavity for the rest of a wireless mote.

On the face of it, it seems plausible. Its main virtue is simplicity. Not sure what the failure modes are, or how they might be avoided if there are any. Plainly, you don't want to use components which might rust or otherwise corrode from humidity. Probably my biggest worry would be the possibility of condensate forming on the humidity sensor and skewing results until it evaporated. However, a sensor heater, such as some sensors (e.g. si7021: ) already have, might remedy that if it were to occur.

Has anyone tried either one of the above or anything similar? For instance, perhaps wrapping a mote in Tyvek and sealing the seam with Dupont housewrap tape would work just as well.

@NeverDie they are all very small inside (5-6mm) so are only suitable for a very thin PCB with the SMD version of the sensor; even the breakout of the si7021 that you linked is too big.

@Nca78 Golly, even an nRF52832 chip all by its lonesome self is bigger than 5-6mm. Those shells must be intended for just the sensor and nothing but the sensor (except maybe connecting wires). That would explain the presence of gland seals on some of them, such as:

@Nca78 said in Office plant monitor: @NeverDie they are all very small inside (5-6mm) so are only suitable for very thin PCB with SMD version of the sensor, even the breakout of si7021 that you linked is too big.

I haven't confirmed it, but FWIW according to: "The model shown in this link is too small for I2C boards, but seller has much larger models. L-04 has 12.8mm opening, L-06 has 17 mm and L-10 has 23mm opening."
@NeverDie I've since had some communication with the seller. He said, "We have these dimensions," in reference to the photo below that he had attached: The seller says I can buy 3 of the 1" diameter units at $5/pc. The communication with the seller is like playing 20 questions, but I only get to ask one question at a time, with typically one day turnaround for an answer. If I ask more than one question per interaction, he doesn't really answer any of them. So, getting truly meaningful answers is a slow process. @NeverDie that sounds expensive, and I'm worried that at this size it's going to be difficult to stick in the soil for small plants... @Nca78 Well, for small potted plants, you might have to re-pot the plant. However, the nice benefit that would arise is: no visible sensors, which carries with it very high WAF.
https://forum.mysensors.org/topic/4432/office-plant-monitor
CC-MAIN-2022-27
refinedweb
3,274
54.73
Testing is hard, and not everyone likes to spend their time writing unit tests when they could be building shiny new features. While almost every developer understands the value of testing their code, most of us fall prey to laziness in the face of approaching deadlines and the prospect of more exciting work. In this post, let's understand why unit tests serve as the backbone of successful products and learn a new way of writing tests that is much simpler, more intuitive and appealing. Why Testing Matters Since I mostly work with Android applications, I'll let you in on a little secret. More than half of the code bases you will come across will have little to no tests. Not just the crappy ones, but sometimes products with thousands of users. This is often due to the complex and dull nature of old-school JUnit-based testing. Not only is the process quite boring, it also requires a lot of repetitive boilerplate code to be written for even the simplest scenarios, making testing a very time-consuming process. Obviously, when you are in a rush to take the product out as soon as possible, testing becomes the least desirable practice. More often than not, the end result is that the project becomes unmaintainable within a year. I have seen organizations rewriting already-published products only because the code base had rotted into buggy spaghetti. Obviously this is not a desirable situation. Testing your code for bugs and architectural flaws from the beginning keeps your product manageable in the longer run. "If you can not measure it, you can not improve it." - Lord Kelvin Testing As You Go Laying the foundations of your project with extensive testing is not a new idea. Behavior Driven Development has been around for decades. The practice is to turn feature specifications (or user stories) into a set of unit tests and then write code that satisfies those specs via passing tests.
This results in co-development of your features and their tests side by side. Coupled with a Continuous Integration solution, this setup almost guarantees that you will never have any regression bugs. Specification Based Testing In Practice Cucumber is the most widely used BDD testing framework right now. It provides a plain-language parser called Gherkin which can be used to write tests in plain English that non-programmers can also understand. The framework turns these specifications into acceptance tests which also serve as the documentation for each feature. Here is what a test written in Gherkin looks like: Neat! Isn't it? This spec results in unit tests that run with Cucumber. Since this post is not about Cucumber, I won't go into the implementation details. You can visit this tutorial for a full walkthrough of BDD-style testing with Cucumber. I used the same article to borrow the above example. Some other popular BDD frameworks include Lettuce, Jasmine, SpecFlow, Spek, Behat, JDave and JBehave. Problems with JUnit JUnit is an industry standard at this point for all things Java. It has ruled Java development for decades now and it works. But since it was built for Java, it brings with it most of the common pitfalls of Java-based technologies that feel backward, especially in the world of modern programming languages like Kotlin, JavaScript and Python. Having said that, writing all styles of tests is entirely possible with JUnit. Mockito provides a BDD-style extension in its core library that allows you to do similar Given, When, Then style testing within JUnit test cases. Let's take a look at some of the problems associated with it and why I believe vanilla JUnit is not a good fit for a powerful language like Kotlin: Code Repetition Like everything in Java, JUnit is also quite verbose. This results in too much repetition of similar content with only a slight change of logic to test different scenarios.
How many times have you had a test for a HAS and a HAS-NOT condition with all the same contents except for a boolean? For example, a testUserAccessWhenHasToken and then a testUserAccessWhenNotHasToken. Of course, you can stuff all your assertions into one test at the cost of readability. However, a good test is supposed to be granular, i.e. targeting one case per test. Also, your test code is guaranteed to grow as your project gets bigger, and at some point your test code will surpass your application code. If you want to look at examples of this, try looking into the source code of any popular open-source project such as RxJava, Retrofit, OkHttp, Picasso, etc. Almost all of them have 1.5x to 2x more test code compared to their business logic. Lack Of Contextual Information If you are like me, then one of the first things that you may have done while switching to Kotlin was to replace your long JUnit test names like fun onTouchOutside_shouldDismissDialogAndResumeStreaming() with back-tick notation. While a big improvement, it gave us a license to go wild with it in our attempts to provide more contextual information about each test case. So now our test case has evolved into something like this: fun `on touch outside, dismiss the dialog and resume streaming the paused song` () The real issue here is that these names are not enough to provide full context about the test, unless you are willing to write an entire paragraph to define the pre- and post-conditions for the scenario. Another problem here is with the organization of test cases. Most test cases have some shared code that can be logically structured into a hierarchy. However, JUnit only allows you to write tests in the form of class methods, so you end up writing more tests and more code, but with less context about the overall theme of the current group of tests.
A Better Way To Write Tests The reason behind giving you a taste of Cucumber BDD was to show what it would be like to have such idiomatic tests. Let's define a set of specifications that we want to build and test. We will be using two libraries, Kotlintest and Mockk, to help us write tests. Feature Let's build a grade calculator that tells you your grade based on the marks you obtained. Here are the rules for grading: Installation First, create an empty Kotlin or Android project and add the following two dependencies:

testImplementation 'io.kotlintest:kotlintest-runner-junit5:3.3.2'
testImplementation 'io.mockk:mockk:1.9.3.kotlin12'

The first dependency is Kotlintest, which is a testing library built on top of JUnit. It takes advantage of Kotlin's DSL capabilities to support various testing styles. Mockk is a Kotlin-based mocking library with a very clean syntax that blends really well with Kotlintest's specs. It also provides a much more flexible API and a much wider set of features compared to Mockito or PowerMock. Implementation Let's start by creating an empty GradeCalculator class and converting our specifications into a Spec.

class GradeCalculatorSpec : BehaviorSpec({
    Given("a grade calculator") {
        val calculator = spyk(GradeCalculator())
        every { calculator.totalMarks } returns 100
        val total = calculator.totalMarks

        When("obtained marks are 90 or above") {
            Then("grade is A") {}
        }
    }
})

This does not quite look like our traditional JUnit test. So let's break it down and understand it bit by bit: - BehaviorSpec - We are extending something called a BehaviorSpec, which is basically a Spec written in BDD style (remember the Given, When, Then from Cucumber?). There are dozens of other Spec styles available in Kotlintest. - Spyk - You may notice that I wrapped the GradeCalculator object with a spyk method. If you have used Mockito before, the concept is the same. Basically, a spy is a wrapper that lets you mock some methods and variables of the object while using the actual values for the rest.
I did it to demonstrate the use of Mockk here. - Every/Returns - Similar to Mockito's when/then style, this construct is used by Mockk to prepare mock values. All we are saying is to return a mock value whenever anyone in this code block asks for this value. The value we are mocking is totalMarks, which is used to calculate the grade. - When/Then - Finally, there are a bunch of Then blocks nested inside When blocks with some description. The When block is nothing but a way to organize tests in a logical fashion, while each Then serves as the actual test where the assertion happens. So you can have a test with just a Then statement, but then there's no point in using it. Next, let's add some functionality to the GradeCalculator class.

import kotlin.math.roundToInt

class GradeCalculator {

    var totalMarks = 0

    fun getGrade(obtainedMarks: Int, totalMarks: Int): String {
        val percentage = getPercentage(obtainedMarks, totalMarks)
        return when {
            percentage >= 90 -> "A"
            percentage in 80..89 -> "B"
            percentage in 70..79 -> "C"
            percentage in 60..69 -> "D"
            else -> "F"
        }
    }

    private fun getPercentage(obtainedMarks: Int, totalMarks: Int): Int {
        return (obtainedMarks / totalMarks.toFloat() * 100).roundToInt()
    }
}

Here I added a totalMarks field, which we mock in our test. This value is used in the getPercentage method to calculate the percentage between total and obtained marks. Finally, getGrade calculates the grade by comparing the calculated percentage with different ranges. You can build this class in a TDD fashion by running the tests first and adding the functionality to make the failing tests pass one by one. I believe the end result would still be somewhat similar. In the end, let's add some assertions to test the specifications we just wrote.
The final implementation would look something like this: package com.zuhaibahmad.bddtestingtutorial import io.kotlintest.shouldBe import io.kotlintest.specs.BehaviorSpec import io.mockk.every import io.mockk.spyk class GradeCalculatorSpec : BehaviorSpec({ Given("a grade calculator") { val calculator = spyk(GradeCalculator()) every { calculator.totalMarks } returns 100 val total = calculator.totalMarks When("obtained marks are 90 or above") { val grade = calculator.getGrade(93, total) Then("grade is A") { grade.shouldBe("A") } } When("obtained marks are between 80 and 89") { val grade = calculator.getGrade(88, total) Then("grade is B") { grade.shouldBe("B") } } When("obtained marks are between 70 and 79") { val grade = calculator.getGrade(78, total) Then("grade is C") { grade.shouldBe("C") } } When("obtained marks are between 60 and 69") { val grade = calculator.getGrade(68, total) Then("grade is D") { grade.shouldBe("D") } } When("obtained marks are below 60") { val grade = calculator.getGrade(59, total) Then("grade is F") { grade.shouldBe("F") } } } }) Outcome I discussed above that JUnit tests lack contextual information and proper grouping of co-related tests. You can see how every test in the spec has a hierarchy which can be used to compose complex cases. Furthermore, for this particular style of testing, kotlintest provides an additional And block to allow you to create even more complex tests without losing contextual information. For example, you may want to construct a test like this: When("user list is fetched from API") { And("the internet is NOT available"){ Then("Test something"){ // Some assertions } } And("the internet is available"){ Then("Test something"){ // Some assertions } } } Finally, not just the tests are well organized, the test results in Android Studio have that nested style too. Here’s the result of our tests in the IDE: Conclusion - Testing your code right from the start of the project provides a solid foundation for future development. 
- The idea of TDD and BDD is not new, it forces you to write your domain-specific code and the architecture itself to be testable. However, due to the verbosity of JUnit and tight deadlines, testing falls behind quite often. - Aside from code repetition, another big problem with JUnit is the lack of contextual information due to the flat hierarchy of tests. When every case is represented in the form of a class method, it becomes quite difficult to group and organize related tests. - Modern testing frameworks like kotlintest takes advantage of the flexibility provided by Kotlin to allow writing more intuitive unit tests. - The result is less boilerplate code and more meaningful tests. You can find complete source code for this post here For suggestions and queries, just contact me. Discussion (1) I liked this a lot! props to you. Very helpful when it comes to breaking down each section to clarify and not assume the users base knowledge.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/xuhaibahmad/painless-unit-testing-with-kotlintest-mockk-1n82
CC-MAIN-2021-10
refinedweb
2,058
63.59
I created the program below, but can't figure out how to get the test data to show 40 days worth, but only the EVEN days from 2 to 40. // Program created by MY NAME to determine a person's earnings in // pennies, by doubling each day, starting at 1 penny. #include <iostream> using namespace std; int main() { // Variables int days; // Each day worked. double final = 1, total = 0.00; // Asks for number of days worked to determine salary. cout << "Enter the number of days: "; cin >> days; for( int i = 1; i < ( days + 1 ); i++ ) { cout << "Day " << i << ": " << "$" << final / 100 << endl; total += final; final += final; } cout << endl; cout << "Total: " << "$" << total / 100 << endl; return 0; } Nevermind I figured it out. Just added the following line under my for statement. if(i % 2 == 0) Originally Posted by JerBear24 Nevermind I figured it out. Just added the following line under my for statement. if(i % 2 == 0) Why not just increment i by 2 every time? Forum Rules
http://forums.codeguru.com/showthread.php?503751-Read-webBrowser1-source-and-find-text-within-HTML-tags-by-id&goto=nextnewest
CC-MAIN-2017-13
refinedweb
165
72.16
This patch implements a gang-of-threads which are designed to be used for dirty data writeback.  "pdflush" -> dirty page flush, or something.

The number of threads is dynamically managed by a simple demand-driven algorithm.

"Oh no, more kernel threads".  Don't worry, kupdate and bdflush disappear later.

The intent is that no two pdflush threads are ever performing writeback against the same request queue at the same time.  It would be wasteful to do that.  My current patches don't quite achieve this; I need to move the state into the request queue itself...

The driver for implementing the thread pool was to avoid the possibility where bdflush gets stuck on one device's get_request_wait() queue while lots of other disks sit idle.  Also generality, abstraction, and the need to have something in place to perform the address_space-based writeback when the buffer_head-based writeback disappears.

There is no provision inside the pdflush code itself to prevent many threads from working against the same device.  That's the responsibility of the caller.

The main API function, `pdflush_operation()' attempts to find a thread to do some work for you.  It is not reliable - it may return -1 and say "sorry, I didn't do that".  This happens if all threads are busy.

One _could_ extend pdflush_operation() to queue the work so that it is guaranteed to happen.
If there's a need, that additional minor complexity can be added.

Patch is against 2.5.8-pre3+ratcache+readahead+pageprivate

=====================================
--- 2.5.8-pre3/include/linux/mm.h~dallocbase-40-pdflush	Tue Apr 9 23:29:41 2002
+++ 2.5.8-pre3-akpm/include/linux/mm.h	Tue Apr 9 23:29:41 2002
@@ -589,6 +589,9 @@ static inline struct vm_area_struct * fi
 extern struct vm_area_struct *find_extend_vma(struct mm_struct *mm,
 						unsigned long addr);
 
+extern int pdflush_operation(void (*fn)(unsigned long), unsigned long arg0);
+extern int pdflush_flush(unsigned long nr_pages);
+
 extern struct page * vmalloc_to_page(void *addr);
 
 #endif /* __KERNEL__ */
--- 2.5.8-pre3/include/linux/sched.h~dallocbase-40-pdflush	Tue Apr 9 23:29:41 2002
+++ 2.5.8-pre3-akpm/include/linux/sched.h	Tue Apr 9 23:29:41 2002
@@ -368,6 +368,7 @@ do { if (atomic_dec_and_test(&(tsk)->usa
 #define PF_MEMDIE	0x00001000	/* Killed for out-of-memory */
 #define PF_FREE_PAGES	0x00002000	/* per process page freeing */
 #define PF_NOIO		0x00004000	/* avoid generating further I/O */
+#define PF_FLUSHER	0x00008000	/* responsible for disk writeback */
 
 /*
  * Ptrace flags
--- /dev/null	Thu Aug 30 13:30:55 2001
+++ 2.5.8-pre3-akpm/mm/pdflush.c	Tue Apr 9 23:33:41 2002
@@ -0,0 +1,216 @@
+/*
+ * mm/pdflush.c - worker threads for writing back filesystem data
+ *
+ * Copyright (C) 2002, Linus Torvalds.
+ *
+ * 09Apr2002	akpm@zip.com.au
+ *		Initial version
+ */
+
+#include <linux/sched.h>
+#include <linux/list.h>
+#include <linux/signal.h>
+#include <linux/spinlock.h>
+#include <linux/gfp.h>
+#include <linux/init.h>
+#include <linux/module.h>
+
+/*
+ * Minimum and maximum number of pdflush instances
+ */
+#define MIN_PDFLUSH_THREADS	2
+#define MAX_PDFLUSH_THREADS	8
+
+static void start_one_pdflush_thread(void);
+
+/*
+ *.
+ */
+
+/*
+ * All the pdflush threads.  Protected by pdflush_lock
+ */
+static LIST_HEAD(pdflush_list);
+static spinlock_t pdflush_lock = SPIN_LOCK_UNLOCKED;
+
+/*
+ * The count of currently-running pdflush threads.  Protected
+ * by pdflush_lock.
+ */
+static int nr_pdflush_threads = 0;
+
+/*
+ * The time at which the pdflush thread pool last went empty
+ */
+static unsigned long last_empty_jifs;
+
+/*
+ * The pdflush thread.
+ *
+ * Thread pool management algorithm:
+ *
+ * - The minumum and maximum number of pdflush instances are bound
+ *   by MIN_PDFLUSH_THREADS and MAX_PDFLUSH_THREADS.
+ *
+ * - If there have been no idle pdflush instances for 1 second, create
+ *   a new one.
+ *
+ * - If the least-recently-went-to-sleep pdflush thread has been asleep
+ *   for more than one second, terminate a thread.
+ */
+
+/*
+ * A structure for passing work to a pdflush thread.  Also for passing
+ * state information between pdflush threads.  Protected by pdflush_lock.
+ */
+struct pdflush_work {
+	struct task_struct *who;	/* The thread */
+	void (*fn)(unsigned long);	/* A callback function for pdflush to work on */
+	unsigned long arg0;		/* An argument to the callback function */
+	struct list_head list;		/* On pdflush_list, when the thread is idle */
+	unsigned long when_i_went_to_sleep;
+};
+
+/*
+ * preemption is disabled in pdflush.  There was a bug in preempt
+ * which was causing pdflush to get flipped into state TASK_RUNNING
+ * when it performed a spin_unlock.  That bug is probably fixed,
+ * but play it safe.  The preempt-off paths are very short.
+ */
+static int __pdflush(struct pdflush_work *my_work)
+{
+	daemonize();
+	reparent_to_init();
+	strcpy(current->comm, "pdflush");
+
+	/* interruptible sleep, so block all signals */
+	spin_lock_irq(&current->sigmask_lock);
+	siginitsetinv(&current->blocked, 0);
+	recalc_sigpending();
+	spin_unlock_irq(&current->sigmask_lock);
+
+	current->flags |= PF_FLUSHER;
+	my_work->fn = NULL;
+	my_work->who = current;
+
+	preempt_disable();
+	spin_lock_irq(&pdflush_lock);
+	nr_pdflush_threads++;
+	for ( ; ; ) {
+		struct pdflush_work *pdf;
+
+		list_add(&my_work->list, &pdflush_list);
+		my_work->when_i_went_to_sleep = jiffies;
+		set_current_state(TASK_INTERRUPTIBLE);
+		spin_unlock_irq(&pdflush_lock);
+
+		schedule();
+
+		preempt_enable();
+		(*my_work->fn)(my_work->arg0);
+		preempt_disable();
+
+		/*
+		 * Thread creation: For how long have there been zero
+		 * available threads?
+		 */
+		if (jiffies - last_empty_jifs > 1 * HZ) {
+			/* unlocked list_empty() test is OK here */
+			if (list_empty(&pdflush_list)) {
+				/* unlocked nr_pdflush_threads test is OK here */
+				if (nr_pdflush_threads < MAX_PDFLUSH_THREADS)
+					start_one_pdflush_thread();
+			}
+		}
+
+		spin_lock_irq(&pdflush_lock);
+
+		/*
+		 * Thread destruction: For how long has the sleepiest
+		 * thread slept?
+		 */
+		if (list_empty(&pdflush_list))
+			continue;
+		if (nr_pdflush_threads <= MIN_PDFLUSH_THREADS)
+			continue;
+		pdf = list_entry(pdflush_list.prev, struct pdflush_work, list);
+		if (jiffies - pdf->when_i_went_to_sleep > 1 * HZ) {
+			pdf->when_i_went_to_sleep = jiffies;	/* Limit exit rate */
+			break;					/* exeunt */
+		}
+	}
+	nr_pdflush_threads--;
+	spin_unlock_irq(&pdflush_lock);
+	preempt_enable();
+	return 0;
+}
+
+/*
+ * Of course, my_work wants to be just a local in __pdflush().  It is
+ * separated out in this manner to hopefully prevent the compiler from
+ * performing unfortunate optimisations agains the auto variables.  Because
+ * there are visible to other tasks and CPUs.  (No problem has actually
+ * been observed.  This is just paranoia).
+ */
+static int pdflush(void *dummy)
+{
+	struct pdflush_work my_work;
+	return __pdflush(&my_work);
+}
+
+/*
+ * Attempt to wake up a pdflush thread, and get it to do some work for you.
+ * Returns zero if it indeed managed to find a worker thread, and passed your
+ * payload to it.
+ */
+int pdflush_operation(void (*fn)(unsigned long), unsigned long arg0)
+{
+	unsigned long flags;
+	int ret = 0;
+
+	if (fn == NULL)
+		BUG();			/* Hard to diagnose if it's deferred */
+
+	spin_lock_irqsave(&pdflush_lock, flags);
+	if (list_empty(&pdflush_list)) {
+		spin_unlock_irqrestore(&pdflush_lock, flags);
+		ret = -1;
+	} else {
+		struct pdflush_work *pdf;
+
+		pdf = list_entry(pdflush_list.next, struct pdflush_work, list);
+		list_del_init(&pdf->list);
+		if (list_empty(&pdflush_list))
+			last_empty_jifs = jiffies;
+		spin_unlock_irqrestore(&pdflush_lock, flags);
+		pdf->fn = fn;
+		pdf->arg0 = arg0;
+		wmb();			/* ? */
+		wake_up_process(pdf->who);
+	}
+	return ret;
+}
+
+static void start_one_pdflush_thread(void)
+{
+	kernel_thread(pdflush, NULL,
+			CLONE_FS | CLONE_FILES | CLONE_SIGNAL);
+}
+
+static int __init pdflush_init(void)
+{
+	int i;
+
+	for (i = 0; i < MIN_PDFLUSH_THREADS; i++)
+		start_one_pdflush_thread();
+	return 0;
+}
+
+module_init(pdflush_init);
--- 2.5.8-pre3/mm/Makefile~dallocbase-40-pdflush	Tue Apr 9 23:29:41 2002
+++ 2.5.8-pre3-akpm/mm/Makefile	Tue Apr 9 23:29:41 2002
@@ -14,6 +14,7 @@ export-objs := shmem.o filemap.o mempool
 obj-y	 := memory.o mmap.o filemap.o mprotect.o mlock.o mremap.o \
 	    vmalloc.o slab.o bootmem.o swap.o vmscan.o page_io.o \
 	    page_alloc.o swap_state.o swapfile.o numa.o oom_kill.o \
-	    shmem.o highmem.o mempool.o msync.o mincore.o readahead.o
+	    shmem.o highmem.o mempool.o msync.o mincore.o readahead.o \
+	    pdflush.o
 
 include $(TOPDIR)/Rules.make
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
http://lkml.org/lkml/2002/4/10/6
CC-MAIN-2018-13
refinedweb
1,157
55.95
React throwaway app 2: Movie Search App Emmanuel Okiche Jul 4 '18 In the first article, I introduced you to the aim of the series and you built a currency converter. In this one, you will be building a movie search app. The Rules (just to remind you) - Your app should be completed within 60 minutes (depending on the complexity). - Must be pure React (no react-router or redux). - Must delete the project after one week. Why? These are basic apps you should be able to build anytime, not worthy of showcasing as a portfolio for a serious job interview. - Don't spend much time on designing. Remember, the idea is to check if you think in React. You could style to your taste after 60 minutes. - Don't look at my solution until you have completed yours. Else, you would be stuck in 5 years of 'tutorial purgatory' App 2 - Movie Search App - Build a Movie App that connects to an external API. - Duration should be within 1 - 2 hours (including styling). Here is a screenshot of what I expect you to build: This app would show that you understand how: - components and states work - to request data from an API - component life cycle methods - to use events - to update your UI based on state change Your time starts now! Remember not to look at my solution until you're done with yours. My Solution I used the OMDb API to get my movie data. You have to get an API key (it is free). I must confess, I spent over 60 minutes to complete this because I had to get familiar with the API by playing around with different requests in Postman. As always, I used create-react-app to generate my project. To structure my app, I had to decide what would be containers and components. Here is my folder structure: MovieCard.js: This component is used to display the selected movie. It receives its movie data via props.
import React from 'react';
import './MovieCard.css';

const MovieCard = (props) => {
  return (
    <div className="container">
      <div className="movie-card">
        <div className="movie-header" style={{ backgroundImage: `url(${props.movie.Poster})` }}>
        </div>
        <div className="movie-content">
          <div className="movie-content-header">
            <h3 className="movie-title">{props.movie.Title}</h3>
          </div>
          <div className="movie-info">
            <div className="info-section">
              <label>Released</label>
              <span>{props.movie.Released}</span>
            </div>
            <div className="info-section">
              <label>IMDB Rating</label>
              <span>{props.movie.imdbRating}</span>
            </div>
            <div className="info-section">
              <label>Rated</label>
              <span>{props.movie.Rated}</span>
            </div>
            <div className="info-section">
              <label>Runtime</label>
              <span>{props.movie.Runtime}</span>
            </div>
          </div>
          <div className="plot" style={{fontSize: '12px'}}>
            <p>{props.movie.Plot}</p>
          </div>
        </div>
      </div>
    </div>
  );
};

export default MovieCard;

MovieCard.css:

.container { display: flex; flex-wrap: wrap; max-width: 100%; margin-left: auto; margin-right: auto; justify-content: center; }
.movie-card { background: #ffffff; box-shadow: 0px 6px 18px rgba(0,0,0,.1); width: 100%; max-width: 290px; margin: 2em; border-radius: 10px; display: inline-block; z-index: 10; }
.movie-header { padding: 0; margin: 0; height: 434px; width: 100%; display: block; border-top-left-radius: 10px; border-top-right-radius: 10px; background-size: cover; }
.movie-content { padding: 18px 18px 24px 18px; margin: 0; }
.movie-content-header, .movie-info { display: table; width: 100%; }
.movie-title { font-size: 24px; margin: 0; display: table-cell; cursor: pointer; }
.movie-title:hover { color: rgb(228, 194, 42); }
.movie-info { margin-top: 1em; }
.info-section { display: table-cell; text-transform: uppercase; text-align: center; }
.info-section:first-of-type { text-align: left; }
.info-section:last-of-type { text-align: right; }
.info-section label { display: block; color: rgba(0,0,0,.5); margin-bottom: .5em; font-size:
9px; }
@media only screen and (max-width: 400px) { .movie-header { height: 400px; } }

Search.js

Next, we have the Search component, which contains the search input and the returned list of results. Here is the Search.js:

import React from 'react';
import './Search.css';

const Search = (props) => {
  let resultList = null
  if (props.searching && (props.defaultTitle !== '')) {
    resultList = (
      <ul className="results">
        {props.results.map(item => (
          <li key={item.imdbID} onClick={() => props.clicked(item)}>
            <img src={item.Poster} alt={item.Title} />
            {item.Title}
          </li>
        ))}
      </ul>
    )
  }
  return (
    <div className="search">
      <input
        type="search"
        name="movie-search"
        value={props.defaultTitle}
        onChange={props.search} />
      {resultList}
    </div>
  );
};

export default Search;

Search.css

.search { position: relative; margin: 0 auto; width: 300px; margin-top: 10px; }
.search input { height: 26px; width: 100%; padding: 0 12px 0 25px; background: white; border: 1px solid #babdcc; border-radius: 13px; box-sizing: border-box; box-shadow: inset 0 1px #e5e7ed, 0 1px 0 #fcfcfc; }
.search input:focus { outline: none; border-color: #66b1ee; box-shadow: 0 0 2px rgba(85, 168, 236, 0.9); }
.search .results { display: block; position: absolute; top: 35px; left: 0; right: 0; z-index: 20; padding: 0; margin: 0; border-width: 1px; border-style: solid; border-color: #cbcfe2 #c8cee7 #c4c7d7; border-radius: 3px; background-color: #fdfdfd; }
.search .results li { display: flex; align-items: center; padding: 5px; border-bottom: 1px solid rgba(88, 85, 85, 0.3); text-align: left; height: 50px; cursor: pointer; }
.search .results li img { width: 30px; margin-right: 5px; }
.search .results li:hover { background: rgba(88, 85, 85, 0.1); }

MovieSearch.js

I made MovieSearch a stateful component because I want to manage all my states there and pass the data to other components via props. First, make sure you get your API key from the OMDb API.
Here is my MovieSearch.js container:

import React, { Component } from 'react';
import axios from 'axios';
import MovieCard from '../../components/MovieCard/MovieCard';
import Search from '../../components/Search/Search';

class MovieSearch extends Component {
  state = {
    movieId: 'tt1442449', // default imdb id (Spartacus)
    title: '',
    movie: {},
    searchResults: [],
    isSearching: false,
  }

  componentDidMount() {
    this.loadMovie()
  }

  componentDidUpdate(prevProps, prevState) {
    if (prevState.movieId !== this.state.movieId) {
      this.loadMovie()
    }
  }

  loadMovie() {
    // OMDb lookup by imdb id (replace YOUR_API_KEY with your own key)
    axios.get(`https://www.omdbapi.com/?apikey=YOUR_API_KEY&i=${this.state.movieId}`)
      .then(response => {
        this.setState({ movie: response.data });
      })
      .catch(error => {
        console.log('Opps!', error.message);
      })
  }

  // we use a timeout to prevent the api request from firing immediately as we type
  timeout = null;
  searchMovie = (event) => {
    this.setState({ title: event.target.value, isSearching: true })
    clearTimeout(this.timeout);
    this.timeout = setTimeout(() => {
      // OMDb title search (replace YOUR_API_KEY with your own key)
      axios.get(`https://www.omdbapi.com/?apikey=YOUR_API_KEY&s=${this.state.title}`)
        .then(response => {
          if (response.data.Search) {
            const movies = response.data.Search.slice(0, 5);
            this.setState({ searchResults: movies });
          }
        })
        .catch(error => {
          console.log('Opps!', error.message);
        })
    }, 1000)
  }

  // event handler for a search result item that is clicked
  itemClicked = (item) => {
    this.setState({
      movieId: item.imdbID,
      isSearching: false,
      title: item.Title,
    })
  }

  render() {
    return (
      <div onClick={() => this.setState({ isSearching: false })}>
        <Search
          defaultTitle={this.state.title}
          search={this.searchMovie}
          results={this.state.searchResults}
          clicked={this.itemClicked}
          searching={this.state.isSearching}
        />
        <MovieCard movie={this.state.movie} />
      </div>
    );
  }
}

export default MovieSearch;

This container is used to handle the state and update changes in our application. The code above simply loads an initial movie when it mounts. Whenever we search and update the movieId state, it updates the content of the MovieCard via props.
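The setTimeout/clearTimeout pairing in searchMovie is a hand-rolled debounce: repeated keystrokes within the wait window collapse into a single trailing API call. Stripped of React, the same idea looks something like this (the names here are mine, not from the article):

```javascript
// Wrap fn so that rapid calls within waitMs only fire once, at the end.
function debounce(fn, waitMs) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);                         // cancel the pending call
    timer = setTimeout(() => fn(...args), waitMs); // reschedule it
  };
}

// Example: three rapid calls, only the last one actually runs.
let calls = 0;
const search = debounce(() => { calls += 1; }, 50);
search();
search();
search();
setTimeout(() => console.log(calls), 100); // logs 1
```

The 1000 ms wait used in the article trades responsiveness for fewer requests; a shorter window would feel snappier at the cost of more API traffic.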
Conclusion You might think that this was a little bit rushed. Remember, this is not a tutorial but a challenge for beginners who feel they can think in React. My code was just a guide. Thanks for reading and I hope to see you in the next part. I don't think I would throw this one away ;) How do you style components with React Native? Styled components are great for React, but do they work with RN? I just looked at your solution and the first thing I noticed is that I would have split the info-section into a new component. I haven't looked deeper but just want to mention this. Nonetheless I like the idea of creating a simple app just to practice. Thanks for your feedback. I really appreciate it. You make a good point. The aim of this series is just to build basic stuff fast (within the time limit) and throw it away. This is to test if beginners think in React. The final version on my computer has some improvements. Thanks once again, and your comment shows that you "Think in React." Cool, I like these. I'm building my own component library, so it helps to see how others think and work. Thanks. I'm glad you liked it. Can I get the source code? Unfortunately I don't have the source hosted on GitHub since this is a throwaway app, but that's the entire source in the tutorial ;)
https://dev.to/fleepgeek/react-throwaway-app-2-movie-search-app-3f3d
CC-MAIN-2019-04
refinedweb
1,410
60.72
Type conversion simply means converting data from one data type to another. There are formally two types of type conversion – implicit type conversion and explicit type conversion. This article is written in context with C++, but these concepts can be applied to any programming language.

Implicit Type Conversion

Let's look at some basic points about this type of conversion:

1. It is also known as 'automatic type conversion'.
2. It is automatically done by the compiler.
3. It takes place when there are different data types used in a particular expression. Say, for example:

```cpp
int i = 54;
double d = 6.64;
double d1 = d + i;
```

In cases like this, implicit type conversion occurs.

4. It is more of a type promotion. Data types of smaller size (range wise) are automatically converted to those of bigger size to avoid loss of information.
5. All the data types of the variables are upgraded to the data type of the variable with the largest data type. The order of promotion is:

bool => char => short int => int => unsigned int => long => unsigned long => long long => unsigned long long => float => double => long double

- Sometimes, implicit conversion can create problems for us. It can result in loss of sign, when a signed int is promoted to an unsigned int. It can also result in loss of information, when a long long int is promoted to float.

Let's see an example to understand it better.

```cpp
#include <iostream>
using namespace std;

int main()
{
    int x = 10;   // integer x
    char y = 'a'; // character y

    // y implicitly converted to int. ASCII
    // value of 'a' is 97
    x = x + y;

    // x is implicitly converted to float
    float z = x + 1.0;

    cout << "x = " << x << endl;
    cout << "z = " << z << endl;
    return 0;
}
```

Output of the above program is x = 107, z = 108.

Explicit Type Conversion

Now let's look at some basic points about this type of conversion:

1. It is also known as 'type casting'.
2. It is not automatically done by the compiler; rather, it is user defined, i.e. explicitly done by the user.
3. Any type can be explicitly converted to any other data type, but we should be careful while doing this, otherwise problems like data loss and overflow can occur.

The syntax in C++:

```cpp
(type) expression
```

type indicates the data type to which the final result is converted.

As we know, the best way of understanding is through examples!

```cpp
#include <iostream>
using namespace std;

int main()
{
    double x = 1.2;

    // Explicit conversion from double to int
    int sum = (int)x + 1;

    cout << "sum = " << sum;
    return 0;
}
```

Output of the above program is sum = 2.

In this article, we saw type conversions of both types, implicit and explicit. These concepts come in very handy when we work on data with mixed data types.
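As the article says, these concepts apply beyond C++. Purely as an illustrative aside (not part of the original article), here is the same implicit/explicit distinction in Python, where the int() constructor plays the role of the (int) cast:

```python
# Implicit conversion: the int operand is promoted to float
# in a mixed expression, just like `float z = x + 1.0` above.
x = 10
z = x + 1.0
print(type(z).__name__, z)  # float 11.0

# Explicit conversion ("casting") via constructor calls.
# int() truncates toward zero, like (int)x in the C++ example.
d = 1.2
total = int(d) + 1
print("sum =", total)  # sum = 2
```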
https://boostlog.io/@sophia91/type-conversion-of-data-types-in-c-5a9e59d9e922f1008c7efa9b
```cpp
#include "rrel_misc.h"
#include <vcl_cmath.h>
#include <vnl/vnl_math.h>
```

Chebychev approximation to erfc. (Taken from "Numerical Recipes in C".)

Definition at line 9 of file rrel_misc.cxx.

Inverse of the Gaussian CDF. Provided by Robert W. Cox from the Biophysics Research Institute at the Medical College of Wisconsin. This function is based on a rational polynomial approximation to the inverse Gaussian CDF, which can be found in M. Abramowitz and I.A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, John Wiley & Sons, New York, equation 26.2.23, pg. 933, 1972.

Definition at line 50 of file rrel_misc.cxx.
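Equation 26.2.23 of Abramowitz & Stegun is a rational-polynomial approximation in t = sqrt(-2 ln p). The sketch below is my own transcription of that textbook formula for illustration — it is not the actual rrel_misc.cxx implementation. The constants are the published ones, and the absolute error is stated to be below 4.5e-4:

```python
import math

def inv_gaussian_cdf_upper(p):
    """Approximate x such that Q(x) = p for the standard normal
    upper tail, 0 < p <= 0.5 (Abramowitz & Stegun eq. 26.2.23)."""
    t = math.sqrt(-2.0 * math.log(p))
    num = 2.515517 + 0.802853 * t + 0.010328 * t * t
    den = 1.0 + 1.432788 * t + 0.189269 * t * t + 0.001308 * t ** 3
    return t - num / den

# The 97.5th percentile of the standard normal is about 1.95996
print(round(inv_gaussian_cdf_upper(0.025), 2))
```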
http://public.kitware.com/vxl/doc/release/contrib/rpl/rrel/html/rrel__misc_8cxx.html
I implemented a recommender named AnonymousRecommender for anonymous users, and in AnonymousRecommender I wrote a method like this to make recommendations:

```java
public synchronized List<RecommendedItem> recommend(PreferenceArray anonymousUserPrefs, int howMany)
    throws TasteException {
  plusAnonymousModel.setTempPrefs(anonymousUserPrefs);
  List<RecommendedItem> recommendations =
      recommend(PlusAnonymousUserDataModel.TEMP_USER_ID, howMany, null);
  plusAnonymousModel.setTempPrefs(null);
  return recommendations;
}
```

And in a servlet I will use this recommender to process requests, but I can't import the AnonymousRecommender class to invoke the recommend method I wrote. When I run mvn package, I got:

/Users/samsam/Lab/mahout-0.3/taste-web/src/main/java/org/apache/mahout/cf/taste/web/AnonymousRecommenderServlet.java:[36,38] package net.gamestreamer.recommendation does not exist

Who knows how to import the AnonymousRecommender class?

Best Regards.

On Thu, Jul 8, 2010 at 12:48 AM, samsam <yanguango@gmail.com> wrote:
> thanks very much!
>
> On Thu, Jul 8, 2010 at 12:46 AM, Sean Owen <srowen@gmail.com> wrote:
>> That's a bit of example code for the book. It is in the source code
>> made available with the MEAP book. It should be downloadable -- if
>> it's not apparent where it's available I'll ask Manning where it is.
>>
>> I can send it to you -- see attached. You should get it though the
>> mailing list won't I believe. But you should find all the source since
>> there are more classes than just this.
>>
>> Sean
>>
>> On Wed, Jul 7, 2010 at 5:42 PM, samsam <yanguango@gmail.com> wrote:
>>> I seen LibimsetiRecomender in book <mahout in action>, but i can't find
>>> it in mahout docs. What is it?
>>>
>>> On Tue, Jul 6, 2010 at 12:07 AM, samsam <yanguango@gmail.com> wrote:
>>>> I become more clear about that, thanks for your help very much.
>>>>> On Mon, Jul 5, 2010 at 11:52 PM, Sean Owen <srowen@gmail.com> wrote:
>>>>> Pre-compute the similarity based on what information? You mention that
>>>>> you don't want to use Pearson and mention item attributes.
>>>>>
>>>>> If you are trying to use domain-specific attributes of items, then
>>>>> it's up to you to write that logic. If you want to say books have a
>>>>> "0.5" similarity when they are within the same genre, and "0.9" when
>>>>> by the same author, you can just write that logic. That's not part of
>>>>> the framework.
>>>>>
>>>>> The hook into the framework comes when you implement ItemSimilarity
>>>>> with logic like that. Then just use that ItemSimilarity instead of one
>>>>> of the given implementations. That's all.
>>>>>
>>>>> On Mon, Jul 5, 2010 at 4:32 PM, samsam <yanguango@gmail.com> wrote:
>>>>>> About the second question, I have not the similarity, I want to know
>>>>>> how to pre-compute the item similarity.
>>>>>>
>>>>>> On Mon, Jul 5, 2010 at 11:20 PM, Sean Owen <srowen@gmail.com> wrote:
>>>>>>> 1) Good question. One answer is to make these "anonymous" users real
>>>>>>> users in your data model, at least temporarily. That is, they need
>>>>>>> not be anonymous to the recommender, even if they're not yet a
>>>>>>> registered user as far as your site is concerned.
>>>>>>>
>>>>>>> There's a class called PlusAnonymousUserDataModel that helps you do
>>>>>>> this. It wraps a DataModel and lets you quickly add a temporary
>>>>>>> user, recommend, then un-add that user. It may be the easiest thing
>>>>>>> to try.
>>>>>>>
>>>>>>> (BTW the book Mahout in Action covers this in section 5.4, in the
>>>>>>> current MEAP draft.)
>>>>>>>
>>>>>>> 2) Not sure I fully understand. You already have some external,
>>>>>>> pre-computed notion of item similarity? then just feed that in to
>>>>>>> GenericItemSimilarity and use it from there.
>>>>>>> Sean
>>>>>>>
>>>>>>> On Mon, Jul 5, 2010 at 1:52 PM, samsam <yanguango@gmail.com> wrote:
>>>>>>>> Hello, all
>>>>>>>> I want to build recommendation engine with apache mahout, I have
>>>>>>>> read some reading material, and I still have some questions.
>>>>>>>>
>>>>>>>> 1) How to recommend for anonymous users
>>>>>>>> I think recommendation engine should return recommendations given
>>>>>>>> a item id. For example, a anonymous user reviews some items, and
>>>>>>>> tell the recommendation what he reviews, and compute with the
>>>>>>>> reviews histories.
>>>>>>>>
>>>>>>>> 2) How to compute the items similarity dataset
>>>>>>>> Without use items similarity dataset, we can make
>>>>>>>> ItemBasedRecommender with PearsonCorrelationSimilarity, but we need
>>>>>>>> to make recommendations with extra attributes of items, so we
>>>>>>>> should use the items similarity dataset, how to build the dataset
>>>>>>>> is the key point.
>>>>>>>> --
>>>>>>>> I'm samsam.

--
I'm samsam.
http://mail-archives.us.apache.org/mod_mbox/mahout-user/201007.mbox/%3CAANLkTikTOOGTuhjDaSHZxT8_3smnwBntUm6GctfxH-H8@mail.gmail.com%3E
This is the mail archive of the gdb-patches@sources.redhat.com mailing list for the GDB project.

Hi,

some people might recall my patch to gdb.base/recurse.exp, sent to this list on 2001-09-14, which added the following comment to the exp file (which actually was a description I got from Michael Snyder):

# The former version expected the test to return to main().
# Now it expects the test to return to main or to stop in the
# function's epilogue.
#
# The problem is that gdb needs to (but doesn't) understand
# function epilogues in the same way as for prologues.
#
# If there is no hardware watchpoint (such as a x86 debug register),
# then watchpoints are done "the hard way" by single-stepping the
# target until the value of the watched variable changes.  If you
# are single-stepping, you will eventually step into an epilogue.
# When you do that, the "top" stack frame may become partially
# deconstructed (as when you pop the frame pointer, for instance),
# and from that point on, GDB can no longer make sense of the stack.
#
# A test which stops in the epilogue is trying to determine when GDB
# leaves the stack frame in which the watchpoint was created.  It does
# this basically by watching for the frame pointer to change.  When
# the frame pointer changes, the test expects to be back in main, but
# instead it is still in the epilogue of the callee.

The below patch basically adds the predicate `IN_EPILOGUE(CORE_ADDR addr)' to gdb. It's defined to return a non-zero value if the given address `addr' is in the epilogue of the function. The epilogue of the function is defined as the part of a function between the eventual destruction of the stack frame and the trailing `return to caller' instruction.

Ok, now we have a definition of a predicate which offers (in which way ever) the information if we're currently in an epilogue or not. How does that help in the aforementioned case of recurse.exp? This is part two of the patch, the actual usage of IN_EPILOGUE().
Currently there's only one point in the code at which I have added a call to IN_EPILOGUE(): breakpoint.c (watchpoint_check), line 2308.

The comment says it all. The problem in watchpoint_check() at that point is that _if_ we're actually in the epilogue of a function, we can't rely on any value of local variables. They could have changed or not, who knows? However, that's not what we are interested in. When the value of a local variable is different by coincidence, we don't mind.

The above added code does IMO what should be done when we're currently in an epilogue. It immediately leaves the function watchpoint_check() without checking the watchpoints. The epilogue is treated as `twilight zone'. This results in that we first leave the current function before checking for watchpoints again. Targets suffering from that problem now leave the function first before the watchpoint will be deleted. One result: they pass the recurse.exp test.

If that's not already clear: IN_EPILOGUE() returns 0 by default, so if your target doesn't have a problem with the above behaviour or your target doesn't provide a reliable way to determine the epilogue, you just don't touch IN_EPILOGUE(). The whole code just behaves as before then.

The complete patch follows. I have again only sent gdbarch.sh and not the autogenerated gdbarch.[ch] to save some space.

Hope, that helps,
Corinna

2001-11-01  Corinna Vinschen  <vinschen@redhat.com>

	* arch-utils.c (generic_in_epilogue): New function.
	* arch-utils.h (generic_in_epilogue): Declare extern.
	* breakpoint.c (watchpoint_check): Add test if the pc is
	currently in the epilogue of a function.
	* gdbarch.c: Autogenerated from gdbarch.sh.
	* gdbarch.h: Ditto.
	* gdbarch.sh (function_list): Add `IN_EPILOGUE' definition.
```diff
Index: arch-utils.c
===================================================================
RCS file: /cvs/src/src/gdb/arch-utils.c,v
retrieving revision 1.37
diff -u -p -r1.37 arch-utils.c
--- arch-utils.c	2001/10/31 23:21:33	1.37
+++ arch-utils.c	2001/11/01 15:46:06
@@ -111,6 +111,12 @@ generic_in_solib_call_trampoline (CORE_A
   return 0;
 }
 
+int
+generic_in_epilogue (CORE_ADDR pc)
+{
+  return 0;
+}
+
 char *
 legacy_register_name (int i)
 {
Index: arch-utils.h
===================================================================
RCS file: /cvs/src/src/gdb/arch-utils.h,v
retrieving revision 1.22
diff -u -p -r1.22 arch-utils.h
--- arch-utils.h	2001/10/31 23:21:33	1.22
+++ arch-utils.h	2001/11/01 15:46:06
@@ -134,4 +134,6 @@
 extern int generic_in_solib_call_trampoline (CORE_ADDR pc, char *name);
 
+extern int generic_in_epilogue (CORE_ADDR pc);
+
 #endif
Index: breakpoint.c
===================================================================
RCS file: /cvs/src/src/gdb/breakpoint.c,v
retrieving revision 1.55
diff -u -p -r1.55 breakpoint.c
--- breakpoint.c	2001/10/20 23:54:29	1.55
+++ breakpoint.c	2001/11/01 15:46:10
@@ -2308,6 +2308,14 @@ watchpoint_check (PTR
Index: gdbarch.sh
===================================================================
RCS file: /cvs/src/src/gdb/gdbarch.sh,v
retrieving revision 1.84
diff -u -p -r1.84 gdbarch.sh
--- gdbarch.sh	2001/10/31 23:21:33	1.84
+++ gdbarch.sh	2001/11/01 15:46:11
@@ -546,6 +546,15 @@ f:2:SKIP_TRAMPOLINE_CODE:CORE_ADDR:skip_
 # trampoline code in the ".plt" section.  IN_SOLIB_CALL_TRAMPOLINE evaluates
 # to nonzero if we are current stopped in one of these.
 f:2:IN_SOLIB_CALL_TRAMPOLINE:int:in_solib_call_trampoline:CORE_ADDR pc, char *name:pc, name:::generic_in_solib_call_trampoline::0
+# A target might have problems with watchpoints as soon as the stack frame
+# of the current function has been destroyed.  This mostly happens as the
+# first action in a funtion's epilogue.  IN_EPILOGUE() is defined to return
+# a non-zero value if either the given addr is one instruction after the stack
+# destroying instruction up to the trailing return instruction or if we can
+# figure out that the stack frame has already been invalidated regardless
+# of the value of addr.  Targets which don't suffer from that problem could
+# just let this functionality untouched.
+f:2:IN_EPILOGUE:int:in_epilogue:CORE_ADDR addr:addr::0:generic_in_epilogue::0
 EOF
}
```
https://sourceware.org/legacy-ml/gdb-patches/2001-11/msg00003.html
JSX

JSX is a syntax extension of JavaScript that combines the JavaScript and HTML-like syntax to provide highly functional, reusable markup. It's used to create DOM elements which are then rendered in the React DOM. While not required in React, JSX provides a neat visual representation of the application's UI. A JavaScript file containing JSX will have to be compiled before it reaches a web browser.

Syntax

JSX looks a lot like HTML:

```jsx
const headerElement = <h1>This is a header</h1>;
```

In the block of code, we see the similarities between JSX syntax and HTML: they both use the angle bracket opening <h1> and closing </h1> tags. Under the hood, after it's been processed to regular JavaScript, it looks like this:

```jsx
const headerElement = React.createElement('h1', null, 'This is a header');
```

JavaScript code, such as variables and functions, can be used in JSX, as well:

```jsx
import React from 'react';

const App = () => {
  return (
    <React.Fragment>
      <button onClick={() => 'The button was clicked!'}>
        Click!
      </button>
    </React.Fragment>
  );
};
```

JSX Attributes

The syntax of JSX attributes closely resembles that of HTML attributes.

```jsx
const example = <h1 id="example">JSX Attributes</h1>;
```

In the block of code, inside of the opening tag of the <h1> JSX element, we see an id attribute with the value "example".

Nested JSX Elements

In order for the code to compile, a JSX expression must have exactly one outermost element. In the below block of code, the <a> tag is the outermost element.

```jsx
const myClasses = (
  <a href="">
    <h1>Sign Up!</h1>
  </a>
);
```

Multiline JSX Expression

A JSX expression that spans multiple lines must be wrapped in parentheses ( and ).

```jsx
const myList = (
  <ul>
    <li>Item 1</li>
    <li>Item 2</li>
    <li>Item 3</li>
  </ul>
);
```

Here, we see the opening parentheses on the same line as the constant declaration, before the JSX expression begins. We see the closing parentheses on the line following the end of the JSX expression.

JSX with Conditionals

JSX does not support if/else syntax in embedded JavaScript.
There are three ways to express conditionals for use with JSX elements:

Using Ternary Operator

Using ternary operator within curly braces in JSX:

```jsx
const headline = <h1>{age >= drinkingAge ? 'Buy Drink' : 'Do Teen Stuff'}</h1>;
```

Using if Statement

Using if/else statement outside of JSX element:

```jsx
let text;

if (age >= drinkingAge) {
  text = 'Buy Drink';
} else {
  text = 'Do Teen Stuff';
}

const headline = <h1>{text}</h1>;
```

Using && Operator

Using the && AND operator:

```jsx
// Renders as empty div if length is 0
const unreadMessages = ['hello?', 'remember me!'];

const update = (
  <div>
    {unreadMessages.length > 0 && (
      <h1>You have {unreadMessages.length} unread messages.</h1>
    )}
  </div>
);
```
https://www.codecademy.com/resources/docs/react/jsx?utm_source=ccblog&utm_medium=ccblog&utm_campaign=ccblog&utm_content=cw_react_interview_questions
Introduction to Higher-Order Functions

Understanding higher-order functions like map, filter, and reduce with examples

Higher-Order Functions

A higher-order function is a function that accepts another function as a parameter or returns a function as its return type. The easiest way to understand this is by realizing that functions can be treated just like any other piece of data. In languages that support higher-order functions, much like you would with a String or Integer, you can pass functions as parameters, store them, and return them.

Anonymous Functions

Before exploring higher-order functions, it is first important to cover anonymous functions, which are often used alongside higher-order functions. An anonymous function, or lambda, is simply a function that was declared without any named identifier to refer to it. They can still be stored in variables and therefore can still be named within the code, but the function itself is declared without a name attached.

For example, in Java, a simple anonymous function looks like this:

```java
x -> x + 1
```

The input of this function is x, and the output is x + 1. Syntax for anonymous functions varies slightly across languages, but it is typically written in the form of (inputs) -> (output). Anonymous functions are often used because they avoid the boilerplate code associated with formally declaring them as named functions. So, for simple functions that may not be used in more than one place, such as the above, it may be more appropriate to use an anonymous function.

Example

Now, we will go into a very basic example of a higher-order function that uses anonymous functions. This example uses Python, where the syntax for anonymous functions is lambda (inputs): (output).

```python
def add_n(n):
    return lambda x: x + n
```

This example takes a number n and returns a function that adds n to the input. So, if we wanted to add 1 + 2, we could do that by calling add_n(2)(1).
The first call, add_n(2), returns a function that adds 2 to the input, and the second call (1) uses that function to add 2 to 1.

Common Higher-Order Functions

There are a few specific higher-order functions that are essential to understanding modern codebases. These primarily deal with iterating or summing up lists of data. They provide a much cleaner way to deal with common list operations, saving you from having to create a lot of helper code to do basic list operations. This creates code that is more explicit in its intention.

For these examples, I will use Python-ish code throughout. Most other modern languages, such as Java and JavaScript, also have these features.

Map

The map operation allows you to apply a function to each element of a list, and then return a new list with those new values. Without map, your code to do this would look like this:

```python
new_list = []
old_list = [...]

for e in old_list:
    new_list.append(some_function(e))

return new_list
```

Here, our goal was to apply some_function to every element of old_list. At the end of this loop, new_list now contains this result.

With map, we can do this in a much cleaner way. The map function takes in a function and a list, and returns a new list with that function applied to every element of the list. It is used as follows:

```python
old_list = [...]
return map(some_function, old_list)
```

So, if old_list = [1, 2, 3] and some_function = lambda x: x + 1, the returned value would be [2, 3, 4].

Filter

The filter operation allows you to return the elements of a list that meet a condition. Without filter, your code to accomplish this would look like this:

```python
new_list = []
old_list = [...]

for e in old_list:
    if some_function(e):
        new_list.append(e)

return new_list
```

Here, we want to return the subset of elements in old_list for which some_function returns True. With filter, we can, again, do this in a much cleaner way.
Filter takes in a function and a list, and returns a new list with only the elements in that list for which the function returns True. We use filter as follows:

```python
old_list = [...]
return filter(some_function, old_list)
```

So, if old_list = [1, 2, 3] and some_function = lambda x: x == 2, then the return value of this would be [2].

Reduce

The reduce operation allows you to calculate a single value based on a function used to combine all of the elements of the list. For example, this could be adding all of the elements of a list to find the list's sum. Without reduce, your code would look like this:

```python
accumulator = 0
old_list = [...]

for e in old_list:
    accumulator += e

return accumulator
```

The accumulator variable is initialized with a value (0, in this case), and holds the accumulated result of applying a function to the accumulator and each list element. In this case, the function is lambda accumulator, e: accumulator + e.

So, for a given function f, the reduce operation is equivalent to calling f(...(f(f(initial, list[0]), list[1]), ...), list[n]).

For example, say old_list = [1, 2, 3, 4] and we use the function lambda accumulator, e: accumulator + e with an initial value of 0. Then, the value of our reduce operation would be ((((0 + 1) + 2) + 3) + 4), which is exactly the same result as the snippet above.

We can use reduce as follows:

```python
old_list = [...]
initial = 0
return reduce(lambda acc, e: acc + e, old_list, initial)
```

(Note that in Python 3, reduce lives in the functools module, and the initial value is an optional third argument: when you omit it, reduce uses the first value of the list as the initial value.)
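The three operations above run as-is in Python 3 with two small caveats the article glosses over: map and filter return lazy iterators (wrap them in list() to materialize the results), and reduce must be imported from functools:

```python
from functools import reduce

old_list = [1, 2, 3, 4]

# map: apply a function to every element
mapped = list(map(lambda x: x + 1, old_list))

# filter: keep only elements for which the function returns True
filtered = list(filter(lambda x: x % 2 == 0, old_list))

# reduce: fold the list into one value, starting from the initializer 0,
# i.e. ((((0 + 1) + 2) + 3) + 4)
total = reduce(lambda acc, e: acc + e, old_list, 0)

print(mapped, filtered, total)  # [2, 3, 4, 5] [2, 4] 10
```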
https://medium.com/better-programming/introduction-to-higher-order-functions-3ff0a05b40eb?source=rss-5f4d2b8b896d------2
- Issued: 2019-06-06
- Updated: 2019-06-06

RHBA-2019:0794 - Bug Fix Advisory

Synopsis

OpenShift Container Platform 3.11 bug fix update

Type/Severity

Bug Fix Advisory

Topic

Red Hat OpenShift Container Platform release 3.11.104. See the following advisory for the container images for this release:

This update fixes the following bugs:

- Director-deployed pods would stop in the `CrashLoopBackOff` state after a rolling reboot of a node. This was because the `READY` sequence would display a node before it had started. Now, the `READY` indicator allows components to come online before displaying as a ready state. (BZ#1654044)
- Ansible playbook `health.yml` assumed `curator` was a `deploymentconfig` instead of a `cronjob`. Now, the `health.yml` playbook checks for `curator` as a `cronjob`. This change properly evaluates the `curator` status. (BZ#1676720)
- The `NetworkPolicy` plugin did not clean up rules from deleted namespaces. Now, all `OpenVSwitch` flows associated with a namespace are deleted properly when a namespace is deleted. (BZ#1686025)
- During `Satellite` registry-based installations of OpenShift Container Platform 3.11, the example template URLs were installed at the wrong file path. This would cause example resources to be configured incorrectly. Now, a condition has been added to replace the image URL with the `Satellite` location. As a result, the example resources are configured with valid image URLs. (BZ#1689848)
- Previously, cluster logging did not store secret names in service accounts. When secrets were required to be whitelisted, the logging service accounts were unable to access their required secrets. Now, secret names are added appropriately to their service accounts. (BZ#1690605)
- `oc cp` commands were not checking links from tar files used to copy files between pods and user's workstations. The `oc cp` command could cause a directory traversal and replace or delete files on a user's workstation. Now, escaping links are not permitted. As a result, the `oc cp` command verifies files copied between pods and workstations without allowing escape from directories. (BZ#1693315)
- During previous upgrades, the `tuned` package and profiles could have been removed. The `tuned` role was not being applied during an upgrade, but only during a fresh install. Now, the `tuned` role is applied during upgrades to ensure `tuned` profiles are applied appropriately. (BZ#1694131)
- `NetworkPolicy` rules were not updated reliably after service restarts. A bug in the re-initialization process of the plugin state after a restart of the SDN service would ignore changes in a namespace. Now, updates to `NetworkPolicies` are correctly tracked at all times. (BZ#1694704)
- Monitoring certificates were not updated after certificate redeployment. As a result, `prometheus`, `grafana`, and `alertmanager` user interfaces were inoperable. Now the TLS secrets and pods are removed during certificate redeployment and the user interfaces work correctly after certificate redeployment. (BZ#1696198)

All OpenShift Container Platform 3.11 users are advised to upgrade to these updated packages and images.

Solution

Before applying this update, ensure all previously released errata relevant to your system are applied.
See the following documentation, which will be updated shortly for release 3.11.104,

- BZ - 1427274 - Kibana header "container-brand" image should be properly branded for the deployment
- BZ - 1633892 - OCP 3.11: CRI-O cluster install with openshift-ansible fails on large AWS instances m5.12xlarge and m4.16xlarge due to Master node openshift-sdn pod in CrashLoopBackOff state
- BZ - 1634151 - [3.11.z] CRI-O cluster install with openshift-ansible fails on large AWS instances m5.12xlarge and m4.16xlarge due to Master node openshift-sdn pod in CrashLoopBackOff state
- BZ - 1651393 - rotate-server-certificates gets set to false when set separately
- BZ - 1654044 - OCP 3.11: pods end up in CrashLoopBackOff state after a rolling reboot of the node
- BZ - 1670418 - inventory variable which points out to IP address of NFS server is not being expanded, so that the installer creates the PVC with wrongs data.
- BZ - 1680063 - python2-certifi points to the wrong location
- BZ - 1686025 - [3.11] Network Policy Plugin does not clean up flows from deleted namespaces
- BZ - 1689000 - Custom managed nic causing install to fail
- BZ - 1689848 - example template image url does not work on Satellite
- BZ - 1690605 - Aggregated Logging installation does not add secret to serviceaccount [3.11.z]
- BZ - 1690900 - etcd rpm is installed when etcd is co-located on a master
- BZ - 1690951 - [3.11.z] KubeletTooManyPods statically compares against 100 instead of --max-pods (-10)
- BZ - 1691893 - KubeCPUOvercommit factors in Completed and Failed pods
- BZ - 1693035 - Fluentd doesn't output it's logs to STDOUT when LOGGING_FILE_PATH=console
- BZ - 1694106 - [crio-tool] Installing or upgrading OCP 3.11 when using crio, keeps changing the crictl.yaml file with the incorrect pathvim
- BZ - 1694131 - Tuned profiles are not applied to OpenShift clusters that get upgraded from 3.6
- BZ - 1694704 - [3.11] NetworkPolicy rules don't update reliably after a service restart
- BZ - 1694899 - The kibana was authorized as CN=system.logging.kibana by mistake
- BZ - 1695271 - [3.11] Wildcard routes get 503 intermittently
- BZ - 1695856 - Task failure restart docker while running redeploy-certificates
- BZ - 1696198 - after running redeploy-certificates.yml playbook grafana, prometheus and alertmanager routes are not accessible anymore
- BZ - 1697169 - Addition of custom ca bundle in Grafana throws error
- BZ - 1697295 - Prometheus shows different monitoring history with Grafana dashboard refresh

CVEs

(none)

References

(none)

Red Hat OpenShift Container Platform 3.11
Red Hat OpenShift Container Platform for Power 3.11

The Red Hat security contact is secalert@redhat.com. More contact details at.
https://access.redhat.com/errata/RHBA-2019:0794
Publisher: Manning
Pages: 432
ISBN: 978-1935182597
Aimed at: Java and Ruby programmers, in particular those already working with Clojure
Rating: 3.5
Pros: Comprehensive and practical
Cons: Doesn't do a good job of explaining Clojure
Reviewed by: Mike James

Clojure seems to be getting a lot of attention at the moment. Is an "in action" approach a good way of finding out about it?

Clojure is a dialect of Lisp for the JVM - hence it is attracting attention from Java programmers who really only know the object-oriented way of doing things. The book's back jacket says "This book assumes you're familiar with an OO language like Java, C# or C++ but requires no background in Lisp or Clojure itself."

I have a "background" in Lisp in that I used it for a number of AI projects, and as a result I have admired the language for a long time for its economy and power. What puzzles me about Clojure is that it claims to be helpful in situations where I would not think of using Lisp, simply because the freedom of expression it provides is not needed and may even be counterproductive.

Part 1 of the book is Getting Started and Chapter 1 is an introduction to Clojure. Essentially this is just a history and an overview of where Clojure fits in as a JVM language. From here we have a "whirlwind tour". This is a basic guide to installing the language and using it. The big problem is that it presents the syntax and semantics of the language in a way that is superficially easy, but if you don't already have some idea what makes a language like Lisp special then you are probably going to be left behind. It tries to get across a flavour of the language but without really explaining the grand plan. In general a topic is introduced with a small program which is then explained as a specific example of a general principle that you are supposed to extract from the description. It would be much more efficient to present the general principle and then show a few examples.
By the start of Chapter 3 you have seen, if not absorbed or fully understood, a great deal of Clojure. Next you start to look at the details - functions, scope rules, namespaces and so on.

Chapter 4 introduces multimethods as a key feature in building a Clojure program, even though my feeling is that most readers will not have much idea as yet what such a program would be like.

Chapter 5 deals with Java and Clojure interop. While this is an important real world topic, I felt I hadn't really got to grips with Clojure before starting to worry about interop.

Chapter 6 deals with state management, which is a particular puzzle for anyone new to functional languages with their immutable data objects. This chapter does a very good job of explaining why immutability is good if you are using any sort of multi-threading. However, you still need to learn quite a lot to make it all work, and it is arguable that many similar methods are used in object-oriented languages to control access.

The final chapter of the section is on using macros. This is arguably the most dangerous feature of Clojure. If you fail to see how to keep such a feature under control and organized then the result is a program that is impossible to understand. Of course, if you take to it then you can turn the whole approach into a methodology and do little else but create Domain Specific Languages (DSLs).

At the end of Part 1 we are now expected to move onto Part 2, which deals with applications of Clojure. Personally I felt I hadn't had a good enough introduction to the language to enable me to do so - and I knew Lisp before starting the book. My guess is that you need to have met and worked with Clojure and at least started to understand its particular strengths in not distinguishing too clearly between data and program. If you do know Clojure then the second part of the book will be useful to you. It starts off with the ever-trendy topic of test driven development.
Chapter 9 is on using Clojure with MySQL, HBase and Redis. Then we have a chapter on web programming - HTTP, Ring, Compojure, and raw HTML. Chapter 11 is on messaging systems - JMS, STOMP, RabbitMQ and so on, with an overview of how to use message passing as a way of implementing distributed parallel processing. Chapter 12 is on data processing, though perhaps not what you might expect: it covers map/reduce and master/slave parallelism. The final three chapters form a sort of advanced look at Clojure, with functional programming, macros and DSLs taking center stage.

This is a well-written book, but it does assume a level of understanding that a reader coming to it to actually learn Clojure from scratch probably wouldn't have. It also isn't an easy read; the style of presentation means you have to read some sections more than once to try to see the general in the specific example. If you have an idea what makes a Lisp-like language special then you might get enough from Part 1 to be able to find Part 2 of use. If you haven't a clue what is going on, and find the way that data and program are one and the same, recursion, and immutability all a mystery, then I would suggest a gentler introduction to Clojure. It certainly isn't going to win any converts from Java programmers who pick it up just to see if Clojure has anything to offer. This book is going to be most effective if you have already left the starting blocks and want to see how Clojure fits with other systems and real-world tasks. If you fit this description then you will find this an interesting read.
http://i-programmer.info/bookreviews/14-other-languages/4491-clojure-in-action.html
Just starting some basic C for a module, and while I had a brief affair with C++ years ago, most of my coding over the past 6 months has been Java in the Eclipse environment. For this, I'm using gcc and gedit.

I am trying to copy content from a file, and reverse the characters. Eg. if the file contains abcdefghijklmnopqrstuvwxyz; ABCDEFGHIJKLMNOPQRSTUVWXYZ, it should display zyx etc....; ZYX etc..

My code is as follows

copyFile.c
Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "utils.h"

int main(int argc, char* argv[])
{
    char c;
    FILE *from, *to;
    int flipping = (strcmp(argv[1], "-f") == 0);

    from = fopen(argv[2], "r");
    to = fopen(argv[3], "w");

    if (from == NULL)
    {
        perror(argv[2]);
        exit(1);
    }
    if (to == NULL)
    {
        to = fopen(argv[3], "w+");
    }

    /* file exists, so start reading */
    while ((c = getc(from)) != EOF)
    {
        if (flipping)
        {
            putc(flipChar(c), to);
        }
        else
        {
            putc((c), to);
        }
    }

    fclose(from);
    fclose(to);
    exit(0);
}

My utils.c file is
Code:
#include "utils.h"

char flipChar(char c)
{
    if ('a' <= c && c <= 'z')
    {
        return c-'a' + 'A';
    }
    else if ('A' <= c && c <= 'Z')
    {
        return c-'a' + 'A';
    }
    else if (0 <= c && c <= 9)
    {
        return (c + 1);
    }
    return c;
}

My utils.h file is
Code:
#ifndef UTILS_H
#define UTILS_H

char flipChar();

#endif

I am using the following commands
Code:
gcc -c -o copyFile.o copyFile.c
gcc -c -o utils.o utils.c
gcc copyFile.o utils.o -o copyFile

However, at this point, I get an error

utils.o: file not recognized: File format not recognized
collect2: error: ld returned 1 exit status

It's probably simple to ye but I'm out of my language and IDE environment!!! :)
http://forums.devshed.com/programming-42/copying-file-flipping-932376.html
#include <diskstream.h>

This class handles the loading of files into memory. Instead of using read() from the standard library, this uses mmap() to map the file into memory in chunks of the memory pagesize, which is much faster and less resource intensive.

get the pagesize and cache the value
References _SC_PAGESIZE, CLOCK_REALTIME, and gnash::MAX_PAGES.

Close the open disk file and its associated stream.

Close the open disk file, but stay resident in memory.
References free(), and MAP_FAILED.
Referenced by loadToMem(), main(), play(), writeToDisk(), and ~DiskStream().

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Dump the internal data of this class in a human-readable form.
References CLOCK_REALTIME.
Referenced by gnash::operator<<().

Get the base address for the memory page.
Referenced by main(), and operator=().

Get the size of the file.

Get the time of the last access.
References loadToMem().
Referenced by loadToMem(), play(), and seek().

Load a chunk of the file into memory. This offset must be a multiple of the pagesize. We only map memory in pages of pagesize, so if the offset is smaller than that, start at page 0. If the data pointer is legit, then we need to unmap that page to mmap() a new one. If we're still in the current mapped page, then just return the existing data pointer.
References _, __FUNCTION__, CLOCK_REALTIME, close(), errno, FILETYPE_FLV, free(), gnash::key::i, malloc(), MAP_FAILED, and cygnal::Flv::TAG_METADATA.

Load a chunk (pagesize) of the file into memory. This loads a pagesize of the disk file into memory. We read the file this way as it is faster and takes fewer resources than read(), which adds buffering we don't need. This offset must be a multiple of the pagesize.
References loadToMem().

Open a file to be streamed.
Referenced by main(), and writeToDisk().

copy another DiskStream into ourselves, so they share data in memory.
References get(), getFileFd(), getFilespec(), getFileType(), getNetFd(), and getState().

Pause the stream currently being played.
References __PRETTY_FUNCTION__.

Stream the file that has been loaded.
References __FUNCTION__, gnash::NetStats::addBytes(), close(), CLOSED, CREATED, DONE, errno, loadToMem(), MULTICAST, NO_STATE, OPEN, PAUSE, PLAY, PREVIEW, SEEK, THUMBNAIL, UPLOAD, and gnash::Network::writeNet().

Stream a preview of the file. A preview is a series of video frames from the video file. Each video frame is taken by sampling the file at a set interval.

Seek within the stream.
References loadToMem().

Set the memory page size. This is a cached value of the system configuration value for the default size in bytes of a memory page.

Stream a series of thumbnails. A thumbnail is a series of jpg images of frames from the video file instead of video frames. Each thumbnail is taken by sampling the file at a set interval.

Write the data in memory to disk.
Referenced by writeToDisk().
References writeToDisk().
References cygnal::Buffer::allocated(), cygnal::Buffer::reference(), and writeToDisk().

Write the data in memory to disk.
References close(), errno, open(), and gnash::amf::write().
Referenced by cygnal::HTTPServer::processPostRequest().
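The page-alignment constraint that loadToMem() describes can be sketched outside the class. The helper below is my own illustration, not part of DiskStream's actual API: mmap() requires its file offset to be a multiple of the system page size, so an arbitrary offset must first be rounded down.

```cpp
#include <unistd.h>   // sysconf, _SC_PAGESIZE
#include <cassert>

// Round a file offset down to the nearest page boundary, since mmap()
// requires the offset argument to be a multiple of the page size.
off_t pageAlign(off_t offset)
{
    const off_t pageSize = sysconf(_SC_PAGESIZE);
    return offset - (offset % pageSize);
}
```

A caller would then mmap() one page starting at pageAlign(offset) and index into the mapping by offset % pagesize, which matches the "start at page 0 if the offset is smaller than a page" behavior described above.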
http://gnashdev.org/doc/html/classgnash_1_1DiskStream.html
Our designer gave me some cool Photoshop screen drafts of what she thought our application ought to look like. The drafts had several examples of where she had made text with a halo outline around the words. Like all good designers, she had sliced the images up into discrete sections, allowing me to do the integration easily. The problem with this approach was that it was time consuming to integrate each image she gave me, and when we wanted to localize the product later, we would need to redo them all. I needed a way to generate these on the fly.

WinForms' System.Drawing.* provides very powerful ways of creating images with alpha blending. The plan was to create a bitmap for the text. The text would be smeared around in a background color on the bitmap, and then finally written in the middle in the foreground color on the bitmap. Alpha blending would be applied to the background, smeared, version of the text so that it looked feathery towards the edges and could be rendered over any other background. (Alpha valued colors, added on top of alpha valued colors, will increase the intensity, so the very edges should look dim, and the closer to the text you are, the more intense the background color will be.)

I expected this to take a little while to do, so I planned on having a function return the bitmap for the final text image so that I could squirrel it away and use that as a cached version of the text from that point onwards, thus only suffering the performance hit once. This seemed like a workable solution, so I set about implementing it...

This was going to take some brushes, some bitmaps, some graphics contexts etc., and we may call this function hundreds of times, so judicious resource management would be important. For those that don't know, when a managed object goes out of scope, it is marked as "unreferenced" to be garbage collected automatically, later in time. When?
You may have no idea if you don't explicitly call garbage collection yourself (not necessarily recommended if you use things like weak references, etc., but that is beyond the scope of this article). These managed objects, like the SolidBrush class, for instance, are, in real life, managed wrappers on GDI+ unmanaged objects, and these objects behind the scenes may consume valuable system resources and memory. Thus, you should not just allow a Brush or Pen or Bitmap or any of the myriad of these wrappers on unmanaged drawing objects simply to fall out of scope in the hope they will be cleaned up soon, as they will insidiously eat precious resources. You need to call Dispose on each of them to clean up the unmanaged resources.

A preferred method is to use the using construct. "using" will cause the object's Dispose method to be called when you leave the using() scope (which will, in turn, free the underlying GDI+ object). You could call Dispose yourself at the end of your function, but if you exited the function before calling Dispose, or if you had an exception before Dispose(), then you would have the same resource problem you had before. using ensures that no matter what happens to cause control to leave the scope of the using() statement, the object's Dispose method will be called.

using (SolidBrush brFore=new SolidBrush(clrFore))
{
    ... use the brush inside of here
}

In order to make the containing bitmap, we will need to know how big the text will be in pixels. We use the Graphics method MeasureString to determine this. MeasureString comes in various flavors, some taking into account alignment and spacing of text etc. We use the basic call as we don't intend on setting any exotic flags when we finally call Graphics.DrawString.
From the returned size, we can make a bitmap that will contain a single rendering of the text, which we will use as a basis for rendering onto a second, destination bitmap, several times, to make the blurred background.

SizeF sz=g.MeasureString(strText, fnt);

GDI+ is built for both speed and beauty, and by default, it picks a happy medium. There are properties you can set on a Graphics object that will ensure you get the best possible rendering. We use SmoothingMode, InterpolationMode, and TextRenderingHint to control the level of output we require.

gBmp.SmoothingMode=SmoothingMode.HighQuality;
gBmp.InterpolationMode=InterpolationMode.HighQualityBilinear;
gBmp.TextRenderingHint=TextRenderingHint.AntiAliasGridFit;

From the bitmap, we create a graphics context (Graphics.FromImage). This will allow us to draw the string onto the bitmap directly. We make some brushes that we know we will need a little later here, so that we can keep all the using statements bunched together, and avoid overly nesting this function, to aid readability. Also, we fully expect this function to always complete all the way through, so the marginal overhead of disposing of these objects exactly when they are not needed is not worth the expense of readability and maintainability of this function. Note the alpha value of 16 on the brBack brush; this is a very transparent value, but will be overlaid many times as we smear the background.

using (Bitmap bmp=new Bitmap((int)sz.Width,(int)sz.Height))
using (Graphics gBmp=Graphics.FromImage(bmp))
using (SolidBrush brBack=new SolidBrush(Color.FromArgb(16,clrBack.R, clrBack.G, clrBack.B)))
using (SolidBrush brFore=new SolidBrush(clrFore))
{
    ...
}

We now create another image, made bigger by blurAmount, to accommodate the smearing, and proceed to get a Graphics from that so that we can render the first bitmap we created onto it, drawing it blurAmount times in the X direction for every blurAmount times in the Y direction. This rectangular blur approximates a more traditional rounded blur, as the alpha values towards the outside are sufficiently low as to make it feather; to be more accurate, you would base the rendering in a circle from the midpoint. The simplification is justified in this case, again for readability, ease of coding, and because the difference would be slight.

bmpOut=new Bitmap(bmp.Width+blurAmount,bmp.Height+blurAmount);
...
// smear image of background of text about
// to make blurred background "halo"
for (int x=0;x<=blurAmount;x++)
    for (int y=0;y<=blurAmount;y++)
        gBmpOut.DrawImageUnscaled(bmp,x,y);

After rendering the blur, we finally render the actual text again, in the center (blurAmount/2 offset in both X & Y positions) in the foreground color.

// draw actual text
gBmpOut.DrawString(strText, fnt, brFore, blurAmount/2, blurAmount/2);

Here is how you would call the code to generate an image for some text, and how you would render the final fancy text image onto a Windows form:

public class Form1 : System.Windows.Forms.Form
{
    private Bitmap _bmpText;

    public Form1()
    {
        this.BackColor = System.Drawing.Color.IndianRed;
        this.ClientSize = new System.Drawing.Size(358, 126);
        using (Font fnt=new Font("Arial", 20, FontStyle.Bold))
            _bmpText = (Bitmap) FancyText.ImageFromText("Hello Code Project Fans!", fnt, Color.Green, Color.Yellow);
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        e.Graphics.DrawImageUnscaled(_bmpText, 10, 40);
    }
}

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here.
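As a numerical footnote to the technique above: the intensity build-up the article relies on can be sanity-checked. This sketch is my own illustration (the article itself is C#): repeatedly source-over compositing the same color with alpha a leaves an effective alpha of 1 - (1 - a)^n, since each pass covers fraction a of whatever is still uncovered.

```cpp
#include <cassert>
#include <cmath>

// Effective alpha after source-over compositing the same alpha-a color
// n times: each pass covers fraction a of the remaining uncovered area.
double accumulatedAlpha(double a, int n)
{
    return 1.0 - std::pow(1.0 - a, n);
}
```

With the article's alpha of 16/255 (about 0.063), a single pass is barely visible, while the many overlapping passes near the text center build up toward full opacity, which is exactly the feathered-halo effect being described.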
http://www.codeproject.com/Articles/13444/Creating-fancy-text-effects-with-Csharp
In the previous lesson on Composition, we noted that object composition is the process of creating complex objects from simpler ones. In the Department/Teacher example, bob is created independently of department, and then passed into department's constructor. When department is destroyed, the m_teacher reference is destroyed, but the teacher itself is not destroyed, so it still exists until it is independently destroyed later in main(). Deallocations are left to an external party to do.

std::reference_wrapper

In the Department/Teacher example above, we used a reference in the Department to store the Teacher. This works fine if there is only one Teacher, but if there is a list of Teachers, say a std::vector, we can't use references anymore. List elements cannot be references, because references have to be initialized and cannot be reassigned. Instead of references, we could use pointers, but that would open the possibility to store or pass null pointers. In the Department/Teacher example, we don't want to allow null pointers. To solve this, there's std::reference_wrapper. Essentially, std::reference_wrapper is a class that acts like a reference, but also allows assignment and copying, so it's compatible with lists like std::vector. To get the referenced object back out of a std::reference_wrapper, use the get() member function.

Here's an example using std::reference_wrapper in a std::vector:

To create a vector of const references, we'd have to add const before the std::string, like so: std::vector<std::reference_wrapper<const std::string>>.

If this seems a bit obtuse or obscure at this point (especially the nested types), come back to it later after we've covered template classes and you'll likely find it more understandable.

Quiz time

Question #1

Question #2

This should print:

Department: Bob Frank Beth
Bob still exists!
Frank still exists!
Beth still exists!

Feedback on Quiz

The first part up to class Teacher was fine. Once class Department came in, the tutorial turned into a nightmare of confusion. Very confusing. Especially the getName().
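The std::reference_wrapper example the lesson mentions did not survive extraction; the following is my reconstruction of the kind of code it means (the names are illustrative):

```cpp
#include <cassert>
#include <functional> // std::reference_wrapper
#include <string>
#include <vector>

// Append a suffix to every std::string referenced by the vector. Copying a
// reference_wrapper is cheap: it copies the "reference", not the string.
void appendToAll(std::vector<std::reference_wrapper<std::string>>& names,
                 const std::string& suffix)
{
    for (auto name : names)   // each `name` wraps the same underlying string
        name.get() += suffix; // get() yields the referenced std::string
}
```

Calling appendToAll on a vector wrapping two local strings modifies the originals, which is the point: the vector holds references, not copies.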
The placement of the 2 "const" will kill you. The effect on the "std::cout << department;" is huge. I hope this is acceptable. c) Composition: Departments can’t exist in absence of a university. Can't departments exist in companies, military and schools? Sorry, I think this is just nitpicking but I have to say it. I have a background in other languages, and I tend to mostly use aggregations instead of compositions (meaning I usually create complex objects outside the class and pass them to the constructor). I think this is the only way to make the code really unit-testable, because you can then pass different implementations to the object (dummy for testing and the real object for productive code). As you recommend favoring compositions over aggregations, is there a way to achieve the same with compositions? You can use a `std::unique_ptr` to a common base of your implementation/mock class. Then pass the implementation or mock to the class in the constructor. While the object isn't technically inside of the class, it's owned by the class. You need virtual functions (Interfaces) and smart pointers, both are covered later. How about moving this line in best practices : Compositions should be favored over aggregations. (Under summarizing composition and aggregation ) I have two questions about the second task in a quiz: 1. How does this line works? out << teacher.get().getName() << ' '; 2. I wrote a code, and I keep getting "no matching function for call to 'Department::Department(<brace-enclosed initializer list>' "error in line 59. I tried to provide empty constructor or set default value to const Teacher& teacher in constructor, but it wasn't working. But when I create fourth object of type Teacher and initialize Department in line 59 with it, program works as intended to, printing only t1,2 and 3. here is my code: 1. `teacher` is `std::reference_wrapper` `teacher.get()` returns a reference to a `Teacher` `teacher.get().getName()` returns the name of the teacher 2. 
`Department` has a reference member variable (`m_teacher`). References have to be initialized, you cannot have an empty default constructor. If `teacher` in the constructor had a default argument, `m_teacher` would dangle. If you want that `Department` can exist without a teacher, you can't use a reference member. Use a pointer instead, they can be `nullptr`. Thank you. It works fine now. Do you think it would be worth adding an extra subquestion for the Question #2? For instance, making sure that the department only adds the teachers once into the array? If in the main, we have a duplicate of the teacher Beth, the Department should only contains Beth once This would require the definition of an operator==/!= for the Teacher (which is a follow-up on the previous section), and would show that std::find can also be used on the std::vector<std::reference_wrapper>> ? I've added this in the add function of the Department What do you think? Would it make sense (or we shouldn't have t3 and t4 with the same name in the first place)? I think it's good. > and would show that std::find can also be used on the std::vector<std::reference_wrapper>> ? But does std::find work well in this case ? hey! About the for-loop: in each iteration we get the object of class Teacher saved in department.m_teachers. But why do we need to use 'get()' again to access the object? I mean we should be able to do the following: //////////////////////////////////////////////////////////////////////////// I guess it is because m_teachers is a vector of ref_wrappers and to get things out of the wrapper you have to call get(). @chai is right. This is because we're using `std::reference_wrapper`. `operator.` cannot be overloaded to let you directly access the teacher. And for some reason, `std::reference_wrapper` doesn't overload `operator->`, so you have to use `.get()` to access the object. Why object type 'const int&' is not possible? 
Severity Code Description Project File Line Suppression State Error C2440 'initializing': cannot convert from 'initializer list' to 'std::vector<std::reference_wrapper<const int &>,std::allocator<std::reference_wrapper<const int &>>>' ConsoleApplication1 F:\ConsoleApplication1\ConsoleApplication1\SourceMain.cpp 69 change > std::vector<std::reference_wrapper<const int&>> test{ a, b, c }; to > std::vector<std::reference_wrapper<const int>> test{ a, b, c }; >>2) When you create your std::reference_wrapper wrapped object, the object can’t be an anonymous object (since anonymous objects have expression scope would leave the reference dangling). I didn't get this ' anonymous objects ' part. Would you give an example of this? Because I could create an anonymous object here below: We also have 'berta' here: >>Although it might seem a little silly in the above example that the Teacher’s don’t know what Department they’re working for, Shouldn't it be "the Teachers don't know..."? Because Department is aggregation type, can't we have a normal member variable like 'name' for the Department class? or 'name' also should be defined as a pointer or reference type? >>Consequently, an aggregation usually either takes the objects it is going to point to as constructor parameters, or it begins empty and the subobjects are added later via access functions or operators. Doesn't this look like the example of Point2d and Creature in the previous section (10.2) where an object of Point2d were passed to the constructor of the Creature class? >> However, these member variables are typically either references or pointers that are used to point at objects that have been created outside the scope of the class. So, if we have the Creature class that have location as pointer or reference not a normal variable, then would that Creature be qualified as an aggregation not composition? 
>>Because these parts exist outside of the scope of the class, when the class is destroyed, the pointer or reference member variable will be destroyed (but not deleted). Consequently, the parts themselves will still exist. Doesn't this result in dangling pointers? Hello I was trying to solve the second question without using .pushback but when I try to run the program it gives me these errors. while compiling class template member function 'Teacher *std::vector<Teacher,std::allocator<Teacher>>::_Ufill(Teacher *,const unsigned int,std::_Value_init_tag)' see reference to function template instantiation 'Teacher *std::vector<Teacher,std::allocator<Teacher>>::_Ufill(Teacher *,const unsigned int,std::_Value_init_tag)' being compiled I thought the problem might have been std::reference_wrapper but that was not the case. Here is the code Thank you in advance. If you don't modify something, make it `const`. `std::vector::resize` inserts default-constructed elements into the vector, but `Teacher` isn't default-constructible. Add a default constructor. Hi guys, congrats for this amazing project! Regarding the solution of the second problem, when you use auto in the for loop, shouldn't we use auto & ? Otherwise we do too many copies... So, instead of It is better to use: correct? Updated to Thanks! nascardriver help me please , I don't understand std::reference_wrapper. but since we dealing with addresses all the time this code is acceptable ?. i tested already and it's working . i rewrite the code between tags again , because auto formatter leaves big white spaces . That's way way more complicated than it needs to be. What is it about `std::reference_wrapper` that you don't understand? after reading again, I understand now how to use' std::reference_warpper' thank you for note . I really appreciated each note which really support us . 
Thanks Alex and Nascardriver for that great work :) "In this case, bob is created independently of department, and then passed into department‘s constructor. When department is destroyed, the m_teacher reference is destroyed, but the teacher itself is not destroyed, so it still exists until it is independently destroyed later in main()." Hi nascardriver, I have a query about memory deallocation; say we need to allocate memory from 'heap' let us say that the location of x , which is in the stack is 1000, and the address that x holds ,which is in heap, is 2000. if we forget to delete x and x went out the scope, 1000 memory is freed, and we lose memory 2000, but when deleting x we actually return 2000 to operating system to be used again. So when department is destroyed, just the location of m_teacher reference is freed without touching the original teacher. Am I right ? That's right. Memory that was allocated dynamically always has to be freed. If it's not freed and the last pointer to it goes out of scope, it's unreachable and will remain used. Why wouldn't the compiler accept the overloaded << function (line 72)? You don't have an `operator<<` for `Department`. But it's there... declared at line 38 and defined at 42. That's `operator<<` for a `Teacher`, but you're trying to print a `Department` the pointer or reference member variable will be destroyed (but not deleted). Consequently, the parts themselves will still exist. Hi, as I learned from this great tutorial that when deleting something, say pointer, we are actually returning the address of the data, that the pointer point to, to the operating system to be used again. so what is the difference between deletion and destruction? When Dep member variable teacher, which is an alias to the one outside, gets destroyed what happens exactly, won't the data to both variables remove? When a pointer or reference to some object is destroyed (eg. 
because it goes out of scope), only the pointer/reference variable gets release, not the objects it's pointing to. The pointed-to object is unaffected. `delete` would destroy the pointed-to object. Thank you so much ❤ Hello, I would like to understand this behavior better, I added chaining the add calls, i.e. when I defined add as below, it failed with the vector only holding the first (non-chained) add. I added a print statement to the Department constructor, and it was only called once. replacing the return type to be Department&, caused the vector to hold all 3. As I typed this comment, I also tried auto &, and that worked. So was there was some sort of "ghost" department that was returned on the stack ? That was never instanciated... Thanks in advance for your thoughts. -Keith Constructors can be elided for optimization reasons. There are also many different kinds of special constructors, which have haven't covered all at this point. Constructors are not a reliable source of saying what happened. Your first version is identical to When you return, a new `Department` is created. Thank you for your good works here, your generosity is much appreciated. -Keith Minor typo in the opening: "In the previous lesson on Composition, we noted that object composition is the process of creating complex objects from simpler one[s]." - needs an "s" And under "A few warnings": "One final note: In the lesson Structs, we defined aggregate data types (such as structs and classes) as data types that group[s] multiple variables together." - doesn't need its "s" Thank you for these tutorials! C++ feels really strange when you're used to Python, but it's starting to make sense. Thanks for pointing out the typos, I fixed them. Have fun! In the quiz no. 2, you implemented the add function using push_back(). Do you think the following code should be semantically equivalent? 
First we resize our vector to be 1-length longer to hold the new teacher, then we assign the new teacher into the extra length we just added. So why do I get the following error? "Error C2512 'std::reference_wrapper<const Teacher>::reference_wrapper': no appropriate default constructor available" I have double checked the rest of my code such that if I use your implementation of the add function, my code works as expected. `resize` has to insert an element into the empty slots it creates. To create the element, it tries to default-construct a `reference_wrapper`, but a `reference_wrapper` cannot be created without an object to reference (References have to reference something), so it fails. `push_back` doesn't need to create an empty `reference_wrapper`, because we told it what the new element will be. I see. Is it thus correct to to infer that `resize` cannot be used on any std::vector that uses a `reference_wrapper`? Correct Thank you nascardriver! Here is my solution for problem 2. Also, in the solution, why do you return the teachers name (inside getName()) by const reference? (const std::string&) There's no reason to copy the name every time `getName` gets called. Copying data is slow and should be avoided. By returning a reference, we avoid the copy. In your line 27, " " is a single character in a string. Strings are expensive. Since you only have 1 character, use single quotation marks ' '. I don't see what the dynamic allocation adds to the example. It's fully ok to create the Teacher variables by normal invocation of the constructor and then pass their address to the Department::add(): I would also say that the Department::add() function could be built just as well using (non const) references, avoiding that the user passes in an array of Teachers. But I see that this is just a matter of preference--that way you tell the user that his variable needs already exist, ie passing in a temporary object would cause issues. 
My solution: For some reason the range based variant of the loop doesn't work. Why is that? (More precisely it adds the last constituent of tt). > I don't see what the dynamic allocation adds to the example. Agreed, marked for an update. > Department::add() function could be built just as well using (non const) references It should. The problem aren't arrays, but the possibility to pass a `nullptr`. Also noted. > otherwise line 63 wouldn't work The quiz is wrong too. It should be a `const std::string&` (For you, `const name_t&`). I suppose this quiz is very old. Your code didn't work because your line 63 uses wrong braces. The inner braces try to initialize the first element (A `Teacher`) of the vector. A `Teacher` is initialized by a `std::string`, but there is not `std::string` constructor that takes a list of `const char*`. You need braces around each `Teacher`: > the range based variant of the loop doesn't work Never pass/loop class-types by value unless they're guaranteed cheap to copy. In your range-based loop, `i` is a copy of the current teacher. `i` dies at the end of each loop's iteration, but you're storing a reference in `d` (Bad names, don't use abbreviations). Loop over references Thanks for pointing out the old code! If you find more, please point them out. It's likely that you find more. The further you get into the lessons, the fewer people have read them and helped improve them. Thanks for the quick response. I'll make as much remarks as I can. > Never pass ... Noted! How stupid of me. 10.4 also uses dynamic allocation btw. What is wrong with ie one brace (easier to type)? You can do that too if you like to. I prefer it when it's obvious that I'm calling a constructor. That way it's also easier to update the vector creation if I ever add a parameter to the `Teacher` constructor. For the update, I stole the `std::reference_wrapper` section from chapter 12. 
Please read the new `std::reference_wrapper` section and quiz 2 in this lesson, as otherwise you'd miss the introduction of `std::reference_wrapper`.

Thanks for pointing me at it!

I think, in the example of department and teacher, the department must be created first and then the teacher, because the department knows about the existence of the teacher, and a department may hold many teachers. The definitions of the department class and teacher class must also be changed. I am combining the two codes above, i.e. the one for a single Teacher pointer and the one with the updated Department class due to the addition of the add(Teacher *teacher) function:

#include <iostream>
#include <string>
using namespace std;

class Teacher
{
private:
    string m_name;
public:
    Teacher(string name) : m_name(name) { }
    string getName() { return m_name; }
};

class Department
{
private:
    Teacher *m_teacher; // This dept holds only one teacher for simplicity, but it could hold many teachers
public:
    Department() { cout << "Hello" << endl; }
    void add(Teacher *teacher) { m_teacher = teacher; } // note: each call overwrites the previous pointer
};

int main()
{
    // Create the teachers outside the scope of the Department
    Teacher *teacher1 = new Teacher("Bob");
    Teacher *teacher2 = new Teacher("Henry");
    Teacher *teacher3 = new Teacher("Sara");

    // Create a department and pass the teachers to it
    Department dept;
    dept.add(teacher1);
    dept.add(teacher2);
    dept.add(teacher3);

    // dept goes out of scope here and is destroyed
    // The teachers still exist because dept did not delete m_teacher
    cout << teacher1->getName() << teacher2->getName() << teacher3->getName() << " still exists!";

    delete teacher1;
    delete teacher2;
    delete teacher3;
    return 0;
}

Can you please make a program in which an object of class Employee and an object of class Student are attributes of class Manager and class Scientist, and an object of class Employee is an attribute of class Labourer?

Thanks for the lesson, I think it's very well explained, but I have a hard time grasping abstract concepts :/ I'm writing a card game and there are 2 classes: 'Card' and 'Deck'. Which kind of relationship do they have? It seems somewhat similar to question 1e of the quiz (a bag of marbles), so I'd think it's aggregation, but then again there are some composition-like points too...

Aggregation:
- a card(s) can exist independently of a deck

Composition:
- a deck can't exist without cards (intrinsic property?)
- a card can't belong to more than one deck at a time

I don't know whether a deck should be responsible for destroying cards... I'd say yes, but only because it seems easier to manage memory this way (allocate memory for the deck (e.g. a Card array) and then 'delete' it as a whole, rather than manage each card separately). Abstract thinking is not at all my thing... (better don't ask me whether a card is aware of being part of a deck lol) Is it very important to grasp the difference between composition and aggregation?

One thing that's maybe not as clear as it could be is that some relationships can be implemented using more than one technique. A Deck could be implemented as a composition or an aggregation, depending on whether Cards do or don't need to exist independently of the Deck itself. I'd favor composition over aggregation if it meets your requirements. If this stuff doesn't gel with you, I'd move on.
Implementing it as a composition seemed like the better choice to me too, but I wasn't sure (especially after getting to the question with the bag of marbles). Thank you for the reply!

I still don't really get it. We can just use a normal member variable here, and of course Bob still exists because we declared it outside the scope of the Department. So what is the benefit of using a reference/pointer? For scoping? So we can use them in multiple functions (because they don't get destroyed until we explicitly delete them)?

You're creating copies, and that's slow. Modifying one instance doesn't affect the others. If a teacher changes their name, the department still has the old name.

So aggregation typically uses reference or pointer members just for performance reasons?

No. Aggregation uses pointers/references because if you store the member directly, that's composition. The performance is thanks to using a pointer, but it's unrelated to aggregation.

Hmm, okay then. Thanks!
https://www.learncpp.com/cpp-tutorial/aggregation/
The only slightly non-obvious part, until you have seen it, is where the header or "fixed" rows and columns are. Often the header rows and columns are assigned an index of zero in the main table, but FlexGrid is more logical and allows the data area to be zero based. The fixed rows and columns displayed in a grid are implemented as two separate grid panels. This might seem complicated at first but, in addition to restoring the zero-based indexing in the data part of the grid, you also get the ability to have multiple fixed rows and columns. For example, to add a simple text label to the first column all you have to do is:

c1FlexGrid1.ColumnHeaders[0, 0] = "First Column";

You can see that the ColumnHeaders collection can be indexed in the same way as the main grid. The RowHeaders collection does the same job for the rows:

c1FlexGrid1.RowHeaders[0, 0] = "First Row";

Notice that the column headers are rows and the row headers are columns - think about it! You can also add additional rows and columns to the fixed rows and columns. For example:

c1FlexGrid1.ColumnHeaders.Rows.Add(new Row());

This adds a new row to the ColumnHeader collection. Now you can write:

c1FlexGrid1.ColumnHeaders[1, 0] = "second header";

There are a great many more basic grid operations that we could look at, but you now have the basic anatomy of the grid - i.e. you know where the rows, columns and cells are and you know about the fixed rows and columns. It is worth mentioning that there are properties that give you direct access to properties that would otherwise be deeper in the object hierarchy. This is a feature of ComponentOne controls called Clear Style and it is a simple but very useful idea. For example, suppose you wanted to change the background property of the ColumnHeader rows.
Then you normally would use something like:

c1FlexGrid1.ColumnHeaders.Background = Brushes.Red;

Not difficult, but you could use the Clear Style approach and set the ColumnHeaderBackground property of the main control:

c1FlexGrid1.ColumnHeaderBackground = Brushes.Blue;

In this case there isn't much saving in typing, but you can see the general idea. Clear Style provides properties on the control object that set properties on one or more sub-objects within the control. For example:

c1FlexGrid1.AlternatingRowBackground = Brushes.Beige;

sets the brush used for odd numbered rows in the grid. What could be easier!

Although it is interesting to find out how the grid works in its simplest state, not to see it used with data binding would be to miss out on meeting it in its natural state. So before we leave the subject, it is worth looking at the simplest case of a databound grid. All you have to do in theory is set the grid's ItemsSource property and, as long as the object it is set to supports IEnumerable, everything should just work. To see this in action we first create a suitable object type to hold the data:

public class Person
{
    public string name { set; get; }
    public int age { set; get; }
    public bool member { set; get; }
}

Then, after building a list of Person objects, myList, binding is a single assignment:

c1FlexGrid1.ItemsSource = myList;

As long as AutoGenerateColumns is set to true, the grid will be created for you and the properties of the Person data structure will be used to label the columns. You can now work with the table as in the case of the unbound example. You can refer to columns and cells using either a numerical index or the property names displayed at the top of each column. For example:

c1FlexGrid1[0, "age"] = 99;

The grid also supports user editing, and the data binding is two-way, so the data source is updated.
In most cases it is better to use a PagedCollectionView as the data source, simply because it wraps the original data with methods that allow you to process it - sort, group and filter - and, as its name suggests, it supports paging for large datasets (simply set the PageSize property). All of this just gives you more flexibility in how you handle the data. The FlexGrid is lightweight, easy to use and provides a logical connection between data and grid.
http://www.i-programmer.info/programming/wpf-workings/4069-flexgrid-a-lightweight-data-grid.html?start=1
STL has a lot of powerful features which remain undiscovered by many programmers. The complexity of templates stops people from exploring the scope of STL. Here, I am removing that complexity by explaining the code that the templates generate for commonly used data types. This article intends to encourage you to explore the internals of STL and use its powerful features. Once you understand the features of STL, you will get addicted to using it in your development. Explaining all the container and algorithm internals is beyond the scope of this article; I chose vector and some sample algorithms to explain the usage of STL.

STL gives generic collections and algorithms which can be applied to those collections. STL code can be used across platforms. Since STL containers and algorithms are C++ templates, you are able to use any data type that satisfies the requirements of the templates. You can write generic code which will accept all types of container classes. Examples of STL containers are deque, list, map, multimap, multiset, set, and vector; examples of algorithms are find, copy, remove, max, sort, and accumulate.

Let us try to get an idea of how the algorithm 'remove' works on a vector. Vector is an STL container which grows dynamically and has features similar to an array.
#include <iostream>
#include <vector>
#include <deque>
#include <list>
#include <algorithm>
using namespace std;

int main()
{
    typedef /*list/deque*/ vector<int> intVect;
    typedef intVect::iterator intVectIt;
    intVectIt end, it, last;
    intVect Numbers;

    Numbers.push_back(10);
    Numbers.push_back(33);
    Numbers.push_back(50);
    Numbers.push_back(33);

    cout << "Before calling remove" << endl;
    end = Numbers.end();
    cout << "Numbers { ";
    for (it = Numbers.begin(); it != end; it++)
        cout << *it << " ";
    cout << " }\n" << endl;

    last = remove(Numbers.begin(), end, 33);

    cout << "After calling remove" << endl;
    cout << "Numbers { ";
    for (it = Numbers.begin(); it != last; it++)
        cout << *it << " ";
    cout << " }\n" << endl;
    return 0;
}

Output:

Before calling remove
Numbers { 10 33 50 33 }
After calling remove
Numbers { 10 50 }

In the above example, we insert 10, 33, 50, and 33 into the vector Numbers. The algorithm 'remove' is used to remove all the elements which have the value 33. 'remove' uses the vector's iterators for this operation. Iterators are similar to pointers and are used to traverse a sequence of objects. In the implementation of vector<int>, its iterator is int*.

Now, change vector to deque or list in the first typedef. You can see our program works exactly the same as in the vector case! To change the container implementation, we had to change only one line of code. This is one of the big benefits offered by STL.

Now, take the above piece of code to some other operating system (I tested with Windows and Linux), build it and run it. Without any change, it works! This is another big advantage STL offers. You can also try different data types instead of int for the vector container.

Now, let us look at how the default constructor and the constructor with an integer argument work. I will be using objects of the following class for inserting into the vector.
class A
{
    int iValue;
public:
    A() { cout << "A()" << endl; }
};

Class vector<A> has four member variables: allocator, _First, _Last, and _End. allocator is of type std::allocator<A>, and _First, _Last, and _End are iterators which come out to be A*. The default constructor of vector<A> initializes allocator with a std::allocator<A> object; _First, _Last, and _End are initialized to null pointers.

Now, let us see how the constructor vector(size_type _N, const _Ty& _V = _Ty(), const _A& _Al = _A()) works. Substituting the template parameters for class A, the prototype of the constructor becomes:

vector(unsigned int _N, const A& _V = A(), const std::allocator<A>& _Al = std::allocator<A>())

This constructor does the following things to initialize its member variables:

_First = allocator.allocate(_N, (void *)0);  // line 1
_Ufill(_First, _N, _V);                      // line 2
_Last = _First + _N;                         // line 3
_End = _Last;                                // line 4

allocator.allocate calls the template function _Allocate, passing the number of objects to be created (_N). For class A, this template function is generated as A* _Allocate(int, A*). It allocates _N * sizeof(A) bytes of memory and returns the starting address, which is assigned to _First. _Ufill does something like this:

for (; 0 < _N; --_N, ++_F)
    allocator.construct(_F, _X);

Here, _F (of type A*) points to the address where an object is to be constructed from _X (of type const A&). _Ufill copies the one constructed object (the const reference _V; refer to line 2) to all _N locations. allocator.construct calls the template function _Construct, passing the same arguments; a new object is created at location _F from the reference _X (the placement new operator is used here). In line 3, _Last is assigned the address after the last object. _End and _Last point to the same location.
Now, let us have a look at how the destructor works. The destructor of the vector does the following things:

_Destroy(_First, _Last);
allocator.deallocate(_First, _End - _First);
_First = 0, _Last = 0, _End = 0;

_Destroy calls allocator.destroy(_F) for each A* _F from _First through _Last. This function calls the destructor ~A() explicitly to destroy the object (remember, we allocated using the placement new operator). allocator.deallocate deletes the memory pointed to by _First, which frees the memory allocated for all the objects in the vector.

The most complicated and most frequently used function in a vector is push_back. Let us analyze how push_back works. push_back inserts new data at the end of the vector. It calls insert(end(), _X). (The member function end() returns _Last, and _X is a reference to the object (of class A) to be added.) insert calls another overloaded function, insert(_Last, 1, _X). For class A, this function definition becomes:

void insert(A* _P, unsigned int _M, const A& _X)
{
    if (_End - _Last < _M)
    {
        unsigned int _N = size() + (_M < size() ? size() : _M);
        A* _S = allocator.allocate(_N, (void *)0);
        A* _Q = _Ucopy(_First, _P, _S);
        _Ufill(_Q, _M, _X);
        _Ucopy(_P, _Last, _Q + _M);
        _Destroy(_First, _Last);
        allocator.deallocate(_First, _End - _First);
        _End = _S + _N;
        _Last = _S + size() + _M;
        _First = _S;
    }
    else if (_Last - _P < _M)
    {
        _Ucopy(_P, _Last, _P + _M);
        _Ufill(_Last, _M - (_Last - _P), _X);
        fill(_P, _Last, _X);
        _Last += _M;
    }
    else if (0 < _M)
    {
        _Ucopy(_Last - _M, _Last, _Last);
        copy_backward(_P, _Last - _M, _Last);
        fill(_P, _P + _M, _X);
        _Last += _M;
    }
}

Before explaining this code, we should look at the difference between _End and _Last. _End is the end of the buffer allocated to the vector, and _Last is the end of the last value inserted.
To make it clear, take the case of pushing back an object of A into a vector which currently contains three objects, and assume _End and _Last point to the same location. In this case, the allocator will allocate memory for 3+3 objects even though we have to insert only one. This is to reduce reallocations. After push_back, _End will point to the end of the 6th object, and _Last will point to the end of the 4th object.

During push_back, insert checks whether reallocation is needed (_End - _Last < _M). If reallocation is needed, it allocates double the size or size() + _M, whichever is higher. It copies all the data to the new buffer and frees the earlier buffer. It then inserts the new data at the _Last position and reassigns _Last and _End. If we have enough space to insert a new element (i.e., _End - _Last >= _M), the new element is inserted at _Last, and _Last is reassigned.

Before understanding algorithms, we will have to learn how function objects work. The class of a function object defines operator()(). The result is that a template function cannot detect whether you passed a pointer to a function or an object of a class having operator()(). The following example will give you a clear idea of what function objects are:

#include <iostream>
using namespace std;

void f() { cout << "f()" << endl; }

class X
{
public:
    void operator()() { cout << "X::operator()" << endl; }
};

template <class T>
void test_func(T f1) { f1(); }

int main()
{
    X a;
    test_func(f);
    test_func(a);
    return 0;
}

Output:

f()
X::operator()

Function objects can be classified as Generator (no argument), UnaryFunction (single argument), and BinaryFunction (two arguments). A special case of unary and binary functions are predicates (UnaryPredicate, BinaryPredicate), which simply means the function returns a bool. STL has, in the header file <functional>, a set of templates that automatically create function objects for you.
It is powerful not only because it's a reasonably complete library of tools, but also because it provides a vocabulary for thinking about problem solutions, and because it is a framework for creating additional tools.

Problem: copy all the data of one vector<A> to another vector<A>.

Analysis: For class A, the function definition becomes (you have to pass arguments which support operator++() and operator*(); you can apply * and ++ to iterators):

copy(A* first, A* last, A* x)

The function copy evaluates *(x + N) = *(first + N) for all N in the range [0, last - first]. It returns x + N. Consider vector<A> objects sourceA and DestA. To copy all data from sourceA to DestA, you call copy(sourceA.begin(), sourceA.end(), DestA.begin()). Before calling copy, ensure that the destination vector has enough space to accommodate all the data in the source vector.

Problem: you have a vector<int> of three members and you want to store the square of each element in another vector<int>.

Solution: the following piece of code does the job for you:

int square_it(const int x) { return x * x; }

int main()
{
    vector<int> v1, v2(3);
    v1.push_back(2);
    v1.push_back(5);
    v1.push_back(83);
    transform(v1.begin(), v1.end(), v2.begin(), square_it);
    for (vector<int>::iterator it = v2.begin(); it != v2.end(); it++)
        cout << *it;
    return 0;
}

For the above code, the function prototype generated for transform is:

int* transform(int* First, int* Last, int* x, int (*f)(const int))

transform evaluates *(x + N) = square_it(*(First + N)) for all N in the range [0, Last - First]. Note: v2 has enough memory allocated (memory for three integers) before calling transform.

Problem: insert three ascending numbers in a vector without using push_back.
Solution:

int f() { static int x = 1; return ++x; }

int main()
{
    vector<int> v2(3);
    generate(v2.begin(), v2.end(), f);
    for (vector<int>::iterator it = v2.begin(); it != v2.end(); it++)
        cout << *it;
    return 0;
}

For the above code, the generated function prototype for 'generate' is:

void generate(int* first, int* last, int (*f)())

generate evaluates *(first + N) = f() for all N in the range [0, last - first].

Problem: change every 0 to -1 in a vector<int>.

Solution: assume vector<int> v contains the integers 1, 2, 0, 6, 0. The following code will change all the 0s to -1:

bool f(const int x)
{
    if (x == 0) return true;
    else return false;
}

replace_if(v.begin(), v.end(), f, -1);

The generated prototype for replace_if is:

void replace_if(int* first, int* last, bool (*f)(const int), const int& x)

For all N in the range [0, last - first], replace_if evaluates:

if (f(*(first + N)))
    *(first + N) = x;

Problem: print all the members of a vector.

Solution: the following piece of code does the work for you:

void f(const int x) { cout << x << endl; }

for_each(v.begin(), v.end(), f);

The prototype generated for the above code is:

void for_each(int* first, int* last, void (*f)(const int));

for_each evaluates f(*(first + N)) for all N in the range [0, last - first].

Problem: fill vector<int> v with the value 10.

Solution: the following code does the work for you:

fill(v.begin(), v.end(), 10);

The generated function prototype for the above invocation of fill is:

void fill(int* first, int* last, const int&);

fill evaluates *(first + N) = x once for each N in the range [0, last - first].

Problem: count the number of 2s in an integer vector.

bool f2(const int x)
{
    if (x == 2) return true;
    else return false;
}

Assume vector<int> v contains integers.
The following code will return the number of 2s in the vector:

int iNoof2s = count_if(v.begin(), v.end(), f2);

The generated function prototype for the above code is:

unsigned int count_if(int* first, int* last, bool (*fn)(const int));

count_if sets a count n to zero. It then executes ++n for each N in the range [0, last - first] for which the predicate fn(*(first + N)) is true. It evaluates the predicate exactly last - first times.

You now have an idea of how vector and some of the algorithms work. Try using other containers and explore more algorithms; you will discover more interesting things. Happy coding with STL!
http://www.codeproject.com/Articles/5682/Understanding-STL
Hello, I have just tried to run a program in the command line. Here is the code:

import javax.swing.*;

class myFirstProgram {
    public static void main(String[] arg) {
        JOptionPane.showMessageDialog(null, "It works!");
        System.exit(0);
    }
}

When I typed in javac MyFirstProgram.java, there was a short pause. Then I typed in java MyFirstProgram and I got an error that said something about "wrong name: myFirstProgram". So at the command line I typed java myFirstProgram and the program ran. So my java file is called MyFirstProgram.java and my class file is called myFirstProgram.class. Why did it change the filename from "My" to "my"? Thanks, Eric

Couple of things here. Java is case sensitive, and the compiler names the .class file after the class declared inside the source file, not after the .java file. Since your class is named "myFirstProgram", you get myFirstProgram.class, and it has to be run as "java myFirstProgram". In the first scenario, when you did "java MyFirstProgram", the Windows file system (which is not case sensitive) let the JVM find the class file just fine, but the case didn't match what you had in your class declaration, hence the "wrong name" error. The reason "java myFirstProgram" works (even though the source filename is capitalized) is that the class file really is named myFirstProgram.class, so java was able to locate it and your class declaration matched up as well. An easy solution in your case would be to edit the .java file and change the class name to "MyFirstProgram". Class names are supposed to be capitalized anyway. Then recompile it.
http://forums.devx.com/showthread.php?140957-Class-filename-is-different-from-Java-filename&p=417617
Some Playing with Derivatives This is a summary of what I’ve been playing with in case people find it interesting. In general, there are three ways to find the derivative of a function: - Do the symbolic manipulation of formulae that we all learned in school when we were 16 years old. This assumes that one has the function as an algebraic expression of some kind in terms of known quantities. - Do it numerically, by computing (f(x + h) – f(x)) / h for some small value of h. This suffers from numerical inaccuracies, and will generally cause rises in the blood pressure of numerical analysts. - Calculate the value of the function and its derivative simultaneously at a given point of evaluation. The last option seems to have a lot of names: automatic differentiation, algorithmic differentiation, and a few more. Just for fun, I wrote an implementation of a very simplified version of it in Haskell. The Implementation module AD where data AD a = AD a a instance Eq a => Eq (AD a) where (AD x dx) == (AD y dy) = x == y instance Show a => Show (AD a) where show (AD x dx) = show x ++ " + " ++ show dx ++ " eps" instance Num a => Num (AD a) where (AD x dx) + (AD y dy) = AD (x + y) (dx + dy) (AD x dx) - (AD y dy) = AD (x - y) (dx - dy) (AD x dx) * (AD y dy) = AD (x * y) (dx * y + x * dy) negate (AD x dx) = AD (negate x) (negate dx) abs (AD 0 _) = error "not differentiable: |0|" abs (AD x dx) = AD (abs x) (dx * signum x) signum (AD 0 _) = error "not differentiable: signum(0)" signum (AD x dx) = AD (signum x) 0 fromInteger i = AD (fromInteger i) 0 instance Fractional a => Fractional (AD a) where (AD x dx) / (AD y dy) = AD (x / y) ((dx * y - x * dy) / y^2) recip (AD x dx) = AD (1 / x) ((-dx) / x^2) fromRational x = AD (fromRational x) 0 instance Floating a => Floating (AD a) where pi = AD pi 0 exp (AD x dx) = AD (exp x) (dx * exp x) sqrt (AD x dx) = AD (sqrt x) (dx / (2 * sqrt x)) log (AD x dx) = AD (log x) (dx / x) (AD x dx) ** (AD y dy) = AD (x ** y) (dx * y * (x ** (y-1)) + dy * 
(x ** y) * log x) sin (AD x dx) = AD (sin x) ( dx * cos x) cos (AD x dx) = AD (cos x) (-dx * sin x) asin (AD x dx) = AD (asin x) ( dx / sqrt (1 - x^2)) acos (AD x dx) = AD (acos x) (-dx / sqrt (1 - x^2)) atan (AD x dx) = AD (atan x) (dx / (1 + x^2)) sinh (AD x dx) = AD (sinh x) (dx * cosh x) cosh (AD x dx) = AD (cosh x) (dx * sinh x) asinh (AD x dx) = AD (asinh x) (dx / sqrt (x^2 + 1)) acosh (AD x dx) = AD (acosh x) (dx / sqrt (x^2 - 1)) atanh (AD x dx) = AD (atanh x) (dx / (1 - x^2)) diff :: Num a => (AD a -> AD t) -> a -> t diffNum :: Num b => (forall a. Num a => a -> a) -> b -> b diffFractional :: Fractional b => (forall a. Fractional a => a -> a) -> b -> b diffFloating :: Floating b => (forall a. Floating a => a -> a) -> b -> b diff f x = let AD y dy = f (AD x 1) in dy diffNum f x = let AD y dy = f (AD x 1) in dy diffFractional f x = let AD y dy = f (AD x 1) in dy diffFloating f x = let AD y dy = f (AD x 1) in dy Essentially, this declares a data type to represent the pair of a number and its first derivative. It then defines how to perform primitive math operations on that data type, for three of Haskell’s numerical type classes. Note that basically the entire implementation consists of talking about how derivatives fare in the face of certain operations. In fact, you may recognize a lot of it from the algebra of differentials in calculus. For example, if z = xy, then dz = x dy + y dx, and so forth. (One quick note, because someone else asked me about this. The exponentiation operator ** is complicated because both the base and the exponent could be non-constant. So it must use both the power rule and the exponentiation rule.) My Argument With the Type System The last section is certainly weird. It defines four identical functions, because the type system gets in the way of doing what I really want. What I would like is for the diff function to map functions over the numeric typeclasses to other functions over the same type classes. 
For example, the derivative of a function with type (Num a => a -> a) will itself have type (Num a => a -> a). The derivative of a function with type (Floating a => a -> a) will itself have type (Floating a => a -> a). But the only way that works is for diff to get a type that depends on AD. That’s confusing, because the type AD is an implementation detail; and I’d rather not even export it. The user of this module won’t think they’ve written a function for the type AD; they’ve just written a function over any type that is an instance of, say, Floating. To illustrate something closer to what would make me happy, I’ve also given the same function three better types, in diffNum, diffFractional, and diffFloating. These rank 2 types provide the right level of abstraction; but unfortunately, it’s impossible to write only one function that works for all three numeric type classes. Here’s what I’d really like to be able to say: diff :: forall A a. (A :> Floating, A a, Num a) => (forall b. (A b) => b -> b) -> a -> a Here, I’m using :> to mean “is a superclass of”, and A is meant to quantify over type classes. In other words, the type says that given any type class that’s a superclass of Floating, I can pass in a function that’s polymorphic over that type class, and get back its derivative (interpreting its domain and range as being, at the very least, numbers). But that’s not possible, so the functions above all have some of the advantages, and I’ve written all of them. Does It Work? Glad you asked. Yes and no. Here’s the yes part. $ ghci -XRankNTypes autodiff.hs > let f x = 3 * x^2 + 2 * x - 3 > let f' = diff f > f' 3 20 > let g x = 1 / sqrt (exp x - asinh x) > let g' = diff g > g' 3 -0.12660691544665373 So far, so good. The “no” part of the answer is because there are a few specific situations in which the code will give the wrong answer. First, the code I’ve written often gives a derivative when the derivative is really undefined. 
Sometimes this is me being sloppy; for example, the log of -1 is undefined, but this code provides a derivative anyway. I could fix this if I were willing to make my code a little uglier. Second, uses of fromRational and fromInteger must be constants, because the code assumes they are. It was pointed out by Luke Palmer that I can define a function of the correct type (Num a => a -> a) by using the Show superclass to convert a value to a string, then read it as an Integer, do all sorts of things to it, and then convert the result back to the type a by using fromInteger. Doing so will give a derivative of zero, even if your function really has a different derivative. A more nefarious example is this: f x = if x == 1 then 1 else x^2 This is the same function as f x = x^2. However, asking for the derivative at x = 1 will return 0, because the function returns the constant 1, whose first derivative is 0. In particular, the technique runs into problems right on the boundaries of intervals where the function is defined piecewise. It doesn’t appear to me that there’s a particularly good way of dealing with this, except to process the source code and modify if statements to refuse to calculate derivatives at those locations. That’s far beyond the scope of a little playing with overloading, so I have no intention of doing it. Extending to Other Cool Stuff If we just got functions computing values, this wouldn’t be very visual. In the effort to avoid introducing GUI libraries, I’ll play the old trick of graphing functions sideways in ASCII art. Let’s look at the second function in my example above… the weird one with square roots, exponentials, and inverse hyperbolic sines. Here’s a test program. import AD graph f ymin ystep xs = mapM_ (putStrLn . flip replicate '*' . round . (/ ystep) . (subtract ymin)) (map f xs) main = do let g x = 1 / sqrt (exp x - asinh x) graph g 0 (1/75) [0.0, 0.1 .. 
4.0] putStrLn "------------------------------------" graph (diff g) (-0.5) (1/175) [0.0, 0.1 .. 4.0] putStrLn "------------------------------------" graph (diff (diff g)) (-0.75) (1/75) [0.0, 0.1 .. 4.0] That just defines our function g again, and graphs it and its first two derivatives using ASCII art. Here’s the result. *************************************************************************** *************************************************************************** ************************************************************************** ************************************************************************* *********************************************************************** ********************************************************************* ******************************************************************* **************************************************************** ************************************************************* ********************************************************** ******************************************************* **************************************************** ************************************************* *********************************************** ******************************************** ***************************************** *************************************** ************************************* *********************************** ********************************* ******************************* ***************************** *************************** ************************** ************************ *********************** ********************** ********************* ******************** ******************* ****************** ***************** **************** *************** ************** ************* ************* ************ *********** *********** ********** ------------------------------------ 
**************************************************************************************** ****************************************************************************** ******************************************************************* ******************************************************** ********************************************* *********************************** *************************** ********************** ****************** ***************** ***************** ****************** ******************** *********************** ************************** ****************************** ********************************* ************************************* **************************************** ******************************************* ********************************************** ************************************************ *************************************************** ***************************************************** ******************************************************* ********************************************************* *********************************************************** ************************************************************* ************************************************************** **************************************************************** ***************************************************************** ******************************************************************* ******************************************************************** ********************************************************************* ********************************************************************** *********************************************************************** ************************************************************************ ************************************************************************* ************************************************************************** 
************************************************************************** *************************************************************************** ------------------------------------ ******************* ************ ******** ******** ************ ****************** *************************** ************************************* ********************************************** ****************************************************** ************************************************************ **************************************************************** ******************************************************************* ********************************************************************* ********************************************************************** *********************************************************************** *********************************************************************** ********************************************************************** ********************************************************************** ********************************************************************* ******************************************************************** ******************************************************************* ******************************************************************* ****************************************************************** ***************************************************************** ***************************************************************** **************************************************************** *************************************************************** *************************************************************** ************************************************************** ************************************************************** ************************************************************** ************************************************************* 
************************************************************* ************************************************************* ************************************************************ ************************************************************ ************************************************************ ************************************************************ *********************************************************** *********************************************************** One solution would be to not export the data constructor for the AD type. Then the only things you can do with values of type AD are to use the provided functions anyway. Perhaps you’ll find this interesting. It just the same as Jerzy described in his paper, but it provides all the derivatives, not just the first. The code is in Hackage, in a package called numbers. Hi! Very nice code – it’s great to see an elegant language (Haskell) being used in the most elegant area of mathematics (calculus). ( Btw, what license is the code under – is it P.D.? ) – Andy Andy, sure. Do whatever you like with it. Please pay attention to the warnings that it doesn’t always work. But what if you define f using diff, then apply diff to f? Turns out you can get the wrong answer, given the way you’ve written the code. Jeff Siskind and I went nuts on this issue, concluding that although diff is referentially transparent, it cannot be implemented in pure Haskell: you need to use some dirty trick. We exhibit a bunch of such dirty tricks in Scheme, but can’t do it in Haskell. One big question is whether some new pure mechanism could be added that would allow this in Haskell. Existential types might do the trick, but so far I have not figured out how… (details on my publications page, HOSC paper and IFL paper) Barak, That’s interesting. Do you have an easy example of how this can give incorrect results? Sure! 
This gives the wrong answer:

diff (\x -> (diff (x*) 2)) 1

(0 instead of 1) and this fails to type check:

diff (\x -> x*(diff (x*) 2)) 1

If you do some unpleasant manual coercion you can get these to give you the right answers,

diff (\x -> (diff ((AD x 0)*) 2)) 1
diff (\x -> x*(diff ((AD x 0)*) 2)) 1

As far as we can tell, that cannot be automated in Haskell.

PS Although insertion of the lift invocations (lift x = AD x 0) cannot be automated, detection of missing invocations can be. This uses branding, see for details.

PPS I am currently putting together the best of the forward-mode AD Haskell I’ve seen to try to make an actual usable version. Of course, this is all basically an exercise before the hard but important part: REVERSE-MODE AD!
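The dual-number trick the post builds on is easy to sketch outside Haskell as well. Here is a minimal, illustrative Python translation (the names Dual and diff are mine, not the author's code; it only covers addition, subtraction, multiplication, and integer powers, which is enough for the polynomial example from the ghci session above):

```python
class Dual:
    """Dual number a + b*eps with eps^2 = 0; b carries the derivative."""
    def __init__(self, val, deriv=0.0):
        self.val, self.deriv = val, deriv

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        o = self._wrap(other)
        return Dual(self.val + o.val, self.deriv + o.deriv)
    __radd__ = __add__

    def __sub__(self, other):
        o = self._wrap(other)
        return Dual(self.val - o.val, self.deriv - o.deriv)

    def __mul__(self, other):
        # product rule: (u*v)' = u*v' + u'*v
        o = self._wrap(other)
        return Dual(self.val * o.val, self.val * o.deriv + self.deriv * o.val)
    __rmul__ = __mul__

    def __pow__(self, n):
        # power rule for integer exponents, like x^2 in the post
        return Dual(self.val ** n, n * self.val ** (n - 1) * self.deriv)


def diff(f, x):
    """Evaluate f at x + eps and read the first derivative off the eps part."""
    return f(Dual(x, 1.0)).deriv


f = lambda x: 3 * x**2 + 2 * x - 3
print(diff(f, 3.0))  # 20.0, matching f' 3 in the ghci session
```

Note that this Python sketch shares the limitations discussed in the comments above: it is a forward-mode trick, and nesting diff naively runs into the same perturbation-confusion issues Barak describes.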
https://cdsmith.wordpress.com/2007/11/29/some-playing-with-derivatives/
Ethernet_b239 (community library)

Summary

Arduino port of Ethernet library

Example Build Testing

Device OS Version: This table is generated from an automated build. Success only indicates that the code compiled successfully.

Library Read Me

This content is provided by the library maintainer and has not been validated or approved.

About

This repo serves as the specification for what constitutes a valid Spark firmware library and an actual example library you can use as a reference when writing your own libraries. Spark Libraries can be used in the Spark IDE. Soon you'll also be able to use them with the Spark CLI and when compiling firmware locally with Spark core-firmware.

Table of Contents

This README describes how to create libraries as well as the Spark Library Spec. The other files constitute the Spark Library itself:

- file, class, and function naming conventions
- example apps that illustrate the library in action
- recommended approaches for test-driven embedded development
- metadata to set authors, license, official names

Getting Started

1. Define a temporary function to create library boilerplate

Copy and paste this into a bash or zsh shell or .profile file.

create_spark_library() {
  LIB_NAME="$1"

  ### Make sure a library name was passed
  if [ -z "${LIB_NAME}" ]; then
    echo "Please provide a library name"
    return
  fi

  echo "Creating $LIB_NAME"

  ### Create the directory if it doesn't exist
  if [ ! -d "$LIB_NAME" ]; then
    echo " ==> Creating ${LIB_NAME} directory"
    mkdir $LIB_NAME
  fi

  ### CD to the directory
  cd $LIB_NAME

  ### Create the spark.json if it doesn't exist.
  if [ ! -f "spark.json" ]; then
    echo " ==> Creating spark.json file"
    cat <<EOS > spark.json
{
  "name": "${LIB_NAME}",
  "version": "0.0.1",
  "author": "Someone <email@somesite.com>",
  "license": "Choose a license",
  "description": "Briefly describe this library"
}
EOS
  fi

  ### Create the README file if it doesn't exist
  if test -z "$(find ./ -maxdepth 1 -iname 'README*' -print -quit)"; then
    echo " ==> Creating README.md"
    cat <<EOS > README.md
TODO: Describe your library and how to run the examples
EOS
  fi

  ### Create an empty license file if none exists
  if test -z "$(find ./ -maxdepth 1 -iname 'LICENSE*' -print -quit)"; then
    echo " ==> Creating LICENSE"
    touch LICENSE
  fi

  ### Create the firmware/examples directory if it doesn't exist
  if [ ! -d "firmware/examples" ]; then
    echo " ==> Creating firmware and firmware/examples directories"
    mkdir -p firmware/examples
  fi

  ### Create the firmware .h file if it doesn't exist
  if [ ! -f "firmware/${LIB_NAME}.h" ]; then
    echo " ==> Creating firmware/${LIB_NAME}.h"
    touch firmware/${LIB_NAME}.h
  fi

  ### Create the firmware .cpp file if it doesn't exist
  if [ ! -f "firmware/${LIB_NAME}.cpp" ]; then
    echo " ==> Creating firmware/${LIB_NAME}.cpp"
    cat <<EOS > firmware/${LIB_NAME}.cpp
#include "${LIB_NAME}.h"
EOS
  fi

  ### Create an empty example file if none exists
  if test -z "$(find ./firmware/examples -maxdepth 1 -iname '*' -print -quit)"; then
    echo " ==> Creating firmware/examples/example.cpp"
    cat <<EOS > firmware/examples/example.cpp
#include "${LIB_NAME}/${LIB_NAME}.h"

// TODO write code that illustrates the best parts of what your library can do

void setup() {

}

void loop() {

}
EOS
  fi

  ### Initialize the git repo if it's not already one
  if [ ! -d ".git" ]; then
    GIT=`git init`
    echo " ==> ${GIT}"
  fi

  echo "Creation of ${LIB_NAME} complete!"
  echo "Check out for more details"
}

2. Call the function

create_spark_library this-is-my-library-name

- Replace this-is-my-library-name with the actual lib name. Your library's name should be lower-case, dash-separated.

3.
Edit the spark.json, firmware .h, and .cpp files

- Use this repo as your guide to good library conventions.

4. Create a GitHub repo and push to it

5. Validate and publish via the Spark IDE

To validate, import, and publish the library, jump into the IDE and click the "Add Library" button.

Getting Support

- Check out the libraries category on the Spark community site and post a thread there!
- To file a bug, create a GitHub issue on this repo. Be sure to include details about how to replicate it.

The Spark Library Spec

A Spark firmware library consists of:

- a GitHub repo with a public clone URL
- a JSON manifest (spark.json) at the root of the repo
- a bunch of files and directories at predictable locations (as illustrated here)

More specifically, the collection of files comprising a Spark Library includes the following:

Supporting Files

- a spark.json metadata file at the root of the library dir, very similar to NPM's package.json (required). The content of this file is validated via this JSON Schema.
- a README.md that should provide one or more of the following sections:
  - About: An overview of the library; purpose, and description of dominant use cases.
  - Example Usage: A simple snippet of code that illustrates the coolest part about your library.
  - Recommended Components: Description and links to example components that can be used with the library.
  - Circuit Diagram: A schematic and breadboard view of how to wire up components with the library.
  - Learning Activities: Proposed challenges to do more sophisticated things or hacks with the library.
- a doc directory of diagrams or other supporting documentation linked to from the README.md

Firmware

- a firmware folder containing code that will compile and execute on a Spark device. This folder contains:
  - A bunch of .h, .cpp, and .c files constituting the header and source code of the library.
  - The main library header file, intended to be included by users
    - MUST be named the same as the "name" key in the spark.json + a .h extension. So if name is uber-library-example, then there should be a uber-library-example.h file in this folder. Other .h files can exist, but this is the only one that is required.
    - SHOULD define a C++ style namespace in upper camel case style from the name (i.e. uber-library-example -> UberLibraryExample)
  - The main definition file, providing the bulk of the library's public-facing functionality
    - MUST be named like the header file, but with a .cpp extension (uber-library-example.cpp)
    - SHOULD encapsulate all code inside a C++ style namespace in upper camel case style (i.e. UberLibraryExample)
  - Other optional .h files, when included in a user's app, will be available for inclusion in the Web IDE via #include "uber-library-example/SOME_FILE_NAME.h".
  - Other optional .cpp files will be compiled by the Web IDE when the library is included in an app (and use arm-none-eabi-g++ to build).
  - Other optional .c files will be compiled by the Web IDE when the library is included in an app (and use arm-none-eabi-gcc to build).
  - An examples sub-folder containing one or more flashable example firmware .ino or .cpp applications.
    - Each example file should be named descriptively and indicate what aspect of the library it illustrates. For example, a JSON library might have an example file like parse-json-and-output-to-serial.cpp.
  - A test sub-folder containing any associated tests

Contributing

This repo is meant to serve as a place to consolidate insights from conversations had about libraries on the Spark community site, GitHub, or elsewhere on the web. "Proposals" to change the spec are pull requests that both define the conventions in the README AND illustrate them in underlying code. If something doesn't seem right, start a community thread or issue pull requests to stir up the conversation about how it ought to be!
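The dash-to-camel-case naming rule the spec describes (uber-library-example -> UberLibraryExample) is purely mechanical. A small Python sketch of the conversion (to_namespace is an illustrative name of my own, not part of the spec or its tooling):

```python
def to_namespace(lib_name):
    """Convert a lower-case, dash-separated library name into the
    upper-camel-case C++ namespace the spec recommends."""
    return "".join(part.capitalize() for part in lib_name.split("-"))


print(to_namespace("uber-library-example"))  # UberLibraryExample
```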
https://docs.particle.io/reference/device-os/libraries/e/Ethernet_b239/
For anyone stumbling across this thread, this is an old version of the script. The new version can be found here. Most of the information in this thread is outdated.

I borrowed some ideas from the scripts AutoGK creates (credit to len0x), and turned them into a function called CropResize. CropResize can be used much like the standard Avisynth resizers, except instead of the resizing resulting in an aspect error if the image is resized "incorrectly", CropResize will automatically crop to prevent any aspect error (or keep it very close to zero). Here are some illustrations of the differences.

Source image. Widescreen with black bars top and bottom for 16:9.

Spline36Resize(1280) # naturally it results in an error if no height is specified.
CropResize(1280) # black is automatically cropped and the output height determined accordingly.

Spline36Resize(1280, 518) # without the correct cropping beforehand the picture is squashed.
CropResize(1280, 518) # the black is cropped along with any additional picture as required for minimum aspect error (no additional picture cropping was required in this case).

CropResize(1280, 720) # the black is cropped along with any additional picture as required for minimum aspect error.
CropResize(1280, OutDar=1.777778) would produce the same result.

As you can see from the last picture, CropResize will crop the black, then it'll crop as much picture as required to give you the requested dimensions or display aspect ratio without aspect distortion. As a result, you'd normally use CropResize as follows and let it take care of the rest:

CropResize(1280)
or
CropResize(1280,0)

The idea of being able to specify a height or output display aspect ratio is mostly for a nice clean 16:9 or 4:3 output when your source is already close to 16:9 or 4:3 after cropping, as then only a small amount of picture will need to be cropped.
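The "crop as much picture as required" step described above is plain aspect-ratio arithmetic. A rough Python sketch of the math (my own illustration, not the AviSynth code; it ignores the script's Hmod and InDAR options and assumes the black bars have already been removed):

```python
def crop_for_dar(width, height, target_dar, mod=2):
    """Return (new_width, new_height) after cropping just enough picture,
    in multiples of `mod`, to reach target_dar without distortion."""
    if width / height > target_dar:
        # picture is too wide for the target DAR: crop the sides
        new_w = int(round(height * target_dar / mod)) * mod
        return new_w, height
    else:
        # picture is too tall for the target DAR: crop top and bottom
        new_h = int(round(width / target_dar / mod)) * mod
        return width, new_h


# A 1920x1080 source cropped for a 4:3 output keeps the full height:
print(crop_for_dar(1920, 1080, 4 / 3))  # (1440, 1080)
```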
CropResize can be used to specify any output display aspect ratio though, and as much picture will be removed as required to get you there.

You may be aware Avisynth's resizers can also crop, and they're not limited to mod2 cropping:

Spline36Resize(1280, 528, 2, 4, -1, -3)

CropResize can now use much the same format, the difference being the cropping specified is cropping in addition to the auto-cropping, and all values must be positive for additional cropping. That way, if the auto-cropping doesn't give you perfectly clean edges, it's easy to adjust it. If the output height specified is zero, it's determined by the cropping (assuming no OutDAR is specified). The specified cropping doesn't have to be mod2, but autocrop can only crop mod2 so odd numbers will be rounded. Therefore it'd pay to stick to mod2 cropping:

CropResize(1280, 0, 2, 4, 2, 4)

And while it's not mentioned in the autocrop help file, negative cropping seems to reduce the auto-cropping rather than increase it as positive values do.

CropResize(1280, 0, -2, -4, -2, -4)

CropResize() only requires autocrop.dll (included in the zip file) and always scales using Avisynth's "Spline36Resize". CropResize8() requires the Resize8 script, which in turn requires RgTools.

CropResize(OutWidth=Width, OutHeight=0, CL=0, CT=0, CR=0, CB=0, Cthresh=30, Cstart=0, Csample=5, OutDAR=0.0, Hmod=4, InDAR=0.0)

CropResize8(OutWidth=Width, OutHeight=0, CL=0, CT=0, CR=0, CB=0, Cthresh=30, Cstart=0, Csample=5, OutDAR=0.0, Hmod=4, InDAR=0.0, Resizer="Spline36", ResizerC="Spline36")

The Output DAR can also be used to specify an exact output resolution, assuming the Output Width and resolution width are the same:

CropResize(1280, OutDAR=1280.0/720.0) # The difference between simply setting a width and height of 1280 & 720 is the Hmod setting takes preference over the height when OutDAR is used.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------

OutWidth (default, same as source width)
The desired Output Width (1280, 1024 etc). Specifying an Output Width of zero will output the same width as the source.

OutHeight (default, determined by cropping)
The desired Output Height (720, 576 etc). Output Height will be determined by auto-cropping and/or OutDAR if specified as zero. If the specified Output Height is greater than zero, the OutDAR & Hmod settings have no effect. If an Output Height is specified, as much picture will be cropped as required for minimum aspect error.

CL, CT, CR, CB (defaults, 0)
Cropping amounts in addition to the auto-cropping. Left, top, right, bottom. Negative values decrease the auto-cropping.

Cthresh (default, 30)
How keenly the auto-cropping crops (range 0 - 255). Zero should disable auto-cropping and only the specified cropping (if any) will be applied.

Cstart (default, 0)
The first frame for the auto-cropping to check (in case there's a lot of black at the beginning and it over-crops).

Csample (default, 5)
How many frames autocrop checks.

OutDAR (default, determined by cropping)
Has no effect if OutHeight is specified and greater than zero. Must be float (ie 1.777778 or 16.0/9.0 etc). More than six decimal places is probably pointless. For specifying the desired Output Display Aspect Ratio, otherwise it's determined by the cropping (also if OutDAR=0.0). Use in preference to OutHeight if you want CropResize to determine the height automatically, but still want a predefined output aspect ratio. If you specify an Output DAR, as much picture will be cropped as required for minimum aspect error. The HMod setting takes precedence for determining the exact output height.

HMod (default, 4)
Has no effect if OutHeight is specified and greater than zero. The mod OutDAR should adjust the height to. Mod4 is the default.

InDAR (default, same as source resolution)
Required for correct resizing of anamorphic sources. The Input Display Aspect Ratio. Must be float (ie 1.777778 or 16.0/9.0 etc). If InDAR is not specified the source is assumed to have square pixels. More than six decimal places is probably pointless.

CPreview (default, 0)
When the cropping preview is enabled (CPreview=1) autocrop's suggested cropping information is overlayed on the existing clip. Resizing is disabled when the cropping preview is enabled so the video will always display at its original resolution. See this post for an example.

-------------------------------------------------------------------------------------------------------------

CropResize8() works the same way as CropResize(), but instead of always resizing with Avisynth's Spline36Resize, it resizes using the Resize8 script. It has two additional options:

Resizer (defaults, "Lanczos4" for luma upscaling, "Spline36" for luma downscaling, the same as the defaults for Resize8)
Specifies any resizer supported by the Resize8 script for luma scaling. ie Resizer="Spline36" or Resizer="Bilinear" etc (it corresponds to the "kernel" option for the Resize8 script).

ResizerC (defaults, "Lanczos" for chroma upscaling, "Spline36" for chroma downscaling, the same as the defaults for Resize8)
Specifies any resizer supported by the Resize8 script for chroma scaling. ie ResizerC="Spline36" or ResizerC="Bilinear" etc (it corresponds to the "kernel_c" option for the Resize8 script).

If "Resizer" is specified, but "ResizerC" is not, the scaling specified with the "Resizer" option is also applied to "ResizerC" (unlike the Resize8 script, where specifying a luma resizer has no effect on the resizer used for chroma scaling).
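The Hmod behaviour described above amounts to snapping a computed height to the nearest multiple of the chosen mod. A quick Python sketch of the idea (mod_round is a hypothetical name of my own, not part of the script):

```python
def mod_round(value, mod=4):
    """Round a computed dimension to the nearest multiple of `mod`,
    the way the script's Hmod option snaps the output height."""
    return int(round(value / mod)) * mod


# A 1280-wide 16:9 output gets its height snapped to 720 (already mod4):
print(mod_round(1280 / (16 / 9), 4))  # 720
```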
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------

The zip file contains the script and autocrop.dll
CropResize 2017-03-06
CropResize 2017-03-09 (Fixed the CPreview option)

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Last edited by hello_hello; 7th Aug 2019 at 10:22.

Avisynth rounds to whole numbers (integer?), unless you specify float, and I wasn't sure how to do that, so I'm pretty sure 4/3=1 unless you tell Avisynth otherwise, although I could be wrong. 4.0/3.0 would equal 1.33333333... or maybe float(4)/3, or something like that. I'm a little new to this so, as I said, I could be misunderstanding.

The default "mod" for autocrop.dll is mod4 for the width and mod2 for the height (wMultOf, hMultOf), but the default mod for the CropResize script is mod4 for the height. The width is the same as the source width or whatever width you specify, so if you specify a width of 718 it'll be mod2. The height_mod setting in the script determines the "mod" of the height...... mod2, mod4 etc.

If it's over-cropping a little, it'd probably be because it needs to in order to give you the specified output without distorting the picture. If you don't specify an output display aspect ratio, or a large mod, I don't think it's likely to over-crop much, if at all. At the other extreme though, if you have a 1.33333 picture and specify a 1.77777 Output DAR for the script, it'll massively over-crop the top and bottom of the picture to give you 1.7777777.

Last edited by hello_hello; 21st Feb 2017 at 13:58.
ndjamena,
It's adding it to the script so Input_DAR can be entered as both a decimal and a fraction that I haven't got my head around though. Maybe it's staring me in the face when I read your reply but I don't realise it yet.

Currently I have the Input_DAR set like this so it's the same as the source resolution by default, and it has to be specified as float as an option:

Input_DAR = default(Input_DAR, Float(c.width)/Float(c.height))

Then there's a line in the script to set the "picture aspect ratio" to the same as the source resolution if the user sets Input_DAR to zero; otherwise Picture_Aspect_Ratio uses the Input_DAR specified, which is once again the source resolution if the Input_DAR isn't specified at all:

Picture_Aspect_Ratio = (Input_DAR == 0) ? (Float(c.width)/Float(c.height)) : (Input_DAR)

That seems to work fine, but I'm still not certain how to make it work so Input_DAR can be entered as a fraction or even as a whole number, ie In_DAR=1.333334 or Input_Dar = 16/9 or Input_DAR=2 etc. I assume the magic will be in the boolean functions such as IsFloat or IsInt etc, but I haven't made it that far yet. It's literally my first function so I'm still taking baby steps. I'll look more closely at that in a little while to see if I can work it out, once I find "ceil" so I know what it does.

Unless you ask for a string instead of a number, what you're thinking is impossible.

Code:
CropResize(var Input_Dar)

Pos = IsString(Input_Dar) ? FindStr(Input_Dar, ":") : 0
Pos = Pos==0 ? Pos = IsString(Input_Dar) ? FindStr(Input_Dar, "/") : 0 : Pos
Num = Pos!=0 ? LeftStr(Input_Dar, Pos-1) : IsString(Input_Dar) ? Value(Input_Dar) : float(Input_Dar)
Den = Pos!=0 ? MidStr(Input_Dar, Pos+1) : 1.0
Dar = float(Num) / float(Den)

I don't know if that will work, I've just been writing based on what I think might work, but there's no "equation" variable and dividing integers returns an integer so...
Last edited by ndjamena; 21st Feb 2017 at 06:38.

It's just a basic outline of what might be needed; not being able to concentrate makes proofreading difficult. I think I've fixed all the obvious errors... I think it could do with more checking, and I'm not sure if Avisynth will spit out errors if Input_Dar isn't a string, but I've pretty much burnt myself out for now and don't really know what you need or will find acceptable.

Thanks anyway. You've given me more to think about.

After a little penny dropping, it appears a fraction can be float, which I think is fine for this purpose if it's entered that way. 4:3 can be entered as the Input_DAR as long as it's entered as 4.0/3.0, whereas 4/3 will produce the wrong aspect ratio, and it should be fairly obvious. I thought because I specified Input_DAR as float rather than integer it'd have to be entered that way, but 4/3 doesn't produce an error, so maybe what I need to do is add a line that checks Input_DAR is float and spits out an error if it's not, so 4.0/3.0 will work but 4/3 won't. I'll investigate.... I haven't done that before. Cheers.

avsmeter or virtualdub running command line, just to get that crop line and nothing else about those aspect ratios... please, not again. Do your thing here in this thread, do some calculations etc., get some lines from guys here that would do it, but I stay away from this because it is not needed. Just FYI, do not try to explain it yet again...
I just want to stress the approach: I do not create the final Avisynth script in the Avisynth programming language itself; rather I do it in batch scripts, which is much, much easier. Avisynth just runs autocrop.dll in mode 2, generating the crop line:

echo autocrop(2,%wMultOf%,%hMultOf%) >> autocrop.avs

and the batch script (or whatever programming language, if you program it in a different language) decides what to do afterwards, adding that cropping line into the real Avisynth script, or not adding it if it looks fishy, because in the real world autocrop.dll has to be checked up on, etc.

Quoting the autocrop mode info: as I said, I use mode 2 to just get that crop line info, nothing else. You juxtaposed modes 2 and 3 to 3 and 4:

0 - Crop - Crops the image
1 - Preview - Suggested cropping information is overlayed on the existing clip, including a crop command that you can use to replace AutoCrop with.
2 - Log - Logs the cropping parameters to the file "AutoCrop.log" in the current directory.
3 - Crop & Log - combination of modes 0 and 2

Last edited by _Al_; 21st Feb 2017 at 09:14.
This: Becomes this: and the closest I have to come to thinking about it now is this: CropResize(OutWidth=1280, OutDAR=16.0/9.0) or this: CropResize(1280, 16.0/9.0) MeGUI's preview Anyway, back to checking out what goodies ndjamena and jagabo have offered..... Last edited by hello_hello; 2nd Mar 2017 at 10:19. - jagabo, Ideally, I'm wanting to be able to enter a fraction as the Input_DAR and have it converted to float automatically so it's handled correctly. Currently 1.33333 works for the input DAR and you can enter it as a fraction like this 4.0/3.0, but if you enter it as 4/3 it's converted to integer so the output DAR ends up wrong. Input_DAR only has to be specified for anamorphic sources but that's when 4/3 or 16/9 might be incorrectly specified. So I understand the problem, I just need to be able to enter 4/3 as the Input_DAR and have it converted to 4.0/3.0 so it's handled correctly, or alternatively 4/3 would produce an error to prevent it's use. It's not the end of the world but it'd be nice if either were possible. When you specify an option, ie function CropResize(clip c, int "Out_Width", float "Input_DAR", float "Output_DAR") I assumed specifying "float" would force the value to be entered that way (ie 1.0) or it'd produce an error, but I guess that's not what happens. It's a fairly simple script but it's a learning experience for me. I haven't experimented with the Input_DAR setting any further yet because I found a small aspect error of a couple of pixels in the width while testing. Sometimes the simplest things.... it turned out autcrop's default cropping is mod4 for the width and mod2 for the height, and I misread the instructions and thought it was mod2 for both, so now and then there was a small aspect error I couldn't find until the penny dropped. That's fixed now so I'll replace the script in the opening post with the fixed version in a little while. Cheers. So use ndjamena workaround. 
Enter the DAR as a text variable, then use eval() to convert it to a float:
Code:
DAR="4/3"
FP_DAR=eval("1.0*"+DAR)
subtitle(string(FP_DAR))
Or enter your aspect ratios as integer numerator and denominator, then convert to floating point:
Code:
DAR_NUM=4
DAR_DENOM=3
DAR = float(DAR_NUM) / float(DAR_DENOM)
subtitle(string(DAR))
Last edited by jagabo; 21st Feb 2017 at 16:02.

I understand what you've written but can't work out how to make it work in the script at the moment. I'll try again later when I have more time and can tolerate another error message without punching the screen. Thanks.

Edit: So eventually I got it to work by adding this line to the script:
Float_Input_DAR = eval("1.0*" + Input_DAR)
and changing the Input_DAR type to string, so 4/3 can be entered, but now all aspect ratios have to be entered with quotes, ie
CropResize(Input_DAR="1.33333")
which seems to make the cure worse than the problem, unless there's a better way I'm missing?
Last edited by hello_hello; 22nd Feb 2017 at 06:35.

The use of the string type is to allow you to use integers within the string.
Code:
CropResize(Input_DAR="4/3")
Code:
Float_Input_DAR = eval("1.0*" + Input_DAR)
Code:
Float_Input_DAR = eval("1.0*4/3")

At least I know how that works now, so I'm slowly learning. Cheers.
I just can't decide whether having to use quotes in order to enter the aspect ratio would be less annoying than not having to use quotes but having an aspect ratio such as 4/3 produce an incorrect output, which should be obvious. If it can be done, which I imagine it can be, I might try adding something that returns an error if the aspect ratio entered is an integer. So 4.0/3.0 will work but 4/3 will produce an error, and then there won't be any quotes needed, because I think that might annoy me more. Thanks.
Last edited by hello_hello; 22nd Feb 2017 at 10:41. Reason: spelling

Code:
CropResize(Input_DAR=4/3)
Code:
CropResize(Input_DAR=1)
Code:
Input_DAR=1
a "var" variable is undefined and can be anything.
All he'd need to do is define "Output_DAR" as { var "Output_DAR" } instead of { float "Output_DAR" }, then he can test what it is using IsFloat, IsInt or IsString... he could even use IsClip if he just wants to copy the DAR from one clip to another... I'm not sure how he'd use IsBool though...

Avisynth can be torture. I thought this would work until I discovered IsFloat treats integer as float. I'm not sure what the logic behind that is.
Assert(Input_DAR.IsFloat(),"Error")
so an integer input isn't caught, and even when I define Input_DAR as integer this still works instead of producing an error:
CropResize(Input_DAR=1)
When I added this instead (just for testing), it apparently assumed everything was float because Input_DAR is defined as float:
Assert(Input_DAR.IsInt(),"Error")
so even integer input results in "Error", which isn't what I expected:
CropResize(Input_DAR=1)
Defining Input_DAR as var doesn't seem to change any of that, so I'll have to find another way to tackle it.... or give up.
Last edited by hello_hello; 22nd Feb 2017 at 15:27.

- IsFloat(2) = true # ints are considered to be floats by this function
Unless you want to discount 2:1 and 1:1 resolutions (both of which are real things), there's no actual way of checking if someone tried to pass 4/3 other than by requesting a string.

- Code:
function IsReallyFloat(val v)
{
    s = ""
    try {
        s = Hex(v) [* fails on float arguments! *]
    } catch(err_msg) {
        s = "true"
    }
    return (IsFloat(v)) \
        ? (s=="true") [* s<>"true" if Hex succeeds *] \
        : false [* eliminate clips, strings, booleans *]
}
Last edited by raffriff42; 22nd Feb 2017 at 18:18.

raffriff42,
Thanks, but I think I'm going to need some instructions when it comes to adding that to the script. I'll come back to it a bit later when I'm over the 3417 error messages I've seen so far trying to get it to work, but I'm not 100% sure how to add a function to a function. It's probably easy once you know. Cheers.
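On "how to add a function to a function": AviSynth user functions are global once defined, so raffriff42's helper only needs to be pasted above CropResize in the same .avs/.avsi file (or autoloaded from an .avsi in the plugins folder) and it can then be called from inside CropResize like any built-in. An untested sketch — the val argument type follows raffriff42's function signature, and CropResize's real argument list may differ:

```avisynth
# IsReallyFloat from the post above goes here, earlier in the same file,
# or in an .avsi file in the autoload plugins folder.

function CropResize(clip c, int "Out_Width", val "Input_DAR")
{
    # reject CropResize(Input_DAR=1) and CropResize(Input_DAR=4/3),
    # accept CropResize(Input_DAR=4.0/3.0)
    Assert(!Defined(Input_DAR) || IsReallyFloat(Input_DAR), \
        "CropResize: enter Input_DAR as a float, e.g. 4.0/3.0 rather than 4/3")
    # ...rest of the script unchanged...
    return c
}
```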
https://forum.videohelp.com/threads/382601-CropResize-Script?s=305bc4c582130a714a1ba2f39c14282a
julianToExcel - Converts serial Julian date into serial Excel date
Controller: CodeCogs

This function converts the serial Julian date that we use as standard in the CodeCogs library into the single value that Excel uses to represent dates. It is the exact opposite of excelToJulian. Excel only understands the Gregorian date system, but for added confusion Microsoft has chosen to present dates differently under the Windows and Apple OSX operating systems (though you can change this default behaviour). The calculation to get the serial Excel date from a Julian number therefore depends on the date system in use [the formula image was lost in extraction]:
- the Windows standard: starts on 1 January 1900, which is represented by 1.
- the Mac standard (Apple's OSX): starts on 1 January 1904, which is represented by 0.

Example 1

#include <stdio.h>
#include <codecogs/units/date/juliantoexcel.h>
#include <codecogs/units/date/date.h>

using namespace Units::Date;

int main()
{
  printf("\nIf you type %d into Excel on a Mac, you get valentines day - don't forget!",
    julianToExcel(date("14 feb 2005"), true)); // 27076
  return 0;
}

Parameters
- false: produces Windows Excel values, using 1/1/1900.
- true: produces Mac Excel values, using 1/1/1904.

Note
Unfortunately Microsoft made a mistake, so they think 29/2/1900 exists - but 1900 isn't a leap year! This only has an impact in the 1900 date system, the default for Windows Excel. We cannot generate 29/2/1900 from any Julian value (because it really doesn't exist); for earlier days we simply subtract one from the Excel values.

Authors
- Will Bateman (Sep 2004)

Source Code
Source code is available when you agree to a GP Licence or buy a Commercial Licence.
http://www.codecogs.com/library/units/date/juliantoexcel.php
- NAME
- DESCRIPTION
- CONFIGURATION
- STORES
- AUTHOR
- SEE ALSO

NAME
infobot - a plugin based irc bot, based on Kevin Lenzo's original irc bot

DESCRIPTION
The original Infobot was written by Kevin Lenzo. You can get more information from here. The code is horrible.

This is a new version of the Infobot based on Tom Insam's Bot::BasicBot::Pluggable infrastructure. It's much nicer. Well, I think so anyway. I've ported over most of the plugins I've found or provided the functionality in other ways. All in all, there should be no loss in functionality, maybe even a little increase. And it's much easier to patch and extend.

Infobot - now with 78% less crack.

CONFIGURATION
We look in the current directory for a file called infobot.conf, which is in .ini format. Variables are separated into the main namespace and then sub-namespaces for each plugin. For example the config file

channels = #somechannel #someotherchannel
server = irc.example.com
nick = mybot

[Foo]
somevar = a value

will join a couple of channels under the given nick. The plugin Bot::BasicBot::Pluggable::Module::Foo will have the variable somevar set to "a value".

Individual plugins will describe their config values, however the config values available for the main bot are:

- server: The server we're going to connect to. Defaults to "irc.perl.org".
- port: The port we're going to use. Defaults to "6667".
- channels: The channels we're going to connect to.
- ignore_list: The list of irc nicks to ignore public messages from (normally other bots). Useful for stopping bot cascades.
- flood: Set to '1' to disable the built-in flood protection of POE::Component::IRC.
- store: The name of the backend Store module to use. Defaults to Storable; Bot::BasicBot::Pluggable ships with that and a DBI backend.

STORES
When the infobot starts up it will look in the current directory for various .storable files. They are used as variable stores for the various plugins. Stores are passed anything in the Store namespace.
Perhaps the most important value is type, which describes which backend to use - the default is Storable, but Bot::BasicBot::Pluggable also ships with a DBI backend. See the various backends for what variables you need to pass. Here are some examples:

Storable
[ Store ]
type = Storable

Deep
[ Store ]
type = Deep
file = brane.deep

DBI
[ Store ]
type = DBI
dsn = dbi:SQLite:brane.db
user = myusername
password = mypassword
table = brane

(the table should be created automatically)

AUTHOR
Simon Wistow <simon@thegestalt.org>
based on the original code by Kevin Lenzo et al.
Distributed under the same terms as Perl itself.

SEE ALSO
Bot::BasicBot::Pluggable, Config::Tiny
https://metacpan.org/pod/release/SIMONW/Bot-Infobot-1.0/bin/infobot
Server program not able to send message to the client

I have created a client and server program in C++ and am trying to run them on BusyBox. Both machines (server and client) can see each other using ping, but when I run the programs the client connects to the server (it reports that it is connected), yet the client is not able to receive the message from the server.
http://quabr.com/51277183/server-program-not-able-to-send-message-to-the-client
In 2009, the Employee Plans Unit of the IRS initiated a ROBS (rollovers for business startups) Compliance Project to monitor general compliance among ROBS plans. (See our July 2010 newsletter for a detailed discussion of the ROBS strategy.) The IRS found many ROBS plan sponsors were under the mistaken assumption that they were not obligated to file an annual report/return (Form 5500-EZ), at least for the first year or few years after the plan was implemented.

Simplified reporting through Form 5500-EZ is available when an individual (alone or with his/her spouse) owns the entire business and the qualified retirement plan provides benefits to no one other than the owner (and/or the owner's spouse). Moreover, a special exemption from the filing requirements applies when the value of the plan assets at the end of the year does not exceed $250,000.

In a ROBS arrangement, the qualified retirement plan invests in employer stock, and while the shares held by the plan may be held as earmarked investments of the owner's account, it is the plan, not the individual, that is the owner of record. The entire business, then, is not owned by the individual, and the ROBS plan does not qualify for Form 5500-EZ or the filing exemption. Consequently, in virtually all cases, a ROBS plan is obligated to file an annual Form 5500.
https://www.lexology.com/library/detail.aspx?g=a1ca38b8-c130-48c2-82a2-39e407fc6d3c
MOUNT_PORTALFS(8)        MidnightBSD System Manager's Manual        MOUNT_PORTALFS(8)

NAME
     mount_portalfs — mount the portal daemon

SYNOPSIS
     mount_portalfs [−o options] /etc/portal.conf mount_point

DESCRIPTION
     The mount_portalfs utility attaches an instance of the portal daemon to the global file system namespace. The conventional mount point is /p. This command is normally executed by mount(8) at boot time.

     The options are as follows:

     −o      Options are specified with a −o flag followed by a comma separated string of options.

     The portal namespace is subdivided into sub-namespaces, each of which handles objects of a particular type. The following sub-namespaces are currently implemented: fs, pipe, tcp, and tcplisten.

     The fs namespace opens the named file, starting back at the root directory. This can be used to provide a controlled escape path from a chrooted environment.

     A hash ('#') character causes the remainder of a line to be ignored. Blank lines are ignored. The first field is a pathname prefix to match against the requested pathname. If a match is found, the second field tells the daemon what type of object to create. Subsequent fields are passed to the creation function.

     # @(#)portal.conf 5.1 (Berkeley) 7/13/92

MidnightBSD 0.3                March 11, 2005                MidnightBSD 0.3
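The entries of the sample configuration were lost in extraction; only its header comment survives above. Reconstructed from the field description (first field a pathname prefix, second field the object type), a minimal portal.conf might look something like the following. The exact entries are a guess modelled on the old Berkeley sample file, not taken from MidnightBSD:

```
# @(#)portal.conf	5.1 (Berkeley) 7/13/92
tcp/		tcp/
fs/		fs/
```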
http://www.midnightbsd.org/documentation/man/mount_portalfs.8.html
#include <FXDate.h>

List of all members:
- Names for the weekdays.
- [inline] Default constructor.
- Copy constructor.
- Initialize with year, month, and day.
- Initialize with julian day number.
- Set julian day number.
- Get julian day number.
- Set to year, month, and day.
- Get year, month, and day.
- Return day of the month.
- Return month.
- Return year.
- Return day of the week.
- Return day of year.
- Return days in this month.
- Return true if leap year.
- [static] Is the value a leap year.
- [inline, static] Get the name of the month.
- Get the abbreviated name of the month.
- Get the name of the day.
- Get the abbreviated name of the day.
- Return current local date.
- Return current UTC (Zulu) date.
- Assignment.
- Assignment operators.
- Increment and decrement.
- Equality tests.
- Inequality tests.
- [friend] Add days to date yielding another date.
- Subtract dates yielding days.
- Save to stream.
- Load from stream.
http://ftp.fox-toolkit.org/ref16/classFX_1_1FXDate.html
wikify your texts!
micro-framework for text wikification

goals
- avoid conflicts between text modification rules and be easy to extend and debug

author: anatoly techtonik techtonik@gmail.com
license: Public Domain

the problem and solution

this example is pasted from real-world replacement rules of Roundup issue tracker:

>>> import re
>>> rules = [
...     # link to debian bug tracker
...     (re.compile('debian:\#(?P<id>\d+)'),
...      '<a href="\g<id>">debian#\g<id></a>'),
...     # link to local issue
...     (re.compile('\#(?P<id>\d+)'),
...      '<a href="issue\g<id>">#\g<id></a>'),
... ]
>>> text = 'debian:#222'
>>> for search, replace in rules:
...     text = search.sub(replace, text)
...
>>> text
'<a href="">debian<a href="issue222">#222</a></a>'

expected output is:
'<a href="">debian#222</a>'

the solution:

>>> import wikify
>>> wrules = [wikify.RegexpRule(s, r) for s, r in rules]
>>> wikify.wikify("debian:#222", wrules)
'<a href="">debian#222</a>'

usage
- define rules that match and process parts of text
- text = wikify(text, rules)

rule is a function or an object with a run() method that takes text and returns either None (means not matched) or this text split into three parts [ not-matched, processed, the-rest ]. The processed part of text is returned modified by the rule.

example of a rule in action:

>>> import wikify
>>> wikify.rule_link_wikify('wikify your texts!')
('', '<a href="">wikify</a>', ' your texts!')

and its source code:

def rule_link_wikify(text):
    """ replace `wikify` text with a link to repository """
    if not 'wikify' in text:
        return None
    res = text.split('wikify', 1)
    site = ''
    url = '<a href="%s">wikify</a>' % site
    return (res[0], url, res[1])

using the rule with wikify to get processed text:

>>> from wikify import wikify, rule_link_wikify
>>> wikify('wikify your texts!', rule_link_wikify)
'<a href="">wikify</a> your texts!'
you probably want to change the url and searched string, so to avoid rewriting the rule from scratch, wikify provides some.

API

RegexpRule(search, replace=r'\0')
wikify rule class. search is a regexp; replace can be a string with backreferences (like \0, \1 etc.) or a callable that receives an re.MatchObject.

    r = RegexpRule('(\d+)', '[\\1]')
    print(wikify('wrap list 1 2 3 45', r))
    # wrap list [1] [2] [3] [45]

in comparison to standard re.sub, RegexpRule expands \0 in the replacement template to the whole matched string.

tracker_link_rule(url)
chained function rule (a function that returns a list of rules) that replaces references like #123, issue #123 with a link to url with the issue number appended.

    w = tracker_link_rule('')
    print(wikify('issue #123, Ᾱ', w))
    # <a href="">issue #123</a>, Ᾱ

wikify(text, rules)
the rules argument can be a list of rules. wikify ensures that text processed by one rule is not reachable by others. if you try to process some text without wikify, with just a series of replacement commands, there can be situations when a later replacement affects the text just pasted by a previous one. wikify was made to prevent this from happening.

using as a Sphinx extension
wikify is also a Sphinx extension.
the following lines, if added to conf.py, will link issue numbers on the changes page to the bugtracker for the sphinx project:

extensions = ['wikify']

# setup wikify extension to convert issue references to links
from wikify import RegexpRule, tracker_link_rule
wikify_html_rules = [
    # PR#123 or pull request #123
    RegexpRule('(PR|pull request\s)\s*#(\d+)',
               '<a href="\\2">\\0</a>'),
    # issue #123 or just #123
    tracker_link_rule('')
]
wikify_html_pages = ['changes']

operation (flat algorithm)
for each region:
- find region in processed text
- process text matched by region
- exclude processed text from further processing

note: (flat algorithm) doesn't process nested markup, such as: *`bold preformatted text`*

example
- replace all wiki:something with HTML links
- [x] wrap text into list with single item
- [x] split text into three parts using regexp wiki:\w+
- [x] copy 1st part (not-matched) into the resulting list
- [x] replace matched part with link, insert (processed) into the resulting list
- [ ] process (the-rest) until text list doesn't change
- [x] repeat the above for the rest of rules, skipping (processed) parts
- [x] reassemble text from the list

roadmap
- [ ] optimize - measure performance of using indexes instead of text chunks
- [x] write docs
- [x] upload to PyPI

history
- 1.5 - fixed major flaw in subst order for single rule
- 1.4 - support named group replacements in RegexpRule
- 1.3 - create_tracker_link_rule to tracker_link_rule
- 1.2 - convert create_regexp_rule to RegexpRule class
- 1.1 - allow rules to be classes (necessary for Sphinx)
- 1.0 - use wikify as Sphinx extension
- 0.9 - case insensitive match in tracker link rule
- 0.8 - python 3 compatibility
- 0.7 - fixed major flaw in text replacements mapping
- 0.5 - helper to build rules to link tracker references
- 0.6 - flatten nested rule lists
- 0.4 - accept single rule in wikify in addition to list
- 0.3 - allow callables in replacements for regexp rules
- 0.2 - helper to build regexp based rules
- 0.1 - proof of concept, production ready, no API sugar and optimizations
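the flat algorithm above is small enough to sketch independently. this is an illustrative reimplementation, not wikify's actual code; the rules follow the contract from the usage section (return None, or a (not-matched, processed, the-rest) triple), and the two toy rules are shaped like the debian example at the top:

```python
import re

def apply_rules(text, rules):
    # (chunk, done) pairs; a chunk produced by one rule is never
    # offered to later rules, which is the whole point of wikify
    pieces = [(text, False)]
    for rule in rules:
        out = []
        for chunk, done in pieces:
            if done:
                out.append((chunk, True))
                continue
            rest = chunk
            while rest:
                hit = rule(rest)   # None or (not-matched, processed, the-rest)
                if hit is None:
                    out.append((rest, False))
                    break
                before, processed, rest = hit
                if before:
                    out.append((before, False))
                out.append((processed, True))
        pieces = out
    return ''.join(chunk for chunk, _ in pieces)

def debian_rule(text):
    m = re.search(r'debian:#(\d+)', text)
    if m is None:
        return None
    return text[:m.start()], '<debian %s>' % m.group(1), text[m.end():]

def issue_rule(text):
    m = re.search(r'#(\d+)', text)
    if m is None:
        return None
    return text[:m.start()], '<issue %s>' % m.group(1), text[m.end():]

print(apply_rules('see #7 and debian:#222', [debian_rule, issue_rule]))
# see <issue 7> and <debian 222>  -- the debian match is not re-wrapped
```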
https://bitbucket.org/techtonik/wikify
Time to Move

So now that we have all of our seaweed, it's time to start moving through them. If you remember from before, we're not actually going to move our fish. Instead, we'll move the seaweed past our fish.

Let's create a new script named "MoveLeft". Here's our starting MoveLeft file.

using UnityEngine;
using System.Collections;

public class MoveLeft : MonoBehaviour
{
    // Use this for initialization
    void Start () {
    }

    // Update is called once per frame
    void Update () {
    }
}

For our "MoveLeft" script, we don't need anything in the Start method, so delete that now. In our Update method, we want the object the script is on to move to the left, so add the following line of code.

transform.Translate(Vector3.left * Time.deltaTime);

The transform here is the transform you see in the inspector with the position, rotation, and scale. Translate just moves the transform's position in the direction and magnitude of the Vector3 we pass in. What we're passing in is Vector3.left multiplied by the amount of time that has passed since the last frame. If you remember, our game will run somewhere between 30 and 90 frames per second. Because the frame rate is variable, we want to use the amount of time passed since the last update. This makes it so our seaweed moves the same speed regardless of how fast our device can run the game.

The final script should look like this:

using UnityEngine;
using System.Collections;

public class MoveLeft : MonoBehaviour
{
    // Update is called once per frame
    void Update () {
        transform.Translate(Vector3.left * Time.deltaTime);
    }
}

Time.deltaTime is the amount of time passed since the last call to Update(). This number is generally really small, around 0.0166. It's calculated in Unity3D by dividing 1 by your framerate (1/60 = 0.0166).

Now go back to the Editor and select the "seaweed parent" in our Project View. With the "seaweed parent" Prefab selected, look to the Inspector.
(the one in the Project view is the Prefab)

Add the "MoveLeft" script to our "seaweed parent". Now try playing again. If your seaweed isn't moving, go to your code editor and make sure you saved your changes to the "MoveLeft" script. If all went well, you should see your seaweed all moving to the left, and your fish appears to be swimming forward.

Why Prefabs are Great

Once you're done playing, I want you to select one of the seaweeds in your Hierarchy (it doesn't matter which one). Notice that the "MoveLeft" script was added to it. This is where the power of Prefabs comes into play. Any change you make to the Prefab will be automatically applied to placed instances of that Prefab. (this applies in all of your scenes when you have multiple)

If you take a closer look at the Transform, you'll notice the Position & Rotation are bold. Properties that are bold are not using the values from the Prefab. If you modify a property of a GameObject in the Hierarchy View, it will become bold and no longer take changes from its Prefab.

Speed things up

Right now, our seaweed is moving pretty slow. Let's modify the "MoveLeft" script to make the seaweed speed adjustable. Edit your "MoveLeft" script to match this:

using UnityEngine;
using System.Collections;

public class MoveLeft : MonoBehaviour
{
    [SerializeField]
    private float _speed = 5f;

    // Update is called once per frame
    void Update () {
        transform.Translate(Vector3.left * Time.deltaTime * _speed);
    }
}

Here you can see we've introduced a variable with the [SerializeField] attribute on it, just like we did previously with the fish. We then multiply our translation by that new "_speed" variable. We set the default for _speed to 5f, so it should move 5 times faster than before.

Try playing again. For me, 5 seems a bit too fast. Because we used [SerializeField], we can adjust this speed directly in the editor. Select the Prefab for our seaweed and adjust the speed until you find a # that feels right.
The final speed I used is 2.5.

If you accidentally did your editing on an instance of the seaweed from the Hierarchy instead of the Prefab in the Project view, no problem! You can actually apply your changes from a placed instance to its Prefab (and all other instances) by simply clicking the "Apply" button at the top of the Inspector.

Let's play some more

Give the game another try and see if you can get through all the seaweed.

They're Gone!

If you're any good at this game, you noticed all the seaweed disappeared to the left. You may be wondering if you should add more seaweed to make the level bigger, or if you should move the fish, or maybe the seaweed? If you weren't, start wondering now and see what ideas you come up with.

While there are many different ways you could accomplish making this game go on longer, the easiest and best is to just move the seaweed once it goes out of view. To do this, we need to go back to our "MoveLeft" script. Change your "MoveLeft" script to match this:

```csharp
using UnityEngine;
using System.Collections;

public class MoveLeft : MonoBehaviour
{
    [SerializeField]
    private float _speed = 5f;

    // Update is called once per frame
    void Update () {
        transform.Translate(Vector3.left * Time.deltaTime * _speed);

        if (transform.position.x < -15)
        {
            transform.position = new Vector3(15, 0, 0);
        }
    }
}
```

Let's focus on the new lines of code.

Lines 13-16

First, we look at the X value of the transform's position. If that X value is less than -15, we execute the code inside the brackets { }. The code in the brackets is setting the position of the seaweed's transform. The value we're setting it to is 15 for the X and 0 for Y & Z: [15, 0, 0]. So all we're really doing here is checking if a seaweed moved far enough to the left. (Negative X values are left of our fish, who's at 0.) If the seaweed is far enough over, at -15 or further, we move it back to the right, but far enough off the right side that our players won't see it on their screen.

Save the script and play again. I've split my Scene & Game views again here. I recommend you do the same to get a good idea of what's going on.

Next – Randomization & Ground – We'll add some randomization, some ground, and a couple props.
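The frame-rate-independence point above (moving `speed * Time.deltaTime` each frame) can be sanity-checked with a quick simulation outside Unity. This sketch uses Python rather than C# and made-up frame rates; it only demonstrates the arithmetic, not the engine:

```python
# Quick check of frame-rate-independent movement: moving `speed * dt`
# each frame covers the same distance per second at any frame rate.
def distance_after_one_second(fps, speed=2.5):
    dt = 1.0 / fps      # Time.deltaTime for a steady frame rate
    frames = fps        # frames rendered in one second
    return sum(speed * dt for _ in range(frames))

for fps in (30, 60, 90):
    print(fps, round(distance_after_one_second(fps), 6))
# Each frame rate yields 2.5 units per second.
```

Whatever the frame rate, the per-second distance stays the seaweed's speed, which is exactly why the tutorial multiplies by Time.deltaTime instead of moving a fixed amount per frame.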
https://unity3d.college/2015/11/11/unity3d-intro-building-flappy-bird-part-5/
OBSOLETE

Eugene Burmako

Type macros used to be available in previous versions of "Macro Paradise", but are not supported anymore in macro paradise 2.0. Visit the paradise 2.0 announcement for an explanation and suggested migration strategy.

Just as def macros make the compiler execute custom functions when it sees invocations of certain methods, type macros let one hook into the compiler when certain types are used. The snippet below shows definition and usage of the H2Db macro, which generates case classes representing tables in a database along with simple CRUD functionality.

```scala
type H2Db(url: String) = macro impl

object Db extends H2Db("coffees")

val brazilian = Db.Coffees.insert("Brazilian", 99, 0)
Db.Coffees.update(brazilian.copy(price = 10))
println(Db.Coffees.all)
```

The full source code of the H2Db type macro is provided at Github, and this guide covers its most important aspects. First the macro generates the statically typed database wrapper by connecting to a database at compile-time (tree generation is explained in the reflection overview). Then it uses the NEW c.introduceTopLevel API (Scaladoc) to insert the generated wrapper into the list of top-level definitions maintained by the compiler. Finally, the macro returns an Apply node, which represents a super constructor call to the generated class. NOTE that type macros are supposed to expand into c.Tree, unlike def macros, which expand into c.Expr[T]. That's because Exprs represent terms, while type macros expand into types.
```scala
type H2Db(url: String) = macro impl

def impl(c: Context)(url: c.Expr[String]): c.Tree = {
  val name = c.freshName(c.enclosingImpl.name).toTypeName
  val clazz = ClassDef(..., Template(..., generateCode()))
  c.introduceTopLevel(c.enclosingPackage.pid.toString, clazz)
  val classRef = Select(c.enclosingPackage.pid, name)
  Apply(classRef, List(Literal(Constant(c.eval(url)))))
}

object Db extends H2Db("coffees")
// equivalent to: object Db extends Db$1("coffees")
```

Instead of generating a synthetic class and expanding into a reference to it, a type macro can transform its host instead by returning a Template tree. Inside scalac both class and object definitions are internally represented as thin wrappers over Template trees, so by expanding into a template, a type macro has the possibility to rewrite the entire body of the affected class or object. You can see a full-fledged example of this technique at Github.

```scala
type H2Db(url: String) = macro impl

def impl(c: Context)(url: c.Expr[String]): c.Tree = {
  val Template(_, _, existingCode) = c.enclosingTemplate
  Template(..., existingCode ++ generateCode())
}

object Db extends H2Db("coffees")
// equivalent to: object Db {
//   <existing code>
//   <generated code>
// }
```

Type macros represent a hybrid between def macros and type members. On the one hand, they are defined like methods (e.g. they can have value arguments, type parameters with context bounds, etc). On the other hand, they belong to the namespace of types and, as such, they can only be used where types are expected (see an exhaustive example at Github), they can only override types or other type macros, etc. In Scala programs type macros can appear in one of five possible roles: type role, applied type role, parent type role, new role and annotation role. Depending on the role in which a macro is used, which can be inspected with the NEW c.macroRole API (Scaladoc), its list of allowed expansions is different.
To put it in a nutshell, expansion of a type macro replaces the usage of the type macro with the tree it returns. To find out whether an expansion makes sense, mentally replace some usage of a macro with its expansion and check whether the resulting program is correct. For example, a type macro used as TM(2)(3) in class C extends TM(2)(3) can expand into Apply(Ident(newTypeName("B")), List(Literal(Constant(2)))), because that would result in class C extends B(2). However the same expansion wouldn't make sense if TM(2)(3) was used as a type in def x: TM(2)(3) = ???, because def x: B(2) = ??? (given that B itself is not a type macro; if it is, it will be recursively expanded and the result of the expansion will determine the validity of the program).

With type macros you might increasingly find yourself in a zone where reify is not applicable, as explained at StackOverflow. In that case consider using quasiquotes, another experimental feature from macro paradise, as an alternative to manual tree construction.
http://docs.scala-lang.org/overviews/macros/typemacros.html
Blazor Date Picker Component Overview

The Blazor Date Picker component allows the user to choose a date from a visual Gregorian calendar or type it into a date input that can accept only dates. You can control the date format of the input, how the user navigates through the calendar, and which dates the user cannot select.

The Date Picker component is part of Telerik UI for Blazor, a professional grade UI library with 85+ native components for building modern and feature-rich applications. To try it out sign up for a free 30-day trial.

To use a Telerik Date Picker for Blazor, add the TelerikDatePicker tag.

Basic date picker with namespace and reference:

```razor
The selected date is: @datePickerValue.ToShortDateString() <br />

<TelerikDatePicker @bind-Value="datePickerValue" @ref="theDatePicker"></TelerikDatePicker>

@code {
    DateTime datePickerValue { get; set; } = DateTime.Now;

    Telerik.Blazor.Components.TelerikDatePicker<DateTime> theDatePicker;
    // the type of the component depends on the type of the value
    // in this case it is DateTime, but it could be DateTime?
}
```

Features

The Blazor Date Picker component exposes the following features:

- BottomView - Defines the bottommost view in the popup calendar to which the user can navigate. Defaults to CalendarView.Month.
- DisabledDates - Specifies a list of dates that can not be selected.
- Class - The custom CSS class rendered on the wrapping element.
- PopupClass - Additional CSS class to customize the appearance of the Date Picker's dropdown.
- Enabled - Specifies whether typing in the input is allowed.
- Format - Specifies the format of the DateInput of the DatePicker. Read more about supported data formats in the Telerik DateInput for Blazor UI article.
- Id - Renders as the id attribute on the <input /> element, so you can attach a <label for=""> to the input.
- Min - The earliest date that the user can select.
- Max - The latest date that the user can select.
- PopupHeight - Defines the height of the DatePicker's Popup. Defaults to auto.
- PopupWidth - Defines the width of the DatePicker's Popup. Defaults to auto.
- Value - The current value of the input. Can be used for binding.
- View - Specifies the current view that will be displayed in the popup calendar.
- Width - Defines the width of the DatePicker. Defaults to 280px.
- TabIndex - Maps to the tabindex attribute of the HTML element. You can use it to customize the order in which the inputs in your form focus with the Tab key.
- Validation - See the Input Validation article.

The date picker is, essentially, a date input and a calendar, and the properties it exposes are mapped to the corresponding properties of these two components. You can read more about their behavior in the respective components' documentation.
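As a hedged illustration of wiring several of the parameters above together in one markup snippet — the parameter names are the ones listed above, but the field names and values here are invented for the example:

```razor
<TelerikDatePicker @bind-Value="@selectedDate"
                   Min="@minDate" Max="@maxDate"
                   Format="dd MMM yyyy"
                   Width="240px" />

@code {
    DateTime selectedDate { get; set; } = DateTime.Today;
    DateTime minDate = new DateTime(2020, 1, 1);
    DateTime maxDate = DateTime.Today.AddYears(1);
}
```

With Min and Max set like this, dates outside the range cannot be selected in the calendar or typed into the input, and Format controls how the bound value is rendered in the date input.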
https://docs.telerik.com/blazor-ui/components/datepicker/overview
I have been really itching to get back out to speak in front of the developer community. One of the areas I've been working in for a while is building SharePoint Apps. Office and SharePoint Apps let you customize the Office and SharePoint experiences with your own customizations. Apps are web-based, and you use HTML and JavaScript to customize Office (Outlook, Word, Excel, PowerPoint) and SharePoint itself. For more info on apps, see the MSDN Library: Apps for Office and SharePoint

We've also been working on another programming model that I'm really jazzed about. It allows you to build your own custom apps and consume data from Office 365 (Sites, Mail, Calendar, Files, Users). They are simple REST OData APIs for accessing SharePoint, Exchange and Azure Active Directory from a variety of platforms and devices. You can also use these APIs to enhance custom business apps that you may already be using in your organization.

To make it even easier, we've built client libraries for .NET, Cordova and Android. The .NET libraries are portable, so you can use them in WinForms, WPF, ASP.NET, Windows Store, Windows Phone 8.1, and Xamarin Android/iOS apps. There are also JavaScript libraries for Cordova and an Android (Java) SDK available.

If you have Visual Studio, this gets even easier by installing the Office 365 API Tools for Visual Studio extension. The tool streamlines the app registration and permissions setup in Azure as well as adds the relevant client libraries to your solution via NuGet for you.

Before you begin, you need to set up your development environment. Note that the tools and APIs are currently in preview but they are in great shape to get started exploring the possibilities. Read about the client libraries here and the Office 365 APIs in the MSDN Library. More documentation is on the way!

Let's see how it works.
Once you install the tool, right-click on your project in the Solution Explorer and select Add – Connected Service… This will launch the Services Manager where you log into your Office 365 developer site and select the permissions you require for each of the services you want to use. Once you click OK, the client libraries are added to your project as well as sample code files to get you started. The client libraries help you perform the auth handshake and provide strong types for you to work with the services easier.

The important bits:

```csharp
const string MyFilesCapability = "MyFiles";
static DiscoveryContext _discoveryContext;

public static async Task<IEnumerable<IFileSystemItem>> GetMyFiles()
{
    var client = await EnsureClientCreated();

    // Obtain files in folder "Shared with Everyone"
    var filesResults = await client.Files["Shared with Everyone"]
        .ToFolder().Children.ExecuteAsync();

    var files = filesResults.CurrentPage.OrderBy(e => e.Name);

    return files;
}

public static async Task<SharePointClient> EnsureClientCreated()
{
    if (_discoveryContext == null)
    {
        _discoveryContext = await DiscoveryContext.CreateAsync();
    }

    var dcr = await _discoveryContext.DiscoverCapabilityAsync(MyFilesCapability);

    var ServiceResourceId = dcr.ServiceResourceId;
    var ServiceEndpointUri = dcr.ServiceEndpointUri;

    // Create the MyFiles client proxy:
    return new SharePointClient(ServiceEndpointUri, async () =>
    {
        return (await _discoveryContext.AuthenticationContext.AcquireTokenSilentAsync(
            ServiceResourceId,
            _discoveryContext.AppIdentity.ClientId,
            new Microsoft.IdentityModel.Clients.ActiveDirectory.UserIdentifier(
                dcr.UserId,
                Microsoft.IdentityModel.Clients.ActiveDirectory.UserIdentifierType.UniqueId)))
            .AccessToken;
    });
}
```

This code is using the Discovery Service to retrieve the REST endpoints (DiscoverCapabilityAsync). When we create the client proxy, the user is presented with a login to Office 365 and then they are asked to grant permission to our app.
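Independent of the .NET client library, the files call above ultimately boils down to a plain authenticated REST request. The sketch below composes the same URL shape in Python; the host and the ACCESS_TOKEN value are placeholders (a real token comes from the OAuth sign-in handshake the library performs for you):

```python
# Sketch of the raw REST request behind the client-library call.
# Host and token are illustrative placeholders, not real values.
from urllib.parse import quote

base = "https://contoso-my.sharepoint.com/personal/user_contoso_com/_api"
folder = "Shared with Everyone"

# quote() percent-encodes the folder name (spaces become %20)
url = f"{base}/Files('{quote(folder)}')/Children"
headers = {
    "Authorization": "Bearer ACCESS_TOKEN",   # placeholder token
    "Accept": "application/json;odata=verbose",
}

print(url)
```

Any HTTP stack can then issue a GET against that URL with those headers, which is exactly the request shape shown in this post.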
Once they authorize, we can access their Office 365 data. If we look at the request, this call: var filesResults = await client.Files["Shared with Everyone"]. ToFolder().Children.ExecuteAsync(); translates to (in my case): GET /personal/beth_bethmassi_onmicrosoft_com/_api/Files('Shared%20with%20Everyone')/Children The response will be a feed of all the file (and any sub-folder) information stored in the requested folder. Play around and discover the capabilities. There’s a lot you can do. I encourage you to take a look at the samples available on GitHub: - - - Also check out these video interviews I did this summer to learn more: - Integrating Xamarin Android Apps with Office 365 APIs - Office 365 API Tools for Visual Studio: Users and Files Enjoy! Join the conversationAdd Comment Nice article Beth. Do you know where I can find the current list of supported project types? Cannot wait for O365 API to be supported in LSCBA Project template. Have a great day! Hi Josh, The currently supported project types are listed on the extension description page: aka.ms/office365apitoolspreview I can't wait for support for CBA's as well, but I'm told there is some Auth work to do. Help the teams prioritize by voicing this (I know you will :-)) It would be great to make a uservoice suggestion on the Office Dev space here: officespdev.uservoice.com And when are we getting a new Lightswitch version? I know we released an update to the NuGet package for msls recently. What do you mean by new? Visual Studio 14? Well the last one was in March and I know since then you were cleaning up the uservoice board so just wondering when the next release is? It used to be every 3 or 4 months. BTW my personal need to recommend this for more projects at work is a custom header ability built in and better menu system for navigation. Actually we released an update to the VS tooling more recently than that. It was VS Update 3 when we did the update work on the publishing wizard IIRC. 
March was a major update to catch up to the SharePoint app platform. Since then, the updates have been smaller because the changes to the apps platform have been smaller. Office 365 APIs have been the focus lately. I know there's a lot of tools to work on for the Office/SharePoint dev platform and the team is trying to keep up on all of them. We're building a lot of stuff so you'll see our updates jump around the multiple tooling experiences. Ok fine I guess we will wait to see what you release. I would like to tell you though I have heard several Product Managers complain that they can't brand the app with a nice custom header like you might get with any other engine. I realize mobile is the focus however so many other technologies have come up with responsive headers to accommodate desktop mode and mobile at the same time. Nice to see Office 365 taking off and unfortunately Lightswitch dying. Why cant we have Lightswitch Office API, publish to Sharepoint sites or even Windows store? It is so naturally suited for HTML/JScript . I am also now concerned I started with the wrong technology and have just wasted 18 months of extensive dev time. What makes it worse is being kept in the dark… @pp8357hot – Not sure what you mean, you can SharePoint enable a LightSwitch app and publish to the SharePoint store/corporate catalog. LS sits on the apps side of the Office 365 offerings. Office is providing developers choices for building on Office 365. You can go the apps route or you can access services data directly. so Beth, you're gone from LS team or? @Kivito – I usually avoid talking about internal org structures at Microsoft, since they change. But I've always been on the broader Visual Studio team. I've always blogged about my passions around business app development. I do admit that my duties have turned more internally (and I have a family now) so I don't get out and blog/speak as much as I'd like these days. 
Well that's great, I'm happy for you and wish you more time for blogging and family! I just hope that guys will not become lazy without a strong female hand in the Lightswitch kitchen.. ;) I also wish you all the best, Beth, and I was also impressed by your leadership and energy with LightSwitch. But look at the LightSwitch "team" blog sometime, and you'll be quite embarrassed for your company. What sane developer will ever hang their hat on Microsoft's "RAD" after the Silverlight and now LightSwitch debacle? Personally, I think LightSwitch would be by far the best LOB RAD tool out there, if we could be convinced it has a future. You invested a lot of energy in it. Could you possibly get to MS to give any information on all on it? Or make it open source? Anything?… Agree completley. ..beth, Microsoft produced a fantastic product in LS, they and you pushed it, sold it and evangelised thousands of loyal developers. Then without a sound, a comment or a simple post it looks like they killed it. I will probably continue to use LS for add long as possible, partly because I have locked myself into it with clients, partly because there's no clear alternative but mostly despite Microsoft astounding disregard for their customers and their appalling support LS is still a fantastic product that surpasses anything else out there. BETH, if you can provide clarity on the future of a product that you worked so hard to push us towards it would be really appreciated by everyone who bought into what you sold us. I understand you guys are frustrated in the decline of community activity around LightSwitch. You're preaching to the choir. But I have picked up other responsibilities so unfortunately it's been impossible for me to keep up as well. I still use LightSwitch. I still recommend it for apps that fit. I don't think there is any better alternative to rapidly developing mobile SharePoint apps that connect to multiple data sources and don't need a fancy UI. 
It's still supported and part of VS2015. And now it's FREE with Community edition. If you guys know of something better that fits this space, I want to know about it too! I'm sorry if you think I tried to sell you something because I'm not a sales person, I'm an engineer and a teacher. I always have had a passion for teaching people how to develop applications. I try my best to show you how to use something to it's maximum potential. That's what I have always done here on this blog. There a many resources here, not just about LightSwitch. (Believe it or not, the most read article here BY FAR is about installing SQL Server.) That's because developing software is my passion. I have been working in the open source community in the last year and I have learned a TON. I have learned what can and can't be realistically open sourced as well. I hope to start blogging here and there again soon about some of my new adventures. I hope you guys can support me on my journey and not hate me because my life moved on. I've always have had the best intentions here. -Beth Beth I am sure no one faults you for anything, or has bad feelings about your work with lightswitch. You're an employee of Microsoft – I think everyone knows that it means you don't get to make all of your own decisions. EVERYONE did and still does appreciate your work, with us, on lightswitch. Someone mentioned that you 'sold' lightswitch – I am sure that they simply meant you promoted it, out of every best intention. It is devastating that Microsoft seems to have left lightswitch to drift. Microsoft has made some apparently good changes of late, but until they are capable of an honest relationship with their customers, which means telling it like it is instead of going silent, and actually engaging with the community, it rings hollow. The company seems multi-faced, unreliable. 
My software career has been a great success in every way, it's been fun, it's been almost all Microsoft…and I have no confidence in Microsoft at this point. I and probably the rest of us would migrate to any other RAD platform that has legs in a flash as soon as it is manifest, or each of us discovers the framework that fits. It was also a bit painful to read your post above in c# only…salt on the wound <g>. Error: DiscoveryContext could not be found. How to fix this issue @rusticcloud – I still love VB. But honestly, C# really isn't that painful anymore. It's a lot easier than JavaScript ;-) And you will be happy to note that C#6 borrows static usings & exception filtering from VB :-) blogs.msdn.com/…/new-features-in-c-6.aspx @visha – this post was about the pre-release version of O365 client libraries. You should check out Chak's post on what's updated and the new samples: chakkaradeep.com/…/update-to-office-365-api-tools-and-client-libraries This is really beyond catastrophe. asp.net 5 details are out and no support for vb.net, and no support for webforms. Lightswitch apparently dead. There is no current Microsoft product that I like any more other than sql server. There are zillions of devs like myself, folks that are less geared for coding than you Beth, who nevertheless are fully capable of wrangling a product like Access, vb 6, lightswitch, or webforms into valuable, productive, and durable business solutions. Microsoft has nothing to offer us. A product that does suit and has been curated perfectly is Orcale's APEX product. Oracle has evolved that product, which started out as one guy's side project, into a full blown application framework. They have never missed a step – every iteration is in the direction that the users want. The only issue I have with it is that I am not fond of Oracle as a database. 
God knows what demon has infested Microsoft such that of all companies they seem to excel only at ignoring huge sections of their user base, and breaking every trace of facilitating LOB productivity tools. We are using visual studio 2012, need to integrate the Office 365 files. how to achieve this using the jquery libraries. it is hard to find out the solution. please help with the details.
https://blogs.msdn.microsoft.com/bethmassi/2014/10/14/getting-started-with-the-office-365-apis/
Hi All,

I am new to this NAnt environment. Currently I am developing a NAnt script to build a Windows Forms application. After building with the NAnt script, I am getting the error:

```
error CS0246: The type or namespace name 'Form1' could not be found (are you missing a using directive or an assembly reference?)

BUILD FAILED

C:\Program Files\NAnt\examples\SampleWindowsApplication\Sample.Build(24,10):
External Program Failed: C:\Windows\Microsoft.NET\Framework\v3.5\csc.exe (return code was 1)
```

Can anyone please advise how to resolve this issue? Thanks in advance.

_______________________________________________
NAnt-users mailing list
NAnt-users@lists.sourceforge.net
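For context, CS0246 on a type that lives in another source file (here Form1) usually means that file was never passed to csc — i.e. the `<sources>` fileset in the build file doesn't include it. A hedged sketch of what the `<csc>` target typically looks like; the target name, output path, and reference list here are illustrative, not taken from the poster's build file:

```xml
<target name="build">
  <csc target="winexe" output="bin/Sample.exe" debug="true">
    <sources>
      <!-- Include every .cs file, not just the one containing Main;
           otherwise types like Form1 are unknown to the compiler. -->
      <include name="**/*.cs" />
    </sources>
    <references>
      <include name="System.dll" />
      <include name="System.Windows.Forms.dll" />
      <include name="System.Drawing.dll" />
    </references>
  </csc>
</target>
```

If the designer-generated partial class file (e.g. Form1.Designer.cs) sits outside the fileset's base directory, widening the `<include>` pattern or adding a second `<include>` line for it is the usual fix.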
https://www.mail-archive.com/nant-users@lists.sourceforge.net/msg12325.html
altered the extension wrapping script to generate wrappers for the GL extensions in the raw hierarchy, which are then imported into the modules in the root hierarchy. Any extension customisation code should still be present and working (and seems to be in my limited tests).

Have fun all,
Mike

--
________________________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
The other option is to make the OpenGL package a setuptools namespace package, though I'm not sure that would work particularly well with the modules right at the OpenGL.* level. > It > lowers the transition bar for both the existing user base and the Py > port developer since a) the C/C++ users don't have to learn a new API on > top the required Python newness, b) the documentation is essentially > already in place*, c) the autogen nature of the raw port makes it easier > to maintain absent the full concentration of the primary Py developer, > and finally d) as consequence of a,b,&c the bug reports are likely to be > more valid/correct. The Pygame ctypes port, with all its good work, is > illustrative of the problems caused by skipping the raw API (e.g. > threatened forks etc.) > Hadn't heard anything about that. I actually see the "raw" API as something of a specialist's feature. It doesn't really make a particularly useful API to have to code every little C idiom in Python, so you have to have a special reason for doing it, such as needing to get around a Python abstraction layer problem. Anyway, if it makes it easier for a C coder or the Pyglet guys, yay :) . Even if it doesn't, it makes for a more consistent layout for the package and gives easy access to the raw API for advanced coders, so we'll likely go this way. > * this BTW makes it straightforward to auto include the documentation as > Py doc strings in the raw module. > There's already basic documentation in the raw module. The thing to keep in mind is that PyOpenGL's online documentation is based on the XML docbook source for OpenGL, GLUT and GLE, and the resulting generated files are *big* (about 8.9MB) and not particularly pydoc-friendly, not really something you want to include in the source-code. Have
https://sourceforge.net/p/pyopengl/mailman/pyopengl-devel/?viewmonth=200611&viewday=14&style=flat
I am sorry if I have posted this in the wrong section, but I'm new here and this seemed the most logical place to me. I am very new to programming, and to the Eclipse environment. I have had this problem before, and am having it now with Eclipse: it will not accept my curly braces, and sometimes my semicolons. It has an error saying it expects } to complete the method body, or "{ expected after this token". I don't have the specifics on the semicolons, but they get flagged too. As far as I can tell I have the punctuation in the correct place, but it will not run. It will also pop up an error periodically saying the default settings have changed and ask if I would like to retrieve the default settings or ignore. I don't know that I've made any changes, so I restore the defaults, and then my auto-corrections seem to work better. Do you think I need to re-download the program? I am running version 3.6.0.

Here is my code if you want to double check me... maybe I am crazy, or really not doing as well as I thought I was. I've indicated the areas where errors occur with // to the right side of the lines.

```java
public class Exercise41 extends CashRegister {

    public static void main(String[] args) { // get error here for {

        private static final double QUARTER_VALUE = 0.25; // declare constant
        private static final double DIME_VALUE = 0.10; // declare constant
        private static final double NICKEL_VALUE = 0.05; // declares constant
        private static final double PENNY_VALUE = 0.001; // declares constant
    }

    public class CashRegister { // get error here for }

        this.CashRegister = new CashRegister();
        register.recordPurchase(20.37);
        register.enterDollars(20);
        register.enterQuarters(2);
        System.out.println("Change: " + register.GiveChange());
        System.out.prinln("Expected: 0.13");
    }
}
```
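For reference, one way the posted snippet can be rearranged so it compiles: the `private static final` fields move out of main to class level (field modifiers are illegal inside a method body, which is what triggers the brace errors), and the stray nested class declaration goes away. The original's CashRegister base class is dropped here so the sketch is self-contained, and the change calculation is written out explicitly — so this is an illustrative reconstruction, not the textbook's intended class design:

```java
public class Exercise41 {
    // Field declarations belong at class level, never inside a method body.
    private static final double QUARTER_VALUE = 0.25;
    private static final double DOLLAR_VALUE = 1.00;

    // Purchase of 20.37, paid with 20 dollars and 2 quarters -> 0.13 change.
    static double giveChange(double purchase, int dollars, int quarters) {
        return dollars * DOLLAR_VALUE + quarters * QUARTER_VALUE - purchase;
    }

    public static void main(String[] args) {
        System.out.printf("Change: %.2f%n", giveChange(20.37, 20, 2));
        System.out.println("Expected: 0.13");
    }
}
```

With the declarations at class level, the braces pair up exactly as the compiler expects, and the "} expected" messages disappear.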
http://www.javaprogrammingforums.com/%20java-ides/5487-help-w-eclipse-needed-errors-w-curly-braces-%3B-printingthethread.html
Question: Is there any way for all my PHP and/or HTML file output to be "filtered" before being displayed in the browser? I figured that I could pass it through a global function before it is displayed, but I'm stuck on the implementation. Please help. If there is a better way to achieve the same result, I'd be happy to know. Thanks.

Solution:1

Check out ob_start, which lets you pass a callback handler for post-processing your script output. For example, PHP includes a built-in callback ob_gzhandler for use in compressing the output:

```php
<?php ob_start("ob_gzhandler"); ?>
<html>
<body>
<p>This should be a compressed page.</p>
</body>
</html>
```

Here's a fuller example illustrating how you might tidy your HTML with the tidy extension:

```php
function tidyhtml($input) {
    $config = array(
        'indent' => true,
        'output-xhtml' => true,
        'wrap' => 200);

    $tidy = new tidy;
    $tidy->parseString($input, $config, 'utf8');
    $tidy->cleanRepair();

    // Output
    return $tidy;
}

ob_start("tidyhtml");
// now output your ugly HTML
```

If you wanted to ensure all your PHP scripts used the same filter without including it directly, check out the auto_prepend_file configuration directive.

Solution:2

You can use output buffering and specify a callback when you call ob_start():

```php
<?php
function filterOutput($str) {
    return strtoupper($str);
}

ob_start('filterOutput');
?>
<html>
some stuff
<?php echo 'hello'; ?>
</html>
```

Solution:3

You can use PHP's output buffering functions to do that. You can provide a callback method that is called when the buffer is flushed, like:

```php
<?php
function callback($buffer)
{
    // replace all the apples with oranges
    return (str_replace("apples", "oranges", $buffer));
}

ob_start("callback");
?>
<html>
<body>
<p>It's like comparing apples to oranges.</p>
</body>
</html>
<?php
ob_end_flush();
?>
```

In that case output is buffered instead of sent from the script, and just before the flush your callback method is called.

Solution:4

Have a look at using Smarty.
It's a templating system for PHP, which is good practice to use, and into which you can plug global output filters.

Solution:5

edit: Paul's reply is better. So it would be ob_start("my_filter_function");

My original reply was: That can be achieved with output buffering. For example:

ob_start();

// Generate all output
echo "all my output comes here.";

// Done, filtering now
$contents = ob_get_contents();
ob_end_clean();

echo my_filter_function($contents);
http://www.toontricks.com/2018/10/tutorial-making-all-php-file-output.html
CC-MAIN-2019-26
refinedweb
393
55.03
Is it possible to call the run() method of a thread directly in Java? What is the problem with the Java thread program below?

class MyThreadClass extends Thread{
public void run(){
for(int i=0; i<10; i++){
System.out.println(i);
}
}
}
public class ThreadDemo3 {
public static void main(String[] args) {
MyThreadClass mtc = new MyThreadClass();
mtc.run();
}
}

a) It gives a compile-time error. We are not supposed to call the run() method of a thread directly; we have to call only start().

b) There is no error; it prints 0 to 9. You can call either start() or run(). Both work fine and both are correct.

c) Though there is no error and it prints 0 to 9 properly, this is not the correct way, because when you call run() directly it will not create a separate thread; run() executes in the main thread only. So it is as if you have only one thread in your program.

d) It gives a runtime exception, as we are not supposed to call the run() method of a thread directly. It should always be called from the framework layer, so this will be detected at run time and a runtime exception is thrown.

Answer: (c). Calling run() directly runs the loop on the calling thread, so the program is not multithreaded.
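The run()-versus-start() difference described above is easy to observe by recording which thread actually executes the body. A minimal, self-contained sketch (the class and method names here are my own, not from the quiz):

```java
// Contrast t.run() (runs on the calling thread) with t.start() (spawns a new thread).
public class RunVsStart {

    // Calls run() directly: the body executes on whatever thread calls this method.
    static String nameWhenRunDirectly() {
        final String[] seen = new String[1];
        Thread t = new Thread(() -> seen[0] = Thread.currentThread().getName());
        t.run();                 // plain method call - no new thread is created
        return seen[0];
    }

    // Calls start(): the JVM schedules a separate thread to execute run().
    static String nameWhenStarted() {
        final String[] seen = new String[1];
        Thread t = new Thread(() -> seen[0] = Thread.currentThread().getName());
        t.start();               // a new thread begins executing run()
        try {
            t.join();            // wait for it to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen[0];
    }

    public static void main(String[] args) {
        System.out.println("run():   " + nameWhenRunDirectly()); // the calling thread, e.g. "main"
        System.out.println("start(): " + nameWhenStarted());     // a new thread, e.g. "Thread-1"
    }
}
```

Running it shows run() reporting the caller's own thread name, while start() reports a freshly created thread.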
http://skillgun.com/question/3080/java/threads/is-it-possible-to-call-run-method-of-a-thread-directly-in-java-what-is-the-problem-with-below-java-thread-program-class-mythreadclass-extends-thread-public-void-run-forint-i0-i10-i-systemoutprintlni-public-class-threaddemo3-public-static-void-mainstring
CC-MAIN-2016-50
refinedweb
209
84.88
09 August 2012 13:15 [Source: ICIS news] LONDON (ICIS)--The European August styrene contract reference price (CRP) has been settled at €1,375/tonne ($1,698/tonne), up by €135/tonne from the previous month, one consumer said on Thursday. The increase of €135/tonne was in line with the August barge contract settlement agreed earlier in the month. Despite a slowdown in activity, styrene prices this month have been supported by higher feedstock costs as well as some tightness owing to restricted imports coming into Europe. The contract was agreed on a free carrier (FCA) basis.
http://www.icis.com/Articles/2012/08/09/9585641/europe-august-styrene-crp-settles-at-1375tonne-up.html
CC-MAIN-2014-49
refinedweb
101
56.59
> From: Peter Donald [mailto:donaldp@apache.org]
>
> At 06:16 AM 5/25/01 +0100, Jose Alberto Fernandez wrote:
> >> Ick. I don't think I like using namespace in this way. I can handle
> >> namespace for "static" structural aspects (ie indicating task
> >> library or
> >> aspect attribute/element) but it can get confusing to use
> >> namespace to also
> >> indicate other projects.
> >>
> >
> >Well these are not really XML name-spaces, since they are on
> the attribute
> >values. Not the attribute names. In any case, if we are
> going to include
> >things in one another we will need to have a name
> dereferencing operator. In
> >this case is ":" but it could have been anything else ( "^"
> "->" "!" ).
>
> Excellent - I like.
>
> Static namespaces (ie task/aspect) allocation uses ':' for resolution
> Dynamic instance namespaces (ie other projects) allocation
> uses '->' for
> resolution (I prefer this over '.' as '.' is commonly used in names of
> properties).
>
> So we would now have something like
>
> <target name="foo" depends="otherPrj->before-foo,
> otherPrj->before-foo2">
> <echo message="Here is the value of public property blah.present"/>
> <echo message="in project 'Other': ${otherPrj->blah.present}"/>
> </target>
>
> Thoughts?
>

Oops, it just came to mind: would we need to escape "->" as "-&gt;"? :P

If that is the case maybe we need to pick some other, less XML-sensitive one. :(

".^" reminds me of my Pascal days. 8)

Jose Alberto

> Cheers,
>
> Pete
>
> *-----------------------------------------------------*
> | "Faced with the choice between changing one's mind, |
> | and proving that there is no need to do so - almost |
> | everyone gets busy on the proof." |
> | - John Kenneth Galbraith |
> *-----------------------------------------------------*
>
http://mail-archives.apache.org/mod_mbox/ant-dev/200105.mbox/%3C009701c0e518$0eefb460$697b883e@viquity.com%3E
CC-MAIN-2014-10
refinedweb
252
67.65
NAME

AE - simpler/faster/newer/cooler AnyEvent API

SYNOPSIS

use AnyEvent; # not AE

# file handle or descriptor readable
my $w = AE::io $fh, 0, sub { ... };

# one-shot or repeating timers
my $w = AE::timer $seconds, 0, sub { ... }; # once
my $w = AE::timer $seconds, $interval, sub { ... }; # repeated

print AE::now; # prints current event loop time
print AE::time; # think Time::HiRes::time or simply CORE::time.

# POSIX signal
my $w = AE::signal TERM => sub { ... };

# child process exit
my $w = AE::child $pid, sub { my ($pid, $status) = @_; ... };

# called when event loop idle (if applicable)
my $w = AE::idle sub { ... };

DESCRIPTION

This module documents the new simpler AnyEvent API. The rationale for the new API is that experience with EV shows that this API actually "works", despite its lack of extensibility, leading to a shorter, easier and faster API. The main differences from AnyEvent are that function calls are used instead of method calls, and that no named arguments are used. This makes calls to watcher creation functions really short, which can make a program more readable despite the lack of named parameters. Function calls also allow more static type checking than method calls, so many mistakes are caught at compile-time with this API. Also, some backends (Perl and EV) are so fast that the method call overhead is very noticeable (with EV it increases the execution time five- to six-fold, with Perl the method call overhead is about a factor of two). Note that the "AE" API is an alternative to, not the future version of, the AnyEvent API. Both APIs can be used interchangeably and there are no plans to "switch", so if in doubt, feel free to use the AnyEvent API in new code. As the AE API is complementary, not everything in the AnyEvent API is available, and you still need to use AnyEvent for the finer stuff. Also, you should not "use AE" directly, "use AnyEvent" will provide the AE namespace.
At the moment, these functions will become slower than their method-call counterparts when using AnyEvent::Strict or AnyEvent::Debug::wrap.

FUNCTIONS

- $w = AE::io $fh_or_fd, $watch_write, $cb - Creates an I/O watcher that listens for read events ($watch_write is false) or write events ($watch_write is true) on the file handle or file descriptor $fh_or_fd. The callback $cb is invoked as soon and as long as I/O of the type specified ($watch_write) can be done on the file handle/descriptor.

Example: wait until STDIN becomes readable.

$stdin_ready = AE::io *STDIN, 0, sub { scalar <STDIN> };

Example: wait until STDOUT becomes writable and print something.

$stdout_ready = AE::io *STDOUT, 1, sub { print STDOUT "woaw\n" };

- $w = AE::timer $after, $interval, $cb - Creates a timer watcher that invokes the callback $cb after at least $after seconds have passed ($after can be negative or 0). If $interval is 0, then the callback will only be invoked once, otherwise it must be a positive number of seconds that specifies the interval between successive invocations of the callback.

Example: print "too late" after at least one second has passed.

$timer_once = AE::timer 1, 0, sub { print "too late\n" };

Example: print "blubb" once a second, starting as soon as possible.

$timer_repeated = AE::timer 0, 1, sub { print "blubb\n" };

- $w = AE::signal $signame, $cb - Invoke the callback $cb each time one or more occurrences of the named signal $signame are detected.

- $w = AE::child $pid, $cb - Invokes the callback $cb when the child with the given $pid exits (or all children, when $pid is zero). The callback will get the actual pid and exit status as arguments.

- $w = AE::idle $cb - Invoke the callback $cb each time the event loop is otherwise idle, i.e. has no events to process.
https://man.archlinux.org/man/AE.3pm.en
CC-MAIN-2021-10
refinedweb
587
69.01
Hello, I am trying to connect a button in a repeater so that, upon clicking it, it opens a lightbox with the current repeater item; however, I have been unable to due to the above error. Upon clicking the button, it only opens the lightbox without getting the context, and it shows this error. I am following the Totally Codeable code. Below is the code:

For the dynamic page: after importing wixWindow, then,

$w.onReady(() => {
$w("#dataset7").onReady(() => {
$w("#repeater1").onItemReady(($item, itemData, index) => {
$item('#button10').onClick(() => {
let item = $item('#dataset7').getCurrentItem();
wixWindow.openLightbox('Comment', item)
});
});
});
})

For the lightbox,

import { lightbox } from 'wix-wixWindow';
import wixData from 'wix-data';
import wixWindow from 'wix-window';

$w.onReady(() => {
let theItem = lightbox.getContext() //this is the item you took from page
let postID = theItem._id // this is the field key for the item ID in the database collection

$w("#dataset3").setFilter(wixData.filter()
.eq("_id", postID) //we are now filtering to display only the item that matches this ID
)
.then(() => {
console.log("Dataset is now filtered");
})
.catch((err) => {
console.log(err);
});
});

Thank you very much

You need to check the code again... You have:

import { lightbox } from 'wix-wixWindow';

It should be:

import { lightbox } from 'wix-window';
Before I made this topic, I tried reaching @Code Queen Nayeli through her website and the wix forum but I have been unsuccessful as one can only make an appointment for a project through her website. on the website, I cannot send her a personal message to let her know of my problem. Any other suggestion or help with the code/error? Thank you very much Okay, so you are using this tutorial from Nayeli. So if you follow the tutorial slowly and carefully then you shouldn't go wrong as it is all clearly laid out with the pictures and text and code fully explained for you. Lightbox Code. Page Code Or you can write it like this. Make sure that you are not missing any ';' from the end of your code lines too as it looks like you have missed a few out in your code from your post. If you need to do 'Step 8: Understanding the Page Code to modify it' In the beginning of the code we are importing the Window API in order to open a lightbox window. Or 'Step 9: Understanding the Lightbox Code to modify it' Then make sure that you read the text below that step carefully and make sure that you understand it all and what the code does. Finally, also note that Nayeli has done this tutorial with her repeater on a normal page and not a dynamic page. Step 6: Add a repeater to a regular page and connect it Add a repeater to a regular page and add elements to the repeater. Style and design as desired.
https://www.wix.com/corvid/forum/community-discussion/lightbox-error-cannot-find-module-wix-wixwindow-in-public-pages-mos8c-js
CC-MAIN-2020-10
refinedweb
601
70.94
Well we've entered a new era folks! The government isn't the only one who can make a spacecraft and launch it into space. In honor of the historic space mission by Mike Melvill, the first civilian astronaut, I've decided to write an article on launching, well... scheduled tasks. Not quite as exciting as launching a spaceship into outer space, but... hey, even astronauts have to automate some of their day-to-day activities. Although Microsoft already includes a task scheduler in the operating system, I thought it would be an interesting exercise to create one that runs processes read out of an xml file. This task scheduler did not have to be all things to all people. I simply needed it to start a process at a certain time of day every day. Specifically, I need it to start and stop Windows services using the two batch files listed below:

Start.bat:
net start MyService

Stop.bat:
net stop MyService

Table 1 - Using Batch files to stop and start Windows Services

The UML design for the Schedule Launcher (reverse engineered with WithClass) is shown below in figure 2. The design consists of the Service1 class that is automatically generated by the framework. The Service class contains a Threading.Timer which is used to poll the system clock for the current time in order to check for a possible launch. Also in the design is a singleton ProcessReader class that reads the processes and times out of an xml file.

Figure 2 - UML Diagram of the Windows Service for Launching Scheduled Tasks

The timer class allows us to intercept an event every 30 seconds that the service is active. The timer is constructed with the callback delegate for the event handler along with the time we want the timer to trigger an event. Initially we set the time to infinite in order to keep the timer stopped.

Listing 1 - Constructor of Service containing timer construction

public Service1()
{
    // This call is required by the Windows.Forms Component Designer.
    InitializeComponent();

    // Read process and launch times from the xml file
    ReadProcesses();

    // set up the timer in the stopped state
    _timer = new Timer(new TimerCallback(OnNextMinute), null, Timeout.Infinite, Timeout.Infinite);
}

When we are ready to start the timer, we simply change the time period from infinite to a finite period in milliseconds:

Listing 2 - Starting the Timer

const long TIMER_INTERVAL = 30000L;

private void StartTimer()
{
    // set the timer to trigger an event every 30 seconds
    _timer.Change(0, TIMER_INTERVAL);
}

Once we've started the timer we need to check it against the system clock to see if we are ready to launch our process. The OnNextMinute event handler is triggered every 30 seconds by the timer and compares the system clock against the file time in the xml file corresponding to the process. If the time is within 1 minute, it launches the process.

Listing 3 - Event Handler triggered by the timer every 30 seconds

public void OnNextMinute(object state)
{
    // get the current system clock time
    DateTime currentTime = DateTime.Now;

    // loop through each process data read from the XML file
    foreach (ProcessInfo p in _processes)
    {
        if ((currentTime.Hour == p.StartTime.Hour) &&
            (currentTime.Minute == p.StartTime.Minute) &&
            p.Started == false)
        {
            // minute reached, start the process
            string path = p.Path + "\\" + p.File;
            System.Diagnostics.Process.Start(path);
            p.Started = true;
        }

        // reset process flag two minutes later, to be safe
        if ((currentTime.Hour == p.StartTime.Hour) &&
            (currentTime.Minute > p.StartTime.Minute + 2) &&
            p.Started == true)
        {
            p.Started = false;
        }
    } // end for each process info
}

Parsing Xml with XPath

Using an XmlDocument with XPath is a very convenient way for us to get the process information out of our file.
Our Xml file consists of a set of processes containing the file to execute, the path, and the time to execute in each process node.

Listing 4 - Xml File containing Processes to Launch

<?xml version="1.0" encoding="utf-8" ?>
<Processes>
  <Process>
    <Path>C:\workspace\QuotingService\bin\bin</Path>
    <File>start.bat</File>
    <Time>9:20 AM</Time>
  </Process>
  <Process>
    <Path>C:\workspace\QuotingService\bin\bin</Path>
    <File>stop.bat</File>
    <Time>4:15 PM</Time>
  </Process>
  <Process>
    <Path>C:\CommodityService\bin</Path>
    <File>start.bat</File>
    <Time>9:20 AM</Time>
  </Process>
  <Process>
    <Path>C:\CommodityService\bin</Path>
    <File>stop.bat</File>
    <Time>4:15 PM</Time>
  </Process>
</Processes>

Below is the routine in the ProcessReader that reads the process nodes into an array of ProcessInfo classes. The constructor loads in the Xml file by calling Load on the XmlDocument. The GetProcesses method selects all the process nodes via XPath. The XPath query //Processes/* asks for all of the nodes underneath the Processes node. The double slash tells SelectNodes to skip past all the ancestor nodes and go right to the Processes node.
The star (*) tells xpath to choose all of the nodes underneath Processes.

Listing 5 - Reading the Process Nodes using XPath and XmlDocument

XmlDocument _xDoc = null;

public ProcessReader()
{
    string path = GetConfigPath();
    _xDoc = new XmlDocument();
    _xDoc.Load(path); // Load the xml file
}

public ProcessInfo[] GetProcesses()
{
    // XPath statement for selecting nodes
    string xpath = "//Processes/*";

    // Select the nodes with the XPath query
    XmlNodeList nodes = _xDoc.SelectNodes(xpath);

    // Create an array to hold process info
    ProcessInfo[] processes = (ProcessInfo[])Array.CreateInstance(
        Type.GetType("ApplicationLauncherService.ProcessInfo"), nodes.Count);

    // Go through each process node and populate a
    // process info object
    int i = 0;
    foreach (XmlNode node in nodes)
    {
        ProcessInfo p = new ProcessInfo(node["Path"].InnerText,
            node["File"].InnerText, node["Time"].InnerText, "");
        processes[i] = p;
        i++;
    }

    return processes;
}

Below is the ProcessInfo class used to contain the process launch information that we read with our GetProcesses method. It is a simple class containing only fields to hold the process info and times to launch along with a constructor. This class also converts our time string to a DateTime. We are only interested in the time here, so we concatenate a date onto the string in order to produce a legitimate DateTime with the Convert class.

Listing 6 - Reading Process info class

public class ProcessInfo
{
    public string Path;       // path of the process file
    public string File;       // name of the application
    public string Arguements; // arguments of the app
    public bool Started;      // whether or not it was already started
    public DateTime StartTime;

    public ProcessInfo(string path, string file, string starttime, string arguments)
    {
        Path = path;
        File = file;
        StartTime = Convert.ToDateTime("11/17/1965 " + starttime);
        Arguements = arguments;
        Started = false;
    }
}

Conclusion

The time for civilian space travel may be here sooner than we know it.
In the meantime, while I'm waiting for my lunar flight, I'll continue to hang out in my namespace and experiment with C# and .NET. Happy Launching! ©2014 C# Corner. All contents are copyright of their authors.
http://www.c-sharpcorner.com/UploadFile/mgold/ApplicationScheduler11262005055059AM/ApplicationScheduler.aspx
CC-MAIN-2014-35
refinedweb
1,100
53.81
Groovy adds a lot of useful methods to standard JDK classes. For example Groovy adds the equals() method to List and Object[] so both can be compared. We must make sure the array is of type Object[] to make it work. Also the equals() method is added to arrays of type int. def numbers1 = [1,2,3] as int[] def numbers2 = [1,2,3] as int[] def numbers3 = [1,2] as int[] assert numbers1.equals(numbers2) assert numbers2 == numbers1 assert !(numbers1.equals(numbers3)) def list = ['Groovy', 'Grails', 'Gradle'] def stringArray1 = ['Grails', 'Gradle', 'Groovy'] as Object[] def stringArray2 = ['Groovy', 'Grails', 'Gradle'] as Object[] assert list.equals(stringArray2) assert list == stringArray2 assert !(list.equals(stringArray1)) // order matters assert list != stringArray1
https://blog.mrhaki.com/2011/04/groovy-goodness-see-if-list-and-object.html
CC-MAIN-2020-50
refinedweb
118
59.6
Forum:Much of our "new user literature" needs a rewrite. From Uncyclopedia, the content-free encyclopedia We should keep this, but the rest needs to go. Anyhow, many pages for newer users to read (for example HTBFANJS and Uncyclopedia:In-jokes) are becoming annoyingly 2005-y. They're impractically formatted, overly long, barely useful in most parts, and are just in need of clean up in general. For example, I've already started a revised in-jokes page in my userspace, which I'll go through, deleting and improving everything I can. But on the large scale, we need to do something. The "Don't Plagiarize" section of HTBFANJS is sixteen words long. Many other sections are lists. A paragraph making a point of not assuming that your reader is male is in the "Bias is not a replacement for humour" section. Things like this need to be sorted out so that new members, or IPs thinking of joining, can have clear guides on to what to do. Who's with me? --EpicAwesomeness (talk) 16:52, January 9, 2012 (UTC) - Perhaps the next competition would be to revise HTBFANJS in no more than 144 characters. -- RomArtus*Imperator ® (Orate) 17:13, January 9, 2012 (UTC) - Before we even hop to those two, there needs to be some serious reworking on BGBU. We've said this a dozen times over the years, and many a bold, beautiful user has tried and failed at revamping it. We gotta try and fail again! -- 18:18, January 9, 2012 (UTC) - What that guy above me:27, 9 January 2012 - Seriously, we can't ignore any of this. We need to do something, because there's nobody above us that says otherwise. We make up Uncyclopedia, and we need to keep it spangly. What's the point in running VFD and IC and everything when something as big and important as HTBFANJS or BGBU needs work, and isn't getting it? EpicAwesomeness (talk) 17:26, January 10, 2012 (UTC) - Did you just say 'spangly'? Have you seen spang's code? It makes those 'literatures' described above look well-formatted and organised by comparison. 
~:39, 10 January 2012 - Yes, I said spangly, but I meant "nice and accessible". Now let's go make Uncyclopedia nice and accessible! Go! Mush! Mush! EpicAwesomeness (talk) 16:26, January 11, 2012 (UTC) - So... you meant the opposite of 'like spang'? Now that's peculiar. Good luck,:02, 11 January 2012 - Bugger the general consensus, but I disagree. Not with the in-jokes list - I've never considered it "new user literature", and it sorely needs updating. But HTBFANJS is a good guide to writing on a comedic style that is suitable for a parody Uncyclopedia. Given it's at the core of our PEE review system as well, and PEE review was designed to be a significant step in our feature article process, and Featured articles are the core of what we do here, HTBFANJS is like the core of the core of the core. Significant changes to that would be like pruning the tree that is Uncyclopedia by cutting out the taproots. It would be like trying to fix a diesel engine by filling the tank with liquid hydrogen. It's like a jumble of similes all piled on top of each other by a desperate writer trying hard to express a thought process without actually having to write down anything of significance. BGBU is an extended version of "Don't be a dick" - a statement that could potentially weed out good investigative journalists - but that can be done in a number of ways to say the same thing, but the crux of the message really does need to stay the same. HTBFANJS could potentially be extended to take into consideration the major changes that have happened over the last 7 years or so (Like the frame stuff could be extended to take into consideration the different namespaces, and what each of them parody. UnTunes is a parody of iTunes, for those that aren't aware, and UnDebate was a parody of Debatepedia. And something should be included in there about the overuse of quotations, as very few Wikipedia articles have introductory quotes, if any, and I get tired of having to commit quoticide. 
But beyond that I can see no area of HTBFANJS which is not still relevant. So beyond cosmetic changes, I don't see anything in there that should be changed, and a lot of what is in there shouldn't be. Pup 01:39 12 Jan '12 - Indeed, it may not be the case that HTBFANJS needs a full rewrite, but it needs botox, dammit. That's what I've been saying, and if it needs it, we need to do it. --EpicAwesomeness (talk) 16:45, January 12, 2012 (UTC)
http://uncyclopedia.wikia.com/wiki/Forum:Much_of_our_%22new_user_literature%22_needs_a_rewrite.?oldid=5399261
CC-MAIN-2014-41
refinedweb
804
61.87
Thanks once again to Viru Aithal for the inspiration behind this post, although I did write most of the code, this time. :-) Adding a splash screen can give a touch of class to your application, assuming it's done non-intrusively. This post focuses on how best to do so within AutoCAD, and use the time it's displayed to perform initialization for your application. The first thing you need to do is add a Windows Form to your project: You should select the standard "Windows Form" type, giving an appropriate name (in this case I've used "SplashScreen", imaginatively enough). Once this is done, you should set the background for the form to be your preferred bitmap image, by browsing to it from the form's BackgroundImage property: Now we're ready to add some code. Here's some C# code that shows how to show the splash-screen from the Initialize() method: using Autodesk.AutoCAD.Runtime; using Autodesk.AutoCAD.ApplicationServices; using Prompts; // This is the name of the module namespace SplashScreenTest { public class Startup : IExtensionApplication { public void Initialize() { SplashScreen ss = new SplashScreen(); // Rather than trusting these properties to be set // at design-time, let's set them here ss.StartPosition = System.Windows.Forms.FormStartPosition.CenterScreen; ss.FormBorderStyle = System.Windows.Forms.FormBorderStyle.None; ss.Opacity = 0.8; ss.TopMost = true; ss.ShowInTaskbar = false; // Now let's disply the splash-screen Application.ShowModelessDialog( Application.MainWindow, ss, false ); ss.Update(); // This is where your application should initialise, // but in our case let's take a 3-second nap System.Threading.Thread.Sleep(3000); ss.Close(); } public void Terminate() { } } } Some notes on the code: - I used a sample application called "Prompts" - you should change the using directive to refer to your own module name. - We're setting a number of properties dynamically (at runtime), rather than stepping through how to set them at design-time. 
- We've set the splash screen to be 80% opaque (or 20% transparent). This is easy to adjust. - Some of the additional properties may be redundant, but they seemed sensible to set (at least to me). Here's the result... I've set up my application to demand-load when I invoke a command, which allowed me to load a DWG first to show off the transparency of the splash-screen (even though the above code doesn't actually define a command - so do expect an "Unknown command" message, if you do exactly the same thing as I have). You may prefer to set the module to load on AutoCAD startup, otherwise. Update: Roland Feletic brought it to my attention that this post needed updating for AutoCAD 2010. Thanks, Roland! I looked into the code, and found that the call to ShowModelessDialog needed changing to this: Application.ShowModelessDialog( Application.MainWindow.Handle, ss, false ); I also found I had to add an additional assembly reference to PresentationCore (a .NET Framework 3.0 assembly). Hi Kean, Which reference I make for management of the API of the AutoCad. Regards. Posted by: Kélcyo Pereira | June 14, 2007 at 01:19 PM Hi Kélcyo, I'm sorry - I don't understand the question... If you're asking which assembly references to add to your project, then you need acdbmgd.dll and acmgd.dll. Regards, Kean Posted by: Kean | June 14, 2007 at 05:14 PM As a non-coder, how do I find out HOW an ex-colleague made a splash screen and then remove it from the launch of AutoCAD? Posted by: Tony | June 14, 2007 at 06:17 PM There are many ways he or she may have coded it, so there's no fixed answer, I'm afraid. A programmer should be able to work it out pretty quickly, though, by looking at the project. Kean Posted by: Kean | June 14, 2007 at 06:37 PM My intention is to search given of an archive txt to work in autocad. Therefore I do not obtain to make one I dialogue to select the archive. Posted by: Kélcyo Pereira | June 14, 2007 at 06:59 PM Because, when use following code, error? 
Imports Autodesk.AutoCAD.Runtime Imports Autodesk.AutoCAD.GraphicsInterface Imports Autodesk.AutoCAD.ApplicationServices Imports System.Windows.Forms Public Class kpsCommands Implements Autodesk.AutoCAD.Runtime.IExtensionApplication ' Define command 'Asdkcmd1' _ Public Sub Asdkcmd1() Dim ss As FrmSplash = New FrmSplash() Autodesk.Autocad.ApplicationServices.Application.ShowModelessDialog(Autodesk.AutoCAD.ApplicationServices.Application.MainWindow, ss, False) End Sub End Class It requests premission! It has some thing with the security reference. System.Security.Permission.... Regards. Kelcyo Posted by: Kélcyo Pereira | June 15, 2007 at 05:23 PM Hi Kélcyo, I don't know why this is happening on your system. Are you executing code from the dialog itself, rather than just using it as a splashscreen? Regards, Kean Posted by: Kean | June 18, 2007 at 10:56 AM
http://through-the-interface.typepad.com/through_the_interface/2007/06/showing_a_splas.html
crawl-002
refinedweb
804
55.03
> Only the apache classes need to end up in the apache swc Sure and that's what happen even if we create a new manifest file for the external classes and declare the relative namespace until we mention only the apache namespace in include-namespaces directive. if you look at the catalog file of the apache.swc; you'll see there's nothing relative to the exposed classes, that's because they are not included because the namespace wasn't included via include-namespaces. I thought the external-library-path took care of the external classes too it doesn't. - Fred. -----Message d'origine----- From: Justin Mclean Sent: Sunday, December 16, 2012 7:39 AM To: flex-dev@incubator.apache.org Subject: Re: [jira] [Created] (FLEX-33298) The apache lib compile without including anything HI, > All the exposed classes, yes, but not the classes used in this lib but > comming from other libs and that's apparently needed to compile > successfully. "ant apache" compiles without error, so I'm sure sure what you mean by "sucessfully"? I would assume that the external-library-path in the config file takes care of the classes needed from other parts of the SDK. Only the apache classes need to end up in the apache swc. Thanks, Justin
http://mail-archives.apache.org/mod_mbox/incubator-flex-dev/201212.mbox/%3CBLU162-ds12FDAD1466F1E74243126EB4330@phx.gbl%3E
CC-MAIN-2015-32
refinedweb
214
58.32
Hi, We have a weird situation currently that I can't get to the bottom of. I inherited a thin client setup with WYSE TCs running Windows 7 Embedded. They are domain members and when they start up they run a startup script from a network share which starts an RDS connection. This connection uses a domain user which has permission to make an RDS connection and not much else, so the user is presented with a login screen and they enter their own domain credentials in at that point. Both the startup script and the user that makes the connection are added by GPO. The strange thing is this: we needed to demote one of the three DCs and take it out of the domain. As soon as we did this the thin clients complained about not being able to reach the startup script. Re-promoting the DC fixed the problem immediately. In addition, if you simply disconnect the DC which was being demoted, so that it's still present in all the domain DNS records, the TCs can still access the script, but after rather a long pause; as if they are looking for the disconnected DC but then moving to another DC when it's out of contact. Can anyone suggest what's going on here? Thanks, Dan

3 Replies
Without the 'functioning' DC turned on, do other DNS queries still work (specifically, can you communicate with the server responsible for hosting the startup script)?

Hi Knope, Thanks for the reply.
- Even with the box demoted, if I reboot the thin client so that I get the logon prompt for the actual client, I can log on as a domain user no problem, and after that everything works ok until the next reboot of the thin client. There's a write-back filter on there to ensure that any changes made aren't saved, so I thought it might be something in the original build. I turned the write-back filter off and did a couple of logins, but that didn't work.
The script is stored on a DFS network share, not the Netlogon folder. I don't know why; that's a setup I inherited.

- After the long pause, and you get into the session, test which DC you connected against. That is, verify you established a new connection and not cached credentials. Check the netlogon.log file on that DC and the event viewer on that DC to see if there are any connection issues or problems with group policy.
I foolishly assumed the DCs were all local. If the other two DCs are in remote offices, perhaps the longer pause could be expected. However, if you have multiple DCs on a site, then there should not be a delay.
Check Option 6 in your DHCP server (or switch configuration) to verify you have more DNS servers specified. Perhaps you have only specified the one DC as the only DNS server and the clients are broadcasting the authentication request because they do not know exactly where DC2 and DC3 reside?

The script is stored on a DFS network share, not the Netlogon folder. I don't know why; that's a setup I inherited.
That should be fine. It has been working thus far, after all. I only mentioned the netlogon share because it is a guarantee that all domain users are going to have read permissions to that directory.
The fact that netlogon shares reside on the DCs and the problem was noticed when the one DC was powered off was what led me to this assumption.
1) Check the netlogon log and the DHCP server settings if necessary to find out why there is a delay.
- That is, verify clients are actually authenticating against a DC and not using cached credentials.
2) Is the DFS a domain namespace? Give the server a ping by name (or nslookup) before/after the DC changes to verify clients are still looking at the correct spots.
- If the clients cannot find the namespace, then we have found the next step in resolving this issue. If the clients can find the namespace, then refer back to the script. I don't know what script it is the clients are trying to run, but I would check for anything in it which singles out an LDAP connection to the specific DC which is turned off, or an application or connection string to that DC (or IP address).
https://community.spiceworks.com/topic/1663771-thin-client-cannot-access-startup-script
CC-MAIN-2021-39
refinedweb
882
69.82
A while back, I received a great question from a reader: Just a note in your Learn React By Itself tutorial. In the “Components” section, where you say: return ( React.createElement('li', {className: 'Contact'}, React.createElement('h2', {className: 'Contact-name'}, this.props.name) ) ) It’s not clear to me why you need the parens and can’t just do return React.createElement. I tried that and it fails but I can’t see why. Isn’t typeof x === typeof (x) in JavaScript? And while it is true that typeof x === typeof (x), the same doesn’t always hold for return. Why? There are two things about JavaScript’s return which are a little unintuitive: A returnstatement followed by unreachable code is perfectly valid function doSomething() { return // Valid, but will never be called doSomethingElse() } JavaScript will automatically insert a semicolon at the first possible opportunity on a line after a returnstatement The second bit might be a little hard to grok, so let’s do a quiz. Can you tell me where the semicolon will be inserted on this block of code? return React.createElement('li', {className: 'Contact'}, React.createElement('h2', {className: 'Contact-name'}, this.props.name) ) Once you think you’ve got the answer, touch or hover your mouse over this box to check: // JavaScript inserts a semicolon after the `return` statement! return; React.createElement('li', {className: 'Contact'}, React.createElement('h2', {className: 'Contact-name'}, this.props.name) ) Got it? If not, go over the two rules above until you convince yourself. So, back to the original question: why use brackets on a return statement? Well, if you place your opening bracket on the same line as return: return ( No semicolon can be automatically inserted until that bracket is closed. return ( ... ) // <-- JavaScript inserts semicolon here Of course, we could just place the React.createElement on the same line as return, and avoid these superfluous brackets. But then it wouldn’t look as pretty. 
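If you want to verify both rules yourself, this snippet runs as-is in Node or a browser console (the function names are mine, not from the article):

```javascript
// Rule 2 in action: ASI cuts the return statement short.
function withoutParens() {
  return        // JavaScript inserts a semicolon here...
    'a value'   // ...so this line is valid but never reached (rule 1)
}

// The opening paren blocks semicolon insertion until it is closed.
function withParens() {
  return (
    'a value'
  )
}

console.log(withoutParens())  // undefined
console.log(withParens())     // 'a value'
```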
tl;dr

If possible, JavaScript will automatically insert a semicolon at the end of the line which the return statement is on. Use brackets to make it impossible.

Want to see this in action? Check out my Learn Raw React series — and become a React pro while you're at it.

Learn more about JavaScript

Need to know all the ins and outs of JavaScript? Just keep reading:

Need help remembering all this? My newsletter subscribers receive free cheatsheets on ES6, Promises and React, as well as news on my latest resources. Sign up here!

Curious whether there is any significance to your use of the term "bracket" in this post for what I would call a "parenthesis."

No significance. I've always called them brackets. Maybe it is an Aussie thing?

And 'curly brackets' } too?! 🙂

Yeah, in the UK we tend to call them 'brackets' as well

And a British thing. 😉

Thanks for the post James, I learnt something new this morning.

That makes me happy!

Nice article! Also helpful to use a linter, such as eslint, to detect unreachable code paths before pushing your code to git.
http://jamesknelson.com/javascript-return-parenthesis/
CC-MAIN-2017-34
refinedweb
511
67.25
[Question] Calculate number of chars in X, Y dimensions of a TextView

Given a text font and size (e.g. 'DejaVuSansMono', 12), and the bounds of a TextView (e.g. 400, 100), how can I reliably calculate the exact number of characters that can fit into the width and height of the TextView?

- Webmaster4o Use scene.render_text() or ImageFont and ImageDraw. With ImageFont and ImageDraw:

from PIL import ImageFont, ImageDraw

def charsfit(font, dimensions):
    """Calculate how many characters can fit into the width and height of a textbox based on a tuple describing a font and dimensions of the textbox"""
    # Font name and font size, TextView width and height
    fname, fsize = font
    bw, bh = dimensions
    # Load font
    font = ImageFont.truetype(fname, fsize)
    # Width and height are the width and height of one character.
    width, height = font.getsize("D")
    return bw // width, bh // height

if __name__ == "__main__":
    print(charsfit(('DejaVuSansMono', 12), (400, 100)))

This prints (57, 7)

Also, I'm not sure how to do this with scene.render_text(), but the documentation says "This can be used to determine the size of a string before drawing it, or to repeatedly draw the same text slightly more efficiently."

You can also use the ui.measure_string function. I can't test this right now, but I think there's a bug in version 1.5 (if you're not in the beta) that causes this to work only within an active drawing context. You can work around it like this:

with ui.ImageContext(1, 1):
    w, h = ui.measure_string('Hello', font=('DejaVuSansMono', 12), max_width=400)

(without the ImageContext the function may not work properly in 1.5)

The exact number of characters that fit into a text view also depends on the words, i.e. where line breaks would occur, so you can't really determine it without testing a specific piece of text.

Hint... This is a monospaced font.

- Webmaster4o Good answer @omz because it doesn't require importing extra modules. I thought there was a solution within ui but I couldn't remember what it was called.
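A quick aside on the hint: because DejaVuSansMono is monospaced, every glyph occupies the same cell, so the count reduces to plain integer division. A framework-free sketch (the 7×14-pixel cell below is a made-up example, not a measured value):

```python
def chars_that_fit(box_size, char_size):
    """For a monospaced font the fit is plain integer division per axis."""
    (bw, bh), (cw, ch) = box_size, char_size
    return bw // cw, bh // ch

# 400x100 box, hypothetical 7x14-pixel glyph cell:
print(chars_that_fit((400, 100), (7, 14)))  # (57, 7)
```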
https://forum.omz-software.com/topic/2039/question-calculate-number-of-chars-in-x-y-dimensions-of-a-textview/1
CC-MAIN-2021-43
refinedweb
341
64.91
switch statements work only on integer values. In Java the values you switch on are also required to be constant values (either integer literals or defined constant values); C++ might be more relaxed about that. If the string represents an integer (for example "1") you can use atoi to turn it into an int. If it doesn't, there's no easy way, but you could use a hashtable with the strings as keys and integer constants as values, and switch on the value found in that hashtable.

#include <iostream>
#include <cstring> //string header
using namespace std;
int main()
{
int a = 0; //set to 1 if found
int b = 0; //set to 1 if found
char string[] = "Hello";
char string2[10];
cin >> string2;
switch (string2)
{
case 'hello':
if (string2 == "hello")
cout << "Its identical to string" <<endl;
break;
case 'GOODBYE':
if (string2 == "GOODBYE")
cout << "string 2 says goodbye , string 1 says hello"<<endl;
break;
}
return 0;
}

since this doesn't work I thought that if I convert the string entered from char to numbers (ints) I could use an if

if a == 435 cout << "true"; else cout << "false";

kinda thing

If you just want a more readable switch/case, simply use enumerations.

// example of enumeration
enum name { list, ....... }

#include <stdio.h> // in/out and file functions
enum days { sun,mon,tue,wed,thu,fri,sat }; // sun = 0 etc.
int main()
{
enum days week;
for (week = sun; week <= sat; week++)
{
// used like integer constants
// enumerations make switch/case easier to read
switch (week)
{
case sun : puts("Sunday"); break;
case mon : puts("Monday"); break;
case tue : puts("Tuesday"); break;
case wed : puts("Wednesday"); break;
case thu : puts("Thursday"); break;
case fri : puts("Friday"); break;
case sat : puts("Saturday"); break;
}
}
getchar();
return 0;
}

cool.... thanks mate, now I can continue my conquest in the world of C++. I do hope you guys don't mind me posting this stuff, just trying to deepen my knowledge of C++. could this be adapted to say if xxxx is entered into string??
Please make this conquest a little more clear ...

Please make this conquest a little more clear ...

Sorry, could this be adapted, so that a string of characters is entered, ie say Monday, and it returns something? I'm just wondering since you've used "puts", whatever that means... I was thinking something along the lines of

#include <iostream>
#include <cstring>

enum days { sun,mon,tue,wed,thu,fri,sat }; // sun = 0 etc.

int main()
{
char string[10];
cin >> string;
enum days week;
for (week = sun; week <= sat; week++)
{
// used like integer constants
// enumerations make switch/case easier to read
switch (week)
{
case sun : cout << "Found sunday"; break;
case mon : cout << "Found monday"; break;
case tue : cout << "found tuesday"; break;
default: cout << "your string wasn't matched!"; break;
}
}
getchar();
return 0;
}

I think what you are asking (please clarify if I'm wrong) is to map the ascii representation of a string into a number you can use in your switch. This isn't very clean, but can be done. But, there are limits. For example, a short is usually 2 bytes in size, so you could compare any 2 byte string mapped into a short. Likewise a long is 4 bytes so you could compare a 4 byte string. That's pretty limiting. A better way to accomplish what you want is to have a table of strings and their values, and then look up the entered text in the table and use the corresponding value in your switch. Something like this:

// note: this is not tested, so consider it pseudocode...
enum AUserCommand
{
CommandUnknown,
CommandHello,
CommandGoodby
};

static const struct
{
const char* name;
AUserCommand command;
} commandTable[] =
{
{ "hello", CommandHello },
{ "goodbye", CommandGoodby },
};

AUserCommand LookupUserCommand( const char* whatUserEntered )
{
int i;
for (i = 0; i < (sizeof(commandTable) / sizeof(commandTable[0])); i++)
{
if (stricmp( whatUserEntered, commandTable[i].name) == 0)
return commandTable[i].command;
}
return CommandUnknown; // not a known command
}

You can then use the returned enum in your switch statement. This code is overkill for two commands, of course, but as you add more commands you simply add an enum and an entry in the table and you are ready to handle the input.

woah hell, that looks very complex, but your first idea was correct if I understand rightly what you're getting at. An int is 4 bytes... a char is 1 byte per char, so perhaps if I narrowed the chars entered to 3 values then it will be 4 bytes .... Just don't know how to do this

Hee hee, well, if you insist.... Here's one way:

#define STRING_TO_INT(s1,s2,s3,s4) ((s1 << 24) | (s2 << 16) | (s3 << 8) | s4)

switch (STRING_TO_INT( command[0], command[1], command[2], command[3] ))
{
case STRING_TO_INT( 'H', 'E', 'L', 'L' ): // hello
....
break;
case STRING_TO_INT('B', 'Y', 'E', 0 ): // bye
....
break;
}

You need a #define rather than a routine so it can be a constant. If you have one or two byte commands, you'll have to make sure and pad 'command' out to 4 bytes with nulls.

You can't use strings in a switch statement — the cases accept only a single character or an integer. If you still want to use strings in the cases of a switch you have to take the help of some string.h functions such as strcmp or strcpy, according to your requirements in the code.

Are you trying to see if the user's input string matches any string in a separate list? (read carefully) if so it is basically:

#include <string> // C++ ANSI string, much better :)

string valid_strings [] = {
your strings here.......
};

and in the main:

bool valid = false;
string str;
cin >> str;

for(int i = 0; i < how many valid strings you typed; i++)
{
if(str == valid_strings[i]) // != CAN be done with ansi string, for char strings use strcmp
valid = true;
}

if(valid)
cout << "Found the string!";
else
cout << "Your string doesn't match!";

btw this is a sequential search: there are better ways of searching through
https://www.daniweb.com/programming/software-development/threads/16624/why-can-t-you-use-a-switch-statment-with-a-string
CC-MAIN-2016-50
refinedweb
984
75.54
This may be a silly question, but while dabbling with Typescript I realised my classes within modules (used as namespaces) were not available to other classes unless I wrote the export keyword before them, such as:

module some.namespace.here {
   export class SomeClass{..}
}

var someVar = new some.namespace.here.SomeClass();

Why doesn't TypeScript use public instead, like this?

public module some.namespace.here {
   public class SomeClass{..}
}

The primary reason is that export matches the plans for ECMAScript. You could argue that they should have used "export" instead of "public", but aside from "export/private/protected" being a poorly matched set of access modifiers, I believe there is a subtle difference between the two that explains this.

In TypeScript, marking a class member as public or private has no effect on the generated JavaScript. It is simply a design / compile time tool that you can use to stop your TypeScript code accessing things it shouldn't.

With the export keyword, the JavaScript adds a line to add the exported item to the module. In your example: here.SomeClass = SomeClass;. So conceptually, visibility as controlled by public and private is just for tooling, whereas the export keyword changes the output.
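That generated line is easiest to see in plain JavaScript. Below is a hand-simplified sketch of the kind of output the compiler produces for the namespace above (real tsc output nests one IIFE per namespace segment; `greet` is my own illustrative method):

```javascript
// Roughly what the module compiles to: the namespace is a plain object
// chain, and `export` is what emits the `here.SomeClass = SomeClass;`
// line that attaches the class to it.
var some = some || {};
(function (namespace) {
  var here = namespace.here = namespace.here || {};
  function SomeClass() {}
  SomeClass.prototype.greet = function () { return 'hi'; };
  here.SomeClass = SomeClass;   // generated only because of `export`
})(some.namespace = some.namespace || {});

var someVar = new some.namespace.here.SomeClass();
console.log(someVar.greet());  // 'hi'
```

Without that one assignment line, `SomeClass` stays trapped in the IIFE's local scope — which is exactly why a non-exported class is invisible to other modules.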
https://codedump.io/share/oti1V9FF4TBV/1/why-does-typescript-use-the-keyword-quotexportquot-to-make-classes-and-interfaces-public
CC-MAIN-2016-50
refinedweb
190
56.35
Modules

Tabris.js uses the "CommonJS" module system, same as Node.js. This means:

- Each JavaScript file represents a module.
- Each module has an implicit local scope. A variable declared with var, let or const will never be global.
- The module code will not be parsed and executed until the module is imported.
- To access a value (e.g. a class) created by module A in another module B, it needs to be exported by A and imported by B.

Startup

When the application starts, it will load the main module to kickstart your application. It is identified in the main field of your project's package.json. For example:

{
  "name": "my-app",
  "version": "1.0",
  "main": "dist/my-main-script.js"
}

This main module can then import other modules of your application, or third party modules installed in your project via npm. Tabris.js does not support npm modules installed globally on your development machine, only those installed locally in the project's node_modules folder. Also, npm modules that depend on native node.js modules like 'http' do not work.

The Tabris.js API is also available globally (without importing) and can be accessed immediately under the tabris namespace. Therefore "new tabris.Button();" always works, while "new Button();" requires Button to be imported from 'tabris'. Some other values available without import (i.e. in the "global" namespace) are: console, Math, setTimeout, setInterval, clearTimeout, localStorage, XMLHttpRequest, fetch, device, ImageData and WebSocket.

Syntax

The exact import/export syntax differs depending on your project setup. The modern ES6 syntax is preferred and used throughout this documentation. ES6 Modules support is not provided by Tabris.js directly but by a third party compiler like tsc (works for both JavaScript and TypeScript files), or bundling tools like WebPack.
For an in-depth explanation of this syntax please refer to either:

- the MDN articles on import and export statements, or
- the module chapter in the TypeScript handbook.

If you use a vanilla JavaScript project without a compiler/bundler you have to use the ES5/CommonJS syntax (i.e. require()). You can get:

- an overview of the syntax on the CommonJS Wiki, or
- a detailed explanation in the Node.js docs.

The Node.js implementation is the standard that Tabris.js follows and aims to be compatible with.
https://docs.tabris.com/3.1/modules.html
CC-MAIN-2021-25
refinedweb
398
58.48
Opened 8 years ago Closed 12 months ago Last modified 12 months ago #9230 closed New feature (duplicate) Iterating over checkboxes in CheckboxSelectMultiple should be possible Description Right now, if you have a form field that is using the CheckboxSelectMultiple widget, there's no way in a template (or even in Python code) to iterate over the constituent pieces. This is a missing feature, since it prevents designers or designer-targeted template tags from working with the "bits". This is a feature-add, not a piece of broken functionality, so it's only for trunk. Attachments (1) Change History (18) comment:1 Changed 8 years ago by benspaulding - Cc benspaulding added - Needs documentation unset - Needs tests unset - Patch needs improvement unset comment:2 Changed 7 years ago by mullendr comment:3 Changed 7 years ago by ericholscher - Triage Stage changed from Unreviewed to Accepted I trust malcolm and ben know what they're talking about. Setting to accepted :) comment:4 Changed 7 years ago by Rob Hudson <treborhudson@…> I just hit a use-case where this would be nice, so a +1 as a feature request from me. The workaround posted wasn't helpful... I want to customize the display of the label tag given the choices object, so it's not the default __unicode__ output, but something else. comment:5 Changed 7 years ago by lbolognini This would be a nice to have feature: right now i wanted to display an image next to each checkbox part of this widget. comment:6 Changed 7 years ago by BradMcGonigle I also just ran into a use-case for this so another +1. comment:7 Changed 7 years ago by DataGreed +1 I cannot design nrmally a nice Two-Column list of checkboxes comment:8 Changed 7 years ago by aeby - Has patch set - Needs tests set I wrote a small patch to solve this problem. Needless to say that we have to update all the 'choice' widgets the provide a solid solution. This is a minimal solution intended as a basis for discussion. 
An iteratable widget could look like this: class MyCheckboxSelectMultiple(CheckboxSelectMultiple): def get_iterator(self, name, value, attrs): """ Alternatively we could pre-render here all the choices and store them in a list. """ self.item = 0 self.name = name self.attrs = attrs if value is None: value = [] self.value = value self.str_values = set([force_unicode(v) for v in value]) self.has_id = attrs and 'id' in attrs self.final_attrs = self.build_attrs(attrs, name=name) return self def next(self): if self.item >= len(self.choices): raise StopIteration if self.has_id: final_attrs = dict(self.final_attrs, id='%s_%s' % (self.attrs['id'], self.item)) label_for = u' for="%s"' % final_attrs['id'] else: label_for = '' (option_value, option_label) = self.choices[self.item] cb = CheckboxInput(final_attrs, check_test=lambda value: value in self.str_values) option_value = force_unicode(option_value) rendered_cb = cb.render(self.name, option_value) option_label = conditional_escape(force_unicode(option_label)) self.item += 1 return mark_safe(u'<label%s>%s %s</label>' % (label_for, rendered_cb, option_label)) Changed 7 years ago by aeby Iterating over choice widgets comment:9 Changed 7 years ago by aeby - Cc aeby added comment:10 Changed 5 years ago by lukeplant - Severity set to Normal - Type set to New feature comment:11 Changed 5 years ago by skylar.saveland@… - Easy pickings unset - UI/UX unset this comes up all the time for me. I use this nasty widget that I made: comment:12 Changed 4 years ago by skyl I have been told that this works: comment:13 Changed 3 years ago by bmispelon - Cc bmispelon@… added - Resolution set to duplicate - Status changed from new to closed comment:14 Changed 12 months ago by gabn88 This is my first post here, so sorry if I'm doing it wrong. I'm trying to iterate over my checkboxes (using Django 1.7.7) where I'm using an ModelFormSet to generate multiple forms. 
My original code is: {%if field.name == "repeat_weekday" %} <td>{{field}}<td> {% endif %} Now I have made: {% for choice, choice_label in field.field.widget.choices %} <td> <input checked={{choice.checked}} {{choice_label}} </td> {% endfor %} but I don't know how to find the = checked and id and name via the field. Easier would IMO be to do something like the thing below (and I would say that would be a genuine fix of the issue in this ticket): {% for checkbox, label in field.checkboxes %} {{ checkbox }} <!-- Renders the checkbox completely {{ label }} <!-- Renders the label completely {% endfor %} Ok, found the documentation: However, if I update my template to: {% for field in form.visible_fields %} {% for checkbox in field.repeat_weekday %} <td>{{ checkbox }}<td> {% endfor%} {% endfor %} It renders empty <td>'s for the checkbox comment:15 Changed 12 months ago by gabn88 - Resolution duplicate deleted - Status changed from closed to new I have checked the documentation a 100 times, but for me it does not work as expected. The {{field.repeat_weekday}} is always empty in my template, so looping over it wont work to create the checkboxes. I have overriden the render method in my widget, but that should not matter I think since I only change the value and then call the original render: class BinaryMultipleSelectBox(forms.CheckboxSelectMultiple): def render(self, name, value, attrs=None, choices=()): if value is None: value = [] else: value = TemplateEventMeta.WEEKDAYS.get_selected_values(value) return forms.CheckboxSelectMultiple.render(self, name, value, attrs=attrs, choices=choices) comment:16 Changed 12 months ago by bmispelon - Resolution set to duplicate - Status changed from new to closed Hi, The best place to go for questions like these would rather be the django-users mailing list:. Thanks. Please have a look at: This solution works for me.
https://code.djangoproject.com/ticket/9230
CC-MAIN-2016-22
refinedweb
932
53.61
Kay Sievers wrote:> On Tue, Mar 24, 2009 at 17:21, Patrick McHardy <kaber@trash.net> wrote:>> Matt Domsch wrote:>>> c) udev may not always be able to change a device's name. If udev>>> uses the kernel assignment namespace (ethN), then a rename of>>> eth0->eth1 may require renaming eth1->eth0 (or something else).>>> Udev operates on a single device instance at a time, it becomes>>> difficult to switch names around for multiple devices, within>>> the single namespace.>> I would classify this as a bug, especially the fact that udev doesn't>> undo a failed rename, so you end up with ethX_rename. Virtual devices>> using the same MAC address trigger this reliably unless you add>> exceptions to the udev rules.> > This is handled in most cases. Virtual interfaces claiming a> configured name and created before the "hardware" interface are not> handled, that's right, but pretty uncommon.I don't remember the exact circumstances, but I've seen it quite a fewtimes. I'll gather some information next time.>> You state that it only operates on one device at a time. If that is>> correct, I'm not sure why the _rename suffix is used at all instead>> of simply trying to assign the final name, which would avoid this>> problem.> > How? The kernel assignes the names and the configured names may> conflict. So you possibly can not rename a device to the target name> when it's name is already taken. I don't see how to avoid this.Sure, you can't rename it when the name is taken. But what udevapparently does when renaming a device is:- rename eth0 to eth0_rename- rename eth0_rename to eth2- rename returns -EEXISTS: udev keeps eth0_renameWhat it could do is:- rename eth0 to eth2- rename returns -EEXISTS: device at least still has a proper nameAlternatively it should unroll the rename and hope that theold name is still free. But I don't see why the _rename stepwould do any good, assuming only a single device is handled ata time, it can't prevent clashes.
http://lkml.org/lkml/2009/3/24/376
CC-MAIN-2017-09
refinedweb
343
70.13
How to: Create a New Method for an Enumeration (C# Programming Guide) You can use extension methods to add functionality specific to a particular enum type. In the following example, the Grades enumeration represents the possible letter grades that a student may receive in a class. An extension method named Passing is added to the Grades type so that each instance of that type now "knows" whether it represents a passing grade or not. using System; using System.Collections.Generic; using System.Text; using System.Linq; namespace EnumExtension { // Define an extension method in a non-nested static class. public static class Extensions { public static Grades minPassing = Grades.D; public static bool Passing(this Grades grade) { return grade >= minPassing; } } public enum Grades { F = 0, D=1, C=2, B=3, A=4 }; class Program { static void Main(string[] args) { Grades g1 = Grades.D; Grades g2 = Grades.F; Console.WriteLine("First {0} a passing grade.", g1.Passing() ? "is" : "is not"); Console.WriteLine("Second {0} a passing grade.", g2.Passing() ? "is" : "is not"); Extensions.minPassing = Grades.C; Console.WriteLine("\r\nRaising the bar!\r\n"); Console.WriteLine("First {0} a passing grade.", g1.Passing() ? "is" : "is not"); Console.WriteLine("Second {0} a passing grade.", g2.Passing() ? "is" : "is not"); } } } /* Output: First is a passing grade. Second is not a passing grade. Raising the bar! First is not a passing grade. Second is not a passing grade. */ Note that the Extensions class also contains a static variable that is updated dynamically and that the return value of the extension method reflects the current value of that variable. This demonstrates that, behind the scenes, extension methods are invoked directly on the static class in which they are defined..
http://msdn.microsoft.com/en-us/library/bb383974.aspx
CC-MAIN-2014-23
refinedweb
283
61.33
By Carmen Salas I am currently learning how I can optimize the performance of my React applications. When wanting to render components in an application it can take time and slow down your application. One of the React functions I am learning about is React.lazy, which allows your components to lazy-load. Let’s talk about how we use lazy in conjunction with React’s newer feature, Suspense. We’ll go into: - What is lazy loading and why is it important? - What is lazy loading in React? - What is Suspense in React? - How to use React.lazy and Suspense in a React application What is lazy loading and why is it important? Lazy loading stops a webpage from rendering all of its contents at once. Lazy loading allows the contents of a page to render only when a user reaches that part of the page. An application basically holds off on rendering contents if a user does not reach the section of the page with those contents. The benefits of this, are that it optimizes time and space for content delivery on an application. What is lazy loading in React? React has a function react.lazy, which makes it easy to lazily load the contents of a page by code splitting. react.lazy bundles components you are importing to automatically load when rendering the entire page The way react.lazy works is it takes in a function that must call a dynamic import. This means a promise is returned which resolves to a default exported module that is in your application. Here’s how you would use it in an application: const Banner = React.lazy(() => import('../HomePage/Banner')); This will make the Banner component in my application lazily load when I use it, as opposed to how I would normally import it: import Banner from '../HomePage/Banner'; Now if we want to use the lazy function in our application we have to wrap the lazy component in a Suspense component What is Suspense in React? The <Suspense> component is a new addition to React 16.6. 
It will essentially wait to see if what you want to load is ready to load, and while waiting, Suspense will render a fallback. Suspense takes in a prop called fallback which is your loading state, While loading, Suspense will give you the fallback this could be a component, like a loading spinner or text. How to use React.lazy and Suspense in a React application Now that we know how lazy and Suspense will work together to lazily load contents on to your application let’s see how the code looks. This is how we would wrap our lazy component in a Suspense component. import React, { Suspense } from 'react'; import Spinner from 'react-bootstrap/Spinner'; <Suspense fallback={<Spinner animation="border" variant="info" />}> <Banner/> </Suspense> Here I wrapped my lazy component Banner in the Suspense component and set the fallback in Suspense to be a spinner component imported from React Bootstrap. Pretty simple right? This will then lazily load the Banner component in my application. While loading a react-bootstrap spinner will render on the page while the Suspense component is waiting to see is the Banner component is ready. It will look something like this: In conclusion, These pretty new features from React are really great for optimizing the performance of your applications when it comes to loading and rendering components. This is a pretty simple way to show how to implement lazy loading in your react components but there are endless possibilities in which you can use lazy and Suspense to upgrade and benefit your applications. Try it out! Cover by Jen Theodore on Unsplash Discussion (0)
https://dev.to/cs_carms/lazy-and-suspense-in-react-2gn7
CC-MAIN-2021-43
refinedweb
616
63.7
put(9E)

NAME
     put - receive messages from the preceding queue

SYNOPSIS
     #include <sys/types.h>
     #include <sys/stream.h>
     #include <sys/stropts.h>
     #include <sys/ddi.h>
     #include <sys/sunddi.h>

     int prefixrput(queue_t *q, mblk_t *mp);   /* read side */

     int prefixwput(queue_t *q, mblk_t *mp);   /* write side */

INTERFACE LEVEL
     Architecture independent level 1 (DDI/DKI). This entry point is
     required for STREAMS.

PARAMETERS
     q     Pointer to the queue(9S) structure.

     mp    Pointer to the message block.

DESCRIPTION
     The primary task of the put() routine is to coordinate the passing
     of messages from one queue to the next in a stream. The put()
     routine is called by the preceding stream component (stream module,
     driver, or stream head). put() routines are designated "write" or
     "read" depending on the direction of message flow.

     With few exceptions, a STREAMS module or driver must have a put()
     routine. One exception is the read side of a driver, which does not
     need a put() routine because there is no component downstream to
     call it. The put() routine is always called before the component's
     corresponding srv(9E) (service) routine, and so put() should be
     used for the immediate processing of messages.

     A put() routine must do at least one of the following when it
     receives a message:

     o  pass the message to the next component on the stream by calling
        the putnext(9F) function;

     o  process the message, if immediate processing is required (for
        example, to handle high priority messages); or

     o  enqueue the message (with the putq(9F) function) for deferred
        processing by the srv(9E) service routine.

     Typically, a put() routine will switch on the message type, which
     is contained in the db_type member of the datab structure pointed
     to by mp. The action taken by the put() routine depends on the
     message type. For example, a put() routine might process high
     priority messages, enqueue normal messages, and handle an
     unrecognized M_IOCTL message by changing its type to M_IOCNAK
     (negative acknowledgement) and sending it back to the stream head
     using the qreply(9F) function.

     The putq(9F) function can be used as a module's put() routine when
     no special processing is required and all messages are to be
     enqueued for the srv(9E) routine.

RETURN VALUES
     Ignored.

CONTEXT
     put() routines do not have user context.

SEE ALSO
     srv(9E), putctl(9F), putctl1(9F), putnext(9F), putnextctl(9F),
     putnextctl1(9F), putq(9F), qreply(9F), queue(9S), streamtab(9S)

     STREAMS Programming Guide
http://docs.oracle.com/cd/E26502_01/html/E29045/put-9e.html
How to map this? (Frank Langelage, Sep 11, 2007 5:19 PM)

I'm migrating an EJB 2.1 application to EJB 3.0 using JBoss 4.2.2.GA. Some questions on how to map fields and relations.

Object message contains a message header and message parts, and each message part contains message lines. The message head has a generated id. The message part has a composite PK built of the message head id and part_no.

```java
@Entity
@Table(name="head")
public class Head implements Serializable {
    @Id
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    @Column(name="head_id")
    private Integer headId;

    @OneToMany(cascade=CascadeType.ALL, fetch=FetchType.LAZY)
    @OrderBy("partNo")
    @JoinColumns({
        @JoinColumn(name="head_id", referencedColumnName="head_id")
    })
    private Collection<Part> parts = new ArrayList<Part>();
```

This gives me an exception saying: Repeated column in mapping for entity: Head column: head_id (should be mapped with insert="false" update="false").

Is adding 'insert="false" update="false"' to the JoinColumn properties the right way to solve this duplicate field? What about using Transient?

I get similar problems with ManyToOne relations. The database table for Order has a field order_type. This field is a FK to the OrderType table id. But order_type in combination with output_type is a composite FK to OrderProperties. So I would have in Order.java:

- a field of type Integer named orderType
- a field of type Integer named outputType
- a ManyToOne relation to EB OrderType with JoinColumn orderType
- a ManyToOne relation to EB OrderProperties with JoinColumns orderType and outputType

So three mappings for column order_type and two mappings for column output_type. Should I omit the plain fields in this case and mark the JoinColumn orderType for OrderProperties with 'insert="false" update="false"' then? Or what's the best practice for something like this?

1. Re: How to map this? (wayne baylor, Sep 13, 2007 9:56 AM, in response to Frank Langelage)

   Is this a bi-directional relationship? Can you post the Part entity code?

2. Re: How to map this? (Frank Langelage, Sep 13, 2007 6:09 PM, in response to Frank Langelage)

   No, it's uni-directional. So in the part bean I have the key fields marked with @Id and so on. No hint that Part is used in a relation.

```java
@Entity
@IdClass(value=Part.PK.class)
@Table(name="part")
public class Part implements Serializable {
    @Id
    @Column(name="msghd_serial")
    private Integer msghdSerial;

    @Id
    @Column(name="part_no")
    private Integer partNo;
    ...
```

3. Re: How to map this? (wayne baylor, Sep 14, 2007 10:45 AM, in response to Frank Langelage)

   Hmm, when using @OneToMany and @JoinColumn Hibernate puts the FK in the Part table... so you shouldn't have a conflict. Maybe try removing the referencedColumnName attribute and see what happens.
https://developer.jboss.org/message/362670?tstart=0
An extent provides you with access to all the persistent instances of a class and, optionally, its subclasses. You can iterate over the elements of the extent or perform a query on the extent. The JDO Extent interface represents the extent of a class. Later in this chapter, we will discuss the IgnoreCache flag, which controls whether instances made persistent or deleted during the current transaction are contained in the extent.

You control whether an extent is maintained for a class in the metadata. You use the metadata class element's requires-extent attribute to indicate whether the persistent class has an extent. It has a default value of "true". If your application does not need to iterate over the instances of a class or perform a query on the extent, you can set the requires-extent attribute to "false" explicitly. Even if a class does not have an extent, you can still make instances persistent, establish references to them, and navigate to them in your application and queries. JDO 1.0.1 requires that if a class has requires-extent set to "true", none of its subclasses can set requires-extent to "false". If your application specifies the subclasses parameter to be true when calling the getExtent() method for a base class, all subclass instances are included in the iteration of the extent.

You access the Extent associated with a class by calling the following PersistenceManager method:

    Extent getExtent(Class persistentClass, boolean subclasses);

It returns an Extent that contains all the instances in the class specified by the persistentClass parameter and all the instances of its subclasses, if the subclasses parameter is true. If the class identified by the persistentClass parameter does not have an extent, a JDOUserException is thrown. This occurs only if the metadata for the class has the requires-extent attribute set to "false".

The Extent interface has methods you can use to access the components that were used initially to construct the Extent:

    PersistenceManager getPersistenceManager();
    Class getCandidateClass();
    boolean hasSubclasses();

An Extent is not a Java collection instance that has all the instances of the class populated in memory. This is a common misunderstanding. Common Collection behaviors are not possible. For example, you cannot determine whether one Extent contains another, the size of the Extent, or whether the Extent contains a specific instance. Such operations are performed by executing a query against the Extent. An Extent instance is logically a holder of the following information:

- The class of the instances in the Extent
- Whether subclasses are part of the Extent
- A collection of active iterators over the Extent

No datastore action is taken when you construct an Extent. The contents of the Extent are accessed when a query is executed or you use an Iterator to iterate over its elements. An Extent is often used as a parameter to a Query instance. When you perform a query on an Extent, the Extent is used only to identify the prospective datastore instances; its elements are typically not instantiated in the JVM. Chapter 9 covers queries in detail.

You call the following Extent method to acquire an Iterator to iterate over all the instances in the Extent:

    Iterator iterator();

You can call iterator() multiple times to construct multiple Iterator instances that can iterate over the extent independently. Extent does not provide any other Collection methods. If you call any mutating Iterator method, including remove(), an UnsupportedOperationException is thrown. If you have already accessed a specific instance in the Extent and it is in memory, it is returned. This instance also contains any updates you may have made to it. An Extent can have a very large number of instances. It might be common for you to iterate over the elements of an Extent.
Extents are supposed to be implemented such that you do not get out-of-memory conditions during iteration. If your application does have limitations on the number of instances that can reside in memory, Chapter 13 describes the ability to evict instances from the cache as a means of limiting memory growth.

When you have finished using an extent Iterator, you should close it to free all its associated resources. You can call the following Extent method to close an Iterator acquired from the Extent:

    void close(Iterator iterator);

After this call, the Iterator returns false to hasNext() and throws NoSuchElementException if next() is called. The Extent itself can still be used to acquire other iterators and perform queries. You can also call the following Extent method to close all of the iterators acquired from the Extent:

    void closeAll();

The following program demonstrates the use of an Extent. It accesses the MediaContent extent on line [1] and acquires an Iterator on line [2]. It then iterates through the extent, accessing each MediaContent instance on line [3].

```java
package com.mediamania.store;
import java.util.Iterator;
import javax.jdo.PersistenceManager;
import javax.jdo.Extent;
import com.mediamania.MediaManiaApp;
import com.mediamania.content.MediaContent;

public class GetMediaContent extends MediaManiaApp {
    public static void main(String[] args) {
        GetMediaContent content = new GetMediaContent();
        content.executeTransaction();
    }
    public void execute() {
        Extent mediaExtent = pm.getExtent(MediaContent.class, true);  // [1]
        Iterator iter = mediaExtent.iterator();                       // [2]
        while (iter.hasNext()) {
            MediaContent media = (MediaContent) iter.next();          // [3]
            System.out.println(media.getDescription());
        }
    }
}
```

The IgnoreCache flag in the PersistenceManager controls whether instances made persistent or deleted in the current transaction are included during Extent iteration or queries. We cover the effect of IgnoreCache on queries in Chapter 9.

If you have set the IgnoreCache flag to false, an implementation that performs queries in the datastore server will need to flush the instances in the application cache to the datastore, so their currently cached state can be reflected in the query result. You can set IgnoreCache to true as a performance-optimizing hint, so the implementation can avoid flushing the cache when a query is executed or an Extent is iterated.

You can use the following PersistenceManager methods to get and set the IgnoreCache flag associated with a PersistenceManager:

    boolean getIgnoreCache();
    void setIgnoreCache(boolean flag);

The IgnoreCache flag affects the extent Iterators for all Extents obtained from the PersistenceManager. If you have the IgnoreCache flag set to false in the PersistenceManager when you call iterator() to obtain an Iterator instance from an Extent, then:

- The Iterator will return instances that were made persistent in the transaction prior to calling iterator().
- The Iterator will not return instances deleted in the transaction prior to the call to iterator().

Setting the IgnoreCache flag to true is only a hint that the Extent can return approximate results by ignoring persistent instances that have been added, modified, or deleted in the current transaction. If IgnoreCache is set to true in the PersistenceManager when an Iterator is obtained, new and deleted instances in the current transaction might be ignored by the Iterator, but it is at the option of the implementation. That is, new instances might not be returned, and deleted instances might be returned. Iterating an Extent with IgnoreCache set to true can differ among implementations. Therefore, to be portable you should set the IgnoreCache flag to false.
http://etutorials.org/Programming/Java+data+objects/Chapter+8.+Instance+Management/8.2+Extent+Access/
At the Build conference, we announced the release of the new converged Windows Phone 8.1 and Windows 8.1 platforms. As a developer, this means you can now build XAML and HTML universal apps that run on both Phone and Tablets by sharing a significant amount of code and content. To enable building universal apps, we added a number of new features to Visual Studio as part of the Visual Studio Update 2 RC. You have two ways to learn more about these features. One way is through this blog post. The other way is by watching my Build talk, which covers all of the material you will see here in more detail. There is no right or wrong way here, so pick either the video or the blog depending on how much time you have. Without further delay, let's take a quick look at universal apps!

Creating Universal Apps

To help you get started with building universal apps in C#, C++, and JS, we created new project templates that contain the basic structure and behind-the-scenes configurations to allow you to share code and content: wanted to add support for Windows 8.1.

Structure of Universal Apps

A universal app is a collection of three projects – a Windows Store project, a Windows Phone project and a Shared project – enclosed in an optional solution folder. The Windows Store and Windows Phone projects are platform projects and are responsible for creating the application packages (.appx) targeting the respective platforms. These projects contain assets that are specific to the platform being targeted. The Shared project contains assets that are shared between the Windows Store and Windows Phone projects. The set of item types (.cs, .xaml, .xml, .png, .resw, etc.) supported by the shared projects is the same as the platform projects. Shared projects by themselves don't have a binary output, but their contents are imported by the platform projects and used as part of the build process to generate the Windows Store and Windows Phone application packages (.appx).
Writing code in the Shared project

While developing your universal app, you will mostly be writing code that runs on both platforms. If required, you can also write platform-specific code in the Shared project using #if and #endif directives. By default, we have predefined the following conditional compilation constants that you could use to write platform-specific code.

Context switcher in the editor

While writing code in a Shared project, you can use the project context switcher in the navigation bar to select the platform you are actively targeting, which in turn drives the IntelliSense experience in the code editor.

Switching startup projects using the debug target dropdown

We have also added the ability to quickly switch the startup projects in the debug target dropdown, which now enumerates all the possible projects in the solution that you might want to deploy to a device or emulator/simulator.

Sharing code across Universal Apps

You can use class libraries to share your code across different universal apps. For C# and Visual Basic, we have improved the existing Portable Class Libraries (PCLs) to also support Windows Runtime and XAML when targeting the Windows 8.1 and Windows Phone 8.1 platforms. Check out this blog for more details on PCL improvements. For C++, you can use the new Class Library project templates under "Universal Apps" with shared projects to share your code between Windows 8.1 and Windows Phone 8.1 class libraries.

I hope you found this overview of building XAML universal apps useful. If you have any questions or comments, please feel free to post below or contact us via forums or UserVoice. Stay tuned for another blog explaining the new XAML tooling features we have added in Visual Studio to support Windows Phone 8.1 applications.

Yes – but what's the point? Very few people including developers want Windows Phone, and maybe even fewer Windows 8.
The big Windows market is from Windows XP (yes, it's still there) through to Windows 8.1 – we have never specifically targeted products just at the latest version of Windows – it would be commercial suicide. What we want is the ability of Visual Studio to target multiple native platforms, including Android and ideally Apple, with the minimum of changes – using C# or better still C++. I know VS is a Microsoft product, but it isn't free, we have to buy it – so how about making it really useful.

Great write up, thank you! I can attest that, although not recommended, you can put 100% of your XAML and .CS files into the shared project and have the app run in both Win 8 and WP8. Screens, SQLite classes, web service calls, the works. The only file I left in the app project was the app.xaml. The only problem I had was that Visual Studio crashed the further & harder I pushed the shared project… hopefully it was just my dev VM and not a global issue. Pretty amazing and welcomed!

Echoing Brian M., I am looking at this in terms of the 2.0 version of our internal application. We are solid on Windows 7, no plans to push to 8 (retraining costs). We allow our end users lots of variety in devices. Windows Phone is not something that our end users want. And frankly, we don't want the costs of having to push everyone to Windows Phone. So, this is a non-starter. It sounds great, but it doesn't work where we need it to. The walled garden may be working for Apple for toy applications, but it doesn't for business and business apps. It's not a Windows-only world out there. Stop trying to tie us to it.

What is there new for Windows 7 desktop developers in XAML? What new XAML tooling is there in VS 2012 after update 4?

Hi, thanks for your support. I know that many do not want to support you making apps for Windows Phone, but personally I think it's great to have that chance. I have two ideas (a game and a ToDo app) that could use this very well. It's a great opportunity to promote Windows Phone. Developers, if you like Android or iOS go for them, but also make great apps for Windows.

No Windows 7 support, DOA! Wake up!

Our business applications run on Windows 7 and someday Windows 8 desktop, and cannot be published through an app store given they are internal to our business. What path for XAML/WPF and VS improvements is there for us? Our business application user base uses desktop and web-based applications. We do not have a standard mobile tablet/phone platform and do not develop for mobile. What is the forward path for a Fortune 500 business needing its data, applications and source code hosted in house, on desktop applications or desktop browser-based web sites (in other words, no cloud TFS, no cloud VS, no Azure, no mobile)? This is a common requirement for insurance, financial and health care corporations.
Developers, if you like Android or iOS go for them, but also make great apps for Windows. no windows 7 support, DOA!, wake up! Our business applications run on Windows 7 and someday Windows 8 desktop and cannot be published through an app store given they are internal to our business. What path for XAML/WPF and VS improvements are there for us? Our business application user base uses desktop and web based applications. We do not have a standard mobile tablet/phone platform and do not develop for mobile. What is the forward path for a Fortune 500 business needing its data, applications and source code hosted in house on desktop applications or desktop browser based web sites (In other words, no cloud TFS, no cloud VS, no Azure, no mobile)? This is a common requirement for insurance, financial and health care corporations. I do have a WP8.1 device (which I love!) and a laptop running 8.1, and maybe soon an 8.1 tablet so I am very excited by these new development facilities. However, like nearly everyone that has posted here my bread and butter is WPF/XAML and WinForms business apps that run on Windows 7… why is Microsoft *only* concentrating on W8 when there is a ***massive*** worldwide community of businesses and organisations who want to keep developing in Tools that are W7 compatible? As much as I can't wait to get started on a universal app I have in mind, this is just for pleasure and not a realistic business solution (since businesses don't want W8 yet) so I am very concerned that MS (as usual) is putting all it's eggs in one very unpopular basket. Given how MS has worked in the past I know this is probably a wasted request but *PLEASE* don't stop giving businesses and existing development environments new and exciting tools for existing technologies like WPf/XAML!! Why Universal Windows Apps are not Enabled for Visual Basic!!? Why, I do not support the universal application only Visual Basic. What Microsoft'm thinking. 
I would like you to respond as soon as possible.

I too would like to add my concern about this not being announced for Visual Basic. I was at one of the Hackathon events and produced a nice little Windows Phone app using VB. I originally tried using C#, but not using it on a regular basis I found that the syntax just got in the way. I thought the whole idea was to be able to build on our existing skills, so I don't understand why this thought would not be carried through with this product.

I'm a little disappointed by the continued portable assembly approach. Why didn't Microsoft implement .NET 2.0 or .NET 4.0 on all platforms and then add phone and tablet assemblies to it as required? -10 points for the added complexity and incompatibility issues using the current approach. What is the expected lifecycle of universal apps? Silverlight seems like the closest thing to universal development that Microsoft has right now.

@AbuS3ood, eightman, Vince Miccio: We're currently in the process of building Universal app support for both languages for the next release of Visual Studio based on the .NET Compiler Platform ("Roslyn"). -The Visual Basic and C# Languages Team

Can you provide more insight on how "Tombstone" is handled in Windows Phone 8.1? I can see the namespaces but not the assembly.

If I develop a universal app, then it seems logical to me to do it right from the start. So I do not want to use HERE but Bing right from the start. I assume some parts of WP8.1 are not ready yet? But when will they be ready? Or, better said, what is the roadmap? And please make it available with a non-expiring key, the same way as HERE has now.
I'm sorry the comments seem to be way off target, and I for one appreciate the move to try and make it easy to build apps across multiple platforms (even if those apps are currently only Microsoft OSes). Thanks for this. I can't wait to get started with universal Apps! can we upgrade our pre exisiting windows store app into universal app Excuse me sir, I'm just the new user for vb 2013, esp for XAML. I can create it, but I don't know how to write code on it even the simple button Close. Normally, in vb.net, I use code Me.close or End to close form. Pls help me some clue how to write code on because I really interested in App for Windows 8.1. thanks Is it posible by anyway to build universal windows app by using VS 2010??OR Is it compulsin to have VS2013? There are far too many comments here from uninformed people. Visual Studio 2013 still has project types for WPF and Silverlight (in either C# or VB), so you can still develop for the traditional platform as long as you target for the correct version of .Net. Further, you now get Blend free instead of having to pay for it – bottom line, you get more than you did before. The Windows Universal Apps are for Windows Store platforms meaning they do not work on traditional platforms like Windows 7 etc. It was announced that way when the product was released this spring. If you consider Windows 10, you will get your traditional desktop like you have in Windows 7 or XP plus the store features. Microsoft hasn't taken anything away from you… Hola: si tengo la carpeta datamodel( con el archivo dat.json) ,Common. Como puedo hacerlo en varios idiomas atraves de la carpeta String y además como podria insertar los mapas ya que tengo ocho paginas con distintas coordenadas ( para fijar la ellipse). Mateniendo da tamodel, Common Gracias obviously like your website however you have to test the spelling on several of your posts. 
Several of them are rife with spelling issues and I to find it very bothersome to inform the truth however I will surely come again again.
https://blogs.msdn.microsoft.com/visualstudio/2014/04/14/using-visual-studio-to-build-universal-xaml-apps/
Start Writing a Plugin

A plugin is a class inheriting from the Plugin class. This class must:

- call the parent constructor with its PluginManifest
- implement the init method

You can use Kourou to initialize your development environment: kourou app:scaffold. Then edit the package.json file to move the kuzzle package from the dependencies to the devDependencies. You must also add kuzzle to the peerDependencies of the package.json.

init method

The plugin must implement the init method. This method receives the configuration of the plugin as well as its context as parameters. In order to be able to interact with the features of Kuzzle, it is necessary to save the context.

```ts
import { Plugin, PluginContext, JSONObject } from 'kuzzle';

export class MyPlugin extends Plugin {
  async init (config: JSONObject, context: PluginContext) {
    this.config = config;
    this.context = context;
  }
}
```
https://doc.kuzzle.io/core/2/guides/write-plugins/start-writing-plugins/
I was getting this error when including this header in my driver: arch/mips/include/asm/mipsregs.h:644:33: error: unknown type name ‘u16’ since the use of u16 is not really necessary, convert it to unsigned short. Signed-off-by: Qais Yousef <qais.yousef@imgtec.com> Reviewed-by: Steven J. Hill <Steven.Hill@imgtec.com> --- arch/mips/include/asm/mipsregs.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/mips/include/asm/mipsregs.h b/arch/mips/include/asm/mipsregs.h index e033141..0a2d6ef 100644 --- a/arch/mips/include/asm/mipsregs.h +++ b/arch/mips/include/asm/mipsregs.h @@ -641,9 +641,9 @@ * microMIPS instructions can be 16-bit or 32-bit in length. This * returns a 1 if the instruction is 16-bit and a 0 if 32-bit. */ -static inline int mm_insn_16bit(u16 insn) +static inline int mm_insn_16bit(unsigned short insn) { - u16 opcode = (insn >> 10) & 0x7; + unsigned short opcode = (insn >> 10) & 0x7; return (opcode >= 1 && opcode <= 3) ? 1 : 0; } -- 1.7.1
https://www.linux-mips.org/archives/linux-mips/2013-12/msg00044.html
I would think so yes. Talked with Todd on irc and he suggested a good solution, shim jars that are loaded based on what version of hadoop is on the class path. However to get that working is a lot more work than just rewriting some classes to use the metrics2 namespace. As such it seems way too early to think about removing the working code. On Tue, Jul 10, 2012 at 4:54 PM, Andrew Purtell <apurtell@apache.org> wrote: > On Tue, Jul 10, 2012 at 3:33 PM, Elliott Clark <eclark@stumbleupon.com> > wrote: > > > > > > As far as I can tell this basically sinks all Metrics2 usage in HBase. > > So can we settle this then as something to do 0.96 and beyond and get > the RC out? > > Best regards, > > - Andy > > Problems worthy of attack prove their worth by hitting back. - Piet > Hein (via Tom White) >
http://mail-archives.apache.org/mod_mbox/hbase-dev/201207.mbox/%3CCAKYwJ9whAwwFfVT-X0MGKzeFAqZAx_VVR3g+3-+2goOd0GPmMQ@mail.gmail.com%3E
Blogito, Ergo Sum
Bob Familiar

A very common deployment scenario for Silverlight 1.1 applications is to be hosted from Windows Server 2003/IIS6 environments. This implies that the development environment for Silverlight will be based on .Net Framework 3.5 while the deployment environment will be based on .Net Framework 2.0. Combine the platform mismatch between development and production environments with the fact that we are working with as-yet unsupported products and technology, and we have a recipe for a failed project. We would be introducing risk into our project. That risk must be understood and mitigated if we plan on being early adopters of this breakthrough technology. This article delves into the areas of risk associated with developing Silverlight 1.1 applications that invoke Web Services and that are then deployed into Windows Server 2003/IIS6 environments.

Silverlight is a cross-platform, cross-browser plug-in for creating Rich Internet Applications. Silverlight supports a subset of Windows Presentation Foundation XAML combined with a code-behind model to produce stunning visual experiences that leverage vector graphics, animation, streaming video and rich user interface controls. Since Silverlight applications run in the browser and since browsers support web services, it makes sense to leverage web services for data access. Combining Silverlight with web services has the same architectural underpinnings as AJAX: client-side code asynchronously invoking web services via the XmlHttpObject.

Silverlight comes in 2 versions today: 1.0, which is scripted with client-side JavaScript, and 1.1, which adds support for .Net managed code.
One of the advantages of using JSON as a serialization format over XML is that you can deserialize objects by simply evaluating a JSON string. This is especially useful when returning objects or collections of objects from web service calls. Calling Web Services from Silverlight 1.0 leverages exactly the same approach as an AJAX Application. Simply use your favorite AJAX library (ASP.Net AJAX for example) to configure the service proxy references and invoke the services asynchronously. This is all done from client side JavaScript. Since this is a technique that is well documented I will not cover it in detail here. Refer to the ASP.Net AJAX () web site. Instead I intend to focus on developing Silverlight 1.1 applications that invoke Web Services directly. My development environment will leverage Visual Studio 2008 Beta 2 and be based on .Net Framework 3.5. I will then cover deploying both the Silverlight application and the Web Service to a Windows Server 2003 .Net Framework 2.0 environment. This configuration is shown here: Common Silverlight 1.1 Architecture Pattern I believe that this will be a very common configuration for some time and since doing this is not as straight forward today due to the early release bits we are dealing with, I wanted to document the process I have discovered for both developing and deploying applications that leverage this configuration. There is no doubt the Visual Studio and Silverlight teams will smooth out many of these issues by release time but if you want to start working with alpha and beta bits in a production environment having the process well documented should make adoption go much smoother. The tricky part of this approach has to do with the fact that your development environment will be configured for .Net Framework 3.5 and the host production environment will be configured for .Net Framework 2.0. 
In addition, Silverlight at this time does not support cross domain scripting so you will need to avoid that in both your development and production environments. If you want to follow the tutorial section of the article you will need the following system setup: Development Client Production Server <ShamelessPlug> I use Discount ASP () for my hosting needs. They are awesome and I highly recommend them. </ShamelessPlug> In this section I will cover the step by step recipe for creating a Silverlight 1.1 Application that invokes web services. The resulting configuration will allow for seamless debugging of the both the client side Silverlight code as well as the server-side web service code. File, New, Project, Other Project Types This will be the Silverlight Client project. File, Add, New Project This will be the placeholder for the data returned by our web service. Give the TextBlock the name ‘TheMessage’. The TextBlock element will be used to display the results of our call to the web service. This will be the Silverlight Host project. This will link the host project to the Silverlight client project. By linking the 2 projects, the SilverlightApp assemblies and XAML will be pulled into the SilverlightHost project on each build. There will be no need to pull files from multiple directories using this technique. You will be prompted to enable Silverlight debugging for this project. Debugging is good so select ‘Yes’. The project linking process described above makes sure that the XAML and client assembly are moved into the host project but in ‘Orcas’ Beta 1 it does not move the Silverlight.js or the HTML page that loads the Silverlight plug-in into the host project. You need to do this manually: Note that even though we created an ASP.Net Web project as our host, we are not using any ASPX pages with associated code-behind. This is because this Web project is compiling to .Net Framework 3.5 and our deployment environment is based on .Net Framework 2.0. 
For debugging purposes we will need to keep our development environment based on .Net Framework 3.5. By avoiding server side code in the web host, we can deploy the SilverlightHost project to a .Net Framework 2.0 environment without making any modifications. [more on deployment later]

Since there aren't any dynamic web pages in our solution, all the server side code will be handled by web services. In order to avoid cross domain scripting issues in our development environment, the web services called from the SilverlightApp must reside within the SilverlightHost project. If you already have existing ASP.Net 2.0 ASMX web services you want to reuse, what you can do is add a web service to the SilverlightHost project that wraps the call to the legacy web service. In either case we will need to revisit the web services when we get into deployment mode [did I mention more on deployment later?].

We must also mark the web service scriptable. To do this, reference the System.Web.Extensions namespace, add the appropriate using statement, and apply the [ScriptService] and [ScriptMethod] attributes to the class and method. This will generate the proxy for the web service.

In this step we will be adding the client-side C# code to the Silverlight application that will invoke the web service asynchronously. Open Page.xaml.cs. The resulting code should look like this:

Build the solution. View the Default.html page in the SilverlightHost project. The resulting Silverlight application should look like this. Isn't it beautiful? Silverlight Rocks! OK, maybe that was a bit over the top, but our focus here was not the visuals but the project infrastructure. Now that the infrastructure is in place you are ready to go to town on creating amazing cross browser/cross platform Rich Internet Applications that leverage web services. Now let's delve into the steps necessary to deploy this amazing application.
Up to this point we have been in development mode using Visual Studio 'Orcas' Beta 1 and .Net Framework 3.5. That is exactly what we want for the SilverlightApp but not for our web services. In order to deploy this application, we will need to take the following steps:

This step may be very easy if your Web Services already exist in this form and you simply wrapped your services in order to get your Silverlight application functioning. If not, you will need to back-port your Web Service code using Visual Studio 2005. If you are creating your scriptable service for the first time, remember to reference the System.Web.Script.Services namespace and mark your service and method scriptable using the [ScriptService] and [ScriptMethod] attributes. You must also configure the web service application to support calling Web services from script. In the Web.config file for the web service application, you must register the ScriptHandlerFactory HTTP handler, which processes calls made from script to .asmx Web services.

Once that is complete, you can use whatever technique you prefer to deploy your web service application to the production environment. I use a hosted environment that supports FTP, so I leverage the 'Copy Web Site' feature within Visual Studio. Note that your deployed web services must be served up from the same domain as your SilverlightApp, since we do not have a cross domain scripting feature as of yet. If you are mashing up services from different domains, you will need to do the mashup on the server side and offer up your mashup endpoint to your SilverlightApp from your domain.

Up to this point, the SilverlightApp has been referencing the web service that was within the SilverlightHost project. Now that we are moving to a production environment, we will want the SilverlightApp to reference our deployed .Net Framework 2.0 web services.
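The Web.config registration mentioned above typically looks like the fragment below. This is a sketch based on the standard ASP.Net AJAX 1.0 handler registration for .Net Framework 2.0; the exact assembly version and public key token are assumptions here and must match the System.Web.Extensions assembly installed on your server.

```xml
<configuration>
  <system.web>
    <httpHandlers>
      <!-- Route .asmx requests through the ScriptHandlerFactory so they can be called from script. -->
      <remove verb="*" path="*.asmx"/>
      <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
    </httpHandlers>
  </system.web>
</configuration>
```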
If you are moving between development and production for testing purposes, what you can do is define two Web References, one for the 'local' development environment web service and one for the 'live' production environment web service. Then, by using conditional compilation, you can easily move between the development environment reference and the production environment reference. Note in the Solution Explorer there are two Web References in the Silverlight Client project: one is called LiveService and the other is called LocalService. I then use #pragmas in code to perform conditional compilation. The #define can be in code or handled in the build script.

Now we are in the home stretch. You should rebuild the entire solution and move the SilverlightHost project files into your production environment. Visual Studio 'Orcas' Beta 2 does not have the 'Copy Web Site' feature enabled at this time, so I use a manual FTP process to move the files from my system to my host environment. The files/folders you want to move are:

If you want to check out a real world example of an application built and deployed using this architecture, visit the site (yet another shameless plug). This site leverages a Silverlight 1.1 front end that invokes 2 ASMX Web Services for retrieving CD and Track information. The XAML was created using Expression Blend August Preview. The application also demonstrates the use of animation (move your mouse over the CD covers), formatted TextBlock (liner notes), looping video (Silverlight logo), click event handling (hyperlink text), and WMA and MP3 media streaming (click the play button).

Adopting unsupported technology and products introduces a great deal of risk. The capabilities and stability of the technology must outweigh that risk.
In addition, the impact on both the development and production environments must be understood in order to make the case for the adoption of early release technology. Silverlight 1.1 works well in this scenario and offers you an opportunity to leverage your .Net skills for creating Rich Internet Applications. Since the user interface is being handled entirely by XAML, it only makes sense to leverage web services to provide the data to our application.

By using Silverlight 1.1 Alpha and Visual Studio 2008 Beta 2 combined with a host environment that is still based on .Net Framework 2.0, we incur some additional steps in getting our application deployed. But once you understand the issues and know how to work around them, the door is wide open for early adoption of this breakthrough platform.

Comments:

Liquid Boy blog has a great set of posts on building a Silverlight v1.1 application.

Great post, helped me figure out why my deployed environment wasn't working. Of course the Silverlight app must be compiled against the deployed relay service in the host - doh! Great article. I'll make sure to mention it on my blog as well.

Glad it was useful, Jacob! -bob

Thanks, Bob. This saved me hours of work! In the Silverlight forums there are guides to developing Silverlight 1.1 apps using VS 2005 that can allow for more fine grained control of your projects, but with the drawback that no 'Orcas'-specific features can be used. That is a great caveat you suggested: capabilities should outweigh risk. For example, the functions found in the System.Windows.Browser namespace in System.Silverlight.dll allow complete client side access to the entire hosting DOM tree. It can lead to very powerful RIA scenarios but is quite experimental, both technically and usability-wise.
"what you can do is add a web service to the SilverlightHost project that wraps the call to the legacy web service" Doesn't this double the number of TCP connections on the server side?

The suggestions here are workarounds for limitations that exist with the alpha release. I suspect that as Silverlight 1.1 moves into Beta, I will need to revisit this article and update it based on new features. -bob

THANK YOU! This post made my life much easier. :)
In the first line I declared a package of MT::Plugin::HelloWorld. This declaration is optional, but is good form to avoid namespace collisions with other plugins or MT. Using an MT::Plugin:: prefix is another good practice for clarity and similar reasons. In the second line we call into service MT::Template::Context, the module that contains the majority of the plugin magic and is the main workhorse during content generation.

Hooking into MT's template processing starts with the stash method provided by the MT::Template::Context module. Through stash, we can retrieve the current information MT is processing templates with at that moment. We can also use stash to store our own information for later use by other associated tags. Here is a quick example of its use:

# This line stores $value in the current context with a key of 'foo'.
$ctx->stash('foo', $value);

# This line retrieves the value of foo and assigns it to $value.
my $value = $ctx->stash('foo');

As MT works its way recursively through the tags it encounters while processing templates, it is constantly adding, retrieving, and clearing values from the stash. Here are some of the current keys MT will use during template processing. Other than tag, these stashed references provide access to content that has been retrieved by MT from the database into memory, based on the template processing's current context that has been determined by the template type or by another tag. As we'll see in the examples that follow, this information in the stash is quite handy for developing our own plugins.

Now let's return to the rest of the MT plugin framework. As we've discussed, variable tags alone are not terribly interesting. Another construct supported by MT is the container tag. As its name implies, this type of tag contains additional markup and template tags within start and end tags. Container tags allow us to process a block of template code and/or create a context from which other tags can draw their data.
Here is a simple example from MT's built-in tags of a container (MTEntries) that creates a list of weblog entry titles inserted into the template by MTEntryTitle.

<MTEntries>
<MTEntryTitle /><br />
</MTEntries>

Programming container tags requires a bit more consideration, because it is likely that the contents of the container tag require further processing. Let's review an example of a simple container with an associated variable tag.

package MT::Plugin::SimpleLoop;
use MT::Template::Context;

MT::Template::Context->add_container_tag(SimpleLoop => \&loop);
MT::Template::Context->add_tag(SimpleLoopIndex => \&loop_index);

sub loop {
    my $ctx = shift;
    my $args = shift;
    my $content = '';
    my $builder = $ctx->stash('builder');
    my $tokens = $ctx->stash('tokens');
    for my $i (1..$args->{loops}) {
        $ctx->stash('loop_index', $i);
        my $out = $builder->build($ctx, $tokens);
        $content .= $out;
    }
    return $content;
}

sub loop_index {
    my $ctx = shift;
    return $ctx->stash('loop_index');
}

1;

With this plugin implemented, we can create a list of integers in our template like this:

<MTSimpleLoop loops="10">
<MTSimpleLoopIndex/><br />
</MTSimpleLoop>

Looking back at the example code, things start off like our variable tag replacements. I declare a package and the use of the MT::Template::Context class before registering one container tag and one variable tag with their associated subroutines. Moving on to the first subroutine, loop, we begin as before by assigning the Context class and tag argument hash references to variables. As mentioned, container tags differ from their variable counterparts in that other template tags are assumed to be within them. This means that at some point, we have to pass the container tag's contents back into the template-processing engine before ending our subroutine. Here in our example, we retrieve a reference to the template builder class (MT::Builder) that has been stashed by the system and store it in $builder.
We also get a reference to the collection of processing tokens it has created from the stash and store it in $tokens. With everything in place, we start the loop. First we stash the current index of the loop in loop_index. Next we pass the current context and the processing tokens back to the builder, eventually storing the result of that processing in $out. We concatenate this result to previous results in $content and loop again. To see how we use the loop_index value we stashed, we go to the second subroutine, where we retrieve that value and return it as a string. This is an excellent example of how tags work together using stash, and begins to demonstrate the possibilities of extending MT's operation with plugins. Let's press on.

We'll only take a cursory look at the conditional tag, since it's just a specialized container tag that has been added to the API for convenience. Subroutines registered as conditional tags need only return a true or false value. MT automatically handles whether the conditional tag's contents should be passed on for further processing or be stripped from the template's output. In other words, there is no need to wrap the builder object in a conditional. Here is a simple implementation of two conditional tags.

package MT::Plugin::ConditionalExample;
use MT::Template::Context;

MT::Template::Context->add_conditional_tag(IncludeThis => sub { return 1 });
MT::Template::Context->add_conditional_tag(ExcludeThis => sub { return 0 });

With this plugin implemented, we could use the following markup in our template:

<MTIncludeThis>This text will appear.</MTIncludeThis>
<MTExcludeThis>This text will be stripped.</MTExcludeThis>

Only the phrase "This text will appear." would remain, since that conditional tag's subroutine returns a true value to MT's template builder.

Global filters are not tags, but arguments that can be added to any Movable Type template tag.
Global filter arguments take a single value (quite often just a "1" to signify "on") and invoke a filter that is applied to the tag's content right before insertion. MT's native global filters include routines for stripping markup tags, encoding XML, and converting text to lower case. Global filters can be as sophisticated as you like, but generally they tend to be quite simple. Here is an example of a global filter that will strip out blank lines.

package MT::Plugin::StripBlanks;
use MT::Template::Context;

MT::Template::Context->add_global_filter(strip_blanks => sub { &strip_blanks });

sub strip_blanks {
    my $text = shift;
    my $arg_value = shift;
    my $ctx = shift;
    $text =~ s/^\s*?\n//gm if ($arg_value);
    return $text;
}

1;

Once again, I've written this example out "longhand" for clarity. Note that the values passed to a global filter routine are different than those in its tag-based counterparts. Global filter routines are passed a scalar with the text to be processed, a scalar of the value of the argument, and finally, a reference to the Context class instance. In our example, the context object stored in $ctx is not of any use, so we just ignore it; the argument value stored in $arg_value simply acts as an on/off switch. We apply a simple regular expression to the value of $text and return it. With this plugin implemented we can do …

<MTEntries strip_blanks="1">
...
</MTEntries>

… and all blank lines will be stripped out of the resulting template markup within the MTEntries tag set.

With the release of Movable Type 2.6, the plugin framework has begun to branch out from template processing by introducing an API for hooking text-formatting engines into the system. The intention of this type of plugin is to provide more control and an easier means of authoring content in MT's browser-based interface for non-technical users who are not XHTML savvy. Text-formatting engines handle the formatting of structured text notation into some other markup language, such as XHTML.
In some ways, text-formatting plugins are like global filters; however, there are some noteworthy differences. Global filters must be explicitly declared in each template, forcing all authors to use the same formatting style, in addition to rendering MT's preview function useless. In this example, we'll look at a subset of an early text formatting plugin I developed using a text notation called TikiText.

package MT::Plugins::TikiText;
use MT;

MT->add_text_filter('tiki' => {
    label     => 'TikiText',
    docs      => '',
    on_format => \&tiki
});

sub tiki {
    my $text = shift;
    my $ctx  = shift;
    require Text::Tiki;
    my $processor = new Text::Tiki;
    return $processor->format($text);
}

There are a number of significant differences in how you implement this type of plugin. The first is that text-formatting plugins are registered using the MT module and not the Context module. Another significant difference is that registering text-formatting plugins is a bit more involved. A text-formatting engine is registered with a single key and an associated options hash. In our example we use the key tiki. The key used is quite significant because it will be stored with each entry to determine which formatting engine to apply when published. This key should be lowercase and only contain alphanumeric characters and the "_" (underscore) character. This key should not change once deployed.

Moving on to the options hash associated with the tiki key, we set label with a short descriptive name of TikiText that will be used in the MT interface. Next we define the URL of any documentation to the format with docs. (MT creates a link in its interface to this documentation for easy user access.) Like its predecessors, I define the subroutine that will handle the text formatting using on_format. As the tiki subroutine demonstrates, text-formatting plugins are passed the text to be processed and may optionally receive a context object if invoked while processing templates.
I created the TikiText processor as a separate Perl module for the sake of reusability, so we simply declare its use, instantiate it, and return the formatted text.

As developers, we know that things don't always go as planned, and we have to be prepared to handle errors. In keeping my examples simple and easily digestible, I glossed over error handling. Let's address that now. I've already mentioned that returning an undefined value from a plugin routine will be interpreted as an error by MT and stop processing. While these error messages are better than a completely uninformative 500 error, we can do better to inform a user of what error has occurred and how they may correct it.

Movable Type's Context class inherits an error method from MT::ErrorHandler for returning an error condition and message back to the system and the user.

return $ctx->error('An informative error message to help the user.');

The Context class also inherits an errstr method, which retrieves the last error message set.

warn $ctx->errstr;

These methods have many uses, but here are some of the most common:

# Checking if a tag is being called in a particular context. In this case
# we are checking if our tag has been placed inside of an entry context.
return $ctx->error('MT'.$ctx->stash('tag').' has been called outside of an MTEntry context.')
    unless defined($ctx->stash('entry'));

# Checking if a required argument (name) has been passed.
return $ctx->error('name is a required argument of '.$ctx->stash('tag').'.')
    unless defined($args->{'name'});

# Catch any errors during a template build and pass them to the context.
defined(my $out = $builder->build($ctx, $tokens))
    or return $ctx->error($builder->errstr);

Also new to the version 2.6 framework is the addition of the MT::PluginData class, which provides plugin developers direct and convenient access to MT's data persistence mechanism. Like MT::Entry and similar MT native objects, MT::PluginData inherits from MT::Object.
This abstraction saves MT's code from having to deal with the differences in the underlying data storage mechanism that MT is using. (MT now supports Berkeley DB, MySQL, SQLite, and PostgreSQL.) Unique to this module are the plugin, key, and data methods that are provided in addition to the underlying MT::Object functionality that is inherited. Here is the example from the plugin data module documentation with my comments.

use MT::PluginData;

my $data = MT::PluginData->new;
$data->plugin('my-plugin');
$data->key('unique-key');
$data->data($big_data_structure);

# Remember $big_data_structure has to be a reference.
# $data->data('string');    # ERROR!
# $data->data(\'string');   # CORRECT!

$data->save or die $data->errstr;    # save is inherited from MT::Object.

# Elsewhere, retrieving this data would look something like...
my $data = MT::PluginData->load({ plugin => 'my-plugin', key => 'unique-key' });
my $big_data_structure = $data->data;

We're only scratching the surface here. Covering MT's data persistence mechanism could be an article in and of itself. What's important is that you know it's there and that you can take advantage of it.

Here's a quick summary of best practices that I have learned through developing several Movable Type plugins of my own and reviewing the code of dozens of others by my MT colleagues.

Declare a package for your plugin. It creates your own namespace and helps avoid collisions with other plugins a user may have installed. I highly recommend prefixing your packages with MT::Plugin:: for clarity.

Declare the version number of your plugin. Metadata is always good for future uses; it keeps you and everyone else sane. All it takes is two lines:

use vars qw( $VERSION );
$VERSION = 0.0;

Avoid loading external (non-MT) modules at compile time. Movable Type loads and compiles all plugins each time one of its CGIs is invoked.
In order to keep operations that do not need this functionality from being penalized, it's recommended that you declare modules for use in the subroutine that requires that module. Notice how in my text-formatting plugin example, I declare the use of Text::Tiki in the tiki subroutine and not globally.

Declare all of your tags at the beginning of the code, and place tag functionality in external named subroutines. These go hand in hand and just make your code more readable and easier to debug. Placing your tag functionality outside of the tag registration has the added benefit of code reusability. For instance, multiple tags can call the same subroutine and respond differently, based on the stashed tag name.

Use hierarchical naming and mixed caps. MT tag style uses a method similar to WikiWords, where spaces are removed and each word is capitalized. When naming your tags, think of them as existing in a hierarchy where each word represents a branch in its path. Note in our container tag example how I named the tags SimpleLoop and SimpleLoopIndex, not SimpleLoop and SimpleIndex. In doing so, it becomes clear that SimpleLoopIndex belongs to SimpleLoop. (See this document for more on the Movable Type template and tag philosophy.)

Stash variables with unique prefixes. In our container tags example, I stashed a value with a key of loop_index. This isn't terribly unique, and it stands to reason that another plugin developer may use that same key in his or her plugin. When I stash something I always (except in examples) add the name of my plugin as a prefix to the variable name. It helps identify what stashed that data and it also makes it unlikely that another plugin would utilize that key and cause a collision.

As I hope you can tell by the length of our whirlwind tour, the richness of Perl and the MT plugin framework provides developers with a great deal of power and flexibility in extending the system.
With a bit of OO Perl know-how, the potential use of Movable Type for any number of publishing applications not only becomes possible, but fairly easy to implement. For more details and the latest information, check these resources:

Timothy Appnel has 13 years of corporate IT and Internet systems development experience and is the Principal of Appnel Internet Solutions, a technology consultancy specializing in Movable Type and TypePad systems.
from __future__ import print_function
import mdtraj as md

traj = md.load('ala2.h5')
print(traj)

<mdtraj.Trajectory with 100 frames, 22 atoms, 3 residues, without unitcells>

We can also more directly find out how many atoms or residues there are by using traj.n_atoms and traj.n_residues.

print('How many atoms? %s' % traj.n_atoms)
print('How many residues? %s' % traj.n_residues)

How many atoms? 22
How many residues? 3

We can also manipulate the atom positions by working with traj.xyz, which is a NumPy array containing the xyz coordinates of each atom, with dimensions (n_frames, n_atoms, 3). Let's find the 3D coordinates of the tenth atom in frame 5.

frame_idx = 4  # zero indexed frame number
atom_idx = 9   # zero indexed atom index
print('Where is the tenth atom at the fifth frame?')
print('x: %s\ty: %s\tz: %s' % tuple(traj.xyz[frame_idx, atom_idx, :]))

Where is the tenth atom at the fifth frame?
x: 0.697151 y: 0.92419 z: 0.872604

As mentioned previously in the introduction, every Trajectory object contains a Topology. The Topology of a Trajectory contains all the connectivity information of your system and specific chain, residue, and atom information.

topology = traj.topology
print(topology)

<mdtraj.Topology with 1 chains, 3 residues, 22 atoms, 21 bonds>

With the topology object we can select a certain atom, or loop through them all. (Note: everything is zero-indexed.)

print('Fifth atom: %s' % topology.atom(4))
print('All atoms: %s' % [atom for atom in topology.atoms])

Fifth atom: ACE1-C
All atoms: [ACE1-H1, ACE1-CH3, ACE1-H2, ACE1-H3, ACE1-C, ACE1-O, ALA2-N, ALA2-H, ALA2-CA, ALA2-HA, ALA2-CB, ALA2-HB1, ALA2-HB2, ALA2-HB3, ALA2-C, ALA2-O, NME3-N, NME3-H, NME3-C, NME3-H1, NME3-H2, NME3-H3]

The same goes for residues.
print('Second residue: %s' % traj.topology.residue(1))
print('All residues: %s' % [residue for residue in traj.topology.residues])

Second residue: ALA2
All residues: [ACE1, ALA2, NME3]

Additionally, every atom and residue is also an object, and has its own set of properties. Here is a simple example that showcases just a few.

atom = topology.atom(10)
print('''Hi! I am the %sth atom, and my name is %s.
I am a %s atom with %s bonds.
I am part of an %s residue.''' % (
    atom.index, atom.name, atom.element.name, atom.n_bonds, atom.residue.name))

Hi! I am the 10th atom, and my name is CB.
I am a carbon atom with 4 bonds.
I am part of an ALA residue.

There are also more complex properties, like atom.is_sidechain or residue.is_protein, which allow for more powerful selections. Hopefully, you can see how these properties can be combined with Python's filtered list functionality. Let's say we want the indices of all carbon atoms in the sidechains of our molecule. We could try something like this.

print([atom.index for atom in topology.atoms if atom.element.symbol == 'C' and atom.is_sidechain])

[1, 10]

Or maybe we want all even-indexed residues in the first chain (although this example only has the one chain).

print([residue for residue in topology.chain(0).residues if residue.index % 2 == 0])

[ACE1, NME3]

If you're hesitant about programming filtered lists like the ones above, MDTraj also features a rich atom selection language, similar to that of PyMOL and VMD. You can access it by using topology.select. Let's find all atoms in the last two residues. More information about the atom selection syntax is available in the main documentation.

print(topology.select('resid 1 to 2'))

[ 6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21]

You can also do more complex operations. Here, we're looking for all nitrogen atoms in the backbone.
print(topology.select('name N and backbone'))

[ 6 16]

If you ever want to see the code that generates these results you can use select_expression, which will yield a string representation of the atom selection code.

selection = topology.select_expression('name CA and resid 1 to 2')
print(selection)

[atom.index for atom in topology.atoms if ((atom.name == 'CA') and (1 <= atom.residue.index <= 2))]

(atom-selection.ipynb; atom-selection_evaluated.ipynb; atom-selection.py)
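The string that select_expression returns is ordinary Python, so its equivalence with a hand-written list comprehension can be demonstrated without a trajectory file at all. The mock atoms below are a toy stand-in for MDTraj's Topology objects (they are not part of MDTraj), built just to evaluate the generated expression against:

```python
from types import SimpleNamespace

# Minimal stand-ins for mdtraj Atom/Residue objects: only the attributes
# that the generated expression touches (index, name, residue.index).
residues = [SimpleNamespace(index=i) for i in range(3)]
atoms = [
    SimpleNamespace(index=0, name='N',  residue=residues[0]),
    SimpleNamespace(index=1, name='CA', residue=residues[1]),
    SimpleNamespace(index=2, name='CA', residue=residues[2]),
    SimpleNamespace(index=3, name='C',  residue=residues[2]),
]
topology = SimpleNamespace(atoms=atoms)

# The same expression MDTraj generated for "name CA and resid 1 to 2".
expression = ("[atom.index for atom in topology.atoms "
              "if ((atom.name == 'CA') and (1 <= atom.residue.index <= 2))]")
print(eval(expression))  # [1, 2]
```

Only atoms 1 and 2 are named CA and sit in residues 1 or 2, so both the evaluated string and the equivalent comprehension select exactly those indices.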
Yup I did that but one method runs fine, the other doesnt give me any returns.. and the methods are fine... can it be because it accesses jLabels etc on the form?

Hi guys, I have 2 classes, two JFrame classes in NetBeans. The first one has a public Main() and the other has a public Maintain(), from the maintain I'm trying to re-trigger the commands in...

package GUI;
import javax.swing.*;
import java.awt.*;
import java.net.*;
import java.io.*;
import java.util.*;

public class GuiMaintain extends javax.swing.JFrame { edited ...

Iam having no errors.. just the jList is not getting updated.. sorry about the words. [Java]annoying - Pastebin.com - I have no idea.. whats wrong with it.. It loads all records from text file into the jList1... with the ListModel.. but then when it comes to updating problems...

private void txtSearchKeyTyped(java.awt.event.KeyEvent evt) {
    if (jComboSelection.getSelectedIndex() == 0) {
        if (txtSearch.getText().length() == 4) {
        } else if...

ChristopherLowe but then how do you put that into the JPanel you want? or how do you trigger it I tried adding for example: ImageJPanel jPanelNew = new ImageJPanel();... I don't get an image showing... the directory is good.. no error message.. it runs, but empty.. :s I've been trying for hours .. just frustrated heh.

I'm trying to draw an image inside a jScrollPane, or in any other scrollable type of panel... can someone please give me a VERY simple method as I'm new to java.. I have read all about jScrollPane on...

Hi members, Is anyone familiar with how to use the jTable in NetBeans, I'm new, and trying to add data to a Table (jTable1), but I'm not being successful, can anyone help, I went to ummm set...

Thanks dude :) Found the solution to put the splitted text into an array, but have no idea of how to read from text file.. can someone help?
Dear members, I'm new here first off.. my name is Julian.. (: nice to meet y'all... (Im new to java aswell) .... Would like some help regarding the split("|") method, I'm trying to...
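Since String.split() treats its argument as a regular expression, and '|' means alternation in a regex, the pipe must be escaped to split on a literal pipe character. A minimal sketch of that fix, plus reading delimited records line by line; the sample data is hypothetical, and the StringReader simply stands in for a FileReader over a real text file:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class SplitDemo {
    public static void main(String[] args) throws IOException {
        // split("|") does not do what you expect, because '|' is regex
        // alternation; escape it to match a literal pipe.
        String record = "john|doe|42";
        String[] fields = record.split("\\|");
        System.out.println(fields.length);   // 3
        System.out.println(fields[1]);       // doe

        // Reading pipe-delimited lines into a list of String[] rows.
        // StringReader stands in for new FileReader("data.txt") here.
        BufferedReader in = new BufferedReader(new StringReader("a|b\nc|d\n"));
        List<String[]> rows = new ArrayList<>();
        String line;
        while ((line = in.readLine()) != null) {
            rows.add(line.split("\\|"));
        }
        System.out.println(rows.size());     // 2
        System.out.println(rows.get(1)[0]);  // c
    }
}
```

Swapping the StringReader for a FileReader (wrapped in the same BufferedReader loop) gives the file-reading half of the question.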
By Evan Wong, Solutions Architect

Before going through the step-by-step guides, the user should have the following prerequisites:

This tutorial uses a number of third party resources, including the sample application source codes. Special thanks to Satya Depareddy for the application source codes on GitHub.

Before creating a container cluster, there are some services that need to be activated for first-time users. If the Cloud resources are not activated, they need to be activated before you can proceed to create the Kubernetes cluster.

Navigate to the Container Service console, select Kubernetes, and click the Create Kubernetes Cluster button. There are four different editions of Kubernetes cluster that you can create on Alibaba Cloud.

If you do not have a GitHub account, go to GitHub and sign up for a new account. Fill in the username, email and password. Then, after verification, choose the Free account. After registration is completed, it shall bring you to the main landing page.

In this lab, we are using GitHub as the source code repository. First, you would need to fork the source codes from the existing Git repository. To do this, log in to your own GitHub account, navigate to the sample repository, and click Fork in the top right-hand corner of the screen. After the fork succeeds, you should have the source codes in your own repository.
Go back to the Container Registry page and click the Account Bound button. By now, it should show "Bound" in the GitHub code source section.

Go back to the Namespace page. On the default prompt, click OK. If this is the first time, click Reset Docker Login Password and set the Docker logon password to [Aliyun-test] or [your choice of password].

A namespace is a collection of repositories. We recommend that you group the repositories of a company or organization in one namespace. The following figure shows the list of namespaces. In this lab, we will be using the existing namespace devops-workshop.

Create a repository according to the following figure. Set the region to Malaysia (Kuala Lumpur) or any other region of your choice. Set parameters according to the following figure and click Next. Select the namespace you created earlier. Select GitHub, and input your account user name and project. Click Create Repository. The following figure shows that the repository has been created.

Click Manage to open the repository. Detailed commands for pushing images to this repository are displayed. Copy the first command shown in the following figure to the ECS terminal and enter the repository logon password.

On the terminal, at the root level, navigate to the application source code directory:

$ cd java-webapp-docker

List the available Docker images:

$ docker images

Copy the second command shown in the following figure to the ECS terminal (replace [ImageId] with the actual one and set [tag] to v1). Copy the third command shown in the following figure to the ECS terminal (set [tag] to v1). The following figures show the image being uploaded and then the completed upload.

Go to the Alibaba Cloud console and select Tags. The uploaded image is displayed. Go to the build section and enable the Automatically Build Image option. For details about how to download the image in other environments, see the repository guide.
$ yum install -y git

Next, you need to clone the code to the local computer. To do that, open a terminal or command prompt and type:

$ git clone [your forked repository URL]

After the code is successfully cloned, create a new tag:

$ git tag release-v1.0

To create a new branch from the tag:

$ git branch release-v1.0-branch release-v1.0
$ git checkout release-v1.0-branch

Go to the home directory of the project source code, java-webapp-docker, and change the directory to src/main/webapp. Open index.jsp with an editor such as vi or vim and change the header:

<html> <body> <h1>Welcome to Alibaba Cloud DevOps v1.0</h1> </body> </html>

Username for '': <USERNAME>
Password for '': <PASSWORD>.
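The tag-and-branch flow above can be sketched end to end. The snippet below uses a throwaway local repository instead of the real GitHub clone (the clone URL is omitted in the original text), and the committer identity is a made-up placeholder:

```shell
# Create a disposable repository standing in for java-webapp-docker.
repo="$(mktemp -d)/java-webapp-docker"
git init -q "$repo"
cd "$repo"
git config user.email "demo@example.com"    # hypothetical identity
git config user.name  "Demo User"

echo '<h1>Welcome to Alibaba Cloud DevOps v1.0</h1>' > index.jsp
git add index.jsp
git commit -qm "initial import"

git tag release-v1.0                         # tag the current commit
git branch release-v1.0-branch release-v1.0  # new branch from that tag
git checkout -q release-v1.0-branch
git rev-parse --abbrev-ref HEAD              # prints: release-v1.0-branch
```

Branching from a tag like this gives you a stable line for release fixes while development continues elsewhere.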
File Class

Provides static methods for the creation, copying, deletion, moving, and opening of files, and aids in the creation of FileStream objects. For a list of all members of this type, see File Members.

System.Object
   System.IO.File

[Visual Basic] NotInheritable Public Class File
[C#] public sealed class File
[C++] public __gc __sealed class File
[JScript] public class File

Thread Safety

Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.

Remarks

Use the File class for typical operations such as copying, moving, renaming, creating, opening, deleting, and appending to files.

Example

[Visual Basic, C#, C++] The following example demonstrates some of the main members of the File class.

[Visual Basic]
Imports System
Imports System.IO

Public Class Test
    Public Shared Sub Main()
        Dim path As String = "c:\temp\MyTest.txt"
        If File.Exists(path) = False Then
            ' Create a file to write to.
            Dim sw As StreamWriter = File.CreateText(path)
            sw.WriteLine("Hello")
            sw.WriteLine("And")
            sw.WriteLine("Welcome")
            sw.Flush()
            sw.Close()
        End If

        Try
            ' Open the file to read from.
            Dim sr As StreamReader = File.OpenText(path)
            Do While sr.Peek() >= 0
                Console.WriteLine(sr.ReadLine())
            Loop
            sr.Close()

            Dim path2 As String = path + "temp"
            ' Ensure that the target does not exist.
            File.Delete(path2)

            ' Copy the file.
            File.Copy(path, path2)
            Console.WriteLine("{0} was copied to {1}.", path, path2)

            ' Delete the newly created file.
            File.Delete(path2)
            Console.WriteLine("{0} was successfully deleted.", path2)
        Catch e As Exception
            Console.WriteLine("The process failed: {0}", e.ToString())
        End Try
    End Sub
End Class

[C#]
using System;
using System.IO;

class Test
{
    public static void Main()
    {
        string path = @"c:\temp\MyTest.txt";
        if (!File.Exists(path))
        {
            // Create a file to write to.
            using (StreamWriter sw = File.CreateText(path))
            {
                sw.WriteLine("Hello");
                sw.WriteLine("And");
                sw.WriteLine("Welcome");
            }
        }

        // Open the file to read from.
        using (StreamReader sr = File.OpenText(path))
        {
            string s = "";
            while ((s = sr.ReadLine()) != null)
            {
                Console.WriteLine(s);
            }
        }

        try
        {
            string path2 = path + "temp";
            // Ensure that the target does not exist.
            File.Delete(path2);

            // Copy the file.
            File.Copy(path, path2);
            Console.WriteLine("{0} was copied to {1}.", path, path2);

            // Delete the newly created file.
            File.Delete(path2);
            Console.WriteLine("{0} was successfully deleted.", path2);
        }
        catch (Exception e)
        {
            Console.WriteLine("The process failed: {0}", e.ToString());
        }
    }
}

[C++]
#using <mscorlib.dll>
using namespace System;
using namespace System::IO;

int main()
{
    String* path = S"c:\\temp\\MyTest.txt";
    if (!File::Exists(path))
    {
        // Create a file to write to.
        StreamWriter* sw = File::CreateText(path);
        try
        {
            sw->WriteLine(S"Hello");
            sw->WriteLine(S"And");
            sw->WriteLine(S"Welcome");
        }
        __finally
        {
            if (sw) __try_cast<IDisposable*>(sw)->Dispose();
        }
    }

    // Open the file to read from.
    StreamReader* sr = File::OpenText(path);
    try
    {
        String* s = S"";
        while (s = sr->ReadLine())
        {
            Console::WriteLine(s);
        }
    }
    __finally
    {
        if (sr) __try_cast<IDisposable*>(sr)->Dispose();
    }

    try
    {
        String* path2 = String::Concat(path, S"temp");
        // Ensure that the target does not exist.
        File::Delete(path2);

        // Copy the file.
        File::Copy(path, path2);
        Console::WriteLine(S"{0} was copied to {1}.", path, path2);

        // Delete the newly created file.
        File::Delete(path2);
        Console::WriteLine(S"{0} was successfully deleted.", path2);
    }
    catch (Exception* e)
    {
        Console::WriteLine(S"The process failed: {0}", e);
    }
    return 0;
}

Members | System.IO Namespace | Working with I/O | Reading Text from a File | Writing Text to a File | Basic File I/O | Reading and Writing to a Newly Created Data File
Run Time Polymorphism In C#.net

Polymorphism means having more than one form. C# supports two kinds.

Compile-time polymorphism (early binding). This is method overloading: same method name with different signatures (different parameters). Which overload is to be called is decided at compile time only, based on the static types of the arguments, which is why it is also known as static polymorphism or early binding. An advantage of early binding is that execution is fast, since no lookup is needed at run time. Strictly speaking, overloading is not really polymorphism: it is simply multiple functions which have the same name but different signatures (think of multiple constructors for an object taking different numbers of arguments). Example:

private void SearchPerson(string name)
{
    // ...some text
}

private void SearchPerson(string name, string surname)
{
    // ...some text
}

Benefits: there are a lot. From that example, it allows extensibility.

Run-time polymorphism (late binding). This is method overriding: two or more methods with the same name and the same signature, but with a different implementation. When a method of a base class is overridden in a derived class, the version defined in the derived class is used. When a virtual method is called through a base-class reference, the actual type of the object to which the reference refers determines which implementation runs, and that is decided at run time, not at compile time. The compiler cannot know during compilation which override will be in effect, because that only takes shape at run time, when an object of the base or derived class is created. The compiler does demand that the base method be marked virtual: if you don't put a virtual modifier on a base class method, polymorphism can't ever happen.

class Base
{
    public virtual void Show()
    {
        Console.WriteLine("Show From Base Class.");
    }
}

class Derived : Base
{
    /* Use the new keyword if hiding was intended. */
    public override void Show()
    {
        Console.WriteLine("Show From Derived Class.");
    }
}

static void Main(string[] args)
{
    Base objBase = new Base();
    objBase.Show();          // Output: Show From Base Class.

    Derived objDerived = new Derived();
    objDerived.Show();       // Output: Show From Derived Class.
}

A related mechanism is method hiding: if the derived class re-defines a base method without the override modifier, it hides (re-defines) the base class method rather than overriding it, and the compiler suggests the new keyword if hiding was intended. The C# approach is more explicit for the purpose of making the code safer in versioning scenarios, i.e., when you build your code on a 3rd party library.

A classic use case is a drawing program with a Shape base class (with members such as X, Y and Height): you do not know at compile time which specific types of shapes the user will create. You can use polymorphism to solve this problem in two basic steps: create a class hierarchy in which each specific shape class derives from a common base class, and use a virtual method to invoke the appropriate behavior through a base-class reference at run time.

So both classes can use the same methods but implement them differently. Now let's dive a little deeper: we have seen methods sharing a name and being implemented differently within one class; let's take a scenario where implementation is in some derived class.
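Putting the two kinds side by side, here is a minimal, complete sketch; the class and method names (Animal, Dog, Speak, Describe) are invented for illustration:

```csharp
using System;

class Animal
{
    // Overloading: same name, different signatures.
    // Resolved at compile time (early binding).
    public void Speak() { Console.WriteLine("..."); }
    public void Speak(int times)
    {
        for (int i = 0; i < times; i++) Speak();
    }

    // virtual: derived classes may override.
    // Resolved at run time (late binding).
    public virtual void Describe() { Console.WriteLine("Some animal"); }
}

class Dog : Animal
{
    public override void Describe() { Console.WriteLine("A dog"); }
}

class Program
{
    static void Main()
    {
        Animal a = new Dog();
        a.Describe(); // prints "A dog": the runtime type (Dog) decides
        a.Speak(2);   // overload chosen at compile time from the int argument
    }
}
```

Note that the reference type (Animal) picks the overload, while the object type (Dog) picks the override.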
Image background node.

#include <Inventor/nodes/SoImageBackground.h>

Draws a background image. This node provides a convenient way of rendering an image in the background of the scene. The position options like LOWER_LEFT can be used, for example, to place a logo in the corner of the window. The STRETCH and TILE options cause the image to fill the window and automatically adjust if the window size changes; with these styles the image is stretched or tiled across the entire virtual window. Note that the SoImage node can also be used to place an image in the scene, but the position of the image is specified in 3D coordinates. This node positions images relative to the physical drawing window.

See also: SoBackground, SoGradientBackground
Examples: BackgroundNode, MedicalBonesMuscles, BonesMuscles

Style: image background style (e.g. LOWER_LEFT, STRETCH, TILE).

Methods:
- SoImageBackground(): creates a background image node with default settings.
- getClassTypeId(): returns the type identifier for this class. Reimplemented from SoBackground.
- getTypeId(): returns the type identifier for this specific instance. Reimplemented from SoBackground.

Fields:
- filename: names the file from which to read the texture image. The standard image file formats are supported. See SoRasterImageRW for the list. If the filename is not an absolute path name, the list of directories maintained by SoInput is searched. If the texture is not found in any of those directories, then the file is searched for relative to the directory from which the node was read. For example, if a node with a filename of "../tofu.rgb" is read from /usr/people/bob/models/food.iv, then /usr/people/bob/tofu.rgb will be read (assuming tofu.rgb isn't found in the directories maintained by SoInput).
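As a quick illustration, a minimal C++ fragment placing a corner logo; the scene root and image file name are hypothetical, and the field and enum names follow the reference above (this is a sketch, not a complete Open Inventor program):

```cpp
#include <Inventor/nodes/SoSeparator.h>
#include <Inventor/nodes/SoImageBackground.h>

SoSeparator *root = new SoSeparator;           // scene graph root

SoImageBackground *bg = new SoImageBackground;
bg->filename = "logo.png";                     // hypothetical image file
bg->style = SoImageBackground::LOWER_LEFT;     // pin the image to the corner
root->insertChild(bg, 0);                      // draw before the rest of the scene
```

Because the node positions the image relative to the drawing window, resizing the window keeps the logo in the corner without any further work.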
ArrayList(s) and Static Array(s). They have a lot in common: they both store data, and they can both be traversed in loops. However, the ArrayList is MUCH more appealing in my opinion, since it can be dynamically re-sized, that is, re-sized during execution of a program, which is very nice. On the flip side, a static array has size limitations: it can hold only as much as the space you declared it to have. Let's get to the code, shall we?

Static Arrays

The static array. A nice little structure that holds data in a neat fashion. We can traverse through the data in a loop. It seems like a perfect thing... WRONG.

public class Main {
    public static void main(String[] args) {
        int[] numbers = new int[5];
    }
}

This array can hold up to 5 integers. It essentially already does hold them, since it initializes them to zero. You CANNOT hold more than 5 in this data structure, or you will get an ArrayIndexOutOfBoundsException, unless you "re-size" it... which is kind of pointless unless you back up your data, because when you re-size the old array, you lose ALL of your data.

numbers = new int[10];

Whenever you use the keyword new in creating/"re-sizing" a static array, you lose all of the previous data. In other words, all 10 of the new numbers will be set to zero.

ArrayList

The way to declare an ArrayList follows:

ArrayList<CLASS> name = new ArrayList<CLASS>();

If you leave out the <CLASS> part of it, it can hold any object. It's the same as writing ArrayList<Object>.

import java.util.ArrayList; // necessary to access the ArrayList class

public class Main {
    public static void main(String[] args) {
        ArrayList<Integer> integers = new ArrayList<Integer>();
    }
}

This integers ArrayList will only hold data of type Integer.

Methods of the ArrayList class

Adding Data

Well, we have an ArrayList, let's add something to it!

ArrayList<Integer> numbers = new ArrayList<Integer>();
numbers.add(10);

This adds the value 10 to our list. There is another add() method, which is as follows.
numbers.add(0, 10);

This adds our number 10 as the FIRST number in our list. Remember, the indexes start at zero, not one. Everything that was at index 0 or greater is pushed back one index. The first parameter specifies the index to add the item at.

Accessing Data

Well, now that we use a class for storing data, we have to use methods to get to that data. Let's use our Integer list for an example. In older times (AKA pre-JDK 5), you had to "unwrap" the Object of type Integer, because you can't use an Integer object in calculations: if you had an ArrayList storing some Object form of a simple data type (int, double, boolean, etc.), you had to call other methods and convert from object to simple data type to get the actual value. Now, it's as simple as using the get() method. We use the method get(INDEX) to access a particular slot in the list. This accesses the object at the specified index in the list.

// "numbers" is predefined
int num = numbers.get(0); // first item

Traversing ArrayLists

We can use a for loop to access/output all of our array elements.

// "numbers" is predefined
for (int x = 0; x < numbers.size(); x++)
    System.out.println(numbers.get(x));

The previous code will access ALL of the elements, since the condition in our loop to stop is x < numbers.size(). The size() method just returns how many objects are in our list. There is an alternate way to print out the list... the following.

System.out.println(numbers);

That, if we say we have 5 integers in our list (1, 2, 3, 4, 5), will print this out:

[1, 2, 3, 4, 5]

As you can see, when we normally say println(some_object); it will normally print out a memory address. The ArrayList functions differently. It's not a jerk.

Checking if we have data

It's very simple... The ArrayList class has a few methods that make this incredibly easy.

boolean empty = numbers.isEmpty();

Well, I hope you can see what the isEmpty() method is going to tell us...
The next method makes searching for repeats and such very easy. As a matter of fact, this is what I used numerous times in my snippet here... NoRepeatRandom. (Actually, upon further examination, I didn't use it. I created my own method to check if the data is in there.)

int num = 10;
if (numbers.contains(num))
    System.out.println("10 is in the list!");

Well, that's easy: if the list contains whatever you put in the contains() method, a boolean value is returned... true in this case, since we added the number 10 to the list before.

Well, I hope this tutorial helped! I hope I didn't sound like I was rambling. Gimme a break, it's my first tutorial. Buh-Bye!
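For good measure, here is one runnable class pulling the pieces above together (class name is my own):

```java
import java.util.ArrayList;

public class ListDemo {
    public static void main(String[] args) {
        ArrayList<Integer> numbers = new ArrayList<Integer>();
        System.out.println(numbers.isEmpty());    // true: nothing added yet

        numbers.add(10);                          // append 10
        numbers.add(0, 5);                        // insert 5 at the front
        System.out.println(numbers.get(0));       // 5
        System.out.println(numbers.size());       // 2
        System.out.println(numbers.contains(10)); // true
        System.out.println(numbers);              // [5, 10]
    }
}
```

Compile and run it to see every method from the tutorial in action at once.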
{-# LANGUAGE TypeSynonymInstances #-} -- | -- Module : System.Linux.Epoll.Buffer -- Copyright : (c) 2009 Toralf Wittner -- License : LGPL -- Maintainer : toralf.wittner@gmail.com -- Stability : experimental -- Portability : non-portable -- -- Buffer layer above epoll. Implemented using 'EventLoop's. -- The general usage is that first an instance of 'Runtime' is obtained, then -- one creates as many buffers as needed. Once done with a buffer, it has -- to be closed and finally the runtime should be shutdown, which kills -- the event loop, e.g. -- -- @ -- do r <- createRuntime (fromJust . toSize $ 4096) -- withIBuffer r stdInput $ \\b -> -- readBuffer b >>= mapM_ print . take 10 . lines -- shutdownRuntime r -- @ -- -- Please note that one has to close all buffers before calling shutdown on -- the runtime. module System.Linux.Epoll.Buffer ( Runtime, createRuntime, shutdownRuntime, BufElem (..), IBuffer, createIBuffer, closeIBuffer, withIBuffer, OBuffer, createOBuffer, closeOBuffer, withOBuffer, readBuffer, readAvail, readChunk, writeBuffer, flushBuffer ) where import BChan import Prelude import Data.Maybe import Control.Monad import Control.Exception (bracket) import System.Posix.Types (Fd) import System.IO (hPrint, stderr) import System.Posix.IO (fdRead, fdWrite) import System.Linux.Epoll.Base import System.Linux.Epoll.EventLoop -- | Buffer Element type class. -- Any instance of this class can be used as a buffer element. class BufElem a where -- | Zero element, e.g. empty string. beZero :: a -- | Concatenates multiple elements. beConcat :: [a] -> a -- | The length of one element. beLength :: a -> Int -- | Returns element minus integer. beDrop :: Int -> a -> a -- | Writes element to 'Fd', returns written length. beWrite :: Fd -> a -> IO Int -- | Reads element of given length, returns element and actual length. 
beRead :: Fd -> Int -> IO (a, Int) instance BufElem String where beZero = "" beConcat = concat beLength = length beDrop = drop beWrite fd = liftM fromIntegral . fdWrite fd beRead fd n = do (s, k) <- fdRead fd (fromIntegral n) return (s, fromIntegral k) data Buffer a = Buffer { bufferChan :: BChan (Maybe a), bufferCBack :: Callback } -- | Buffer for reading after 'inEvent'. newtype IBuffer a = IBuffer (Buffer a) -- | Buffer for writing after 'outEvent'. newtype OBuffer a = OBuffer (Buffer a) -- | Abstract data type for buffer runtime support. data Runtime = Runtime { rtILoop :: EventLoop, rtOLoop :: EventLoop } -- | Creates a runtime instance where size denotes the epoll -- device size (cf. 'create'). createRuntime :: Size -> IO Runtime createRuntime s = do iloop <- createEventLoop s oloop <- createEventLoop s return $ Runtime iloop oloop -- | Stops event processing and closes this runtime (and the -- underlying epoll device). shutdownRuntime :: Runtime -> IO () shutdownRuntime rt = do stopEventLoop (rtILoop rt) stopEventLoop (rtOLoop rt) -- | Create buffer for 'inEvent's. createIBuffer :: BufElem a => Runtime -> Fd -> IO (IBuffer a) createIBuffer rt fd = do chan <- newBChan let emap = [(inEvents, handleRead chan), (closeEvents, handleClose chan)] cb <- addCallback (rtILoop rt) fd emap return $ IBuffer (Buffer chan cb) -- | Create Buffer for 'outEvent's. createOBuffer :: BufElem a => Runtime -> Fd -> IO (OBuffer a) createOBuffer rt fd = do chan <- newBChan let emap = [(outEvents, handleWrite chan), (closeEvents, handleClose chan)] cb <- addCallback (rtOLoop rt) fd emap return $ OBuffer (Buffer chan cb) -- | Close an IBuffer. Must not be called after 'shutdownRuntime' has been -- invoked. closeIBuffer :: BufElem a => Runtime -> IBuffer a -> IO () closeIBuffer rt (IBuffer b) = do writeBChan (bufferChan b) Nothing removeCallback (rtILoop rt) (bufferCBack b) -- | Close an OBuffer. Must not be called after 'shutdownRuntime' has been -- invoked. 
closeOBuffer :: BufElem a => Runtime -> OBuffer a -> IO () closeOBuffer rt (OBuffer b) = do writeBChan (bufferChan b) Nothing removeCallback (rtOLoop rt) (bufferCBack b) -- | Exception safe wrapper which creates an IBuffer, passes it to the provided -- function and closes it afterwards. withIBuffer :: BufElem a => Runtime -> Fd -> (IBuffer a -> IO ()) -> IO () withIBuffer r fd = bracket (createIBuffer r fd) (closeIBuffer r) -- | Exception safe wrapper which creates an OBuffer, passes it to the provided -- function and flushes and closes it afterwards. withOBuffer :: BufElem a => Runtime -> Fd -> (OBuffer a -> IO ()) -> IO () withOBuffer r fd f = bracket (createOBuffer r fd) (closeOBuffer r) $ \b -> f b >> flushBuffer b -- | Blocking read. Lazily returns all available contents from 'IBuffer'. readBuffer :: BufElem a => IBuffer a -> IO a readBuffer (IBuffer b) = liftM (beConcat . map fromJust . takeWhile isJust) $ getBChanContents (bufferChan b) -- | Blocking read. Returns one chunk from 'IBuffer'. readChunk :: BufElem a => IBuffer a -> IO a readChunk (IBuffer b) = liftM (fromMaybe beZero) $ readBChan (bufferChan b) -- | Non-Blocking read. Returns one chunk if available. readAvail :: BufElem a => IBuffer a -> IO (Maybe a) readAvail (IBuffer b) = do let ch = bufferChan b empty <- isEmptyBChan ch if empty then readBChan ch else return Nothing -- | Non-Blocking write. Writes value to buffer which will asynchronously be -- written to file descriptor. writeBuffer :: BufElem a => OBuffer a -> a -> IO () writeBuffer (OBuffer b) = writeBChan (bufferChan b) . Just -- | Blocks until buffer is emptied. flushBuffer :: OBuffer a -> IO () flushBuffer (OBuffer b) = waitBChan (bufferChan b) -- -- Event handling -- handleClose :: BufElem a => BChan (Maybe a) -> Device -> Event Data -> IO () handleClose ch _ _ = writeBChan ch Nothing -- Ensure blocking ops finish. 
handleRead :: BufElem a => BChan (Maybe a) -> Device -> Event Data -> IO () handleRead cha dev e = do doRead cha (eventFd e) reEnableCallback dev (eventRef e) (eventDesc e) where doRead :: BufElem a => BChan (Maybe a) -> Fd -> IO () doRead ch fd = do (s, k) <- beRead fd defaultBlockSize `catch` \er -> logErr er >> return (beZero, 0) unless (k == 0) $ writeBChan ch (Just s) when (k == defaultBlockSize) $ doRead ch fd handleWrite :: BufElem a => BChan (Maybe a) -> Device -> Event Data -> IO () handleWrite cha dev e = doWrite cha (eventFd e) where doWrite :: BufElem a => BChan (Maybe a) -> Fd -> IO () doWrite ch fd = do s <- peekBChan ch case s of Just s' -> do k <- beWrite fd s' `catch` \er -> logErr er >> return 0 if k == beLength s' then do dropBChan ch doWrite ch fd else do unGetBChan ch (Just (beDrop k s')) reEnableCallback dev (eventRef e) (eventDesc e) Nothing -> return () defaultBlockSize :: Int defaultBlockSize = 8192 logErr :: (Show a) => a -> IO () logErr = hPrint stderr inEvents :: EventType inEvents = combineEvents [inEvent, edgeTriggeredEvent, oneShotEvent] outEvents :: EventType outEvents = combineEvents [outEvent, edgeTriggeredEvent, oneShotEvent]
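A small variation on the module header's own stdin example, reading a single chunk instead of the whole lazy stream. This is an untested sketch: the type annotation pinning the buffer element to String is mine, and toSize is assumed to be exported by System.Linux.Epoll.Base as the header example suggests:

```haskell
import Data.Maybe (fromJust)
import System.Posix.IO (stdInput)
import System.Linux.Epoll.Base (toSize)
import System.Linux.Epoll.Buffer

main :: IO ()
main = do
    r <- createRuntime (fromJust . toSize $ 4096)
    withIBuffer r stdInput $ \b -> do
        -- readChunk blocks until one chunk is available;
        -- the annotation selects the String BufElem instance.
        chunk <- readChunk b :: IO String
        putStr chunk
    shutdownRuntime r
```

withIBuffer takes care of closing the buffer even on exceptions, so only the runtime shutdown remains our responsibility.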
pfm_get_os_event_encoding man page pfm_get_os_event_encoding — get event encoding for a specific operating system Synopsis #include <perfmon/pfmlib.h> int pfm_get_os_event_encoding(const char *str, int dfl_plm, pfm_os_t os, void *arg); Description This is the key function to retrieve the encoding of an event for a specific operating system interface. The event string passed in str is parsed and encoded for the operating system specified by os. The event is encoded to monitor at the privilege levels specified by the dfl_plm mask, if supported, otherwise this parameter is ignored. The operating system specific input and output arguments are passed in arg. The event string, str, may contains sub-event masks (umask) and any other supported modifiers. Only one event is parsed from the string. For convenience, it is possible to pass a comma-separated list of events in str but only the first event is encoded. The following values are supported for os: - PFM_OS_NONE This value causes the event to be encoded purely as specified by the PMU hardware. The arg argument must be a pointer to a pfm_raw_pmu_encode_arg_t structure which is defined as follows: typedef struct { uint64_t *codes; char **fstr; size_t size; int count; int idx; } pfm_pmu_encode_arg_t; The fields are defined as follows: - codes A pointer to an array of 64-bit values. On input, if codes is NULL, then the library allocates whatever is necessary to store the encoding of the event. If codes is not NULL on input, then count must reflect its actual number of elements. If count is big enough, the library stores the encoding at the address provided. Otherwise, an error is returned. - count On input, the field contains the maximum number of elements in the array codes. Upon return, it contains the number of actual entries in codes. If codes is NULL, then count must be zero. 
- fstr
If the caller is interested in retrieving the fully qualified event string, where all used unit masks and all modifiers are spelled out, this field must be set to a non-null address of a pointer to a string (char **). Upon return, if fstr was not NULL, then the string pointer passed on entry points to the event string. The string is dynamically allocated and must eventually be freed by the caller. If fstr was NULL on entry, then nothing is returned in this field. The typical calling sequence looks as follows:

char *fstr = NULL;
pfm_pmu_encode_arg_t arg;
arg.fstr = &fstr;
ret = pfm_get_os_event_encoding("event", PFM_PLM0|PFM_PLM3, PFM_OS_NONE, &arg);
if (ret == PFM_SUCCESS) {
   printf("fstr=%s\n", fstr);
   free(fstr);
}

- size
This field contains the size of the struct passed. This field is used to provide for extensibility of the struct without compromising backward compatibility. The value should be set to sizeof(pfm_pmu_encode_arg_t). If instead, a value of 0 is specified, the library assumes the struct passed is identical to the first ABI version which size is PFM_RAW.

- PFM_OS_PERF_EVENT, PFM_OS_PERF_EVENT_EXT
This value causes the event to be encoded for the perf_event Linux kernel interface (available since 2.6.31). The arg must be a pointer to a pfm_perf_encode_arg_t structure. The PFM_OS_PERF_EVENT layer provides the modifiers exported by the underlying PMU hardware, some of which may actually be overridden by the perf_event interface, such as the monitoring privilege levels. The PFM_OS_PERF_EVENT_EXT layer extends PFM_OS_PERF_EVENT to add modifiers controlled only by the perf_event interface, such as sampling period (period), frequency (freq) and exclusive resource access (excl).

typedef struct {
   struct perf_event_attr *attr;
   char **fstr;
   size_t size;
   int idx;
   int cpu;
   int flags;
} pfm_perf_encode_arg_t;

The fields are defined as follows:

- attr
A pointer to a struct perf_event_attr as defined in perf_event.h. This field cannot be NULL on entry.
    The struct is not completely overwritten by the call. The library only modifies the fields it knows about, thereby allowing perf_event ABI mismatches between caller and library.

  - fstr
    Same behavior as described for PFM_OS_NONE above.

  - size
    This field contains the size of the struct passed. It is used to provide for extensibility of the struct without compromising backward compatibility. The value should be set to sizeof(pfm_perf_encode_arg_t). If a value of 0 is specified instead, the library assumes the struct passed is identical to the first ABI version, whose size is PFM_PERF.

  - cpu
    Not used yet.

  - flags
    Not used yet.

Here is an example of how this function could be used with PFM_OS_NONE:

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <err.h>
    #include <perfmon/pfmlib.h>

    int main(int argc, char **argv)
    {
        pfm_pmu_encode_arg_t raw;
        int i, ret;

        ret = pfm_initialize();
        if (ret != PFM_SUCCESS)
            errx(1, "cannot initialize library %s", pfm_strerror(ret));

        memset(&raw, 0, sizeof(raw));

        ret = pfm_get_os_event_encoding("RETIRED_INSTRUCTIONS", PFM_PLM3, PFM_OS_NONE, &raw);
        if (ret != PFM_SUCCESS)
            errx(1, "cannot get encoding %s", pfm_strerror(ret));

        for (i = 0; i < raw.count; i++)
            printf("count[%d]=0x%"PRIx64"\n", i, raw.codes[i]);

        free(raw.codes);
        return 0;
    }

Return

The function returns in arg the encoding of the event for the os passed in os. The content of arg depends on the os argument.

Referenced By

pfm_get_event_encoding(3), pfm_get_perf_event_encoding(3).
Hello, I am having a particularly nasty case of "What is the syntax?" while working on an event-driven library. Basically this is what I want:

class Object
{
private:
public:
    virtual bool onEvent(Event e) = 0; // all objects have to react to events

    Object operator|(const Object &o)
    {
        // I want to return an Object which will execute both onEvents...
        // How can I do this... IF I can do this...
        // my example attempt: (I have yet to fully memorize the lambda expression syntax in C++11)
        return Object::onEvent(Event e) = [] { return (*this).onEvent(e) || o.onEvent(e); };
    }
};

class Test1 : public Object
{
private:
public:
    virtual bool onEvent(Event e)
    {
        cout << "Test1" << endl;
        return 0; // do not delete
    }
};

class Test2 : public Object
{
private:
public:
    virtual bool onEvent(Event e)
    {
        cout << "Test2" << endl;
        return 0; // do not delete
    }
};

// a more complex test
class Incrementer : public Object
{
private:
    int x;
public:
    Incrementer() : x(0) {}

    virtual bool onEvent(Event e)
    {
        cout << "Counter is: " << x << endl;
        return 0;
    }
};

So that this code:

Test1 t1;
Test2 t2;
Incrementer i;
f(t1 | t2 | i);

will pass an object to f() whose onEvent function consists of:

cout << "Test1" << endl;
cout << "Test2" << endl;
cout << "Counter is: " << x << endl;

Is this even possible? If it is, what would its constraints be? I would think that Test1 and Test2 should be possible (since they could easily be implemented via an array of function pointers), but how will Incrementer have access to i if it is currently an Object?
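One way to get the behavior asked about above, sketched purely as an illustration rather than a definitive design (the names Event, Handler, and run_demo are mine, not from the post), is to type-erase each handler into a std::function and have operator| build a composite callable that invokes both operands:

```cpp
#include <cassert>
#include <functional>
#include <iostream>

struct Event {};

// Type-erase "anything with an onEvent-like call" into one callable type.
using Handler = std::function<bool(Event)>;

// Combine two handlers into one that runs both and ORs their results.
// Capturing by value copies the handlers, so the composite owns them
// and nothing dangles after the operands go out of scope.
inline Handler operator|(Handler a, Handler b) {
    return [a, b](Event e) {
        bool ra = a(e);  // call both unconditionally so the second
        bool rb = b(e);  // handler is never skipped by short-circuiting
        return ra || rb;
    };
}

// Stateful handler, analogous to the Incrementer in the question.
struct Incrementer {
    int x = 0;
    bool operator()(Event) {
        std::cout << "Counter is: " << x << "\n";
        ++x;
        return false;
    }
};

// Drives the composite twice and reports the counter afterwards.
inline int run_demo() {
    Handler t1 = [](Event) { std::cout << "Test1\n"; return false; };
    Handler t2 = [](Event) { std::cout << "Test2\n"; return false; };
    Incrementer inc;
    // std::ref keeps the Incrementer's state shared instead of copied.
    Handler all = t1 | t2 | Handler(std::ref(inc));
    all(Event{});
    all(Event{});
    return inc.x;  // 2: the shared Incrementer saw both events
}
```

The point relevant to the Incrementer question: wrapping the object with std::ref makes the composite call the original instance, so its state is shared; passing it by value would copy the counter instead. In the original hierarchy, operator| would likewise need to return a concrete composite type (or a std::function) rather than the abstract Object by value.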
Generic constraints inside .NET have always been a fun enterprise, especially given how C# handles them. There has been some discussion on Jon Skeet's blog about the fact that C# does not allow generic constraints referring to a number of special types, such as System.Enum and System.Delegate.

This is indeed a bit unfortunate, as it limits some of the more interesting applications. The example Jon shows is indeed illegal in C#:

public static T[] GetValues<T>() where T : struct, System.Enum
{
    return (T[]) Enum.GetValues(typeof(T));
}

However, as Jon correctly points out, this is supported by the CLR directly. In fact, with our knowledge of F# constraints, we can write this exact function in F# without any such issue. It's little wonder that F# learned some lessons from C#, and having the language designed by the person who brought generics to .NET helps as well. Let's first look at our F# implementation. The idea here is to ensure that our T type, as above, is an enum. In order to do so, we must specify that it is of type enum<underlying-type>, where the underlying type is most usually an Int32. Remember, enums can be of any integral type besides char, which is why it must be specified.

namespace Codebetter.Constraints

module Constraint =

  open System

  let getValues<'a, 'b when 'a : enum<'b>>() =
    Enum.GetValues(typeof<'a>) :?> 'a array

As we can see from the code above, it's rather straightforward. We specify that 'a must be an enum with an underlying type of 'b. We could have simplified this for external callers by fixing 'b to int, but let's keep it as generic as possible. Calling this code, we can get arrays of all values.
Let's test in F# Interactive:

> Constraint.getValues<System.IO.FileAccess,int>();;
val it : System.IO.FileAccess array = [|Read; Write; ReadWrite|]

> Constraint.getValues<string,int>();;

  Constraint.getValues<string,int>();;
  ^^^^^^^^^^^^^^^^^^^^
error FS0001: The type 'string' is not a .NET enum type

In our first example, we call with System.IO.FileAccess, which gives us the three enum values; in the second, we tried the non-enum type string, and sure enough it tells us as much. But what about C# here? Could this code transfer? In some C# calling code, we could use our function as follows:

static void Main(string[] args)
{
    var values = Constraint.getValues<FileAttributes, int>();
    foreach (var value in values)
        Console.WriteLine(value);
}

This gives us the expected results, all values of the FileAttributes enum, just as it would through our F# code. An issue arises, however, when we try a failure case like the one above:

var values = Constraint.getValues<string, int>();
foreach (var value in values)
    Console.WriteLine(value);

This compiles just as the previous example did. When we run it, however, an exception is thrown, because Enum.GetValues expects an enum type and our constraint was not honored:

Unhandled Exception: System.ArgumentException: Type provided must be an Enum.
Parameter name: enumType
   at System.Enum.GetValues(Type enumType)
   at Codebetter.Constraints.Constraint.getValues[a,b]() in C:\Work\ConstraintLib\Module1.fs:line 19

So, the important thing to remember is that even if another language exports a generic restriction such as the enum constraint, C# will not honor it.
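For comparison, and as an aside not drawn from the original post: C++ templates can express this same kind of restriction at compile time, so that substituting a non-enum type fails at compile time rather than with a runtime ArgumentException as in the C# call above. A sketch with hypothetical names:

```cpp
#include <cassert>
#include <type_traits>

// Stand-in enum for the demonstration, mirroring System.IO.FileAccess.
enum class FileAccess : int { Read = 1, Write = 2, ReadWrite = 3 };

// The enable_if clause removes this template from consideration for
// non-enum types, so underlying_value(42) or underlying_value("x")
// simply does not compile; enum types work normally.
template <typename E,
          typename = typename std::enable_if<std::is_enum<E>::value>::type>
long long underlying_value(E e) {
    // Convert the enumerator to its underlying integral value.
    return static_cast<typename std::underlying_type<E>::type>(e);
}
```

The failure mode is the interesting part: like F#'s enum constraint, the mistake is rejected before the program ever runs.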
Other restrictions, such as the reference-type restriction below, work well:

let printClass<'a when 'a : not struct> (arg:'a) =
  printfn "%A" arg

And the calling C# code:

Constraint.printClass("Hello"); // prints Hello
Constraint.printClass(3);       // error CS0452

As the code above shows, the compiler does honor our generic restriction in this case. We're just getting started in this brief series on generic constraints. There are a few more to talk about before we're done, such as method-signature restrictions, constructor restrictions, and so forth. I hope that at some iteration generic restrictions in C# get revisited to make them as fully featured as F#'s.

Comments:

Interestingly, doing a more literal translation of the unsupported C# code (as opposed to using an F# enum constraint) will lead to code which works in both F# and C#:

let getValues<'a when 'a :> System.Enum and 'a : struct>() =
  System.Enum.GetValues(typeof<'a>) :?> 'a array

@Keith, You are correct, and I'm adding that to the next post. Matt