text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
![endif]--> Arduino <![endif]--> Buy Download Products Arduino AtHeart Certified Learning Getting started Examples Playground Reference Support Forum Advanced Search | Arduino Forum :: Members :: jc000 Show Posts Pages: [ 1 ] 1 Using Arduino / Programming Questions / Re: printf() Command on Graphic LCD? on: February 23, 2011, 06:55:32 pm Well, you can use printf() if you initialize it with fdevopen() first. This very simple example will work for printing on a single line. You could extend it to keep a line buffer, interpret CR & LF, and scroll the screen up. Code: #include <LiquidCrystal.h> LiquidCrystal lcd(12, 11, 10, 5, 4, 3, 2); int lcd_putc( char c, FILE * ) { lcd.write( c ); return 0; } void setup(void) { ... fdevopen( &lcd_putc, NULL ); ... } 2 Using Arduino / Motors, Mechanics, and Power / Re: Fried my Atmega328 using L298 to drive step motor ... what to blame? on: February 22, 2011, 12:14:57 pm Quote from: cr0sh on February 20, 2011, 09:17:51 pm! Well, this one was pretty easy. I try to buy everything from Mouser.com, to simplify my purchasing. When I searched Mouser for Multiwatt heatsinks, thinks is the the only one that came up! Thanks for the tip on the grease, I will do that. I'm pretty sure that's available at the local electronics store. 3 Development / Other Software Development / Re: new library - MCP23016 IO expander on: February 20, 2011, 04:26:15 pm Ok, I should have been a lot more specific about my latches comment. Consider the following sequence of events: 1. Set port 0 to outputs, port 1 to inputs 2. Write B11111111 to port 0. This will also writes 0 to port 1, but that's pr'y OK. 3. Read port 1. This will also read GP0 into the 'port0' member variable. Now 'port0' contains WHATEVER the GP0 register had in it. 4. Try to write a 0 to the LSB of port 0. You would EXPECT this to write B11111110 to port 0. But in my experience this may or may not happen. In step 3, you filled the 'port0' member with the contents of GP0 as a side-effect of filling the 'port1' member from GP1, which is what you were really trying to do. The GPx registers seem not to reliably contain the last value written to output pins. The OLATx registers do, so in my code I always read from the latches when I need the values last set to the pins. Now in your code, you buffer the values in RAM so you wouldn't really need to go to the latches at all. Instead, when you read, you could just remove any bits that were set to outputs, so you'd only update 'port0' and 'port1' for bits whose corresponding pins were set to inputs. WRT the internal-pullups, yes I am reading the 23016 datasheet, and I see it is a much simpler chip. I use the pullups all the time, to avoid having to add resistor networks to the system for the same purpose. 4 Using Arduino / Motors, Mechanics, and Power / Re: Fried my Atmega328 using L298 to drive step motor ... what to blame? on: February 20, 2011, 02:32:57 pm Thanks, Magician. Okay, that makes sense. I also am not at all certain the L298 is to blame. if I posted it on the general board, they'd be like, "Well what were you DOING with the Arduino? And post it there..." ESD problems seems like a pretty good stab at an explanation, especially given that I was handling the Arduino+shield unit quite a bit the first day. I have been running the same system using the replacement Atmega since I posted the original, and the replacement chip is not fried. Perhaps this will be a good lesson in bullet-proofing, as you say. And zener diodes. I've seen those, I'll have to learn about them. Btw, the cat feeder is working flawlessly. The enclosed circuits are separated from the cat by about 3 feet of cable, although I suppose the motor and wires are exposed, so she could static those things up good if she was motivated. I have some writing and picture-taking to do, and then I'll post that one on the gallery. 5 Development / Other Software Development / Re: new library - MCP23016 IO expander on: February 20, 2011, 02:12:10 pm I use the MCP23018 extensively, so I'll give this a try. A couple suggestions... Definitely agree on previous post that the library should be named for the specific chip family. Any chance you could host on github or something like that? Would make it much easier to submit patches. Abstract the Wire.begin/send/end stuff into a private function to avoid so much duplication. Is it a good idea to call Wire.begin() from a chip-specific driver? If you have a handful of different drivers EACH making a begin() call, will that mess something up? In my own code, I call Wire.begin() in setup(), and then each driver's own begin() afterward. Why take only a 'bool' for pinModePort? Seems like an artificial constraint that an entire port be either input or output. The chip supports some 'pins' being input and others output, why not support this in software? When you READ the port values you store them in (portX), which is the same place you're use when WRITING to them. Yet you are reading from the ports themselves, not the latches. I think this will give inconsistent results. To reliably read back the values you've written to outputs, I think you have to read from the latches. In such a library, I would like to see support for... Pull-up registers (Do you have them on 23016?) Writing 8 bits to a whole port at once. This is how I usually use it. Writing 16 bits to the whole chip at once. One thing I use it for is to drive a 12-pin 4-digit 7-segment LED, so I know all 12 bits at a time to display a digit. Btw, I think you need Wire.h in your .PDE because that's how Arduino build system knows which libraries to link in. It's a clever, simple approach (like much in the Arduino software). Other build systems require you to configure the project and separately define which libraries to link in. Here, it just picks it up from the includes. Nice. 6 General Category / General Discussion / Re: Why am I a ghost?? on: February 20, 2011, 02:02:57 pm Oh, it's GENDER! Great, thanks. 7 General Category / General Discussion / Why am I a ghost?? on: February 20, 2011, 12:38:00 pm Just curious... Wondering why I have a little 'ghost' icon next to my name in posts, where most others have a little silhouette of a person. 8 Using Arduino / Motors, Mechanics, and Power / Re: Fried my Atmega328 using L298 to drive step motor ... what to blame? on: February 20, 2011, 11:58:02 am Y. Magician, there is definitely LOW on Arduino pins 11 & 12 when the switches are open. R1 & R2 are there to pull the pins low. This is before I learned about pull-up resistors on the chip itself, so future revs of this will have the switches simply connect GND to the pin. But these Arduino pins are not part of the L298 circuit, they're just there so I have some control over the operation of the sketch. 9 Using Arduino / Motors, Mechanics, and Power / Re: Fried my Atmega328 using L298 to drive step motor ... what to blame? on: February 19, 2011, 06:46:13 pm Ok, I managed to measure current FROM the L298, and it came it an 210mA. Current TO the L298 will be the tricky bit. 10 Using Arduino / Motors, Mechanics, and Power / Re: Fried my Atmega328 using L298 to drive step motor ... what to blame? on: February 19, 2011, 06:28:21 pm Thanks... Yeah, I was trying to replicate that drawing. So in the datasheet schematic, D1 connects pin 13 to +Vs, D3 connects pin 14 to +Vs. In my circuit, D1 connects pin 13 to +12V, and D4 connects pin 14 to +12V. So my D1/D4 should be doing the job of D1/D3 in the datasheet schematic. (Note that there are 8 diodes in all, so perhaps the right-most part of the drawing was cut-off for you.) CrossRoads, thanks for that tip, I'll have to figure that out. My little meter doesn't read current, and my big meter is to large for this circuit. Guess I need to upgrade. 11 Using Arduino / Motors, Mechanics, and Power / Fried my Atmega328 using L298 to drive step motor ... what to blame? on: February 19, 2011, 01:35:12 pm Hi. This morning I seem to have fried the atmega chip on my Duemilanove, and so I'd like to figure out the problem so I don't do it again. The Deumilanove is working now because I replaced the Atmega328 chip with a replacement that I had (coincidentally) just ordered. I am driving a step motor using an L298 circuit built as a shield. I have used this design in other projects, and it's worked fine. (In fact, this exact design feeds my cat every 12 hours ) But this is the first time I have run a motor for long durations (over an hour) at a 20% duty cycle. The driver chip does not heat up too much at 20%, although it did at 25%, so I dialed it down. This is also the first time I am using a particular motor I just picked up. The motor is not marked in any way, so I am driving it with 12V but frankly I have no idea how much to use on it. The schematic is attached. Does anyone have ideas what could have caused this? Pages: [ 1 ] | SMF © 2013, Simple Machines Newsletter ©2014 Arduino
http://forum.arduino.cc/index.php?action=profile;u=39584;sa=showPosts
CC-MAIN-2014-41
refinedweb
1,676
83.56
Before this experiment with Google Web Services I had not yet implemented web service applications in my business, though I could clearly see the need for them. Since I started working with Java Studio Creator I saw an opportunity to make current some of the older technology I built to do web services-type jobs before web services came around. In Part 1 of this 2-part article I'll explore the terminology and technology around web services. I'll take a look at how they work within the IDE's environment and build an application that uses Google's Web Services to check spelling on a word. Then, in Part 2, I'll put a live web services application from the National Weather Service into action. A good example of an application that's perfect for web services is a feature that I built into a series of sites for a group Vermont realtors. To enhance their real estate listings, they wanted the local weather report to appear on their home pages. I designed it—many years ago—to work like this: It works fine, but you can probably guess what happens when the National Weather Service changes their web page...my whole system crashes because it can't find the information it is looking for and I have to do a lot of work to fix it, fast. A better solution: Web Services. Luckily, the National Weather Service now offers a web service for access to their information. So I thought this would be the time to redesign my weather application using Java Studio Creator. But first, I need to explore what web services are all about, and become familiar with how they work. I'll start by doing some background work on the web. Before beginning work with web services, it is necessary to learn about a set of industry-wide standards for using them across the many programming environments, including Java. Both providers (such as Google Web Services) and clients (my Java Studio Creator application) use these standards. The most important are: Read more about web services in this popular article, What's New in SOA and Web Services? by Ed Ort. Another good resource for information on web services is at java.sun.com/webservices. The Google Web Services API provides a SOAP protocol to allow it to access information from Google. This, working in conjunction with WSDL, allows clients to access Google Web Services from within many programming environments, including the Java Studio Creator IDE. The following information, which I found via the Sun Developer Network (SDN) for Java Studio Creator, is also very helpful and worth exploring: Java Studio Creator makes the process of setting up a web service quite easy. The IDE comes with seven preconfigured web services, and Google is one of them. Web services are located in the Servers window under the node labeled Web Services (see Figure 1). Inside this node lives another node called Samples where I find the Google Web Service (listed as GoogleSearch). Other choices include web services for news, weather, quotes, world time, and Amazon access. In Part 2, I'll add a new web service for the National Weather Service, but for now I just want to experiment with Google's spell checker feature. Here are the steps: 1. Create the user interface: Add components to the Visual Designer To create the user interface I need to open a new project in Java Studio Creator and populate it with components. I add a Text Field and a Button component for entering and submitting a spell check to Google. This is easily accomplished by dragging the components from the Basic palette onto the Visual Designer. Then, I change the text field components id to "spellString" and the button components id to "search" by clicking the appropriate component and changing the value in the Properties window. While I'm at it, I also change the text attribute of my button to "Check Spelling"—this is the button label. I add a Static Text component and change the id to "result" in the Properties window. My correctly spelled word will get populated here by the Java bean and the Google Web Service. Finally, I format and size the various components to my liking, adding several Static Text components to give the page some juice, as shown in Figure 2. 2. Add the Google Web Service to the Application Java Studio Creator makes my life quite easy, here. To add the service to my application I simply move my mouse to the Server window (see Figure 1) and drag the node, GoogleSearch, onto my design canvas. This is all it takes to add the web service component to my application. Though this web service is not visible on the Visual Designer, I can see it listed as googleSearchClient1 in the Outline window. The Outline window is typically located below the Servers and Palette windows in the left side of the IDE. It provides an overview of my application's structure, and also shows the components I have dropped onto the Visual Designer. You may need to go to the View drop-down menu and select Outline to open it, as shown in Figure 3. To see what functions (that is, methods) are available to me, I go back to the Servers window and expand the GoogleSearch web service one more level as follows: Web Services > Samples > GoogleSearch Web Services > Samples > GoogleSearch Then I click the “+” one more time and see that three methods were available: doGetCachedPage doSpellingSuggestion doGoogleSearch doGetCachedPage doSpellingSuggestion doGoogleSearch For this application, I'll use the doSpellingSuggestion method. Setting up Google web services for checking spelling is a bit easier than using the Search methods because the spelling method returns a string. When using the doGoogleSearch or doGetCachedPage, an object is returned that requires some additional parsing in Java. But I'll work with objects in the next article. I also see that I can click on one of the methods, such as doSpellingSuggestion, to view both Method Information and Method Parameters in the Properties window. In this case, when I look at the Signature under Method Information (Figure 4), it tells me exactly how to interact with this method. When I click on [...]to view the whole line, a new window opens that shows me the following code: java.lang.String doSpellingSuggestion(java.lang.String key, java.lang.String phrase) I can see that two Strings are passed to the Web service (key and phrase) and a String is also returned. Here's how code completion helps me even further: 3. Adding an action to the Check Spelling button Now that the page looks the way I want, and the Google Web Service is installed in my application, I need to instruct my Java Studio Creator application what to do when the end user clicks my Check Spelling button. Because I want it to start the Google Web Services API, I have to add an action to this button. By double-clicking on the button inside the Visual Designer I am put right into the Java Page Bean, which lets me insert the action to be performed when my application runs and the user clicks the button. So, from the Visual Designer, I double-click the Check Spelling button and my cursor lands exactly where the code needs to be inserted in the method, as shown below. I like that Java Studio Creator makes it this easy for me to see what line needs my own code! public String search_action() { // TODO: Process the button click action. Return value is a navigation // case name where null will return to the same page. return null; } 4. Obtaining a Google License Key Before my application will work, I need to register with Google and obtain a license key from them. Luckily, this is about as easy as it gets for a registration. Here's how it works: 5. Add the code to the Page Bean Now that I have my license key, I'll go back to the Page Bean and start writing a few lines of code. I can access my Page Bean by clicking on the Java tab above my editor pane. But instead, I double-click on my Check Spelling button, which puts me in the page bean at the exact point of the button's action. Now in the Page Bean, I notice that the IDE has already added the following line of code to the top of my Page Bean: import webservice.googlesearchservice.googlesearch.GoogleSearchClient; This code imports the Google Web Services package, and allows me to use the code-completion feature contained within Java Studio Creator to help me build Java as I type. Step 1: Add a private instance variable First, I need to add the following private instance variable above my search_action() method: private String mySearchResult; The above statement declares that the returned results will be a string called mySearchResult. If I were using the Google Search feature, I would instead declare an object, as the results would contain many variables. This is how Google Web Services packages them. Step 2: Add variables Then I add some variables to my method, starting by replacing the line // TODO: Process the button click action with the following: String spellCheckWord = null; String myKey = "EnterYourKeyHere"; The variables spellCheckWord and myKey will be passed to the Google Web Services (explained in more detail below). I need to initiate them both as type String. Then I enter the key I received by e-mail from Google. Step 3: Add code to search_action() method Here's the code I added to my search_action() method: if (this.spellString.getValue() != null) { spellCheckWord = (String)this. spellString.getValue(); } try { mySearchResult = googleSearchClient1.doSpellingSuggestion( myKey,spellCheckWord); if (mySearchResult != "") { result.setValue(mySearchResult);; } else{ result.setValue("No Results Found"); } catch (Exception e) { log("Remote Connect Failure", e); throw new FacesException(e); } return null; } It's worth taking a look at the code piece by piece. This code first tests whether the user entered text into the text box prior to pressing the button. (Ideally, I'd include an else statement in case the user did not enter anything.) if (this.spellString.getValue() != null) { spellCheckWord = (String)this. spellString.getValue(); } The following try/catch statement allows me to catch an error on connecting with the Google Web Services. The catch section can be expanded to advise the user of what happened in the case of a connection failure. try {code lines} catch (Exception e) { log("Remote Connect Failure", e); throw new FacesException(e); } The code below is the core of my application, as it allows the user to connect with the Google Web Service. MySearchResult is the value returned to my application, in this case, as a correctly spelled word. mySearchResult = googleSearchClient1.doSpellingSuggestion(myKey,spellCheckWord); Google documentation tells me that MySearchResult is a type String. I can also see that this is the case by viewing Return Type under Method Information. To see this, I click on doSpellingSuggestion in the Servers window, and then take a look in the Properties window (as shown in Figure 4 above). To produce the code after the equal sign, I need to enter googleSearchClient1 (which is what the web service was named when I dragged it into my application). If I follow this by a period, code completion shows me my options. Since I'm using Google's spell checker, I choose the doSpellingSuggestion option and notice that two variables (both strings) need to be passed to check spelling on: myKey and spellCheckWord. These I declared above as type String. Note that I must call my method as follows: googleSearchClient1.doSpellingSuggestion and not just this: doSpellingSuggestion In both the Code and the Properties window (shown in Figure 4) I can confirm that this is the case. The longer method is Java Studio Creator's way of ensuring that I use a unique method name. I have the ability to install more than one instance of Google Web Services, and this allows me to use several instances independently. In learning more about this, I found that the WSDL language allows for this complex model hierarchy, and so within the Java Studio Creator IDE (including multiple port names), I still need to fully qualify my method calls. The code below instructs the application to make sure that results have been returned. If the word is spelled correctly, or Google is unable to find a correctly spelled word, an empty string is returned. Rather than showing no results at all I want it to display a No Results Found message. if (mySearchResult != "") { result.setValue(mySearchResult); } else{ result.setValue("No Results Found"); } Note that result.setValue(mySearchResult); causes my text box—which I named result when I designed my application—to get the value returned by the Google Web Service. Here is the final method for my button action, including all of the Java I needed to write: private String mySearchResult; public String search_action() { String spellCheckWord = null; String myKey = "My key provided by Google went here"; if (this.spellString.getValue() != null) { spellCheckWord = (String)this.spellString.getValue(); } try { mySearchResult = googleSearchClient1.doSpellingSuggestio(myKey,spellCheckWord); if (mySearchResult != "") { result.setValue(mySearchResult); } else{ result.setValue("No Results Found"); } } catch (Exception e) { log("Remote Connect Failure", e); throw new FacesException(e); } return null; } Now it's time to test the Google web service in my application. From the menu bar I selected Run > Run Main Project. A few moments later, my application loads in a Web window, and I enter misspelled words. Results are returned to me through the Google Web Services API. In Part 2, I'll build a more advanced application using the National Weather Service web service, and include more advanced output using validators.
http://developers.sun.com/jscreator/community/2/journals/hardy/journal5.html
crawl-001
refinedweb
2,296
60.85
Serializablein the 1.0 version of the spec. It was also not a static inner class, so it can't be made to be Serializable. Therefore, it is being deprecated in version 1.2 of the spec. The replacement is to use an implementation dependent Object. public class StateManager.SerializedView extends Object Convenience struct for encapsulating tree structure and component state. This is necessary to allow the API to be flexible enough to work in JSP and non-JSP environments. clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait Copyright © 1996-2013, Oracle and/or its affiliates. All Rights Reserved. Use is subject to license terms.
http://docs.oracle.com/javaee/7/api/javax/faces/application/StateManager.SerializedView.html
CC-MAIN-2014-52
refinedweb
108
54.49
Hi, just tested a simple Java application - quick start ;-) Regards J. package snippet; import org.eclipse.paho.client.mqttv3.IMqttDelivery; public class MQTTPublishFile implements MqttCallback{ static MqttClient client; static String MQTTServer= "messaging.quickstart.internetofthings.ibmcloud.com"; static String MQTTPort="1883"; static String tenant_id="quickstart"; static String device_id="a36d7c91bf9e"; static String client_id = tenant_id+':'+device_id; public static void main(String[] args) { publish(); } public static void publish() { try { client = new MqttClient("tcp://"+MQTTServer+':'+MQTTPort,client_id); client.connect(); for (int i = 0; i < 100; i++) { MqttMessage msg = new MqttMessage(); msg.setQos(0); String payload = "{\"d\": {\"myName\": \"JW\", \"cputemp\":"+i+"}}"; msg.setPayload(payload.getBytes()); client.publish("iot-1/d/"+device_id+"/evt/mbed-quickstart/json",msg); Thread.sleep(30000); } client.disconnect(); } catch (MqttException e) { e.printStackTrace(); } catch (InterruptedException e) { e.printStackTrace(); } } @Override public void connectionLost(Throwable arg0) } @Override public void deliveryComplete(IMqttDeliveryToken arg0) { } @Override public void messageArrived(String arg0, MqttMessage arg1) throws Exception { } } Answer by jwende (38) | May 22, 2014 at 11:45 AM Sorry for the confusion - I didn't found any java sample code - so I only wanted to share my first working draft :-D That's fine - we were just wondering what issue the was (I even tried to compile and run the code to see what could go wrong). You're missing an '{' (open curly bracket) after the connectionLost method, by the way. I think the reason for a lack of Java examples may be because we're aiming more at devices (and Java may seem a bit... 'heavy' for smaller, battery-powered devices). Anyway, thanks for sharing. Answer by B Alton (638) | May 22, 2014 at 11:28 AM Good day, Um... this may sound bizarre, but where's the actual question? Are you asking why this code doesn't work? Or were you just sharing this code? (It does look like pretty useful code for others to see) Anyway, this is me, scratching my head... This is a weird scenario where the 'answer' has more questions than the 'question' itself... Regards, Ben No one has followed this question yet. Connecting to IOT (Internet Of Things) from my Java Program using REST Api 1 Answer How do you publish to a MQTT topic in IoT using java? 7 Answers How a node red function can call multiple debug nodes? 1 Answer How to get device data from other organisation(s)? 3 Answers Missing Paho dependency in IBM iot-java github Maven project 2 Answers
https://developer.ibm.com/answers/questions/14146/simple-java-sample-code.html?childToView=14182
CC-MAIN-2019-30
refinedweb
403
56.76
Hi David I¹m OK with the filter idea you just explained, I still think it¹s more useful than the jar list itself, but at least you will have all you need. About the class it¹s more related to how the classloaders will interact and how my application will be able to determine wich Driver version should use as you have more than one available. Imagine you create 2 datasources, one pointing to the classes12.jar for the oracle8i and other pointing to the ojdbc14.jar for oracle10g. Then you deploy one application that uses both datasources. Then you have some Driver extension code in your application that should extend the oracle10g driver... How your application classloader will be able to resolve wich driver it¹s going to load? Regards, Daniel On 19/4/07 21:41, "David Jencks" <david_jencks@yahoo.com> wrote: > I don't seem to be explaining what I have in mind very well. I was thinking > of a text entry box on the db wizard page where you;d type in a fully > qualified class name and a button you'd then push, at which point only jars > containing that class would appear in the list of jars. You could then select > the one you want as a dependency of the plan you are constructing. I don't > understand the purpose of the class you show below. > > So for instance you could type in "com.foo.BarXADataSourceImpl" and you'd get > a list of all the different FooBar db driver jars. > > thanks > david jencks > > On Apr 19, 2007, at 9:58 AM, Daniel Alheiros wrote: > >> Interesting... >> >> So if my application depends on an OracleDriver or other vendor specific >> implementations how my application will be instructed to refer to the >> specific implementation (jar) as my class import one of those. How can it >> resolve the reference to the correct jar? >> >> An example class: >> import java.sql.Connection; import java.sql.Driver; import >> java.sql.DriverManager; import java.sql.SQLException; import >> oracle.jdbc.OracleDriver; /** * @author daniel */ public class WeirdTest { >> /** * @param args */ public static void main(String[] args) >> { try { Driver driver = new OracleDriver(); >> DriverManager.registerDriver(driver); Connection conn = >> DriverManager.getConnection("jdbc:oracle:thin:@noldb03:1521:cpslive", >> "nol_web_page_builder", "nolwebpagebuilder"); >> System.out.println(conn.getMetaData()); } catch >> (SQLException e) { e.printStackTrace(); } } } >> Regards >> >> >> On 19/4/07 15:51, "David Jencks" <david_jencks@yahoo.com> wrote: >> >>> > >>> > On Apr 19, 2007, at 2:08 AM, Daniel Alheiros wrote: >>> > >>>> >> Sorry for the ignorance, but I'm a but confused about this jar >>>> >> selection... >>>> >> Does it allows me to have more than one driver version for the same >>>> >> vendor >>>> >> (like having a Oracle 8i and Oracle 10g drivers at same time being >>>> >> used by >>>> >> different datasources)? >>> > >>> > yes. >>> > >>>> >> If not, what is the point in selecting the JAR in >>>> >> this form? >>>> >> >>>> >> Does the Geronimo's classloaders allows me to have "one" class in >>>> >> different >>>> >> versions available? I don't believe it could be managed safely, so >>>> >> I believe >>>> >> it doesn't. >>> > >>> > not in the same classloader, but just as with the datasources this >>> > works fine with the "same" class in different classloaders. >>>> >> >>>> >> I like the idea of having a list containing eventual implementation >>>> >> options, >>>> >> because it is much easier to take a look in an existing environment >>>> >> and be >>>> >> aware of what drivers are available and avoid eventual >>>> >> incompatibilities >>>> >> changing drivers (like Oracle 8i and Oracle 9 or 10g drivers) >>> > >>> > Collecting the jars needed for a particular driver to work can't be >>> > done in any automatic uniform way at the moment AFAIK. So we either >>> > have to show all the jars and let the user pick or require that some >>> > person figure out and install the metadata for each driver. My >>> > experience is that a metadata based solution will never be kept up to >>> > date. >>> > >>> > I thought that a search feature that would find all the jars >>> > containing a particular class, given the fully qualified class name, >>> > was a reasonable compromise between having to look through all the >>> > jars with no clue about whats inside and hiding most of the jars >>> > based on what is sure to be out of date information. >>> > >>> > thanks >>> > david jencks >>> > >>>> >> >>>> >> Regards >>>> >> >>>> >> >>>> >> On 18/4/07 19:28, "Aaron Mulder" <ammulder@alumni.princeton.edu> >>>> >> wrote: >>>> >> >>>>> >>> Just remember that one of the main reasons that there's an awkward >>>>> >>> display of tons of JARs now is that the DB2 driver (did? does?) >>>>> >>> require 3 JARs to all be added in order to function, and only one of >>>>> >>> those has any kind of driver implementation AFAIK. (I think one is a >>>>> >>> license and not sure about the other.) I think there is at least one >>>>> >>> other multi-JAR driver out there too. >>>>> >>> >>>>> >>> I think it would be nice if we showed a single combo box, perhaps >>>>> >>> with >>>>> >>> just the driver JARs listed, and then had a checkbox where if you >>>>> >>> clicked it then the page would adjust to let you select multiple >>>>> >>> arbitrary JARs instead. >>>>> >>> >>>>> >>> Thanks, >>>>> >>> Aaron >>>>> >>> >>>>> >>> On 4/18/07, Daniel Alheiros <Daniel.Alheiros@bbc.co.uk> wrote: >>>>>> >>>> Well anyway, if you are going to filter by the class, you still >>>>>> >>>> have the >>>>>> >>>> "problem" of not seeing all its dependent classes/jars... But I >>>>>> >>>> really think >>>>>> >>>> it shouldn't be a big deal in this context. >>>>>> >>>> >>>>>> >>>> Talking about the performance hit related to this kind of >>>>>> >>>> filtering, it can >>>>>> >>>> be avoided if you keep track of those classes since you install >>>>>> >>>> it into your >>>>>> >>>> repository folder. What do you think about it? >>>>>> >>>> >>>>>> >>>> This initial idea in terms of filtering aims to make the >>>>>> >>>> information shown >>>>>> >>>> really useful to the console user (system operators and >>>>>> >>>> developers)... >>>>>> >>>> >>>>>> >>>> And instead of just filtering the java.sql.Driver >>>>>> >>>> implementations, it could >>>>>> >>>> be instructed to look at XAResource implementations as well... >>>>>> >>>> >>>>>> >>>> Regards >>>>>> >>>> >>>>>> >>>> >>>>>> >>>> On 18/4/07 16:17, "David Jencks (JIRA)" <jira@apache.org> wrote: >>>>>> >>>> >>>>>>> >>>>> >>>>>>> >>>>> [ >>>>>>> >>>>>? >>>>>>> >>>>> page=com.atlassian.jira. >>>>>>> >>>>> pl >>>>>>> >>>>> ugin.system.issuetabpanels:comment-tabpanel#action_12489781 ] >>>>>>> >>>>> >>>>>>> >>>>> David Jencks commented on GERONIMO-3106: >>>>>>> >>>>> ---------------------------------------- >>>>>>> >>>>> >>>>>>> >>>>> What we have now is surely annoying and hard to use, but there >>>>>>> >>>>> are at least >>>>>>> >>>>> 2 >>>>>>> >>>>> problems with the proposal: >>>>>>> >>>>> >>>>>>> >>>>> 1- it wouldn't show a jar that had an xa-only datasource >>>>>>> >>>>> implementation with >>>>>>> >>>>> no Driver implementation >>>>>>> >>>>> 2- it wouldn't show utility jars that might be needed by some >>>>>>> >>>>> drivers >>>>>>> >>>>> >>>>>>> >>>>> So, an option to filter the jars might be useful but it would >>>>>>> >>>>> certainly slow >>>>>>> >>>>> down the ui and you'd need to be able to turn it off. >>>>>>> >>>>> >>>>>>> >>>>> Perhaps a "find the jars containing this class" button would be >>>>>>> >>>>> useful? >>>>>>> >>>>> >>>>>>>> >>>>>> When you are in the Create Database Pool wizard the jar driver >>>>>>>> >>>>>> list could >>>>>>>> >>>>>> show only jars containing Driver implementations >>>>>>>> >>>>>> >>>>>>>> ------------------------------------------------------------------ >>>>>>>> >>>>>> --------- >>>>>>>> >>>>>> -- >>>>>>>> >>>>>> --------------------------------------------- >>>>>>>> >>>>>> >>>>>>>> >>>>>> Key: GERONIMO-3106 >>>>>>>> >>>>>> URL: >>>>>>>> >>>>>> GERONIMO-3106 >>>>>>>> >>>>>> Project: Geronimo >>>>>>>> >>>>>> Issue Type: Improvement >>>>>>>> >>>>>> Security Level: public(Regular issues) >>>>>>>> >>>>>> Components: console >>>>>>>> >>>>>> Affects Versions: 2.0-M3 >>>>>>>> >>>>>> Reporter: Daniel Alheiros >>>>>>>> >>>>>> Priority: Minor >>>>>>>> >>>>>> >>>>>>>> >>>>>> The combo actually shows all the repository jars but could be >>>>>>>> >>>>>> organized in >>>>>>>> >>>>>> an >>>>>>>> >>>>>> easier way if are shown only the repository jars containing >>>>>>>> >>>>>> java.sql.Driver >>>>>>>> >>>>>> implementations. >>>>>> >>>> >>>>>> >>>> >>>>>> >>>> >>>>>> >>>>.
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200704.mbox/%3CC24E50DF.4E3%25Daniel.Alheiros@bbc.co.uk%3E
CC-MAIN-2014-42
refinedweb
1,151
61.16
Perhaps many of you use NUnit. It's a great tool. But sometime ago I wanted to make benchmarking and then I discovered Zanebug. Since I get involved in the project, and now I want to introduce to some of the features that makes Zanebug a better alternative to NUnit. Zanebug is compatible with NUnit, so you can work with it or Nunit at minimun effort. Just change your references. Zanebug introduces the Repeat attribute for methods, its works in this way: using Adapdev.UnitTest; [TestFixture] class TestClass { [Repeat(100)] [Test] void TestCase() { MethodToBenchmark(); } } This way your code will iterate 100 times. Zanebug will report the time elapsed and the resources used in the GUI. In NUnit you have Setup, and TearDown attributes, that do initialization and termination code for the TestFixture. A nice feature of Zanebug is Method specific setups and teardowns: [TestFixture] public class TestFixture { // Runs once at the beginning of SimpleTest only [TestSetUp("SimpleTest")] public void SpecificTestSetUp() { // setup code } // A Test [Test] public void SimpleTest() { // test code } // Runs once at the end of SimpleTest only [TestTearDown("SimpleTest")] public void SpecificTestTearDown() { // teardown code } } This feature is very useful and I missed on NUnit. The graphical user interface of Zanebug is more friendly and has more information than NUnit. Even Even, you can monitor system performance There is a lot more on this tool, I suggest you to visit zanebug web site:. Sean McCormack has been doing a great job with this tool and you should try it. General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/dotnet/IntroducingZanebug.aspx
crawl-002
refinedweb
256
62.98
Change the ListModel dynamically - stereomatching @ import QtQuick 2.1 import QtQuick.Controls 1.0 Rectangle { id: root width: 500 height: 500 state: "one" ListModel{ id: one } ListModel{ id: two } function changeModel(){ if(state == "one"){ one.append({"source": "aaa"}) return one }else{ two.append({"source": "bbb"}) return two } } Row{ Button{ id: buttonOne text: "One" onClicked: { root.state = "one" } } Button{ id: buttonTwo text: "Two" onClicked: { root.state = "two" } } TableView{ id: tableView model: changeModel() TableViewColumn { title: "image" delegate: Text{ text: tableView.model.get(styleData.row).source } } } } } @ When I click the "Two" button, there are always an error message qrc:/main.qml:59: TypeError: Cannot read property 'source' of undefined How could I change the model dynamically?What kind of mistake I make?Thanks - JapieKrekel The reason is that you change your underlaying model of the TableView and both are not of the same length. Since you use .get(styleData.row) all the delegates visible in the table refer to a specific element on index (row number) in you model. If you change the model, each delegate will try to get its data again, but the number of items of the other model is different, and if the number of items is less than the items that are visible, some of the items will fail to get its data, those rows will dissappear but give an error. If you use: @# delegate: Text{ text: styleData.value } @ instead, then it works, without errors, but then you cannot have multiple properties in the same ListElement. Hope it helps. - stereomatching Thanks for your help:).Do we have a more flexible way to switch between different models? Solution one : Design a intermediate models(called C) with several properties when I switch the model, I could clear the data of C and copy the data of different models(A or B) to C Solution two : Use Loader to load the view, when the model change destroy the old table and load a new table Do you have easier solutions?Thanks a lot
https://forum.qt.io/topic/29088/change-the-listmodel-dynamically
CC-MAIN-2018-26
refinedweb
332
56.76
In this chapter, we describe how to use libjit with a number of short tutorial exercises. Full source for these tutorials can be found in the tutorial directory of the libjit source tree. For simplicity, we will ignore errors such as out of memory conditions, but a real program would be expected to handle such errors. In the first tutorial, we will build and compile the following function (the source code can be found in tutorial/t1.c): To use the JIT, we first include the <jit/jit.h> file: All of the header files are placed into the jit sub-directory, to separate them out from regular system headers. When libjit is installed, you will typically find these headers in /usr/local/include/jit or /usr/include/jit, depending upon how your system is configured. You should also link with the -ljit option. Every program that uses libjit needs to call jit_context_create: Almost everything that is done with libjit is done relative to a context. In particular, a context holds all of the functions that you have built and compiled. You can have multiple contexts at any one time, but normally you will only need one. Multiple contexts may be useful if you wish to run multiple virtual machines side by side in the same process, without them interfering with each other. Whenever we are constructing a function, we need to lock down the context to prevent multiple threads from using the builder at a time: The next step is to construct the function object that will represent our mul_add function: The signature is a jit_type_t object that describes the function's parameters and return value. This tells libjit how to generate the proper calling conventions for the function: This declares a function that takes three parameters of type int and returns a result of type int. We've requested that the function use the cdecl application binary interface (ABI), which indicates normal C calling conventions. See section Manipulating system types, for more information on signature types. Now that we have a function object, we need to construct the instructions in its body. First, we obtain references to each of the function's parameter values: Values are one of the two cornerstones of the libjit process. Values represent parameters, local variables, and intermediate temporary results. Once we have the parameters, we compute the result of x * y + z as follows: This demonstrates the other cornerstone of the libjit process: instructions. Each of these instructions takes two values as arguments and returns a new temporary value with the result. Students of compiler design will notice that the above statements look very suspiciously like the "three address statements" that are described in compiler textbooks. And that is indeed what they are internally within libjit. If you don't know what three address statements are, then don't worry. The library hides most of the details from you. All you need to do is break your code up into simple operation steps (addition, multiplication, negation, copy, etc). Then perform the steps one at a time, using the temporary values in subsequent steps. See section Working with instructions in the JIT, for a complete list of all instructions that are supported by libjit. Now that we have computed the desired result, we return it to the caller using jit_insn_return: We have completed the process of building the function body. Now we compile it into its executable form: As a side-effect, this will discard all of the memory associated with the values and instructions that we constructed while building the function. They are no longer required, because we now have the executable form that we require. We also unlock the context, because it is now safe for other threads to access the function building process. Up until this point, we haven't executed the mul_add function. All we have done is build and compile it, ready for execution. To execute it, we call jit_function_apply: We pass an array of pointers to jit_function_apply, each one pointing to the corresponding argument value. This gives us a very general purpose mechanism for calling any function that may be built and compiled using libjit. If all went well, the program should print the following: You will notice that we used jit_int as the type of the arguments, not int. The jit_int type is guaranteed to be 32 bits in size on all platforms, whereas int varies in size from platform to platform. Since we wanted our function to work the same everywhere, we used a type with a predictable size. If you really wanted the system int type, you would use jit_type_sys_int instead of jit_type_int when you created the function's signature. The jit_type_sys_int type is guaranteed to match the local system's int precision. Finally, we clean up the context and all of the memory that was used: In this second tutorial, we implement the subtracting Euclidean Greatest Common Divisor (GCD) algorithm over positive integers. This tutorial demonstrates how to handle conditional branching and function calls. In C, the code for the gcd function is as follows: The source code for this tutorial can be found in tutorial/t2.c. Many of the details are similar to the previous tutorial. We omit those details here and concentrate on how to build the function body. See section Tutorial 1 - mul_add, for more information. We start by checking the condition x == y: This is very similar to our previous tutorial, except that we are using the eq operator this time. If the condition is not true, we want to skip the return statement. We achieve this with the jit_insn_branch_if_not instruction: The label must be initialized to jit_label_undefined. It will be updated by jit_insn_branch_if_not to refer to a future position in the code that we haven't seen yet. If the condition is true, then execution falls through to the next instruction where we return x to the caller: If the condition was not true, then we branched to label1 above. We fix the location of the label using jit_insn_label: We use similar code to check the condition x < y, and branch to label2 if it is not true: At this point, we need to call the gcd function with the arguments x and y - x. The code for this is fairly straight-forward. The jit_insn_call instruction calls the function listed in its third argument. In this case, we are calling ourselves recursively: The string "gcd" in the second argument is for diagnostic purposes only. It can be helpful when debugging, but the libjit library otherwise makes no use of it. You can set it to NULL if you wish. In general, libjit does not maintain mappings from names to jit_function_t objects. It is assumed that the front end will take care of that, using whatever naming scheme is appropriate to its needs. The final part of the gcd function is similar to the previous one: We can now compile the function and execute it in the usual manner. In the previous tutorials, we compiled everything that we needed at startup time, and then entered the execution phase. The real power of a JIT becomes apparent when you use it to compile functions only as they are called. You can thus avoid compiling functions that are never called in a given program run, saving memory and startup time. We demonstrate how to do on-demand compilation by rewriting Tutorial 1. The source code for the modified version is in tutorial/t3.c. When the mul_add function is created, we don't create its function body or call jit_function_compile. We instead provide a C function called compile_mul_add that performs on-demand compilation: We can now call this function with jit_function_apply, and the system will automatically call compile_mul_add for us if the function hasn't been built yet. The contents of compile_mul_add are fairly obvious: When the on-demand compiler returns, libjit will call jit_function_compile and then jump to the newly compiled code. Upon the second and subsequent calls to the function, libjit will bypass the on-demand compiler and call the compiled code directly. Note that in case of on-demand compilation libjit automatically locks and unlocks the corresponding context with jit_context_build_start and jit_context_build_end calls. Sometimes you may wish to force a commonly used function to be recompiled, so that you can apply additional optimization. To do this, you must set the "recompilable" flag just after the function is first created: Once the function is compiled (either on-demand or up-front) its intermediate representation built by libjit is discarded. To force the function to be recompiled you need to build it again and call jit_function_compile after that. As always when the function is built and compiled manually it is necessary to take care of context locking: After this, any existing references to the function will be redirected to the new version. However, if some thread is currently executing the previous version, then it will keep doing so until the previous version exits. Only after that will subsequent calls go to the new version. In this tutorial, we use the same on-demand compiler when we recompile mul_add. In a real program, you would probably call jit_function_set_on_demand_compiler to set a new on-demand compiler that performs greater levels of optimization. If you no longer intend to recompile the function, you should call jit_function_clear_recompilable so that libjit can manage the function more efficiently from then on. The exact conditions under which a function should be recompiled are not specified by libjit. It may be because the function has been called several times and has reached some threshold. Or it may be because some other function that it calls has become a candidate for inlining. It is up to the front end to decide when recompilation is warranted, usually based on language-specific heuristics. While libjit can be easily accessed from C++ programs using the C API's, you may instead wish to use an API that better reflects the C++ programming paradigm. We demonstrate how to do this by rewriting Tutorial 3 using the libjitplus library. To use the libjitplus library, we first include the <jit/jit-plus.h> file: This file incorporates all of the definitions from <jit/jit.h>, so you have full access to the underlying C API if you need it. This time, instead of building the mul_add function with jit_function_create and friends, we define a class to represent it: Where we used jit_function_t and jit_context_t before, we now use the C++ jit_function and jit_context classes. In our constructor, we attach ourselves to the context and then call the create() method. This is in turn will call our overridden virtual method create_signature() to obtain the signature: The signature_helper() method is provided for your convenience, to help with building function signatures. You can create your own signature manually using jit_type_create_signature if you wish. The final thing we do in the constructor is call set_recompilable() to mark the mul_add function as recompilable, just as we did in Tutorial 3. The C++ library will create the function as compilable on-demand for us, so we don't have to do that explicitly. But we do have to override the virtual build() method to build the function's body on-demand: This is similar to the first version that we wrote in Tutorial 1. Instructions are created with insn_* methods that correspond to their jit_insn_* counterparts in the C library. One of the nice things about the C++ API compared to the C API is that we can use overloaded operators to manipulate jit_value objects. This can simplify the function build process considerably when we have lots of expressions to compile. We could have used insn_mul and insn_add instead in this example and the result would have been the same. Now that we have our mul_add_function class, we can create an instance of the function and apply it as follows: See section Using libjit from C++, for more information on the libjitplus library. Astute readers would have noticed that Tutorial 2 included two instances of "tail calls". That is, calls to the same function that are immediately followed by a return instruction. Libjit can optimize tail calls if you provide the JIT_CALL_TAIL flag to jit_insn_call. Previously, we used the following code to call gcd recursively: In Tutorial 5, this is modified to the following: There is no need for the jit_insn_return, because the call will never return to that point in the code. Behind the scenes, libjit will convert the call into a jump back to the head of the function. Tail calls can only be used in certain circumstances. The source and destination of the call must have the same function signatures. None of the parameters should point to local variables in the current stack frame. And tail calls cannot be used from any source function that uses try or alloca statements. Because it can be difficult for libjit to determine when these conditions have been met, it relies upon the caller to supply the JIT_CALL_TAIL flag when it is appropriate to use a tail call. This libjit/dpas directory contains an implementation of "Dynamic Pascal", or "dpas" as we like to call it. It is provided as an example of using libjit in a real working environment. We also use it to write test programs that exercise the JIT's capabilities. Other Pascal implementations compile the source to executable form, which is then run separately. Dynamic Pascal loads the source code at runtime, dynamically JIT'ing the program as it goes. It thus has a lot in common with scripting languages like Perl and Python. If you are writing a bytecode-based virtual machine, you would use a similar approach to Dynamic Pascal. The key difference is that you would build the JIT data structures after loading the bytecode rather than after parsing the source code. To run a Dynamic Pascal program, use dpas name.pas. You may also need to pass the -I option to specify the location of the system library if you have used an import clause in your program. e.g. dpas -I$HOME/libjit/dpas/library name.pas. This Pascal grammar is based on the EBNF description at the following URL: There are a few differences to "Standard Pascal": program Name (Input, Output);. This can be abbreviated to program Name;as the program modifiers are ignored. xor, shl, @, etc have been added. Integer, Cardinal, LongInt, etc) follow those used in GNU Pascal also. The Integertype is always 32-bits in size, while LongIntis always 64-bits in size. SysInt, SysCard, SysLong, SysLongCard, SysLongestInt, and SysLongestCardare guaranteed to be the same size as the underlying C system's int, unsigned int, long, unsigned long, long long, and unsigned long longtypes. Addressis logically equivalent to C's void *. Any pointer or array can be implicitly cast to Address. An explicit cast is required to cast back to a typed pointer (you cannot cast back to an array). Stringtype is declared as ^Char. Single-dimensional arrays of Charcan be implicitly cast to any Stringdestination. Strings are not bounds-checked, so be careful. Arrays are bounds-checked. p[n]will access the n'th item of an unbounded array located at p. Use with care. file oftypes. Data can be written to stdout using Writeand WriteLn, but that is the extent of the I/O facilities. import Name1, Name2, ...;can be used at the head of a program to declare additional files to include. e.g. import stdiowill import the contents of stdio.pas. We don't support units. ; ..can be used at the end of a formal parameter list to declare that the procedure or function takes a variable number of arguments. The builtin function va_arg(Type)is used to extract the arguments. import("Library")can be used to declare that a function or procedure was imported from an external C library. For example, the following imports the C putsand printffunctions: Functions that are imported in this manner have case-sensitive names. i.e. using Printf above will fail. throwkeyword can be used to throw an exception. The argument must be a pointer. The try, catch, and finallykeywords are used to manage such exceptions further up the stack. e.g. The catch block will be invoked with the exception pointer that was supplied to throw, after casting it to Type (which must be a pointer type). Specifying throw on its own without an argument will rethrow the current exception pointer, and can only be used inside a catch block. Dynamic Pascal does not actually check the type of the thrown pointer. If you have multiple kinds of exceptions, then you must store some kind of type indicator in the block that is thrown and then inspect ^Name to see what the indicator says. exitkeyword can be used to break out of a loop. XXH. The first digit must be between 0 and 9, but the remaining digits can be any hex digit. (if e1 then e2 else e3). The brackets are required. This is equivalent to C's e1 ? e2 : e3. return value;in C. It isn't necessary to arrange for execution to flow through to the end of the function as in regular Pascal. sizeof(Type)can be used to get the size of a type. pointer to procedure/functiontype. This document was generated by Klaus Treichel on May, 11 2008 using texi2html 1.78.
http://www.gnu.org/software/dotgnu/libjit-doc/libjit_3.html
CC-MAIN-2018-09
refinedweb
2,910
62.48
This is the third article in a four-part series discussing common XML pitfalls and, more importantly, ways around them. As a consultant and trainer, I have noticed that many companies and developers make the same mistakes when they adopt XML technology. This series is an attempt to document some of those problems and spare you the annoyance of dealing with them. Part 1 looked at common misunderstandings with the XML standard itself (such as encodings and namespaces). Part 2 focused on design issues: how and where to introduce XML support in an application. One of the guiding principles of Part 2 is to treat XML files as an interface between applications and to apply the time-tested design techniques used with JavaBeans and other interfaces: separation of tasks, documentation, built-in evolution, and more. This article looks at validation, error handling, and schemas. Schemas primarily address the design, documentation, and validation of an interface. I'll focus primarily on validation. First, some vocabulary definitions. The W3C XML Schema Recommendation is one schema language for XML. Others include Document Type Definitions (DTDs), RELAX NG, and Schematron (see Resources). In this article, Schema (capital S) indicates the W3C XML Schema Recommendation. Lowercased, schema refers to the general concept of a schema language. In 1998, when XML emerged as a W3C recommendation, DTDs were something of a novelty. They'd been around since SGML was adopted by the International Standards Organization (ISO) in 1986, but few other file formats offered a validation mechanism, and none of those that did proved as popular as XML. Still, the underlying concept isn't new; it derived directly from database schemas. Essentially a DTD describes the vocabulary's structure (the tags and attributes), similar to the way a database schema describes a database's structure (the tables and columns). The W3C later released XML Schema as a more powerful schema language. Experience with database design shows that schemas are most useful as a safeguard against programming errors. Storing incomplete or incorrect data is less likely when you use a schema because the data must conform to clear rules. The usefulness of schemas increases with the number of applications accessing the information. The more applications that access and modify the data, the greater the need for the structure and guidance that a schema provides. However, database developers have long known that a schema does not replace error handling. Database validation is the last chance to catch errors, but a good application has already validated the data at that point (and error messages from the database engine are anything but readable). By and large, this experience holds true with schemas for XML. The more applications that use a given vocabulary, the more you need a schema to define a common framework. And although schemas provide some error handling, it is useful to complement them in the application. The DTD was the original schema language for XML. A direct transplant from SGML, the DTD is too limited for many applications. Still, because it is the oldest schema language, it is the most widely available. It always pays to keep a DTD around, because some of the older products in your toolkit might not work with more recent alternatives. XML Schemas are more powerful and more modern. Among the differences between DTDs and XML Schemas (see Resources for a couple of articles that compare the two), three important ones matter in my practice: - XML namespaces DOCTYPEdeclarations - Rich data typing Lack of support for namespaces is the single most glaring deficiency of DTDs. (You can emulate namespaces partially through parameter entities, but it is complex and not totally satisfactory.) XML Schemas make namespaces a first-class citizen for XML. I discuss DOCTYPE statements and data typing in the next section, Schema for validation. XML Schema is a complex recommendation. This complexity has led to the development of alternate schema languages that emphasize simplicity. The best-known alternative is RELAX NG, which the ISO is standardizing. Although technically interesting, those efforts lack the W3C's support, which translates into less support from tool vendors. My customers have shown little interest in these alternatives, and vendors don't offer much support for them, so I don't use them often. As I mentioned earlier, schemas have essentially three applications: design, documentation, and validation. Simple vocabularies are often designed by carefully crafting a corresponding schema. With proper annotations, the schema serves as documentation for developers. Designing the schema directly works well for tiny vocabularies. Anything larger is best served by a real modeling language such as UML (see Resources for a previous Working XML series that discusses UML). In this article I'll concentrate on the use of schemas for validation. I have noticed three common errors in the use of schemas: - Making them too stringent (a mistake I tend to make; I always have to double-check myself) - Failing to design proper error handling - Implementing validation unreliably How stringent should a schema be? This is the first question to ask when you develop a new schema. Designers tend to be strict in an attempt to prevent errors; catching errors early through proper validation of files helps build more stable systems. Yet experience (including experience with database designs) shows that you need to balance strictness with what, for lack of a better word, I call clarity. The dilemma boils down to this: Up to a certain point, the schema must match the expectations and understanding of the developers and other users of the vocabulary. In other words, the vocabulary should be easy to work with. Yet to design a schema that prevents as many errors as possible, you need to organize the vocabulary around the error checking, sometimes using advanced features such as inheritance. The result can be a complex schema that's hard to read and harder to implement correctly. Many users will find this too strict a framework. It can also be difficult to maintain, because adding or removing validation can require that you change the vocabulary. Look at the common example of designing a vocabulary for international purchase orders. You need tags to record addresses, and you want those addresses to be correct because goods will be shipped to them. But how far is too far when it comes to validating the address? Take the state element, for example. States are required for U.S. addresses but do not exist (and have no equivalent) in most other countries. So you'd like to make the state tag mandatory ( minOccurs="1" in XML Schema), but you can't because it won't work for most countries. One option is to have strict validation by introducing specialized address elements by country: U.S. addresses include a state; no other addresses do. This may sound attractive at first, but when you realize that there are 193 countries in the world it becomes clear that the schema will become bloated. This example, in which tags are introduced only to strengthen validation, is an example of lack of clarity. The purpose and intent of those tags with a fair amount of redundant information will not be obvious to schema readers. Ironically, this can lead to errors. So what do you validate for? Validate as much as makes sense, but refrain from bending the data structure to push for more validation. Validation and error handling are not a monolith. Your application can validate at different levels: - Structure: The schema specifies a structure that controls which tags will appear where, how often they repeat, and more -- for example, a purchase order line consists of a product number, description, quantity, and price. - Data typing: You can use data typing to control tag content -- for example, the quantity field in a purchase order is a non-negative integer (because the customer can't order less than a whole unit of the product). - Assertion: Use assertions to check relationships between fields -- for example, the purchase order's total is the sum of the order lines. DTD is pretty good at structural validation. So is XML Schema. XML Schema offers extensions (such as typing, local elements, and inheritance) that have proven contentious with the developer community. I for one was perfectly happy with the DTD's feature set in this respect. XML Schema also adds rich data typing support. You can validate against the data types found in modern databases and programming languages. Furthermore, you can derive your own types through facets. In a nutshell, facets further restrict a simple type by specifying the maximum length of a string or the upper and lower limits of a number. The last level, assertion, is typically implemented through Java code or through a dedicated assertion language such as Schematron. Unfortunately, because XML Schema does not recognize assertions, some applications don't include them in their validation strategy. The result? Applications print out bizarre error messages, freeze, or crash when they receive an incorrect file. To build strong error handling, you need a layered approach: - At the lowest level, the parser checks for syntax conformance. - On the next layer sits a schema that validates the structure and typing. - A layer of custom Java code (or a Schematron) performs the next level of validation. - Optionally, the Java object that you load the data into can perform a last level of validation. Each layer is more specific than the previous one, making it easier to share validation across several applications. Validation is also more maintainable when you clearly specify the responsibility of each piece of code. Implementation considerations It seems obvious that validation should not depend on the XML file being correct. If the file were correct, why would you bother validating it? Yet many applications validate files only if they are at least partly correct. Applications that rely on the file itself to reference the schema (through a DOCTYPE statement or through a schemaLocation attribute) are at risk. If the file points to an incorrect schema, the parser will validate against the incorrect schema. It might not report errors even though the file is incorrect. With DTDs, the document must include a DOCTYPE statement. So to be safe, the application must either insert its own DOCTYPE statement before parsing or, at a minimum, implement SAX's LexicalHandler interface and check that the referenced DTD is the correct one. XML Schema offers a better solution: Tell the parser to load the schema from the document namespace. With a JAXP 1.2 parser, you configure the schema by using the property in the Java code before reading the document (see Resources for an article with sample code). Thanks to this property, the parser will always load the correct schema. A word of warning: JAXP properties work like XML namespaces. The URL is an identifier; it does not point to a Web page. Just make sure you copy the URL exactly as shown. What about the schemaLocation attribute that might appear in the XML document? If you read the XML Schema specification, you'll see that this attribute was never intended as a general mechanism for loading schemas but only as a hint. In practice, it is useful when you test and debug (before you've had a chance to configure the parser fully), but not for production data. Strong error handling translates into more reliable applications. When you load an XML document, it is best to catch errors early, before they have a chance to pollute other data. Schema languages help in that respect, but for maximum reliability you want to adopt a layered approach and not limit yourself to structural validations. You should validate using data types and assertions as well. Learn - "Working XML: Safe coding practices" (developerWorks, July and August 2005): Read the previous articles in this series. - "UML, XMI, and code generation, Part 1" (developerWorks, March 2004): Learn why UML is better than Schema language for XML design and how to implement it. See also Part 2 (May 2004), Part 3 (June 2004), and Part 4 (August 2004). - "Validating XML" (developerWorks, August 2003): Get started with DTDs and XML Schemas in no time. - "A hands-on introduction to Schematron" (developerWorks, September 2004): Find out how to use this assertion validation language. - "Understanding RELAX NG" (developerWorks, December 2003): Check out this tutorial if you want to get up to speed on RELAX NG. - "Tell a parser where to find a schema" (developerWorks, May 2003): This tip includes sample listings for schema validation with JAXP 1.2. - "Comparing W3C XML Schemas and Document Type Definitions (DTDs)" (developerWorks, March 2001): David Mertz is skeptical that Schemas will replace DTDs, though he believes that XML Schemas are an invaluable tool in a developer's arsenal. - "Why XML Schema beats DTDs hands-down for data" (developerWorks, June 2001): Kevin Williams tells why he's sold on XML Schema for the structural definition of XML documents for data. - "XML style guidelines for leveraging schema validators" (developerWorks, November 2003): This article discusses proper XML structure as well as best and worst practices for defining data validation rules in XML Schema. - developerWorks XML zone: Learn more about XML here. You'll find technical documentation, how-to articles, education, downloads, product information, and more. - IBM XML certification: Find out how you can become an IBM Certified Developer in XML and related technologies. Get products and technologies - XML Schema Quality Checker: This XML Schema verification tool is available as a free trial download through IBM alphaWorks. Discuss Benoît Marchal is a Belgian consultant. He is the author of XML by Example, Second Edition and other XML books. You can contact him at bmarchal@pineapplesoft.com or through his personal site at.
http://www.ibm.com/developerworks/xml/library/x-wxxm32.html
crawl-002
refinedweb
2,276
54.52
Important: Please read the Qt Code of Conduct - Streaming Video Out Hi, I found how I can read a movie and put in a widget with QMediaPlayer. I Added to a GraphicsScene to put an overlay over the video. Now I would like to difuse this video ( with the overlay ) on a broadcast address. But I don't know how i can get the streaming video data to flush it in a socket. Can you say me which class i can use to do it ? Kind regards Hi and welcome to devnet, Streaming is not part of QtMultimedia's API. However if you're on linux you can use e.g. QtGstreamer or vlc-qt Hope it helps @Bouriquo You can get the data from the file with QFile and its read()method then use a QTCPServer to send the data and QTCPSocket to receive it. Hi All, Thanks you for your reply. @SGaist yes i know it's not a part of qmultimedia. But if you prefer, I developed several applications, one to take a stream and replay the same stream with QUdpSocket, another one to play a video with QMediaPlayer and another to Play a video and add a Pixmap overlay. But now, i want to mixed them and obtain a replay of the input stream to the output stream with a pixmap overlay added. And i don't know how I can get the data to stream from the QGraphicsScene ( video + pixmap ). @raf924 Yes Stream a video I found it, but like as said before, i want to modifiy the frame before send to the network with a pixmap overlay. @Bouriquo Oh ok. hmm well With The QVideoSource class you can get QVideoFrames of which you can get the bytes then maybe from those you can make it a pixmap then add your overlay ( i have no idea how to do that) then send that through the socket and convert it back to a qvideoframe which you can display on a Qvideosurface. I don't know how to do all that though or if it can be done at all, but where there's a will there's a way right? Anyway this might help : It's very easy but hard too. I have a Qt video surveillance application. I'm using OpenCV to grab images from cameras, FFMpeg to store video and a own http server Qt based to broadcast the video using h264 or mjpeg. If you want to use broadcast (one server to more clients) I suggest to use udp. If you want to send more than one stream I suggest you to use an own protocol similar to avi format (little package with header and data). I tried to get QVideoFrame from QAbstractVideoSurface. But I can't get the QVideoFrame buffer to send it to my UdpSocket. I just used QVideoFrame.bits(), but it's seems is empty. @Bouriquo How did you go about getting it? Did you subclass QAbstractVideoSurface like in the link i gave you? @raf924 Yes I was already find this class. #include "myvideosurface.h" MyVideoSurface::MyVideoSurface() { writer = new QUdpSocket(); } QListQVideoFrame::PixelFormat MyVideoSurface::supportedPixelFormats( QAbstractVideoBuffer::HandleType handleType) const { Q_UNUSED(handleType); // Return the formats you will support return QList<QVideoFrame::PixelFormat>() << QVideoFrame::Format_RGB565; } bool MyVideoSurface::present(const QVideoFrame &frame) { Q_UNUSED(frame); //statusLabel->setText(tr("Now broadcasting datagram %1").arg(messageNo)); QVideoFrame *clone = new QVideoFrame(frame); clone->map(QAbstractVideoBuffer::ReadOnly); qDebug() << clone->isMapped(); //frame.map(QAbstractVideoBuffer::ReadOnly); QByteArray *datagram = new QByteArray((char*)(clone->bits()),clone->mappedBytes()); //QByteArray datagram = "Broadcast message "; //qDebug() << "Write bytes1 : " << frame.mappedBytes() << frame.mapMode(); //qDebug() << "Write bytes2 : " << clone->mappedBytes() << clone->mapMode(); qDebug() << clone->bits(); writer->writeDatagram(datagram->data(), datagram->size(),QHostAddress::Broadcast, 45454); //writer->writeDatagram((char*)clone->bits(),clone->mappedBytes(),QHostAddress::Broadcast, 45454); clone->unmap(); return true; } Out of curiosity, why not use something that's designed for that task ? GStreamer allows to do everything you need: get the camera stream, add overlay, stream using standard protocol and if you use QtGStreamer you also have a widget or QtQuick element to visualize that. Yes I know. But I just would like to try by myself :D. This post is deleted! Is using GStreamer still recommended solution for streaming out through network? EDIT: relevant -> recommended Why wouldn't it be ? @SGaist said in Streaming Video Out: Why wouldn't it be ? My mistake, made post most precise. What exactly do you want to do ? I need a simple way to send frames from camera over network, while at the same time providing real-time 30FPS camera preview in QML widget. Simple - means as little coding as possible. The way I'm trying to do it now is to derive C++ class from QObjectwith "mediaObject"property expose it as a source for VideoOutputin QML. The C++ class would catch the frames and save them into buffer (and sent over network). Unfortunately its impossible (afaik) using just QML, beacuse imageCaptureobject inside Camera does not allow to save frame into buffers (only into files). Then QtGStreamer is likely what you want. But, would it work if I would create a class with Q_PROPERTYmediaObject - that hold QCamera instance? And pass this class to QML videoOutput ? QCamera is a class to get video stream or images from a camera. The QML VideoOutput type is designed to render video onscreen. This post is deleted!
https://forum.qt.io/topic/60332/streaming-video-out/10
CC-MAIN-2020-45
refinedweb
888
72.76
Android applications are perhaps the first choice to download, due to the market they share with the world. I have seen a lot of applications that have been developed and run on Android for a long time before launching them on the iOS store. Although, the strict iOS scrutiny does play a role in the number of applications available on its store. Launching first for Android isn’t about scrutiny or being rejected, it’s just about how much you can reach with your launch and which store matters the most for the business. According to GlobalStats, Android has 72.73% of the market share while iOS has only around 26%. In simple words, out of 100 people, 72 are using Android OS while 26 use iOS. So, if you need to reach a broader market, what would you prefer? 💹 Here comes the importance of Android testing and the frameworks that can help us facilitate it. Since developing for Android comes with this advantage, it also comes with a high risk. If your application is not able to perform as intended and has several glitches, you are jeopardizing your business in front of the audience. This could have fatal effects. The best way to minimize the risk is to use a framework suitable to your requirements and skills. On the same track, in this post, we will go through several popular Android testing frameworks and list their greatest strengths. Let’s get started! Table of Contents - Appium - TestProject - Espresso - Calabash - Selendroid - Robotium - Android automation frameworks at a glance - Vote for your favorite Android testing frameworks 1) Appium Appium is a multi-talented, versatile open-source testing framework that tests Android, iOS, and Windows applications through automation. The main aim of Appium is to relieve the users from being bound into a single technology stack and carry on with its anomalies. Appium allows flexible test execution and has been in the community for a long time. A large community will help you in every corner of the Android testing process. Pros - No app recompiling: Appium lives on the philosophy that the user should not be required to recompile the application or modify it for testing. - Tool independent: As I mentioned, it does not let you get bound into a single tech stack. It’s flexible and can operate with a lot of tools. So, if you have a favorite tool, you can integrate Appium into it and enjoy testing. - Open-source: It is open-source and hence free to use for everyone. - Language independent: Since Appium works with every testing framework, you can choose a framework that supports your language. Hence, it can support any language you want to write tests in. - Cross-platform: It’s cross-platform in nature so you can use the same code among different operating systems. This also enhances the code reusability. Cons - If you are not using Selendroid, you may not be able to test for the Android version below 4.2. - Appium has been observed to be a bit slower than other testing frameworks in test execution. Code example def getresults(self): displaytext = self.driver.find_element_by_accessibility_id("CalculatorResults").text displaytext = displaytext.strip("Display is " ) displaytext = displaytext.rstrip(' ') displaytext = displaytext.lstrip(' ') return displaytext def test_initialize(self): self.driver.find_element_by_name("Clear").click() self.driver.find_element_by_name("Seven").click() self.assertEqual(self.getresults(),"7") self.driver.find_element_by_name("Clear").click() Bottom line: Appium is a great software that has existed for a long time and has proved its worth. It gives the flexibility of test framework, language, and platform which is everything a tester wants. Appium is a very useful tool for your Android testing and I am sure you will love it. 2) TestProject TestProject is made for the community. Hence, it is free to use for everyone. TestProject is built on top of proven and trusted frameworks – Selenium and Appium. Hence, you not only get the benefits of Selenium and Appium but also the added features of TestProject itself. The tool is available for Android app testing, iOS app testing (iOS testing on Windows too!), and even web app testing. It is powerful, crisp, and provides unparalleled advantages to the testers & developers. Pros - Recorder: TestProject comes with advanced recording capabilities that help in increasing the code reusability by extending the recorded tests across suites. - Self-healing recorder: The recorder is artificial intelligence-powered and possesses self-healing capabilities. Therefore, any change in the location of the element is detected and the tests are adjusted accordingly. - Free to use: It’s free to use for everyone and therefore, you can try it and continue your usage based on experience, without worrying about emptying your pockets. - Dynamic element support: The automation framework comes with built-in support for dynamic elements. - Adaptive-wait technology: The adaptive-wait technology adapts itself to the loading time of the web page. This prevents a few test cases from failing just because the web page took a little more time to load. - Cloud-enabled: The tool comes with hybrid support for cloud-based testing along with local on-premise testing. - Centralized dashboard: It has a very attractive and unique dashboard where a tester can analyze their tests in a single place, no matter where they executed them. - Addons library: It comes with loads of addons to enhance the capabilities of the tool according to the tester or the project. This way you can also customize the tool as per your preferences. - Schedule and run: The tool offers “schedule and run” features for the tests, benefiting from built-in CI flows. Cons - People coming directly from Appium might find some of their integrations missing from the tool. - TestProject does not offer an in-house device farm solution (they do though have built-in integrations to either BrowserStack or Sauce Labs). You need to connect your physical real devices via USB cable or can use emulators/simulators. Code example driver.findElement(By.id("name")).sendKeys("John Smith"); driver.findElement(By.id("password")).sendKeys("12345"); driver.findElement(By.id("login")).click(); boolean passed = driver.findElement(By.id("logout")).isDisplayed(); if (passed) { System.out.println("Test Passed"); } else { System.out.println("Test Failed"); } Bottom line: TestProject is a great free tool that is gaining appreciation from all over the world. Communities and forums have picked it up for its superior strengths and swiftness in the Android testing phase. No matter your skill set, if you love Appium, you are going to fall in love with TestProject’s extremely intuitive & powerful solution for your E2E mobile testing 📲 3) Espresso Espresso is an Android testing framework developed by Google and follows open-source development paradigms. Espresso uses JUnit as its base and is, therefore, a familiar and easy android testing framework for the testers. The tool is extremely efficient in the user interface testing of the android mobile applications. Pros - Easy and predictable: Espresso is easy to use and the scripts are predictable in nature because of their familiarity with the JUnit. - Fast: This Google automation tool is faster in processing even with the “wait” and “sleep” in the code. This is achieved by automatically detecting the idle thread and when the tests are in other states. - Highly stable: For the same reasons described in the previous point, Espresso is highly stable in nature. - Open-source: It is an open-source framework and hence the abilities can be extended to other tools. Cons - UI-based only: This Android framework is just UI-based which raises the need to learn multiple frameworks for a completing testing phase. This can be a deal-breaker for some testers. - Not cross-platform: It is not a cross-platform android testing framework. This means you have to construct multiple scripts for different platforms. Code example @Test fun greeterSaysHello() { onView(withId(R.id.name_field)).perform(typeText("Steve")) onView(withId(R.id.greet_button)).perform(click()) onView(withText("Hello Steve!")).check(matches(isDisplayed())) } Bottom line: Espresso is a very comforting tool that asks for a minimum job from the tester. The problem is that it is just best for two things: Android and UI-based testing only. When other frameworks are offering much more, some testers are bound to look the other way even if Espresso’s synchronization features are extraordinary. 4) Calabash Calabash is an automation testing framework used with both Android and iOS applications. The power of calabash is its test creation which does not require any coding skills from the tester 👩💻 The framework works exceptionally well with real devices and provides real-time feedback for automation tests. Calabash is open-source in development nature and can work around with hybrid applications as well. Pros - BDD-enabled: Calabash uses behavioral driven development that is easier and faster to construct. - Automatic UI interactions: It enables automatic UI interactions such as pressing a button, etc. - Easy CI/CD integrations: Can be integrated with popular CI/CD tools very easily. - Real device compatible: It has been praised for running on real devices by the testers. This feature allows the testers to assess the mobile device in real conditions which are better than simulators. Cons - Calabash server on Android can only be used to test the UI inside the application code. This problem does not persist with Appium. - Notification testing is currently not available in Calabash. Code example ## Invalid login ## Given /^I am about to login$/ do welcome = page(WelcomePage).await @page = welcome.wordpress_blog end When /^I enter invalid credentials$/ do @page = @page.login(USERS[:invalid]) end Then /^I am presented with an error message to correct credentials$/ do @page.assert_invalid_login_message screenshot_embed end Bottom line: Calabash is a good tool for acceptance testing in Android. But, it comes with a few limitations that are hard to ignore. While I can praise Calabash for its real-time feedback and real device testing, it makes me think twice before using it. I find Calabash more powerful for iOS applications rather than for Android-based testing. Although it depends on the project and the team too. 5) Selendroid Selendroid is a word originating from the amalgamation of Selenium and Android. You can also remember this framework as “Selenium for Android“. Since the framework is Selenium-based, it comes with all the powerful strengths of Selenium and this has been a USP in the market for it. Selendroid uses Selenium 2.0 to write the test cases which a lot of automation testers are already aware of. Also, it can be used with android native applications or hybrid applications. Pros - Android webview enabled: Selendroid tests the mobile web using the Android mobile view app. - AUI support: The framework supports advanced user interaction APIs and multiple gestures to facilitate efficient testing of the UI of the application. - Hot plugging supported: It supports the hot plugging of the hardware device. - Vast use: It can be used on simulators, emulators, real devices, and even with the Selenium grid which is very helpful. - JSON Wire Protocol compatible: Selendroid is JSON Wire protocol supported which extends its functionality for the testers. - Open-source: The framework is open-source and therefore can be used by anyone for free. Cons - Selendroid is a heavy framework when it comes to computer resource utilization. A minimum recommended RAM would be 4GB if you want to use it. - Complained of being a bit slow but I guess that depends on the configurations and addons you have installed. Code(); Bottom line: Selendroid automation testing framework is made especially for Android UI testing. It leverages Selenium and if you are in love with it, you will definitely love Selendroid. The framework is open-source and you can give it a try and let me know how you feel about it. 6) Robotium The last of the open-source Android testing frameworks I will be taking up in this list is the Robotium framework. Robotium is a high-performance framework to handle multiple Android activities 🤹♂️ The main strength of the framework is the UI tests in which it is extremely proficient. It can be used to write functional, acceptance, and system tests in addition to the UI. Robotium comes with full support to native and hybrid applications written for Android operating systems. Pros - Minimum application knowledge: You don’t need to be an expert in the application code for testing it with the Robotium automation framework. You can test apk files or source code as you like. - Ability to handle multiple activities: Robotium can handle multiple Android activities automatically. - Time effective: The framework writes test cases that are readable, easy and fast to write, and maintainable. - CI enabled: It can be easily integrated with Maven, Gradle or Ant for the purposes of continuous integration. Cons - Robotium is incapable of handling the flash components. - It has a lot of issues when it comes to the notification or notification bar testing in android devices. - Even though Robotium claims of being fast on its official page, the community has repeatedly emphasized the slower test executions. Code example * Constructs this object. * * @param viewFetcher the {@code ViewFetcher} instance * @param waiter the {@code Waiter} instance */ public Checker(ViewFetcher viewFetcher, Waiter waiter){ this.viewFetcher = viewFetcher; this.waiter = waiter; } Bottom line: Robotium is more a UI testing framework than a complete solution in this field. Moreover, I have found a lot of people raising the issues of slow test execution and the inability to handle multiple activities. This can be a dealbreaker as it will waste time and raise a need to learn another framework. Android automation frameworks at a glance The following table summarises all the Android automation frameworks discussed in this post: *W- Windows; M- macOS; L- Linux 📌 If you want to explore the best frameworks for iOS, you can check out my guide to iOS testing frameworks. Vote for your favorite Android testing framework This post walked you through the best Android automation testing frameworks that you can look forward to in your testing career. But, these are not all equivalent in power or capabilities. They even offer different features which are written in different languages. However, there is no “one” best framework among them. Which one you should choose depends on the project and the requirements needed. It is therefore recommended to go through each of them and carefully analyze their powers. I also understand that even if they are among the most popular Android automation frameworks in the community, they are not the only ones on that list. You may be working or have worked on a different tool that you found comparable to these and would love to see here. I will leave you to decide which framework matches your team and project the best, and would love to know your recommendations in the comments 😀
https://blog.testproject.io/2021/09/15/top-android-testing-frameworks-you-need-to-know/
CC-MAIN-2022-05
refinedweb
2,445
55.84
#include <FillStyle.h> A GradientFill. TODO: clean this up! The type of GradientFill. A Focal fill is a gradient fill with a focal point. Construct a GradientFill. Optionally the records can be passed here. The actual matrix of the gradient depends on the type; the constructor handles this, and users should just pass the user matrix. Get the focal point of this GradientFill. If the focal point is 0.0, it is a simple radial fill. Referenced by gnash::StyleHandler::addFocalGradient(). Referenced by gnash::AddStyles::operator()(). Query the GradientRecord at the specified index. There are recordCount() records. Get the number of records in this GradientFill. Set the focal point. Value will be clamped to the range -1..1; callers don't need to check. Set this fill to a lerp of two other GradientFills.
http://gnashdev.org/doc/html/classgnash_1_1GradientFill.html
CC-MAIN-2014-52
refinedweb
134
64.78
Indian CEO Says Most US Tech Grads "Unemployable" 1144." outsourcing and unemployment (Score:5, Funny) Re:outsourcing and unemployment (Score:5, Funny) If you have 10 people and none of them have jobs, you have 100% unemployment. If you then bring in 90 people with jobs and keep the 10 people with no jobs, you have 100 people and only 10% unemployment. See? Bringing in people and giving them jobs does help local unemployment. dom Re:outsourcing and unemployment (Score:5, Informative) let us not forget that microsoft let go about 5000 workers to reduce costs, so your analogy then becomes similar to You have 40 employed people and ten unemployed.. the employer then fires 30 of those and replaces them with foreign imports that are cheaper, now of the sample group instead of having 20% unemployed you have 50% you then have the same number of jobs, but with more people to share them around between. Re:outsourcing and unemployment (Score:4, Insightful) let us not forget that microsoft let go about 5000 workers to reduce costs It's not a great comparison. It's normal and healthy for a corporation to trim the fat. The American automotive industry is a great example of what happens when you don't get rid of workers and assets after they become redundant and/or unnecessary. If Microsoft legitimately can't find talented workers, I suppose there's nothing wrong with employing a handful of foreigners. The total number of H1-B visas isn't terribly high in the grand scheme of things (limited to 65,000 per year, maximum stay of 3 years; 6 if a renewal is approved) Microsoft employs approximately 89,000 people, and received 3,517 H1-Bs in 2006. Also don't forget that there are plenty of American citizens working abroad. I can't find a great source for data on this, but Google turned up an article from 2005, claiming that there were approximately 4 million American expatriates at the time. H1-B visa holders also tend to be highly educated by the very nature of the program. I fully support the notion of attracting the best and brightest minds to my country. It might make me less competitive in the job market, but will almost certainly be good for the country as a whole. Perhaps the biggest injustice of the system is the manner in which foreign graduate students are treated. We award a huge number of advanced Ph.D positions (often government funded) to foreign students, and force them to return home after they've received their degree! Not only are we depriving American citizens from educational opportunities, but we're also essentially educating other countries' workers for free. Re:outsourcing and unemployment (Score:5, Insightful) It gets better - my job was sent to India - and from what I was told they hired 12 people (in India) to replace 2 people in the USA (me and a co-worker). Re:outsourcing and unemployment (Score:5, Insightful) If you have 10 people and none of them have jobs, you have 100% unemployment. If you then bring in 90 people with jobs and keep the 10 people with no jobs, you have 100 people and only 10% unemployment. OK, you made me laugh. But ... Theoretically you should get an even lower UE rate. You see those 90 people with jobs will need someone to serve them burgers when they go McDonalds. If 3 of the original 10 unemployed get jobs serving the needs of those 90, leaving you with a 7% UE rate, and, more importantly, with a lower number of unemployed people. That, at least in theory, is how bringing in skilled labour is meant to reduce unemployment. Re:outsourcing and unemployment (Score:5, Funny) If those 90 people are forbidden from eating cows by their religion, the original 10 are still screwed. McDonald's won't be hiring. Re:outsourcing and unemployment (Score:5, Funny) wtf does mcdonalds have to do with cows? Re:outsourcing and unemployment (Score:5, Insightful) About as much as KFC has to do with chickens. Re:outsourcing and unemployment (Score:5, Interesting) yes.. because getting in foreign workers will help REDUCE local unemployment.... maybe in soviet russia. Yeah, because unemployment is "the problem" - not getting the damned job done so that something of value gets created and sold so that wealth can actually get produced, salaries, taxes, and bills paid, and economies improved, right? I've been having a tough time finding a reasonably qualified programmer from straight out of college. I'm not looking for senior database developers, just people who can solve basic logic skills and... write software! From fresh grads with MASTERS degress in IS I get blank stares from such questions as: (in any language of choice!) 1) If you had a string, and wanted to replace part of that string with another string, how would you do it? 2) How would you add 5 to each element in an array of integers? 3) How would you add 5 to a field of integers in an SQL table? 4) Write up any form of database "select" query. I don't expect it to parse, just have the basic pieces. Honestly, just a simple "Select field [, field2] from [table] where (conditions));" would suffice. 5) In your language of choice, take a variable containing the value 5 and construct a sentence that says "I have 5? Supposedly, the job I'm offering is why they went to school, but they aren't even qualified to begin. So what did they do for 6 years? If you are hiring a welder, he'd better know how to weld. If you hire a doctor, he'd better have a good working knowledge of medicine. Why can't we expect to hire fresh programmers who know how to... program? Re:outsourcing and unemployment (Score:5, Insightful) Wait, what? You're looking for basic coding and DB, but asking for candidates with a Master's in Information Science? IMO that seems more like wandering into an architecture school looking for welders. There will be probably a few, but it's going to take a lot of effort to find them. Re:outsourcing and unemployment (Score:4, Interesting) I don't qualify my advertising with *any* form of educational requirement. I only list the skills required. Of all the programmers we now have at our small-but-growing-fast company, none of them have even a BA. PS: We're flexible enough with our hours that one of our programmers is going to school to complete a degree in Mathematics. I'm not asking for Masters degrees, but I'm getting them. And they sure aren't helping them much, at least as far as I'm concerned. Re:outsourcing and unemployment (Score:5, Insightful) The problem here is not the available candidates, it is your approach to trying to fill the position. Please, hear me out (as this is something I've run into myself, more or less). First, if you're looking for someone with specific skills, you are intrinsically expecting them to have experience with those things. Like most things in life, you can not gain experience or knowledge in something without doing it, first. If you are looking for entry-level candidates, you are looking for intellectual aptitude, a foundational skill-set indicative of the ability to learn, and a broad but shallow understanding of many different topics. If you want someone who has a more topical understanding than just the basics, but not someone more skilled than "entry level" (say, intermediate or experienced) then you are looking for someone with a PhD. We're not (necessarily) talking about incompetent students, here. A student who was (say) a tech while going through school is going to put the things on his resume which relate to his academic preferences and strengths. There isn't all that much which can be covered in a semester. Also, consider that something known is not always easily conveyed in a foreign format. It's damn hard to orally convey a lot of the things I type on a daily basis (and the logic/process is sometimes also difficult to convey: the "speech" part of my brain is somewhat disconnected from the part which performs the work, it would seem). I imagine I'm not alone in this, at all. (Likewise, pen + paper isn't the same thing, especially if your experience is very environmentally confined or "mostly academic".) Now, granted, I do not know your hiring process or requirements, but I can see such a scenario play out in such a fashion (and have seen it a number of times). IT is complex; there are a lot of things to look at, and unless you're already locked into a sub-field, the amount of things you can (and might have to) study to land a job to start a career in a sub-field is intimidatingly large. Not everyone has the opportunity to grow in their field "organically", and it's very difficult to hit a moving target (ie land a job) when the market is tight. I've seen a lot of job postings, and been to a couple job interviews with questions like you describe. Sometimes they're looking for an introductory position and don't realize it. Sometimes (as I suspect the case is with you) they're trying to pull an experienced or intermediate-level developer or systems person in at entry-level wages. I think the difference between a US college graduate IT person and an Indian worker is probably that the Indian worker's schooling has been more highly tailored towards job postings and the fact that he very well may have "abandoned all hope" (at all) for a number of years while he underwent his schooling.. Re:outsourcing and unemployment (Score:5, Insightful). You know, I enjoyed most of your post, but found this section really lacking. You seem to be suggesting that you should hire the inferior person, if he's a native of the country you happen to be born in (or are a current resident of), over the superior person who is not a member of the same group. How is this reasonable? If you do this, then you're just short-changing your company, and putting everyone's paychecks at risk. Thats one of the things that people who havent run a business dont get. The pressure and obligation to keep the business solvent and growing so that everyone gets to keep their jobs and keep getting paid, is quite intense. Hiring inferior (but American) staffers over superior (but foreign) folks doesnt help anyone, least of all your countrymen. It just creates another marginal business that probably wont last, and will then drive up the unemployment rate. You pick the best people you can afford, and you ignore things like nationality, gender, ethnicity, religion or sexual preference (assuming the person can fit in with the group). And thats it. Re:outsourcing and unemployment (Score:4, Insightful) You seem to be suggesting that you should hire the inferior person, if he's a native of the country you happen to be born in (or are a current resident of), over the superior person who is not a member of the same group. How is this reasonable? It doesn't look reasonable from a little-picture bottom-line view, but in the big picture it's not only reasonable but important. This is why Congress limits foreign workers. Of the two workers, the local is likely to spend more domestically, will pay more taxes over his or her career, may serve on a jury, is many times more likely to do volunteer work, and is infinitely more likely to defend the nation in times of crisis. Nations prefer local workers because local workers prefer their nations. Re:outsourcing and unemployment (Score:4, Insightful) Is this nation worth saving? It can't produce a programmer that can write a "SELECT" statement in 5 years of computer science training! Ill informed graduates need a serious lesson in "real life" and need to return to their collage with this results and protest. I had six years of computer science, taught simple SQL to students as a TA, and now years later I look up SQL commands. It's just not worth remembering when I've got examples in my own code or online, and I use it so rarely. Bullshit (Score:4, Insightful) He's looking for someone to do a relatively simple DB-related job. He's asking a few questions that should be dead simple for anyone who's only so much as worked through tutorials in a few related subjects. It ain't rocket science. You talk about "foreign formats," about not expecting academics to have practical experience, you talk about "tailored toward job postings"... but those are all hand-waving and pretty feeble excuses for not having a clue of basic concepts of the job they're applying for. No employer should be obligated to hire morons unless it's to do with Affirmative Action. If they can't handle this kind of stuff they should submit their application to MacDonalds. I find it hard to believe it's so hard to get a hold of people with such basic skills. But if it's true, the educational system is deeply flawed and we need fixes, not excuses. Re:outsourcing and unemployment (Score:5, Insightful) I'd object to "all". While it is quite possible less common in other fields, I know lots of CS and SoftEng graduates who got a university education precisely because they wanted to add a strong theoretical background to the technical skills they could acquire on their own. At risk of trolling: code monkeys get trained, developers learn. Re:outsourcing and unemployment (Score:5, Funny) "Seriously don't expect university graduates to be able to do any professional job well, engineering, architecture or software coding, all the graduates will require years of training to become anything approaching useful." Years of training, eh? Hmmm... if only there was some kind of institution that could offer this training, perhaps for a tuition fee... Re:outsourcing and unemployment (Score:4, Insightful) Even architects have to know the basics. Or their fancy designs would fall over. There's a reason you make engineering students build bridges out of spaghetti. The same computer students should know how to build a DB out of a flat file. Right, but not too many people try to hire engineers for their spaghetti-gluing skills. Re:outsourcing and unemployment (Score:4, Interesting) That's why I dropped degrees as a requirement altogether. Yes, that means that sifting through applicants becomes a lot like an American Idol casting (you have a few hundred applicants for the position, 90% of which don't even come close to qualifying), but it's worth it. As it's for malware forensic, asm plays a role. Especially understanding asm you didn't write. So one of the centerpiece questions is basically: You have this piece of code in a subroutine: pop eax inc eax push eax retn What do you expect it to do, and what would you do in your disassembler? Believe it or not, anyone who was able to solve that was a VERY good analyst. That's a question you can hand out in written form, get written answers and you sieve out those 90% that don't even have the foggiest idea what's going on (those are also the 90% you don't need). I don't even read the answers (ok, I glance at them so I won't get someone who wrote "no idea, but I don't care, I'm here for the fat check"), I don't care how they answer it. I care that they understood what's there and that they have an idea or at least a hunch (hunches are quite valuable in that biz) where to put the crowbar. The rest is training. What I need is people who don't fear to get their feet wet, who don't mind poking at code and who can play with it. I need explorers and tinkerers. It doesn't matter if your answer is right. What matters is that I see you pondered it and had an idea. Re:outsourcing and unemployment (Score:5, Funny) I've been having a tough time finding a reasonably qualified programmer from straight out of college. I'm not looking for senior database developers, just people who can solve basic logic skills and... write software! You are in luck. As fate has it, I am straight out of college student, looking for work as a programmer. From fresh grads with MASTERS degress in IS I get blank stares from such questions as: (in any language of choice!) No worries, I will give answers instead of blank stares, though blank stares may last 10-15 seconds as I parse questions. My language of choice to answer questions is Ruby. Let's look at some answers that you claim Master's graduates have trouble with. 1) If you had a string, and wanted to replace part of that string with another string, how would you do it? def string_replace(str, find, replace="") pos = Regexp.new(Regexp.quote(find)) =~ str if pos.nil? return nil end ret = str[0...pos] + replace + str[(pos + find.length)..-1] return ret end This function returns a string with find changed to replace, first instance only. A nil is returned is the target string is not found, and removes the target string if a replacement string is not provided. For instance: def string_replace("I like blue.", "blue", "red") would return: "I like red." 2) How would you add 5 to each element in an array of integers? arr.map{|num| num=num+5} 3) How would you add 5 to a field of integers in an SQL table? UPDATE tblname SET col = col + 5 4) Write up any form of database "select" query. I don't expect it to parse, just have the basic pieces. Honestly, just a simple "Select field [, field2] from [table] where (conditions));" would suffice. You pretty much answered this one yourself. In any case, for an example, SELECT firstname, lastname FROM people WHERE age >= 21 would get the names of people who can drink (in America) 5) In your language of choice, take a variable containing the value 5 and construct a sentence that says "I have 5 children". x = 5 str = "I have " + x.to_s + "? About me, I am a college graduate from a well-known university with a Bachelor's in Computer Science from the College of Engineering with a 3.5+ GPA. Since you don't correlate degrees with talent, I won't bore you with the details. However, if you are willing to take a chance, I am willing to demonstrate my abilities and prove that I can do what you need, so take a chance on a random guy from Slashdot. No Slashdot account, but I can reached at hire.random.guy.from.slashdot@gmail.com (Registered just for this purpose.) Re:outsourcing and unemployment (Score:4, Insightful) I'm not the guy who was (presumably) hiring, but let me comment nonetheless. 1) String replacement. No need to get overcomplicated - use String#sub [ruby-doc.org]. 2) Update an array. Your assignment to "num" there is meaningless. For one thing, you're assigning to a local (lambda argument) which is going to be immediately discarded afterwards. For another, if you're trying to mutate the array in-place, then you should be using Array#map!, not Array#map. And if you're trying to make a new array with values, then you do not need the assignment at all. Re:outsourcing and unemployment (Score:4, Insightful) You're looking at the wrong degree, IS is a managerial degree not a technical degree. Just like a Management degree, IS gives very very little information about the person's actual skill set. A Management degree says "he likes money and people". An IS degree says "he likes money, people and computers". You must remember, all "people management" degrees are fundamentally about managing unqualified and/or stupid people. So you hire an MIS for say managing the computing needs of an office with very little computing needs, managing the software installation part of an assembly of line for kiosks, or thousands of similar jobs requiring only minimal computer skills. Your MIS guy's resume saying "oracle" means he's used some basic qui query engine in class. Well, obviously that's quite valuable if you want him managing a call center. Not so much if you want him programming. A qualified programmer will have a degree in science, engineering, mathematics, or occasionally some "interesting" major, and ideally list a slew programming languages. For example, if you see a guy with a degree in Music Theory, Economics, or French that knows C, Java, and Ruby, well I promise you that guy can learn SQL infinitely faster than your MIS. I mean, business gets all excited about these business oriented degrees we academics sell, but mostly these degrees say `` This person lacked the initiative, confidence, and curiosity to pursue real academic interests. We recommend using them to manage people without collage degrees. '' Re:outsourcing and unemployment (Score:4, Insightful) While I agree that anyone with a university title for computer science should at least have some basic ability in actually writing code, I think you misunderstand what computer science is all about. It is simply not intended as vocational training for programmers. Of course, a student with any sense at all would make sure he is at least employable outside academia, but the point of a computer science study is not to become a programmer.: These guys are excellently prepared for becoming academics but the schools they came from don't seem to be very concerned with giving them the basic skills they need to get a job outside of academia. When they don't even have a couple of proper courses about, say, web-app and web-service programming. It is almost as much effort to train some of these university graduates up as it is to pick a person off the street who is self educated in basic programming, or even has no clue of it at all and train him/her up. Another thing is that some of the more business oriented of these schools are starting to turn out grads that have been taught nothing but "industry standard" (read Microsoft) OS'es/programming languages/tools. Nobody is doing a young computer graduate any favors by teaching him/her only MS or only *nix etc. They have to have a basic grasp of both. Walk into any telecommunications company and you will quickly find out that Microsoft products are not an "industry standard", "DirectX" is not the universal de facto standard for game programming, "OpenGL" is not dying, a huge number of software gets written for other platforms than PCs and server systems, the list goes on... The best people to hire are usually complete nerds because they alone tend to have the kind of basic grasp of software development that is needed because they acquire it in their spare time. Mind you it is definitely a plus if these nerds have a degree. There is, however, a surprising number of people with computer related degrees applying for developer jobs who simply seem to have a very limited clue about how to develop software. Unfortunately comp-sci seems to have become a popular choice for people intending to become programmers. Perhaps we should split comp-sci into two paths? One for people intending to get a job in academia and one for those destined for the commercial job market? Re:outsourcing and unemployment (Score:4, Insightful): I've been programming professionally for 17 years, most of it in C++, and even I haven't heard of a god class. I can make a guess, but it would just be a guess. Have you considered that you may be mistaken regarding how commonly used that term is? Re:outsourcing and unemployment (Score:5, Insightful) Re:outsourcing and unemployment (Score:4) If I can make this political, I would blame the conservative movements strategies over the past decades to push free market economics. The only way to get tax cuts for the rich and unchecked corporate power in a democracy is by convincing more than 50% of the voting population that they will someday, soon, be rich and the head of one of those corporations. I often here things like 'Fuck you socialists! when I get my MBA I am going to be a rich entrepreneur and I don't want to have to pay tax to support scum like you!' from people going through university on borrowed money and their parents contributions. They honestly believe they have a free ticket to the top 1%, along with around 50% of the population. Those numbers don't add up. Re:outsourcing and unemployment (Score:5, Insightful) Re:outsourcing and unemployment (Score:5, Funny) unfortunately, it seems that if /. *had* outsourced their coding this silly javascript nonsense we're seeing would be fixed.... eventually, and for lots of money, after a process consultant had submitted the change request forms and the technical lead had decided that a complete rewrite using .net was the only way to solve the problem. Re:outsourcing and unemployment (Score:5, Interesting) Use NoScript. It loads me a nice reasonable rendition of slashdot without all the bullshit. Slashdot is actually the reason I started using the plugin. I don't know what the fuck Slashdot coders are doing that is so script intensive on a fucking news/forum site since Google docs and Gmail which uses tons upon tons of Javascript runs reasonably while what should be a simple site of html,css, and a conservative amount of javascript feels like I am loading 72 instances of Eclipse on a 486. Get your shit together slashdot. Re:outsourcing and unemployment (Score:5, Informative) Try going to your Help page, and under "Classic Index", check the box that says "Use Classic Index". There are other boxes there too e.g. "Simple Design". Re:outsourcing and unemployment (Score:5, Funny) Interestingly enough, changing my preferences (and saving them) to simple view, low-bandwidth, no icons did ABSOLUTELY FUCKING NOTHING. Same garbage all over my screen. I am thoroughly convinced this is a Phishing site and all of our passwords are now being used to pound our Karma into the mud so NewYorkCountryLawyer looks even better then he already did. Re:outsourcing and unemployment (Score:5, Interesting) Agreed. I used to read Slashdot 10 years ago on 233 MHz Sparc 5 workstations, running SlowArseis and it was perfectly reasonable. Now it keeps beachballing my MacBookPro, which is ten times faster on clock speed alone, never mind it can do a lot more in a cycle, has faster bus, RAM and hard disk. I would've thought Slashdot of all places wouldn't succumb to the gleeful bloat which has rendered spectacular advances in hardware almost irrelevant to the end user experience. Re:outsourcing and unemployment (Score:4, Interesting) I would've thought Slashdot of all places wouldn't succumb to the gleeful bloat which has rendered spectacular advances in hardware almost irrelevant to the end user experience. Indeed, this shit does not bode well for the future of slashdot. These sorts of counter-productive and superfluous web-site "upgrades" are the kind of thing that often precedes the death of a company. It's like the brains have already left the building and the company is just left running on empty until it collapses under the weight of the remaining stupidity. Re:outsourcing and unemployment (Score:5, Insightful) Re:outsourcing and unemployment (Score:5, Funny). Where's India's domestic economy? (Score:3, Interesting) I'd say its time to pull the plug on free trade and let these people jump start their own local economies on their own merits, and not on shoveling their crap into the USA. India has not done a damned thing for the USA and I see no reason why the USA should throw its people out of work to subsidize India's economy. Free trade is not worth it. Re:Where's India's domestic economy? (Score:5, Funny) India has not done a damned thing for the USA It would be hard to neatly express the USA's $11,400,000,000,000 debt without the zero. Invented in India. OOOH BURN! Re:Where's India's domestic economy? (Score:5, Funny) Re: (Score:3, Insightful) Re: (Score:3, Funny) I would totally mod you up if I had points. Your comment was so poetically simple yet dead on. Thank you. Re:Where's India's domestic economy? (Score:5, Insightful) Whether you agree with the outcome or not, foreign labor has helped to reduce the price of many of the goods and services that westerners rely on every day. India has allowed us to save $0.05, $5, $50, maybe $500 on a consumer goods at the cost of our manufacturing base. The reason your typical Dell computer costs $400 is because they can ship part of the costs of support out to India. The same is true of big-box retailers like Walmart selling t-shirts and teapots cranked out in Chinese, Indian, and Indonesian factories for substantially less than local boutiques like American Apparel that sell US-made goods. Part of what you're paying for is branding, distribution chain inefficiency, fashion, etc. but it's important not to discount the labor cost--no matter how small--because that's all part of the race to the bottom. If you don't like outsourced IT for any reason--"I don't like China's stance on Tibet" is as good a reason as "I find their accent makes resolving a problem over the telephone difficult"--then don't buy from companies that use it. You'll probably have to pay more for it, but nobody said having principles and sticking to them wouldn't require some sacrifices. Chances are good you'll find it's not as expensive as you think and a lot of times you'll end up with a better product/service because of it. The masses have spoken: saving a few bucks is worth it. If you don't like it--vote with your dollars and encourage your friends and family to do the same. Arguing for government regulation so that american workers don't' have to be competitive is ridiculous. Screaming nonsense like "India hasn't done a damned thing for the USA" is rediculous when you consider the role workers in developing nations play in producing the products that fuel every aspect of our lives. Re:Where's India's domestic economy? (Score:4, Insightful) That's the thing--- it hasn't. Drugs-- $5.00 here, $0.10 there .50 at the local markets-- but $2.49 full copyrighted retail). DVD's-- $19.99 here, $2.49 there (and in reality about Clothing-- $1 or less there--- $19.99 here. There is *no* reason the clothes, drugs, movies, songs, etc. etc. should have that extreme of a price difference. In a normal capitalistic society, we would be allowed to buy the 10 cent pills there and import them here and resell them for 20 cents. We have all this dvd regionalized shit, and protected trade zones, and other restrictions on free trade. Our declining wages would not matter so much if we really were getting the benefits of free trade. But the wealth here is literally being pumped out of the country- and the jobs too. Re:Where's India's domestic economy? (Score:5, Interesting) Just because you haven't been able to think of the reason doesn't mean there isn't one. To take the example of a DVD, only considering America and India. A film has a fixed cost of say $100 million to recoup from DVD sales, and each individual DVD has a cost of say $0.20 to produce and sell. If the DVD seller only sold at $19.99 in both countries then sales in India would be negligible, meaning that sales in America will need to cover the entire cost of both making the film and pressing the DVDs. If they sold DVDs at $2.50 everywhere then the margin would be insufficient to cover their costs. What you are ignoring is that the by selling the DVD in India at $2.50 the company knows it wont cover all the overhead costs, but it will cover some of them. If Indian sales generate $5 million then it lowers the amount they need to charge in America to make a profit by $5 million. If films etc weren't sold at a lower price in countries with lower wages then they would have higher prices in the countries where they are sold in order to cover the lost revenue. Re:Where's India's domestic economy? (Score:5, Insightful) The problem is that they want to have their cake and eat it too. They want to source globally and produce wherever it's cheapest. They don't want us to source globally and buy wherever it's cheapest. They want your wages to be competitive with foreigners. They don't want their prices to compete with products sold abroad. It's not a two-way street. Re:Where's India's domestic economy? (Score:4, Insightful) The problem is that "free trade" should mean that the price in market A cannot be more than the price in market B plus costs for transportation to and sale in market A. Any person or company should be free to fly to india, buy 5000 copies of the latest DVD, fly back, and sell those DVD's for any price he or she likes. That *is* free trade. Companies, especially if they sell a non-commodity (ie there is no competitor with the exact same product; compare bricks to dvds), love segmenting markets so they can maximize their profit. Offering student discounts is a prime example of this: students have less expandable income, so the optimal price for them (ie the intersection of supply and demand curves) is lower than for non-students [ignoring the 'hook 'em while they're young' argument]. Market segmentation is always good for the company selling goods, and can be bad for the consumer on the wrong end of the segmentation. Free trade *should* limit the ability of companies to segment markets based on geography just as anti-discrimination practices *should* limit their ability to segment based on race, gender, religion etc, which are also good proxies of income (eg [census.gov]; blacks earn (median) 30k, hispanics 34k, whites 49k and asians 58k). Just imagine having separate prices for black people and white people! By granting companies the sole right to distribute something and enforce that right using the courts, international treaties, customs, and DRM, we are allowing them to operate as if free trade does not apply to them. Re:Where's India's domestic economy? (Score:4, Interesting) Although free trade has increased the average wealth in developed societies (wealth measured not just in money but also in what you can get for that money) it has also increased wealth inequality (the second effect being much stronger than the first). As you pointed out, there is a huge difference in prices between the same goods in the original (developing) country and in any destination developed country. The difference is mostly captured by companies and then passed on to CxOs and large shareholders (small shareholders usually get a pittance on account of their share being a tiny percentage of the total). Basically this is because of two effects: - Job competition with foreign based/born workers (outsourcing) means that companies can (and do) pressure local workers to keep salaries low. - Intellectual Property laws create artificial barriers which are only enforced in developed markets, thus resulting in high-spreads in the cost of medicines, video and audio media and trademarked goods (all which are very IP-heavy). A lot of the problem is that large companies have a disproportionate amount of influence with politicians and thus get laws passed for their benefit which usually negatively affect people and small up-and-coming companies (anti-circumvention laws, over-broad IP laws and other barrier to entry laws). It's thanks to this regulatory capture by the industry that the wealth produced by Free Trade has been channeled mostly to a small number of people. Although some defend that what's needed is more Free Trade, it's my opinion that what the kind of trade we have now is not Free and that until the political system and the laws are fixed to remove the undue influence of special interest groups, rules have to be put in place to restrict trade: the truth is that, things being as they are now, just like the positive aspects of free trade went into the pockets of a few, the negative impact of restricting trade would hit the pockets of that same few. Free Trade must be built on a basis of true freedom of trading, not in the tightly controlled channels of wealth as we have now - the trade off should be clear: either the benefits are free to flow to all or voters will turn against the opening of borders which is a requirement of Free Trade. Re:Where's India's domestic economy? (Score:5, Interesting) The fair price is 10 cents in both places. Under real free trade, you couldn't prevent it. Prices are not relative under real capitalism. So right now, I compete with someone who makes 1/10th of what I do-- in part because I'm subsidizing research on his health care and his movies and entertainment. By your logic, billionaires should pay 2 million bucks for the same shirt that you and I buy for 20 bucks. Cable TV should cost a billionaire 100k a month. Prices are not relative-- it's only because of gross sellouts and artificially protected regions that such *extreme* price differences are maintained. Within the U.S. competition brings down prices rapidly-- but between the U.S. and India, it doesn't. Re:Where's India's domestic economy? (Score:5, Insightful) There is no such thing as a fair price. The sooner you accept this, the faster you will understand economics. It is an empirical "science", not a value judgement. In practice, however -- in the real world -- retail margins are around 100% in developed countries (broad generalization) but 10-20% in developing countries, as a result of the low rents and low wages paid to retail employees. With similar differences, but smaller amounts, at the wholesale level, one would expect retail prices on many things to be less than half of their developed country equivalent, just on this one factor. I don't know if anyone said that, but if he/she did you're both wrong. Prices are based on the idea of maximizing profits, which in cases like drugs and IP is equivalent to maximzing revenue. If there is little transfer between two markets, then this is achieved by charging the price where you would lose total revenue (sales * price) if the price went up or down even a little bit. Millionaires might pay more than the unemployed for, e.g. jewellery, just as you suggest. I suspect they pay less, not more, for cable TV due to an externality: the unemployed watch more TV. But you're right that is in effect a subsidy, but it's not an explicit or deliberate one, as far as I can see. It's a logical result of maximizing revenue. No, prices are dependent on the particular conditions of each market. That's all. They certainly are not "absolute truths" -- if that makes them "relative" then I'm comfortable with that term. It does, for consumers in both countries. Google "comparative advantage" for the how and why, if you really want to know. Re:Where's India's domestic economy? (Score:5, Interesting) When it comes to shipping out labor, everyone seems to miss the big picture. What is the purpose of a nation? To benefit and protect the citizens therein (at least that's what is sold to the citizens). Everyone has to be a member of a nation whether they want to or not, and most nations only allow you to be a citizen of their nation and no other. So people are effectively trapped within one system. As of yet there is no such thing as a global citizen. So a nation's goal is not to server the world, but to serve its citizens. If it can serve both the world and its citizens simultaneously, that is great. But if it has to choose between one or the other, then it must serve its citizens first. Originally in the US corporations were limited entities that were only allowed to exist for public benefit and only for a limited duration until their objective was reached. But that changed over time, and now corporations are some of the most powerful entities in the US. Corporations in the US benefit from many things, including physical production, access to the US market, subsidies, government contracts, tax breaks, tariffs, and many other benefits from being registered as a US corporation. One must remember that a nation and its government is there to serve the betterment of its citizens, and not corporations. If it benefits a corporation to outsource to another country, but not the citizens, why do it? The nation has no obligation to benefit the corporation unless it also benefits citizens. In fact that's why US corporations are given all the advantages they get - in the end it benefits the citizens. But once the public is being injured by the current regulations governing international business, it's time to change the laws. Why benefit a tiny proportion of the US population consisting of high-level execs as well as foreign nationals at the expense of the vast majority of the US population through regulation? If a company wants to be "global" and hire foreign workers at the expense of US citizens, I have no problem with that. But they must lose the benefits of a being a registered US corporation. They must truly go international, meaning no tax breaks, no subsidies, no being on the advantageous side of tariffs, etc.. It's really simple. LS Re:Where's India's domestic economy? (Score:5, Insightful) India has not done a damned thing for the USA Uh, except for all the coding and tech support they're doing for us. Yeah, this kind of crap hurts when you hear it from this class of a guy that may very well control your future employment options, to at least some degree. But, I'd say their coding has done plenty for the USA... just ask the managers who have outsourced there. You don't like that comment? Does it enrage you? Well, then that's an emotional reaction and I'd say it's misleading you. :) /. libertarian audience gets all antsy for government protection with regards to outsourcing. Should individuals take care of themselves and should society have as much freedom as possible or not? Ultimately, in 20 years, I think we're going to have a partner in India that we will be very happy to have, particularly with the rise of China. We'll also have such a depreciated dollar, and the Indian talent will be relatively scarce, we will reach a parity, and all boats will rise. Economies are prosperous when they're efficient. They're efficient when the most work gets done with the least amount of cost. If going to India makes tech more efficient, the USA as a whole prospers. Does this hurt our feelings as geeks? Yes... hell yes. But you know what? I think I'm a better value than an Indian employee, and I think I can prove it (and I think I am proving it, along with many other IT folks here). Every single country that has shut itself to trade has suffered.. every.. single.. one. Why should we be any different - we obey the laws of macro-economics in this country! I find it a little too convenient when the Re:Where's India's domestic economy? (Score:4, Insightful) _Nothing_ works when it's in the form of an Idea. Socialism as an Idea doesn't work. Libertarianism as an Idea doesn't work. I find the "we should privatize all the roads" Libertarians tiresome and insipid, but find myself agreeing more with general Libertarian principle than with Liberalism, Progressivism, Neoconism, etc... But people like to try to get themselves under an umbrella. "I'm Republican and a neocon, therefore anything Obama does I must hate, no matter how trivial or whether I would have cheered if Dubya had done the same thing". The correct Libertarian approach isn't an idealistic one, but a societal "greedy" one. We shouldn't have 100% open trade because of some ideal. We should determine what policies will be in our best interests and will protect the rights of US citizens, everything else is secondary. Re:Where's India's domestic economy? (Score:4, Insightful) The cost of free trade will be playing out over the next few years, but was started years ago. It is really about a race to the bottom via who will work for less, and who will work sweatshop hours for ppl that run the companies that make idiotic decisions like they did during the DOT COM daze. These new to the game ppl in India will also suffer once the US companies have canned all the US workers who WERE the #1 customers of these US companies. They will see what a tangled web has been woven, much like the tangled threads of the international finance thieves that sent trillions into oblivion. Customers with no job tend to spend less, holy cow who would have thought ! The US was the largest economy in the world, but then it sold out most of it textiles and manufacturing jobs to 3rd world countries like India. Companies in India do not follow our labor laws, but yet they are attached to US companies as proxies and do work for customers within the US, so that is a loophole. If India had to pay the same licenses, fees, taxes, ad naseum that US corps did things would be a bit different. With an unlevel playing field these talking heads can spout their rhetoric, but once it all comes falling down due to 100's of trillions in derivatives tanking then his high and mighty attitude will have to descend down to the mere mortal's world. [marketwatch.com] Buffett warned of this 7 years ago, and other sane folks tried bu have been ignored by the same empty suits that make statements like this bozo in India. Re: (Score:3, Insightful) You didn't "got stupid", it's just that your industry had grown so much that the internal market alone could not sustain further industrial growth. If Americans are unemployable.... (Score:5, Insightful) Re: (Score:3, Informative) Re:If Americans are unemployable.... (Score:4, Informative) The money is coming from somewhere... Don't you remember the economic meltdown? Turns out the money was, and still is, coming from nowhere. India: The skrypt kiddies of programming (Score:5, Interesting) Amen. I won't say that all the programmers in India suck, because that would be an inaccurate stereotype. However, I will say that The worst code I have ever seen from American programmers I have worked with was better than the best code that came back from Indian outsourced groups. I suspect that all the GOOD INDIAN PROGRAMMERS CAME TO AMERICA TO MAKE BETTER MONEY. Why would you hire the leftovers? Really, you think that you can just get better quality by spending less? Really? Re:India: The skrypt kiddies of programming (Score:4, Insightful) Why would you hire the leftovers? Really, you think that you can just get better quality by spending less? Really? Here's the deal: Manager X tells their boss that they can save the company millions of dollars by sacking US IT staff and sending the work to India. When the software comes back from the Indian sweat-shop it's a steaming pile of sacred cow shit, but by that time Manager X has got big brownie points, a big bonus and a promotion and doesn't have to deal with it. Now the problem is dumped in the hands of Manager Y and the few US IT staff who are still left at the company. This is just another example of the perverse incentives in Western business which gave us delights such as the credit crash, where bankers could make multi-million dollar bonuses by lending billions to people who never had any chance of paying the money back... of course they wouldn't have to repay their bonuses when the loans went bad, and the government would bail out the banks anyway. Re:India: The skrypt kiddies of programming (Score:5, Insightful) The tragedy doesn't end there. Manager Y gets a lot of heat to get the (allegedly finished) product out the door. His few remaining IT staff (who are usually the cheapest, not the best, of the original staff since they should only have to make a few "adaptions") try to puzzle together what the outsourced programmers created (or rather, they try to find out what the hell the code is doing and compare it to what it should do. Usually it doesn't really match), and the product gets postponed because the IT people have to rewrite some portions. The more different outsourced groups worked on the product, the more has to be rewritten, interfaces for the defined interfaces have to be created (because 'definition' seems to be a very variable thing in outsourceland. I guess it's translated to something akin to 'guideline' or 'noncommittal recommendation'). In short, they work their collective asses off to pretty much reimplement the tool. In the end, they will have created the software anew and dump the sacred cow doo. Manager Y gets fired because he way overspent (after all, he only got about 10% of the budget he needed to reimplement the software, but that wasn't planned), the programmers get yelled at for saving the project (which surely boosts their motivation ... their motivation to check for other jobs, at least) and Manager X gets to hire a new Manager Y and IT team, which will, in turn, face the same fate. But hey, it's cheaper! Stupid mods, "Troll" != "Disagree" (Score:4, Insightful) This post and its associated rating (currently 50% Troll) is a prime example of how /.'s moderators have really gone downhill. The text of the post is both relevant and spot-on, rather more insightful than otherwise, and in no way is it seeking to get a rise out of the readership by misleading obstinacy. Sure, it's cynical as hell, but then again, the current situation in the US would seem to warrant precisely such an attitude. It seems the mods need more education about what "Troll" really means -- for starters, "Troll" != "Disagree", and "Troll" != "Do not like". Methinks this kind of modding behaviour is the /. equivalent of griefers. Meh. Cheers, Re:India: The skrypt kiddies of programming (Score:5, Insightful) I suspect that all the GOOD INDIAN PROGRAMMERS CAME TO AMERICA TO MAKE BETTER MONEY. You've pretty much nailed it, and it doesn't just apply to Indian programmers. Why get paid chump change (even if it's a lot by local standards) when you can go right to the source of the cash and earn the same rates as people do there? So long as you're good enough... Re:If Americans are unemployable.... (Score:5, Insightful) when I have to review code coming from India it is full of bugs, short cuts, and shit that doesn't make a damn bit of sense even to the Indian staff that's stateside? Umm.. because it's written by programmers? :) Seriously, this is standard no matter what the nationality. Re:If Americans are unemployable.... (Score:4, Interesting) That money came from banks who threw as much as you wanted at you provided you put up your house as collateral. How it works now, where the real estate bubble popped and banks cling to money like it's worth anything anymore is beyond me, though. But ... maybe just because banks stopped handing out money like crazy, people can't spend anymore, got no job or got laid off, and the economy is in the gutter? I don't want to say that spending money you don't have is any good, nor do I say that banks should hand any bum money for nothing (and, face it, giving you money for a house that's already drowned in mortgage is 'for nothing'). But what some people don't understand is that the economy can only thrive if people have money to spend. To have money to spend, people need jobs. To make "everyone" have a job you effing have to stop shipping in more people. It should be a no brainer. One of the core reasons for the economy downturn is simply that companies tried to manufacture in China and India and sell in the US and Europe. That doesn't work. You give a little money to Chinese and Indian people who can basically survive (but not buy your fancy high tech, 'luxury' crap) and pay nothing to US and European people who should in turn buy it. Buy it with what money? People need jobs to earn money, to have money that they can spend. It is as simple as that. HCL Ha Ha (Score:5, Insightful) I know there is going to be a lot of flak directed at HCL. But unfortunately HCL is not the only monkey around. I live in India, and have a lot of friends working in such companies (Infosys, Wipro, HCL, TCS etc., etc.,) These service companies have lot of PR support due to feeding poor kids meals blah blah (you get the philantrophy angle, right?) However beneath the facade lurks pure evil. Firstly these are service companies. they bill clients by the hour. Which then brings us to their processes and employees. Innovation and smart working is discouraged, and the training given is "how to bill maximum hours" and "how to fool the client into believing you are working". So these drones are taught how not to work smartly, how not to do more with less time. you get tonnes of reports tones of meaningless slides to fool the clients, who are anyways willing to get fooled. But kid yourself not, same is the case with US based service companies also, but with service companies a smaller percentage in US(except in Law area), things don't seem obvious. But Indian IT has become a service economy with drones. Drones who are dumb "copy paste" coders. I am in a product company, and often we get software engineers with 10 years of "coding" experience who do not know how to use regular expressions. Infact in their job, they would do a manual search and replace, because they can bill more hours to client. Such practices actually make hiring intelligent engineers bad, They want drones. Till few years back, when product companies were unheard of in India, many people migrated off-shore. Nowadays the drain has stemmed, but with lots of money coming in, even good engineers are flocking to this circus, and the whole place is a mess. Now why do Amercian comanies like to get screwed? Well the managers there can justify their paychecks more readily if tonnes of drone like reports and jargon filled meaningless data is thrown around in board meetings. your PHBs love these drones. They work for 14 hours a day at half the cost. OTOH, an intelligent enginner will work for 4 hours finish the work, and charge double. How will they boast that they have a cheap engineer working for 14 hours a day? Now Microsoft loves these companies very much. Because they promote windows, and in their advertisements, boast about better performance and all that BS. The public here trusts these guys. Wow CEO used to clean his own toilet. Woweee! They go to these fund raisers, do hoop haa about poor kids, give a few hundred dollars to a charity, and they are the ambassadors of good will. The dark side is brushed under the carpet. Whats not told is that number of hours each employee spends at his/her desk is counted. Every time you go in your wing, your clock starts ticking. Every time you go out, clock stops. Companies like Accenture India division make employees sign on bonds that they are willing to work 12 hours a day. Its all a circus, and the American PHBs love their circus animals. Who suffers. Grads in the US, and engineers like us who have so limited options in India. Moreover our reputation suffers. We are clubbed "Indian engineers are not intelligent". On the plus side product companies are growing, but on the downside most of these have these drones who cannot unlearn what the service industry taught them? Ever wonder why India does not have companies like Intel, Lenovo, Huawai emerging, but only subsidiaries and service drones? Well I just gave you your answer. ...News at 11. (Score:5, Funny) He has a bit of a point (Score:3, Interesting) I was a CS major. One of the most practical courses I took was one where we did team programming projects, and had to work on a spec. That was as close to real life programming as I ever got... I don't think it should be a focus but a basic understanding of some process (any process as new processes are derived from elements of old ones) would go a long way to new grads fitting into IT work (which is where most people doing computer stuff in college end up). Americans are unemployable... (Score:4, Insightful) Yes, we yanks are such dolts! (Score:5, Funny) #1: (Score:3, Interesting) if you are poor, you tend to be more highly motivated than when you are rich (and yes, middle class, or even lower middle class american counts as rich in this world) #2: if you are poor, you can be paid a lot less to do the same job than someone less motivated and in a better socioeconomic position do you know what #1 and #2 are? facts. now mod me troll and flamebait, but you know i speak the truth. deal with it (or more likely, suppress my words and go on whining) computer programming is a rather interesting skill in the internet age: if you have a terminal, and a keyboard, all that matters is the quality of the mind behind those two things. doesn't matter where you are, doesn't matter your age, doesn't matter your education level. here on slashdot, we are all familiar with the internet as a universal leveller when it comes to things like music distribution or political dissent. well guess what: it applies to computer programming as a career choice as well that fact is not nice if you are rich westerner, but it is still a fact nonetheless: you have a hell of a lot of highly motivated, much cheaper competition out there. deal with it, or whine. but i don't see what the whining is supposed to get you except self-righteous victimization. it certainly won't get rid of the competition or get you higher pay life is not always kind folks. just fucking deal with it already and stop the pathetic whining My observations. (Score:5, Insightful) On one level, that may be true. There are a lot of people who think that College is supposed to be the same as a tech school. They go to college expecting to be trained for a specific career. Some colleges have begun to oblige and are acting like the trade schools that some students (and parents) expect them to be. If you've only been trained in retreading tires, you don't know how to mount a new tire on the rim and balance it. When the CS requirements of some schools consist of "MS Office" in three different sections, how in the fuck do they expect their grads to know anything? Now, on the other hand there are plents of schools who are giving real and complete tech educations. These people are constantly getting screwed by employers who give up after interviewing a few of the other kind of student. Lastly you have the tech executives who want nothing more than to lower costs. They want the cheapest labor, and nothing else. They are pushing to raise the H1B caps. They are pushing for outsourcing. It has nothing to do with the quality of US grads. It has EVERYTHING to do with the fact that they want to pay people less money. If I spend 6 years in college and have a Master's degree, you can kiss my ass with your $35k offer. The guys right off the boat from Bombay will be willing to take that sort of job. They don't have $50-200k in student loans to pay back. It's basic economics. What this glut is doing is providing a greater supply of labor in order to drive down prices. If you're the only plumber in your town, you can charge pretty much whatever you want. No one else has the skills, knowledge or tools to do that work. What happens if overnight four more plumbers come to town? Instead of being able to charge $75 per hour, you may have to cut back to $50. What happens if ten more plumbers come to town? You'll suddenly find yourself working for minimum wage. That's what certain executive-types are trying to do to technology. LK Unemployable? (Score:5, Insightful) Perhaps Mr. Nayar should stop beating around the bush and just state the reasons why he thinks Americans are unemployable: Americans enjoy running water. Americans don't want to live in a small mud hut with their whole extended family. Americans don't want to work 80 hours a week on slave wages with no overtime. Americans have a higher cost of living in regards to just about everything. Americans usually need cars to function in American society. Americans want to have 72"+ LED backlit LCD TVs. Managers don't get bonuses for hiring Americans. I personally think that every job should have a wage that a person can live off of, "unskilled" or "skilled". If you want to see something funny, hand a CEO a floor buffer and watch him fumble about with it. Pay peanuts (Score:4, Insightful) ...get code monkeys. I wonder what he earnt this year? I would say that a rich overpaid CEO complaining that people won't accept a sub-standard wage are the epitome of hypocrisy and greed. I'm surprised he's not whining that good slaves are hard to find. What a crock of shit (Score:5, Insightful) In the long run these companies are going to learn the hard way that paying an out sourced developer who has a 3 month class in C will get you nowhere near a developer with a CS degree in terms of quality, functionality, and efficiency. Re:What a crock of shit (Score:5, Insightful) For some companies, the reason for outsourcing is that in the end, GOOD coders are rare, and BAD coders are plenty. That's true in the US as it is overseas. Why pay top dollar for bad code in the US when you can get similarly bad code by outsourcing for much cheaper? Many US companies offer fairly competitive starting salaries, at least twice as much as the 35k or 40k reported here for other software houses, often more, if they can find those GOOD coders here locally. It is simply that GOOD coders are in fact rare, and many companies recognize that. So I can see why they might as well just outsource since the quality isn't going to be much better by recruiting an army of (expensive) BAD coders locally. Contradiction from the Right (Score:5, Interesting) The biz lobbyists first claimed that not enough US citizens were going into the field. Now it's that we are "too lazy for the details", not quantity? Which is it? Outsourcing and H1B's were never sold as a way to replace "C" Americans with "A" 3rd-worlders. Did they lie to Congress and voters? I find most Indians incompetent (Score:5, Interesting) Supposedly, the Indians coming to the States are the smartest. I find them to be no better than American educated and trained workers. IIT is not a breeding ground for great talent, rather superior attitudes. No different than the Ivy League in the United States. I have worked with plenty of Indian talent in Silicon Valley, and managed many as well. It depends on the person; where you go to school, or if you go to school, is irrelevant. The Chinese and Europeans are the folks I move to the top of the interview list. Re:I find most Indians incompetent (Score:5, Insightful) > The Chinese and Europeans are the folks I move to the top of the interview list. ...trust a story about outsourcing to get the racist bastards to come crawling about the woodwork. Re:I find most Indians incompetent (Score:4, Insightful) I also happen to agree with the sentiments of the GP. Personally I find the top coders that I deal with are from Europe (especially Eastern Europe), China, South Africa and Australia. Bottom of the pile is the Indian subcontinent (India, Pakistan, Bangladesh) - technically they are fine but culturally there seems to be an aversion to thinking for themselves although I suspect that's the fault of the management culture there and the legacy of the caste system. The next to bottom I find to be American programmers - they tend to be pretty low on the technical scale (my suspicion being that the US education system is not very good) and are terrified of doing anything on their own initiative or anything slightly innovative (which manifests itself as apparent laziness as the common response seems to be to avoid any communication on the subject - not returning e-mails or calls). I have come to the conclusion this is due to a) fear of being sacked due to not having employment rights, b) fear of being sued as the culture is so litigious, c) fear of stepping on someone's patent, causing their employer to have to fork out money and leading back to point (a). Of course this is anecdotal and only represents what I personally have experienced. Are you insane? (Score:4, Insightful) Don't be an idiot, the original post was absolutely 100% racist. Let's read it carefully: "The Chinese and Europeans are the folks I move to the top of the interview list." He clearly has stated that he shows preference to people of a specific ethnicity over others. That's textbook racism. It's not crosses burning on your lawn or racial slurs racism, but it is racism. What the original poster has done has clearly described that they do not judge each Indian or American applicant on their own merits, and gives preference to Chinese and Europeans by "moving them to the top of the interview list." It may turn out that he hires more Europeans and Chinese over Americans and Indians, but their country of origin should have no bearing on his choice of qualified employees. Only their work experience and the answers they have to questions pertaining to the job should be relevant in an interview. Besides, if he overlooks that one star programmer from India or the US just because of his prejudice, then he's doing a disservice both to the himself as well as the prospect. We may be a litigious society that's lost a lot of it's motivation for working hard, but I'm an American myself and if you had treated me that way and you had interviewed me for a US position, I would show you just how hard working and litigious I personally could be. Thank goodness such treatment is against the law in the US. 'walking the extra mile' (Score:3, Funny) That explains everything... Name an "Indian" project that went well (Score:4, Insightful) "master the 'boring' details of tech process and methodology" Ha! I myself have worked for large outfits and many in my family work for large outfits. My experience and that of my loved ones is that working with Indian companies is a guarantee for disaster. Recently my sister witnessed a $50 million project being trashed. The problem is that Indian IT companies usually limit themselves to implementing exactly what you specify. Or, if you ask for an analysis, they let a bloated system emerge. Unless you work for a CMMI level 4 company this attitude is next to useless. People that master "tech process and methodology" wind up being slaves to "quality". Quality as in "meticulously following the procedures." As more than 90% of businesses don't really have quality in place -or at best, have some quality shroud- this means that de facto they are slaves to the next management level. Very convenient once you are the manager. The problem is that higher management and share holders don't understand that this is common practice. They only see that Indians cost 10 times less than European/US people. If you need 20 times more people to do the work, cost double. The bureaucracy of 20 times more people cripples your organization. Man, I've seen a team of 10-15 people writing 'make' files for package generation. And particularly crappy 'make' files at that. Had to wait hours to have them running a 'make pkg' command and returning me the generated package. For Christ's sake! This is something you think about and implement on a rainy afternoon and which takes 1 minute to run each time afterwards. The Bangalore Pressure Cooker (Score:5, Interesting) Until a couple of years ago, I worked for a major US IT firm, in Storage, and went to Bangalore to train new 2nd-level support guys on our mid-range products. The guys themselves were generally OK, since they weren't new to the industry, though there were some odd gaps in basic storage knowledge, such as SCSI protocols. Not something you'd expect to find in a person who'd allegedly done 2nd level support at another company, one that specialized in storage! In general, though, I wasn't training new graduates from the likes of IIIT-B, but I met a few and had discussions with their managers. What I learned was that these young people were under immense pressure to succeed in IT, with the hopes and expectations of whole extended families riding on their backs. IT is the ticket out of the slums, and families make enormous sacrifices to get their kids in to the industry in the first place. In college, I was told, there's also massive pressure to score high marks, and the process is more biased towards rote learning and cramming for exams. Not totally, of course - that would be impossible - but the point is that, like the Indian education system in general, it's tighter and more authoritarian in terms of curriculum, and the schools themselves were under govt. pressure to deliver high numbers of graduates. I hate to say this, but I met a few "graduates" who were simply not "graduate material", in terms of basic intelligence, curiosity, enthusiasm, or ability to absorb new concepts. Other graduates I met have great careers ahead of them, but I came away with the impression that "graduate" over there is a bit (again, not totally!) like "MCSE" in other countries: a statement of the exams you have passed, not a wider measure of your ability to function in a complex, ever-changing IT world. The problem with "cramming" is that while it might get you through an exam, the knowledge is not integrated and retained as well as it should be. I'm seeing this myself, now that I'm getting to go to university as a mature student (Engineering), where some subjects would IMHO be better assessed by e.g. thesis, not exam. Indian hypocrisy is palpable (Score:5, Insightful) If an American called Indians unemployable, that American would labeled a bigot. But Indians say that sort of thing about Americans all the time. According to India, and a lot of US companies: all the smart people in the world come from countries where people earn as little as $1 a day. If anybody in the US suggests that visa limits not be raised, India screams and cries about US racism and xenophobia. But, what percentage of Americans work for WiPro? My understanding is that India is not all accepting of immigrants from Bangladesh. And how can India's caste system not be consider one of the earth's most extreme form of bigotry? I might add, the US has a well earned reputation of being lavishly generous in matters of immigration. India constantly warns the US about the horrors of a "brain drain" that would be caused by the US not allowing unlimited guest workers from India. But why is India not worried about the Indian "brain drain" caused by the "best and brightest" leaving India. We might also want to give some thought to the US "brain drain" that is being caused by the US "best and brightest" avoiding STEM jobs, because the job prospects for Americans is so dismal. Azim Premji, who owns 79% of WiPro, recently wrote an article that warned that "US protectionism will be counter-productive" "If we get into protectionism, then the West is going to get a wave of protectionism in response, and that is going to turn back the clock 20 years," Premji told The Sunday Times. "And it will be America and Europe that suffer," he said because they will be excluded from the only growth markets left, in Asia, Africa and China. You are not going to grow at 10 per cent trading in London, are you," he asked. [indiatimes.com] Ever hear the expression: "what is good for the goose, is good for the gander?" India is one of the most protectionist nations on earth, and they have been for a long time. If India wants to consider guest workers part of trade agreements, then when does India make good for the three million Indians already living in the USA? Or does India consider "protectionism" a one-way thing? Re: (Score:3, Insightful) The US created these industries without massive immigration We must be thinking about two separate versions of 'the US' because the one I know today consists of 99.9% people with migration background. Personally, I think the man is right. In light of what Indians are willing to do for less money, of course Americans are unemployable. Vice versa you could also say that Americans are just not willing to be enslaved the way Indians are. So his whole statement becomes rather relative, doesn't it? The Problem is that we keep being willing to receive our support from Indian Re:Move Microsoft to India (Score:5, Interesting) I posted this before and I'll post it again. So far in the last 12 months I've had three side projects that projects that were outsourced but for whatever reason such a mess was made of them that the clients have brought them to us to fix at a higher than normal rate. My employer's now collaborating with an "reverse" outsourcing mob who've set themselves up to help people bring their failing outsourced projects back and are getting a fair bit of work through it. To be honest, the quality of code I'm seeing is easily the worst I've ever seen and that includes half-assed open source projects. Whether that's because it's just "sweatshop code" as one client put it or they are attempting to write super advanced AI code generators and using them to generate the code...and failing miserably, I don't know. But it's terrible. From the complete lack of imagination and forward thinking in design, right down to the god awful highly inconsistently cased variable names. Remember this is *three* different projects from three different Indian companies theoretically written by three different sets of programmers. The code all looks and feels the same, which leads me to believe there's something going on industry wide over there. What that is I have no idea but they need to fix it quick smart as the industry as a whole is getting a bit of a reputation. What I do know is people are willing to pay much more once they've tried outsourcing and failed. Those that don't go out of business in the mean time that is. (Yes I'm sure there's some top quality code coming out of India, I doubt most of it is written by the sorts of companies in this articlee). Re:Move Microsoft to India (Score:5, Insightful) I'm living in the Philippines, I can answer the crappy code part. While many might like to think of us as being a 'third world' kind of country, we are more of a follower of first world trends in disguise, we do it by building cheap look-a-likes and selling at a price our market can accommodate. We don't really fit the glove of this whole "X World" thing. That said, Why: It's simple. We are what we are because our ethos is "Near enough really is absolutely good enough, anything better is a waste of money, effort, and time". An analogy: You want a straight and level sidewalk? Damn, that's going to cost you extra. And you want it free of obstructions like telegraph poles, open drains, plus all the little lines that we refuse to step on? You want wheelchair access too? And you want it to actually be 'finished'? Well, for that kind of crazy desire, your price has now reached exactly the same as what you would pay in first world USA or anywhere else in the world for the same quality stretch of sidewalk. Americans want stuff done on the cheap. Guess what - you actually do get what you pay for! (I know, who'd have though!) Re:Move Microsoft to India (Score:5, Interesting) Uhm, the guys who invented the transistor, and setup a bunch of well-known semiconductor companies, the traitorous eight [wikipedia.org], how many were born outside the USA? Of the rest, how many were born to immigrants to the USA? (The answers: "at least 3" for the first, and "at least 1" for the second). It's hilarious that a nation whose success was built on waves of immigration can spawn people so ignorant of the contributions of immigrants. The rest of the 1st world doesn't mind though - we'll be glad to take the USAs spot as patron of the world's best & brightest - please do stop your H1-B programme. Re:ORLY? (Score:4, Insightful) The last code delivered by Infosys was functional... but had to be ripped back out of production. The next bit of code didn't follow any of our published standards. It took several days to fix the obvious problems, then it got booted out of testing for a week's corrections. They used to be a lot better back in 2003. The biggest problem right now is that they won't say "no" to management about anything. Insanely crazy schedules-- "Sure, we can meet that". Grossly abbreviated testing... "Okay- we can mitigate that risk". I think most of the super sharp guys are now management there. The actual coders are now getting down to low experience yes men/women who are not as clever and rush things without following standards. Doesn't matter-- you just can't get around the fact that they currently make 1/10th of what we do and bill out at 1/3 of what we do. Re:ORLY? (Score:5, Insightful)." How 'bout a TCO on MBAs? (Score:4, Interesting) Seriously though, it really sounds like a study of the TCO of MBAs is more in order -- how many outsourcing snafus, and how much of the current financial woes in the US, are due to MBAs with precisely the mentality noted by the GP: Unfortunately, we find much of this same short-sighted idiotic MBA behaviour in the US government [salon.com] over the past several years: "We support our troops," indeed. How bitter. I have good friends in the military, and these Blackwater goons are effectively stealing wages from them. Meh. Another example [washingtonpost.com]: By any strict economic definition, there is another word for "profit" -- "inefficiency". Ethically speaking, one might even stretch things a bit and call it "theft". Making a living is one thing, but fleecing your customers simply because you can is a crime in all but name. Cheers, Re: (Score:3, Insightful) I think the problem with corporations is the same as the problem with copyright. Both were created for the public good- not for the private good. The primary stakeholders in each has lost sight of the fact that their special privileges were created for the public good. When it gets bad enough, those rights can be taken back. Re:enjoy capitalism (Score:5, Insightful) Vote with your wallets This will never work. Just like businesses, most people care about their bottom line. Any Midwestern autoworker would sign under your post, and yet look at their spending habits outside of buying (heavily discounted) American cars. I bet they don't think twice about buying the cheapest jeans or kitchenware made in china while shopping at some mega retailer. Re:Huh? HCL? (Score:5, Insightful) Programming by rote (Score:4, Insightful) My experience with the majority (and yes there are exceptions) of Indian IT workers is that they have little or no creativity or the ability/willingness to question obviously bad design. Yes they get the work done but at what overall cost to the business. For example, the IT of an Airline was outsourced to an Indian company. We had to get a firewall rule added so that passenger details could be sent to Homeland Security. It took over 3 months for their supposedly expert network managers to get the rule added even though they had been supplied with detailed instructions on how to do it. When asked why it had taken so long, the answer given was 'We have no one here who has done that sort of configuration before'. This was coming from a company that boasted how many Cisco certified people they had. On the other hand, there are exceptions and most of those (IMHO) are people who have been trained outside India and have thus broken the mould so to speak. Many of these can think creatively and add real value to projects. Ironically, these Indians have a very low opinion of companies such as HCL etc to properly run western IT departments. I'm posting as an AC as I'm currently working in an IT Dept that is about to be outsourced to and Indian Company. I'd like to keep my job as long as possible. Re:Huh? HCL? (Score:4, Insightful) I don't know where you're working. Such poor work habits have only been the case in one environment I've ever seen, out of many my own job history and the many partners I've worked with, where the manager had frightened and alienated all the staff and they were all job hunting. (All 5 of those engineers resigned within one week of each other: it was frightening to see as a corporate partner, but I gave 2 of them recommendations because they were _good_ at dealing with that mess beyond what I would have tolerated.) One of the reasons those engineers balked was because not only was the product "not perfect" it was demonstrably broken due to the excess "features" added by the manager that were not part of the core requirements, and it simply would not work. American workers are more willing to question authority. It drives authority nuts, and I've had it happen with international scenarios, where I struggled to be allowed to speak directly with the actual engineers so we could resolve the confusion about the most effective approach. We also loathe the telephone tag of sending our question to a call center or a manager, who rewrite and re-interpret it, then having them talk to a technician, who re-interprets it, and eventually gets to an engineer who wonders why we want to gogo-fratz with the banana puddijng, but does their best to send back an answer. We Americans try to sneak past those layers of management and bureaucracy to find the person who actually knows, and trade notes. (I do, anyway, and try to send them my patches.) Re:I'll guess I'll complain on Slashdot again (Score:5, Insightful) Then you're doing something wrong. I'm sorry for this, but I can't stand people who blame job markets for being unemployed. There's *always* work, so long as you know where to look. If you have a CMU degree, developing software at home *casually* for 20 years is hardly an endorsement. I could say that same thing and I'm only 30. Being unemployed for 6 out of 7 years is also very, very bad. I'd think twice about even touching you for *any* job if I found that out. Hell, working in MacD's would have looked better - I've recommended IT staff for employment even though they've been working at supermarkets, etc. lately because I *know* it's a tough market and they need to take what they can get. It also makes me wonder what the hell you *have* been doing for those years, if you weren't working. Maybe you travelled, maybe you lived off your savings, maybe you started your own business, maybe you did other things, but hell - 6 entire years of unemployment is a bad place to start from. You think you're going to land an MS job with that on your record (not that I've ever seen the big deal with MS jobs, to be honest)? And I've found jobs online and offline - the best ones are normally online but I've landed some lovely places offline too, usually by word-of-mouth (90% of my clients over the last nine years have been by word-of-mouth). And I don't mean "keyboard shuffler" jobs. I make a good living providing IT management to schools (state and private, primary, secondary, college, already supported for IT or not) in London - hardly an "easy" job to land, especially for a kid straight out of university, especially for one with *NO* work experience when they started, especially for nine years of full employment in a row (seven self-employed but often working for only a handful of clients on a regular basis) and *especially* when I was actually hired to work on critical IT systems in preference to the existing, "free", borough services provided to those schools & colleges. It's a matter of persistence and having something to show. Getting an interview and getting a job are vastly different things - the interview is HARD to get, the job shouldn't be if you've got to interview. Something about your post suggests to me that you have FAR too high an expectation based on the fact that you have a skill that you have rarely demonstrated in a work environment, but mostly "at home" on toy projects. I can program in C, Z80 and x86 assembly. I can manage SQL databases. I've made my own toy operating systems. I can build and manage networks. None of that matters, even though I use it as part of my job. I'd love to have a job doing certain parts of that, but it's just not possible to fill my hours with the tasks I enjoy the most. I have dozens of those sorts of qualifications, projects, etc. too, they appear on my CV, but equally I have a full history of employment in a relevant sector. Recession? Stop blaming external factors for your expectations. England is in one of it's worst ever "recessions"... at the height of it, I left one job to seek out another because I wasn't enjoying it. I have a house with substantial mortgage, a wife who earns her share and (at the time) a newborn child. I competed for the new role against 50-year experienced IT managers, in a London borough, and walked into the job - not because I was cheap, not because I was perceived as being easily led, but because my history spoke for itself even though my employers understood 0.1% of what was on my CV. I don't think "no one wanted to hire"... I think "no one wanted to hire YOU". I'd probably bin your CV if you have a six year unexplained gap in it and your biggest project was an MMORPG (I'm sorry, but it's a game... unless I'm a game developer, I *will* just ignore that project as nothing more than a hobby). I'd be worried that you can't find a job online (I view submissions from skilled IT people who submit on paper with suspicion if they could have filed online) - that's where the *best* IT jobs are... they are shor Re:I'll guess I'll complain on Slashdot again (Score:4, Insightful) "It lasts, right now, for 1.25 years. Do the math." Erm... 1.25 years of unemployment on a CV (and I don't know the rules but in most places you ARE allowed to do volunteer work and sometimes a small amount of actual work and still claim). Followed by the following thoughts in a potential employer's head: "He was on benefits for over a year." "He didn't do anything else in that time." "If I employ him, there's nothing to stop him leaving in 3/6/9 months, whatever the cut-off point is, and then going back on full benefits for another year." "Maybe he's using me to 'refresh' his benefits." "I'm not going to get the best work out of him, and he's been idle for at least a year, and then he'll probably leave or get himself sacked." "Why should I employ him?" Unless you can answer that last question, there's nothing in it for an employer. It's harsh, yes, but true. Especially true as you get older... if you're going to employ a 40-year-old over a 30-year-old, they better have 10 years of experience to draw on! If they can't show that, you might as well employ the 30-year-old (who will want less money) and train him. Re:anecdotal, but (Score:4, Insightful) I recall reading somewhere that in a basic programming concepts class, during the time when everyone was hopping on the comp sci bandwagon for easy money, there will be about half the people who come out of it simply unable to grasp such simple concepts as control structures and variables in any meaningful way -- no matter how good the instruction. The problem as I've come to see it is in India, your actual aptitude for programming isn't really relevant to whether you get into the training. I don't know why this is, because in theory this is tested for ahead of time. The difference I see is that in the US, most people without such aptitude will change their majors. In India, it's no deterrent -- this is often the only way out of abject poverty and so they will understandably fight tooth and nail to complete their training and enter the workforce. This in turn heavily weights the available pool of developers in the direction of "incompetent". It's not that the people of India are as a whole any less likely to have the ability to succeed in computer-related careers than anywhere else in the world population -- but desperation drives a disproportionately large percentage of unqualified people into this career path.
https://news.slashdot.org/story/09/06/22/0019233/indian-ceo-says-most-us-tech-grads-unemployable?sdsrc=rel
CC-MAIN-2017-47
refinedweb
15,668
69.72
Working on GPU in a generic way Are there any tools or libraries that belong to opencv that enable GPU work in a generic way? Are there any tools or libraries that belong to opencv that enable GPU work in a generic way? There's a way to execute custom OpenCL code on the GPU from OpenCV using the cv::ocl classes and UMat type images. Here you can find an example for doing this. A word of warning: OpenCL code can be difficult to develop. It's massively parallel, has no concurrency checks (so it will crash for any bugs) and it's difficult to debug. OTOH OpenCL is very portable, it runs on most GPUs and on CPUs too (CUDA runs only on nVidia hardware). I personally prefer using CPU parallelisation using TBB, it's much easier to implement. Concerning CUDA, there is a cv::cuda namespace for CUDA-based operations. It has several image processing algorithms implemented, but I don't know if it's possible to run generic CUDA code. Asked: 2019-09-12 07:39:17 -0500 Seen: 68 times Last updated: Sep 13 GPU Code Not Working Question Does OpenCV support PowerVR SGX540 GPU? cv::gpu::remap comparatively slow No. For that you'd need something like OpenGL 4.3, which has compute shaders. Here is a link to a sample compute shader app:... Note that the code requires the GLUT and GLEW libraries. Compute shaders were made official in 2012, with OpenGL 4.3. They're not new, and they are widely used.
https://answers.opencv.org/question/218273/working-on-gpu-in-a-generic-way/?sort=latest
CC-MAIN-2019-51
refinedweb
260
74.79
I have to mark C code and part of that involves running and timing their submissions. The problem is that their code then runs as me and they can in principle do whatever they like using my permission settings. For example, they could copy my ssh private key. I could set up a virtual machine and run their code within it (although I am not entirely sure of the best way to lock this down either). A problem with this is that the speed performance will now not be realistic. I could provide the same virtual machine to all the users to test their code on beforehand so at least they have the same set up to test with. Is there a good way to set up an environment where you can run code written by others but limit the damage it can do? There are actually multiple types of virtual machines, which is kinda being missed in the other answers. You can have what's known as container virtualisation - something like Linux vServer or OpenVZ. They share the host kernel, running what are known as 'containers' (with their own environments) rather than virtualising any hardware, and are almost as fast as bare metal. (OpenVZ is more common in cheaper VPS services, but only supports up to a custom kernel 2.6.x, while vServer goes up to latest stable). Apart from that, the overhead of full virtualisation on a modern machine isn't as bad as you think! With hardware virtualisation on a mid-high end CPU, most uses wouldn't even notice any performance penalty, unless there was contention for resources (e.g. the host or another VM was using a lot). It will be a little slower, because some resources are used by the guest OS, but the cost of the virtualisation itself is almost negligible - especially CPU usage, since that can be passed to the raw hardware with (almost) no translation, if it's hardware accelerated. You could try it, you might be surprised. Note that each also comes with differing levels of isolation. Container virtualisation makes it much easier to exploit kernel and other bugs to 'break out' of the container - LXC is not secure at all, though OpenVZ is considered pretty mature and secure (and is commonly used in VPS services, where you're selling containers to untrusted people). vServer is somewhere in between. Full virtualisation has better isolation, but some attacks still exist to break out. It depends just how far you expect a malicious student to go. It might be sufficient to simply run as a different user. You might want a container for more security. Chances are that a container is enough for anything you'll encounter in these circumstances. Make a single user account with limited privileges (which means access to only a limited set of library routines, possiby even a stripped down shell access). ssh as the user in your system, and run their programs. ssh You can even write a small bash shell script (or any other shell script) to achieve this. bash owner nobody chroot There's a few different possibilities, depending on how much isolation you want. The easiest is to simply trust the code. It looks like that is out of the question for you, or you wouldn't be asking this. The next step up is to run the code on a separate user account, as Vigneshwaren suggested. If you want to restrict network access specifically for a particular user account, that can be accomplished through iptables owner matching. When done, the user account can be left around or deleted, and any processes running as that user can be killed outright. One more step up is to add a chroot jail to the separate user account. This can cause trouble with libraries or configuration files that need to be in place, but if it's e.g. a pure number-crunching exercise, it can be practical. It ensures that only the files you want the students' code to be able to access are accessible to that code. The final step would be to execute the code in a completely separate environment. Think virtual machine, here, although a separate physical computer could accomplish the same thing. The code can execute in a completely isolated environment, including with the virtual network cable unplugged, and any damage it could possibly do, including filling up the disk or fork-bombing, will be isolated within the virtual machine and the worst that might happen is that you need to forcibly turn it off. Since the VM will have a completely separate OS installation, especially if you remove the network connection before running the software, this cannot possibly leak any of your sensitive data. With a VM, you can use disk snapshots to allow you to quickly and easily return to a known state after running each student's program. It all depends on where on the effort-versus-trust-needed scale you place your students. Less trust requires more effort on your part to make sure nothing bad happens. chroot /mnt/chroot /bin/bash I have to mark C code Then you have full access to the source code, Look though it - its doubtful they will be able to pass off anything malicious in the code without you noticing. If your unsure run it though a VM, but in most cases you will know what is being run I have worked on a similar system a few years ago. What I did was to use ptrace to limit the system calls (see code here), and optionally change user id or chroot. If the programs are simple, involving only pure algorithms and basic I/O tasks, this should be a practical solution that you could consider. BTW, it is worth mentioning that you should also limit compiler time/memory usage. Some malicious programs may include directives like #include </dev/random> which could cause the compiler to hang for a long time, or some recursive macros causing the compiler to eat up lots of memory. #include </dev/random> By posting your answer, you agree to the privacy policy and terms of service. asked 12 months ago viewed 577 times active
http://superuser.com/questions/690553/run-user-code-safely/690556
CC-MAIN-2014-52
refinedweb
1,039
60.55
I'd like to create a HashMap with two String values. The first string value is of some "Mode" and the second string value is of some "State" for example: "TRACK" mode is "DISABLED" I would like to create a HashMap with multiple defined modes, with a "State" associated with each that changes from "ENABLED" or "DISABLED". This is what I have so far: public class SystemModeStatus{ private Map<String String> map = new HashMap<String,String>(){ { put("MODE_A", "STATE"); put("MODE_B", "STATE"); put("MODE_C", "STATE"); } }; public SystemModeStatus(Map<String, String> m){ this.map = m; } public Map<String, String> getMap(){ return this.map; } } Firstly you should probably remove the getMap() method as you want to expose as few methods as possible on the internal map. Then you can add methods to update the values in your map: public void set(String mode, String status) { this.map.put(mode, status); } public String getStatus(String mode) { return this.map.get(mode); } More info here:
https://codedump.io/share/rqwXUlBClU9N/1/setting-and-getting-with-java-hashmaps
CC-MAIN-2018-05
refinedweb
161
62.07
09 July 2010 15:51 [Source: ICIS news] By Nigel Davis LONDON (ICIS news)--When your aim is to fly around the world, using only sunlight for propulsion, you are going to need the lightest and strongest materials – and some of the most advanced solar technology. The first Solar Impulse plane (HB-SIA) – a gentle giant with the wingspan of an airliner (63m) but the weight of a small car (1,600kg) – took to the summer skies this week over ?xml:namespace> Soaring to a height of 8,500m, the aircraft, powered solely by the 12,000 solar cells on its wings and four electric engines producing just 10 horsepower each, absorbed sunlight until about two hours from dusk. After that, it relied on battery power and its pilot’s skill to fly through the night before drawing more energy from the sun’s rays in the morning. The scale of the achievement should not be underestimated. Materials and technologies have been pushed to the limit. This is a groundbreaking as well as a breathtaking project, not surprisingly linked to the products and expertise of major chemicals producers. (See a description of the project and the build-up to the first night flight here as well as commentary on solar power.) Bayer MaterialScience is a recent official sponsor. Its polyurethane foam has been used in the cockpit cladding, the engine cowling and the wings. A thin polycarbonate film has been used in the cockpit windows. Solar Impulse's flight has been seven years in the making, but the project still has a long way to run. It is proving the suitability of materials to significant extremes of physical and environmental stress. But it is also pushing the boundaries of perceptions of just what might be achievable if we think outside the box. Expertise gained from early flights will be harnessed in the construction of the next Solar Impulse plane: the one that will eventually attempt an around-the-world mission. “I have just flown more than 26 hours without using a drop of fuel and without causing any pollution!” pilot and co-founder of the project André Borschberg said after landing. He was in the air for 26 hours and nine minutes. He described watching the battery charge levels in the aircraft rise as the solar panels absorbed energy from the sun as it gained altitude during the day, and his feeling of joy at seeing the sun rise and the energy start to circulate in the solar panels again after the night flight. Co-founder Bertrand Picard said the flight gave credibility to speeches he and Borschberg had given over the years about renewable energy and “clean techs”. The project is a test bed for materials and systems as well as its pilots. “Solar Impulse is above all a symbol for all of us, generating maximum support for technologies and positive attitude towards renewable energies in order to ensure the energy and ecological future of our planet,” says the Belgium-based chemicals and materials group Solvay. The company supports the project financially but also plays an active role offering technical solutions and know-how, including forecasting and simulation skills for materials in extreme environments. The other main partners in the project are Omega and Deutsche Bank. A host of scientific, specialised and other supporters help provide a format for the sharing of ideas and expertise. And it is this active involvement of a broad range of suppliers and sponsors that makes the project so fascinating. Bayer MaterialScience says that more of its products will be used in the next Solar Impulse aircraft. “The company is working flat out on the development of further ultra-lightweight materials,” it said on Friday. Its carbon nanotubes could be used to save weight but add strength to the structural components, for instance. The major target of the cooperation, a Bayer spokesman told Insight this week, “is to develop tailor-made, lightweight, high-performance materials as the new solar-powered plane will weigh less than 1,600 kg and thus be lighter than an Audi A6”. Solar Impulse is not a flight of fancy but a unique demonstration of what can be achieved using advanced materials, design, electronics and telemetry. It pushes the boundaries of sustainability and opens up the imagination to what might just be achievable. The next target for the project is a trip across the A second prototype is scheduled to fly around the world in 2013 in five, five-day stages, travelling at an average speed of 70 km
http://www.icis.com/Articles/2010/07/09/9375439/insight-groundbreaking-ideas-take-flight.html
CC-MAIN-2014-42
refinedweb
759
57.3
Mark Seemann's thoughts about whatever .NET development topic he's currently immersed in. There are several different ways to implement Dependency Injection (DI), and Martin Fowler describes four of them in his excellent article on IoC/DI. In this article, the first three approaches (Constructor, Property, and Interface Injection) are mainly described as a background for introducing the Service Locator pattern. In Fowler's example, a generic Service Locator is a registry (basically an in-memory table), but these days you most commonly see it implemented as a factory. Consider this simple interface: public interface IMyInterface { void DoStuff(); } Here's a class that uses a Service Locator to get an instance of IMyInterface: public class ImplicitConsumer public void PerformOperation() { IMyInterface mi = ServiceFactory.Create<IMyInterface>(); mi.DoStuff(); } The ServiceFactory class has a static method that returns an instance of the requested interface. Although not explicitly shown, the ImplicitConsumer class has a default constructor, since no constructor is defined and the C# compiler then automatically creates one. Now, imagine yourself in a situation where you need to consume an instance of the ImplicitConsumer class and call its PerformOperation method. Also imagine that you have just been handed the class in a binary form, with documentation, but without source code. In this scenario, you would probably write code like this: ImplicitConsumer ic = new ImplicitConsumer(); ic.PerformOperation(); Writing the first line is very straightforward, since there's only one way to create a new instance of the class. Next, with the ic instance, IntelliSense will quickly help you to find and invoke the PerformOperation method, and that's it: The code compiles and you are happy. At run-time, however, this code is going to fail, since the Service Locator has not been configured. At this point, you may resort to the documentation, and if you are lucky, the documentation will tell you that the PerformOperation method expects the Service Locator to be configured to return an instance of IMyInterface. If you aren't so lucky, you will have to fire up Reflector to figure out what to do. Depending on the Service Locator's implementation, this configuration may be done in the configuration file or in code. Here's how it might look in code: ServiceFactory.Preset<IMyInterface>(new StubMyInterface()); Here we have what looks like two pieces of totally unrelated code, yet they are very closely related at run-time. If you came by this code without prior knowledge, you'd probably mistake the first line of code for a piece of lava flow and delete it. If you were the author of those three lines, you might attempt to protect yourself from this risk by applying a comment to the first line, but that would be an apology. In my opinion, an API should always strive to steer developers in the right direction. With respect to DI, the API should clearly state its intent to consume a particular dependency. Constructor Injection does this very explicitly: public class ExplicitConsumer private IMyInterface mi_; public ExplicitConsumer(IMyInterface mi) if (mi == null) { throw new ArgumentNullException("mi"); } this.mi_ = mi; this.mi_.DoStuff(); With this implementation, any developer is forced to do the right thing. There's only a single constructor, and IntelliSense will show that it expects an instance of IMyInterface. While a developer could pass null as a parameter, he or she would do so in spite of the knowledge that was just communicated by IntelliSense (and there would still be a run-time error). There would be no need to read the documentation or fire up Reflector, because the class makes it very clear that it needs a working instance of IMyInterface to perform its work, and as a developer, you must supply it: ExplicitConsumer ec = new ExplicitConsumer(new StubMyInterface()); ec.PerformOperation(); The only drawback of Constructor Injection that I have ever been able to identify is the need to initialize all dependencies at once if you have a complex hierarchy of dependencies, as described in my former post. If lazy initialization is a necessity, you can use Provider Injection, which is a variation of Constructor Injection. Although this is currently my favorite DI strategy, it's less well-known and more difficult to explain. In any case, the main point is that if a component expects to consume a dependency which will be supplied at run-time, it should clearly state that intent through its API, instead of relying on out-of-band discovery mechanisms. Although the API may end up looking more complex, it's ensuring that mistakes are much harder to make. Even if there's a slightly more pronounced learning curve to get started with an API that uses Constructor Injection, it's easier to use in the long run. A developer with no prior knowledge of the component will sooner be able to produce code that compiles with a component relying on a Service Locator, but he or she will sooner be able to produce code that works with Constructor Injection. If you would like to receive an email when updates are made to this post, please register here RSS You're absolutely right. However, when I employ a certain design principle (), I don't find that kind of problem occurring, even with setter injection. If no one can get at the concrete class without going through the "service locator", you're home free. At least, that's what's been working for me. Hi Udi Thank you for your comment. While I employ a principle similar to yours, I think it describes the structural relationship between components. Your comment suggests that you derive a creational principle from your structural principle, but I must admit that it's not obvious to me how the second follows from the first? I am working on a rather simple DI issue. Constructor DI in my mind is the best way "state dependency intent". The problem I have with setter DI is that it is a setter, and could be overwritten, sure there are mesures that can be taken but in the end even I may accidentially call the setter and create some hard to track and dangerous issues. I understand the statement that interfaces define behavior indepenent of contruction, and even Fowler boils it down to tradeoffs. I view this like life before generics with no covariant return types, some ugly code in that era, and the new overload keyword was downright dangerous. In this era, I was told to use the new keyword to work around the lack of covariance. Now we have generics and code is elegant, clean, and feels right. What does the BCL need to allow us to express DI in an elegant and safe manner? Great posts, and good discussion. Hi Mike Thank you for your comment. What about Constructor Injection do you think is not elegant and safe? That's exacly what I was saying, Constuctor Injection(CDI) is both "elegant and save". The problem is implementing CDI when using an interface, looking back to the first question about the inability to explicitly expess DI intent on an interface, you can't. If you are decoupling layers, you use an interface, and CDI is not supported. You have to use Setter DI which is less elegant and intuitive. Setter DI can also be dangerous as it can be set twice which is not the intent of DI. When I read Fowlers paper, and put DI in practice in an interface, it reminds me of the new overload keyword prior to generics to work around the lack of covariance in the BCL. Thanks for the discussion... Oh, I think I now see what you are saying: If you really, truly want to model a dependency constraint into a contract, you can only do so by Property Injection, because you can model a property on the interface, but not a constructor. I still think it's a Really Bad Idea™ to model dependency chains in contracts, but that aside, this brings up an interesting point of Krzysztof Cwalina's: Interfaces don't define contracts. If, on the other hand, you were to model your contracts as classes, you could have an Initialize method that takes any dependencies as parameters. Then, any virtual members of the class would be protected hook methods that public clients would only be able to invoke via public safeguards that check whether the instance has been properly initialized. You see this pattern in many places in the BCS, particularly in the design-time framework. Why you would want to do this with a dependency hierarchy still eludes me, though :) Mark, In multi-threaded environments, it often makes sense to create a new "service" object, rather than have a single object be thread safe. Also, when using these same principles for the interaction between the Service Layer and the Domain Model, decoupling the class creation enables you to "inject" other behaviors into the creation process. An example of this scenario can be found here: On the issue of setters and CDI with interfaces, interfaces should not expose these setters. The client code shouldn't be aware of the dependencies of its service object. The DI framework would wire that up behind the scenes - regardless of whether you used constructors or setters. On Krzysztof's post, interfaces don't NECESSARILY define a contract, but they CAN. That's where the value lies. Mike D, Check out my post (), to see how clients, services, and interfaces work together. Thank you for your comments. Yes, I agree that multi-threading tend to make our lives more difficult, and that we should strive to use a service per thread, instead of developing thread-safe services (which is harder). In both WCF and ASP.NET, this is pretty easy to ensure, since you can just create your service hierarchy at each request, since requests are (typically) handled by a new object instance (of Page, or your service, or whatever). There will obviously be a performance price to pay, but some of that can be alleviated be creating a pool of service hierarchies. In any case, it's a good point. The ability to inject behavior between consumer and service can be implemented with CDI using the Decorator pattern, but I agree that using a configurable Pipeline as part of a Service Locator is a bit cleaner. I totally agree with your point on Property Injection and setters, and I was thinking the same thing several times while writing the other comments, but never got around to explicitly state it :)
http://blogs.msdn.com/ploeh/archive/2007/06/02/StateYourDependencyIntent.aspx
crawl-002
refinedweb
1,761
50.46
. The latter strategy should be used when you have a certain set of components that are coupled in some way that if one is crashing they all need to be reset to a stable state before continuing. There are two ways you can define an Actor to be a supervisor; declaratively and dynamically. In this example we use the dynamic approach. There are two things we have to do:.: ChatStorage // needs someone to provide the ChatStorage val sessions = new HashMap[String, Actor] protected def sessionManagement: PartialFunction[Any, Unit] = { case Login(username) => log.info("User [%s] has logged in", username) val session = new Session(username, storage) session.start sessions += (username -> session) case Logout(username) => log.info("User [%s] has logged out", username) val session = sessions(username) session.stop sessions -= username } protected def shutdownSessions = sessions.foreach { case (_, session) => session.stop } }] // someone needs to provide the Session map protected def chatManagement: PartialFunction[Any, Unit] = { case msg @ ChatMessage(from, _) => sessions(from) ! msg case msg @ GetChatLog(from) => sessions(from) forward msg } } Using an Actor as a message broker, as in this example, is a very common pattern with many variations; load-balancing, master/worker, map/reduce, replication, logging etc. It becomes even more useful with remote Actors when we can use it to route messages to different nodes. Actors are excellent for solving problems where you have many independent processes that can work in isolation and only interact with other Actors through message passing. This model fits many problems. But the Actor model is unfortunately a terrible model for implementing truly shared state. E.g. when you need to have consensus and a stable view of state across many components. The classic example is the bank account where clients can deposit and withdraw, in which each operation needs to be atomic. For detailed discussion on the topic see. This however is addressed by the persistence module in Akka. Akka provides the possibility of taking the transactional data structures we discussed above and making them persistent. It is an extension to the STM which guarantees that it has the same semantics. The persistence module has pluggable storage back-ends. At the time of the writing it has three different storage back-ends: They all implement persistent ‘Map’, ‘Vector’ and ‘Ref’. Which can be created and retrieved by id through one of the storage modules. val map = RedisStorage.newMap(id) val vector = CassandraStorage.newVector(id) val ref = MongoStorage.newRef(id) Now let’s implement the persistent storage. We start by creating a ‘ChatStorage’ trait allowing us to have multiple different storage backend. For example one in-memory and one persistent. /** * Abstraction of chat storage holding the chat log. */ trait ChatStorage extends Actor Let’s use Redis to implementation the persistent storage. Redis is an excellent storage backend, blazingly fast with a rich data model. Our ‘RedisChatStorage’ extends the ‘ChatStorage’ trait. The only state it holds is the ‘chatLog’ which is a ‘Vector’ managed by Redis. We give it an explicit id (the String “akka.chat.log”) to be able to retrieve the same vector across remote nodes and/or through server restarts.. Redis works with binary data so we need to convert the message into a binary representation. Since we are using Strings we just have to invoke ‘message.getBytes(“UTF-8”)’, but if we would have had a richer message that we wanted to persist then we would have had to use one of the Akka’s serialization traits or serializers. You can read more about that here. The ‘GetChatLog’ message handler retrieves all the messages in the chat log storage inside an atomic block, iterates over them using the ‘map’ combinator transforming them from ‘Array[Byte] to ’String’. Then it invokes the ‘reply(message)’ function that will send the chat log to the original sender; the ‘ChatClient’. You might rememeber: We define the ‘RedisChatStorage’ as ‘Permanent’ by setting the ‘lifeCycle’ member field to ‘Some(LifeCycle(Permanent))’. persistent ‘Vector’ from Redis. /** * Redis-backed chat storage implementation. */ class RedisChatStorage extends ChatStorage { lifeCycle = Some(LifeCycle(Permanent)) private var chatLog = RedisStorage.getVector("akka.chat.log") log.info("Redis-based chat storage is starting up...") def receive = { case msg @ ChatMessage(from, message) => log.debug("New chat message [%s]", message) atomic { chatLog + message.getBytes("UTF-8") } case GetChatLog(_) => val messageList = atomic { chatLog.map(bytes => new String(bytes, "UTF-8")).toList } reply(ChatLog(messageList)) } override def postRestart(reason: Throwable) = chatLog = RedisStorage.getVector("akka.chat.log") } The last thing we need to do in terms of persistence is to create a ‘RedisChatStorageFactory’ that will take care of instantiating and resolving the ‘val storage: ChatStorage’ field in the ‘ChatServer’ with a concrete implementation of our persistence Actor. /** * Creates and a RedisChatStorage. */ trait RedisChatStorageFactory { val storage: ChatStorage = new RedisChatStorage } We have now created the full functionality for the chat server, all nicely decoupled into isolated and well-defined traits. Now let’s bring all these traits together and compose the complete concrete ‘ChatService’. /** * Object encapsulating the full Chat Service. */ object ChatService extends ChatServer with SessionManagement with ChatManagement with RedisChatStorageFactory Now that we have the ‘ChatService’ object how do we make it into a remote service that we can use from different nodes? It is very simple. We only need to do two things. First we need to start up a remote server to run the ‘ChatService’. Then for each client that wants to use the ‘ChatService’ we just need to invoke ‘ChatService.makeRemote’ to get a handle to the remote ‘ChatService’. Starting the first step. We have two options on how we can start up a remote server. Either start up the ‘RemoteNode’ in some part of the code that runs on the machine you want to run the server on (can just be a simple class with a ‘main’ method). We start the ‘RemoteNode’ by invoking ‘start’ and passing in the host name and port. RemoteNode.start("darkstar", 9999) You can also choose to use the version of ‘start’ that takes a ‘ClassLoader’ as argument if you want to be explicit on through which class loader you want to load the class of the Actor that you want to run as remote service. The second option is to put your application in a JAR file and drop it into the ‘AKKA_HOME/deploy’ directory and then start up the Akka microkernel. This will deploy your application and start the ‘RemoteNode’ for you. Then you use the ‘AKKA_HOME/config/akka.conf’ configuration file to configure the remote server (among many other things). The microkernel is started up like this: export AKKA_HOME=... cd $AKKA_HOME java -jar $AKKA_HOME/dist/akka-0.6.jar That was the server part. The client part is just as simple. We only need to tell the runtime system that we want to use the ‘ChatService’ as a remote Actor by invoking the ‘makeRemote(hostname, port)’ function on it. This will instantiate the Actor on the remote host and turn the local Actor instance into a proxy or handle through which we can use the remote Actor transparently with the exact same semantics as if it was a regular local Actor. That’s it. Now let’s run a sample client session. ChatService.makeRemote("darkstar", 9999) ChatService.start That’s it. Were done. Now we have a, very simple, but scalable, fault-tolerant, event-driven, persistent chat server that can without problem serve a million concurrent users on a regular workstation. Let’s use it. Now let’s create a simple test runner that logs in posts some messages and logs out. import se.scalablesolutions.akka.sample.chat._ /** * Test runner emulating a chat session. */ object Runner { // create a handle to the remote ChatService ChatService.makeRemote("localhost", 9999) ChatService.start def run = { val client = new ChatClient("jonas") client.login client.post("Hi there") println("CHAT LOG:\n\t" + client.chatLog.log.mkString("\n\t")) client.post("Hi again") println("CHAT LOG:\n\t" + client.chatLog.log.mkString("\n\t")) client.logout } } All this code is available as part of the Akka distribution. It resides in the ‘./akka-samples/akka-sample-chat’ module and have a ‘README’ file explaining how to run it as well as a Maven ‘pom.xml’ build file so it is easy to build, run, hack, rebuild, run etc. You can also just read the next section for instructions on how to run it. Or if you rather browse it online. First we need to start up Redis. For details on how to set up Redis server have a look here. Download and build Akka Run the microkernel export AKKA_HOME=... cd $AKKA_HOME java -jar ./dist/akka-0.6.jar Run a sample chat session Now you could test client reconnect by killing the running microkernel and start it up again. See the client reconnect take place in the REPL shell. That’s it. Have fun. There is much much more to Akka than what we have covered in this article. For example Active Objects, Cluster Membership API, a Comet module, REST (JAX-RS) integration, a Security module, AMQP integration, Spring integration, Google Guice integration, Lift integration, a rich Transaction API, tons of configuration possibilities etc.Jonas Bonér 04 January 2010
http://jonasboner.com/2010/01/04/introducing-akka/
CC-MAIN-2014-15
refinedweb
1,522
58.38
Deleting Resources Microsoft® Windows® 2000 Scripting Guide The Delete method lets you delete instances of a managed resource. Not all WMI classes support the Delete method; those that do include Win32_Directory, CIM_DataFile, Win32_Share, and Win32_ScheduledJob. This means that you can use scripts to delete files, folders, shared folders, or scheduled tasks. To delete a managed resource, retrieve the instance to be deleted, and then call the Delete method. The script template in Listing 6.28 demonstrates this operation by deleting the shared folder named "New Share Name." Listing 6.28 Template for Deleting Resources To use this template to delete other managed resources: Set the value of strClassName to the appropriate WMI class. If necessary, set the value of strNamespace to the appropriate WMI namespace. Set the value of strKeyName to the name of the key property for the class. Set the value of strKeyValue to the appropriate value.
https://technet.microsoft.com/en-us/library/ee198927.aspx
CC-MAIN-2018-17
refinedweb
149
65.01
I would like to have a list of connected users, have chosen dialog tui for this. This is my first little python (3.5) script. import sys import psutil import locale import dialog import pprint locale.setlocale(locale.LC_ALL, '') d = dialog.Dialog(dialog="dialog") choices = [] i = 0; users = psutil.users() for user in users: item = ('{0}.'.format(i), user.name) choices.append(item) i += 1 choices.append(('X', "Exit")) #pprint.pprint(choices) #OUTPUT: [('0.', 'root'), ('1.', 'root'), ('X', 'Exit')] #code, tag = d.menu("List", choices) code, tag = d.menu("List", choices=[('0.', 'root'), ('1.', 'root'), ('X', 'Exit')]) child_output.strip())) dialog.DialogError: dialog-like terminated due to an error: the dialog-like program exited with status 3 (which was passed to it as the DIALOG_ERROR environment variable). Sometimes, the reason is simply that dialog was given a height or width parameter that is too big for the terminal in use. Its output, with leading and trailing whitespace stripped, was: Error: Expected at least 6 tokens for --menu, have 4. It is not enough pass it as a variable, like: code, tag = d.menu("List", choices) It should be declared explicitly as: code, tag = d.menu("List", choices=choices)
https://codedump.io/share/2eG8EkVhw0p4/1/how-to-pass-a-variable-to-the-dialog39s-choices-propery
CC-MAIN-2017-13
refinedweb
197
61.53
I frequently find myself working with large lists where I need to apply the same time-consuming function to each element in the list without concern for the order that these calculations are made. I’ve written a small class using Python’s multiprocessing module to help speed things up. It will accept a list, break it up into a list of lists the size of the number of processes you want to run in parallel, and then process each of the sublists as a separate process. Finally, it will return a list containing all the results. import multiprocessing class ProcessHelper: def __init__(self, num_processes=4): self.num_processes = num_processes def split_list(self, data_list): list_of_lists = [] for i in range(0, len(data_list), self.num_processes): list_of_lists.append(data_list[i:i+self.num_processes]) return list_of_lists def map_reduce(self, function, data_list): split_data = self.split_list(data_list) processes = multiprocessing.Pool(processes=self.num_processes) results_list_of_lists = processes.map(function, split_data) processes.close() results_list = [item for sublist in results_list_of_lists for item in sublist] return results_list To demonstrate how this class works, I’ll create a list of 20 integers from 0-19. I’ve also created a function that will square every number in a list. When I run it, I’ll pass the function (job) and the list (data). The class will then break this into a list of lists and then run the function as a separate process on each of the sublists. def job(num_list): return [i*i for i in num_list] data = range(20) p = ProcessHelper(4) result = p.map_reduce(job, data) print(result) So if my data originally was a list that looked like this: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] When I split it into sublists, I’ll end up with a list of 4 lists (as I’ve indicated that I want to initialise 4 processes): [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15], [16, 17, 18, 19]] Finally, the result will give me the list of squared values that looks like this: [0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361] I’ll continue to build this class as I identify other handy helper methods that I could add.
https://jacksimpson.co/multiprocessing-in-python/
CC-MAIN-2021-31
refinedweb
390
66.17
What you want to do, is ensure that all interactions with these external libraries are wrapped in a begin..rescue..end. You catch all external errors and can now decide how to handle them. You can throw your hands up in the air and just re-raise the same error: begin SomeExternalLibrary.do_stuff rescue => e raise end This doesn’t really win us anything. Better yet you would raise one of your own custom error types. begin SomeExternalLibrary.do_stuff rescue => e raise MyNamespace::MyError.new end This way you know that once you’re past your interfaces with the external libraries you can only encounter exception types that you know about. The Need For Nested Exceptions The problem is that by raising a custom error, we lose all the information that was contained in the original error that we rescued. This information would have potentially been of great value to help us diagnose/debug the problem (that caused the error in the first place), but it is lost with no way to get it back. In this regard it would have been better to re-raise the original error. What we want is to have the best of both worlds, raise a custom exception type, but retain the information from the original exception. When writing escort one of the things I wanted was informative errors and stack traces. I wanted to raise errors and add information (by rescuing and re-raising) as they percolated through the code, to be handled in one place. What I needed was the ability to nest exceptions within other exceptions. Ruby doesn’t allow us to nest exceptions. However, I remembered Avdi Grimm mentioning the nestegg gem in his excellent Exceptional Ruby book, so I decided to give it a try. The Problems With Nestegg Unfortunately nestegg is a bit old and a little buggy: - It would sometimes lose the error messages - Nesting more than one level deep would cause repetition in the stacktrace I also didn’t like how it made the stack trace look non-standard when including the information from the nested errors. If we take some code similar to the following: require 'nestegg' class MyError < StandardError include Nestegg::NestingException end begin 1/0 rescue => e begin raise MyError.new("Number errors will be caught", e) rescue => e begin raise MyError.new("Don't need to let MyError bubble up") rescue => e raise MyError.new("Last one for sure!") end end end It would produce a stack trace like this: examples/test1.rb:26:in `rescue in rescue in rescue in <main>': MyError (MyError) from examples/test1.rb:23:in `rescue in rescue in <main>' from examples/test1.rb:20:in `rescue in <main>' from examples/test1.rb:17:in `<main>' from cause: MyError: MyError from examples/test1.rb:24:in `rescue in rescue in <main>' from examples/test1.rb:20:in `rescue in <main>' from examples/test1.rb:17:in `<main>' from cause: MyError: MyError from examples/test1.rb:21:in `rescue in <main>' from examples/test1.rb:17:in `<main>' from cause: ZeroDivisionError: divided by 0 from examples/test1.rb:18:in `/' from examples/test1.rb:18:in `<main>' After looking around I found loganb-nestegg. This fixed some of the bugs, but still had the non-standard stack trace and the repetition issue. When you’re forced to look for the 3rd library to solve a problem, it’s time to write your own. This is exactly what I did for escort. This functionality eventually got extracted into a gem which is how we got nesty. Its stack traces look a lot like regular ones, it doesn’t lose messages and you can nest exceptions as deep as you like without ugly repetition in the stack trace. If we take the same code as above, but redefine the error to use nesty: class MyError < StandardError include Nesty::NestedError end Our stack trace will now be: examples/complex.rb:20:in `rescue in rescue in rescue in <main>': Last one for sure! (MyError) from examples/complex.rb:17:in `rescue in rescue in <main>' from examples/complex.rb:18:in `rescue in rescue in <main>': Don't need to let MyError bubble up from examples/complex.rb:14:in `rescue in <main>' from examples/complex.rb:15:in `rescue in <main>': Number errors will be caught from examples/complex.rb:11:in `<main>' from examples/complex.rb:12:in `/': divided by 0 from examples/complex.rb:12:in `<main>' Definitely nicer. We simply add the messages for every nested error to the stack trace in the appropriate place (rather than giving them their own line). How Nested Exceptions Work The code for nesty is tiny, but there are a couple of interesting bits in it worth looking at. One of the special variables in Ruby is $! which always contains the last exception that was raised. This way when we raise a nesty error type, we don’t have to supply the nested error as a parameter, it will just be looked up in $!. Ruby always allows you to set a custom backtrace on any error. So, if you rescue an error you can always replace its stack trace with whatever you want e.g.: begin 1/0 rescue => e e.message = "foobar" e.set_backtrace(['hello', 'world']) raise e end This produces: hello: divided by 0 (ZeroDivisionError) from world We take advantage of this and override the set_backtrace method to take into account the stack trace of the nested error. def set_backtrace(backtrace) @raw_backtrace = backtrace if nested backtrace = backtrace - nested_raw_backtrace backtrace += ["#{nested.backtrace.first}: #{nested.message}"] backtrace += nested.backtrace[1..-1] || [] end super(backtrace) end To produce the augmented stack trace we note that the stack trace of the nested error should always be mostly a subset of the enclosing error. So, we whittle down the enclosing stack trace by taking the difference between it and the nested stack trace (I think set operations are really undervalued in Ruby, maybe a good subject for a future post). We then augment the nested stack trace with the error message and concatenate it with what was left over from the enclosing stack trace. Anyway, if you don’t want exceptions from other libraries invading your app, but still want the ability to diagnose the cause of the exceptions easily – nested exceptions might be the way to go. And if you do decide that nested exceptions are a good fit, nesty is there for you. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/ruby-%E2%80%93-why-u-no-have-nested?mz=38541-devops
CC-MAIN-2017-34
refinedweb
1,093
56.45
Main Class of the Sampling framework. More... #include <point_sampling.h> Main Class of the Sampling framework. This class allows you to perform various kind of random/procedural point sampling over a triangulated surface. The class is templated over the PointSampler object that allows to customize the use of the generated samples. Definition at line 465 of file point_sampling.h. The EdgeSamplingStrategy enum determines the sampling strategy for edge meshes. Given a sampling radius 'r', and the total length of the edge mesh 'L', the number of generated samples is: op(L/r) (+ 1 if the mesh is not a loop) where op is (floor | round | ceil) Definition at line 764 of file point_sampling.h. Estimate the radius r that you should give to get a certain number of samples in a Poissson Disk Distribution of radius r. Definition at line 1767 of file point_sampling.h. Perform an uniform sampling over an EdgeMesh. It assumes that the mesh is 1-manifold. each connected component is sampled in a independent way. For each component of length <L> we place on it floor(L/radius)+1 samples. (if conservative argument is false we place ceil(L/radius)+1 samples) Definition at line 778 of file point_sampling.h. This function computes a montecarlo distribution with an EXACT number of samples. it works by generating a sequence of consecutive segments proportional to the triangle areas and actually shooting sample over this line Definition at line 1112 of file point_sampling.h. Compute a sampling of the surface where the points are regularly scattered over the face surface using a recursive longest-edge subdivision rule. Definition at line 1288 of file point_sampling.h. Compute a sampling of the surface where the points are regularly scattered over the face surface using a recursive longest-edge subdivision rule. Definition at line 1379 of file point_sampling.h. Compute a Poisson-disk sampling of the surface. The radius of the disk is computed according to the estimated sampling density. This algorithm is an adaptation of the algorithm of White et al. : "Poisson Disk Point Set by Hierarchical Dart Throwing" K. B. White, D. Cline, P. K. Egbert, IEEE Symposium on Interactive Ray Tracing, 2007, 10-12 Sept. 2007, pp. 129-132. Definition at line 2001 of file point_sampling.h. When performing an adptive pruning for each sample we expect a varying radius to be removed. The radius is a PerVertex attribute that we compute from the current quality the expected radius of the sample is computed so that it linearly maps the quality between diskradius and diskradius*variance in other words the radius Definition at line 1796 of file point_sampling.h. This function computes a montecarlo distribution with an EXACT number of samples. it works by generating a sequence of consecutive segments proportional to the triangle areas and actually shooting sample over this line Definition at line 1153 of file point_sampling.h. This function compute montecarlo distribution with an approximate number of samples exploiting the poisson distribution approximation of the binomial distribution. For a given triangle t of area a_t, in a Mesh of area A, if we take n_s sample over the mesh, the number of samples that falls in t follows the poisson distribution of P(lambda ) with lambda = n_s * (a_t/A). To approximate the Binomial we use a Poisson distribution with parameter = np can be used as an approximation to B(n,p) (it works if n is sufficiently large and p is sufficiently small). Definition at line 1086 of file point_sampling. Definition at line 610 of file point_sampling.h. This is the main function that is used to build a poisson distribuition starting from a dense sample cloud (the montecarloMesh) by 'pruning' it. it puts all the samples in a hashed UG and randomly choose a sample and remove all the points in the sphere centered on the chosen sample You can impose some constraint: all the vertices in the montecarloMesh that are marked with a bool attribute called "fixed" are surely chosen (if you also set the preGenFlag option) Definition at line 1913 of file point_sampling.h. Sample the vertices in a uniform way. Each vertex has a probability of being chosen that is proportional to the area it represent. Definition at line 688 of file point_sampling.h. Sample all the border vertices. It assumes that the border flag have been set over the mesh. All the vertices on the border are sampled. Definition at line 930 of file point_sampling.h. Sample all the border corner vertices. It assumes that the border flag have been set over the mesh both for vertex and for faces. All the vertices on the border where the edges of the boundary of the surface forms an angle smaller than the given threshold are sampled. It assumes that the Per-Vertex border Flag has been set. Definition at line 916 of file point_sampling.h. Sample all the crease vertices. It assumes that the crease edges had been marked as non-faux edges for example by using tri::UpdateFlags<MeshType>::FaceFauxCrease(mesh,creaseAngleRad); Then it chooses all the vertices where there are at least three non faux edges. Definition at line 941 of file point_sampling.h. Sample the vertices in a uniform way. Each vertex has the same probabiltiy of being chosen. Definition at line 730 of file point_sampling.h. Sample the vertices in a weighted way. Each vertex has a probability of being chosen that is proportional to its quality. It assumes that you are asking a number of vertices smaller than nv; Algorithm: 1) normalize quality so that sum q == 1; 2) shuffle vertices. 3) for each vertices choose it if rand > thr; Definition at line 650 of file point_sampling.h. Compute a sampling of the surface that is weighted by the quality and a variance We use the quality as linear distortion of density. We consider each triangle as scaled between 1 and 1/variance linearly according quality. In practice with variance 2 the average distance between sample will double where the quality is maxima. If you have two same area region A with q==-1 and B with q==1, if variance==2 the A will have 4 times more samples than B Definition at line 1196 of file point_sampling.h.
http://vcglib.net/classvcg_1_1tri_1_1SurfaceSampling.html
CC-MAIN-2020-50
refinedweb
1,043
55.54
testing.diff() function The testing.diff() function produces a diff between two streams. import "testing" testing.diff( got: stream2, want: stream1, epsilon: 0.000001, nansEqual: false, verbose: false, ) It matches tables from each stream with the same group keys. For each matched table, it produces a diff. Any added or removed rows are added to the table as a row. An additional string column with the name diff is created and contains a got table and not in the want table or -if the row was present in. Default is piped-forward data ( <-). want Stream that contains the expected data to test against. epsilon Specifies how far apart two float values can be, but still considered equal. Default is 0.000001. nansEqual Consider NaN float values equal. Default is false. verbose Include detailed differences in output. Default is false. Examples Diff separate streams import "testing" want = from(bucket: "backup-example-bucket") |> range(start: -5m) got = from(bucket: "example-bucket") |> range(start: -5m) testing.diff(got: got, want: want) Inline diff iimport "testing" want = from(bucket: "backup-example-bucket") |> range(start: -5m) from(bucket: "example-bucket") |> range(start: -5m) |> testing.diff(want: want).
https://docs.influxdata.com/flux/v0.x/stdlib/testing/diff/
CC-MAIN-2022-21
refinedweb
192
62.04
How to: Subscribe and Unsubscribe to subscribe and unsubscribe to an event that can be consumed in a loosely coupled way. For more information about events, see the Event Aggregator technical concept. Prerequisites This topic assumes that you have a solution built using the Composite Application Library that has a module and a typed event created. For information about how to do this, see the following topics: - How to: Create a Solution Using the Composite Application Library. This topic describes how to create a solution with the Composite Application Library - How to: Create a Module. This topic describes how to create a module. - How to: Create and Publish Events. This topic describes how to create a typed event. Steps The following procedure describes how to subscribe to a typed event. To subscribe to a typed event - If the event you want to subscribe to is defined in a project other than the project where your subscriber exists, add a reference to the event's project in the subscriber's project. - In the class where you want to subscribe to the event, add the following using statements and, if required, also add a using statement for the typed event's namespace. You will use the following using statements to refer to event-related classes in the Composite Application Library. - the class is instantiated by a container, it will have an instance of the event aggregator service injected when it is instantiated. - Obtain a reference to the event you want to subscribe to by invoking the GetEvent method on the event aggregator service instance. The following code example shows how to obtain an event of type FundAddedEvent. - Add a subscription to the event by invoking the Subscribe method on the event instance. The Subscribe method has several overloads that take all or some of the following parameters: - action. This required parameter is of type System.Action<TPayLoad> and is the callback delegate that gets executed when the event is published. - threadOption. This optional parameter is of type Microsoft.Practices.Composite.Presentation.Events.ThreadOption and specifies on which thread the callback delegate will be invoked. You can choose one of the following options: - ThreadOption.PublisherThread. Use this option to run the callback delegate in the same thread as the publisher. This is how typical .NET Framework events work; it is the default behavior when the threadOption parameter is omitted. - ThreadOption.UIThread. Use this option to run the callback delegate on the user interface thread. This is particularly useful if the code in the callback delegate interacts with controls in the user interface or with models bound to the user interface. - ThreadOption.BackgroundThread. Use this option to run the callback delegate in a new background thread. - keepSubscriberReferenceAlive. This parameter is of type bool. When it is set to true, the event instance keeps a strong reference to the subscriber instance, not allowing it to get garbage collected. If you want to dispose the subscriber instance, you must explicitly unsubscribe the subscriber from the event to avoid memory leaks or unexpected behavior. For information about how to do this, see the procedure "To unsubscribe from an event" later in this topic. If the parameter is set to false (this is the default value when this parameter omitted), the event maintains a weak reference to the subscriber instance, allowing the garbage collector to dispose the subscriber instance when there are no other references to it. When the subscriber instance gets collected, the event automatically unsubscribes it. - filter. This. The following example code shows different valid ways to add subscriptions to the event obtained in the previous step. // This subscription will run the callback delegate in the UI // thread and will keep the subscriber reference alive. fundAddedEvent.Subscribe(FundAddedEventHandler, ThreadOption.UIThread, true); // This subscription will run the callback delegate in the // publisher's thread, will not keep the subscriber reference alive, // and will be executed only if a particular condition is met for // the payload. fundAddedEvent.Subscribe(FundAddedEventHandler, ThreadOption.PublisherThread, false, fundOrder => fundOrder.CustomerId == _customerId); // This is the FundAddedEventHandler event handler's signature. // Note that it takes a parameter of the TPayLoad type. void FundAddedEventHandler(FundOrder fundOrder) { ... } The following procedure describes how to remove a subscription for an event. You remove a subscription for an event when you want to stop receiving notifications or when you want to dispose your subscriber object and the subscription holds a strong reference to it. To unsubscribe from an event - Obtain a reference to the event you want to remove a subscription from by invoking the GetEvent method on the corresponding event aggregator service instance (for information about how to do this, see step 3 of the procedure "To subscribe to a typed event" earlier in this topic). - Invoke the Unsubscribe method on the event instance, passing one of the following parameters: - The callback delegate you passed to the Subscribe method of the event when you added the subscription - The subscription token returned by the Subscribe method when you added the subscription - The following example code shows different valid ways of invoking the Unsubscribe method on an event instance named fundAddedEvent. Outcome You will have a subscription to a typed event that receives notifications when the event is published. Optionally, your subscription will define a filter to avoid receiving notifications if the event's payload does not meet a set of criteria. More Information For a complete list of How-to topics included with the Composite Application Guidance, see Development Activities. Home page on MSDN | Community site
https://msdn.microsoft.com/en-us/library/ff921131(v=pandp.20)
CC-MAIN-2017-39
refinedweb
916
53
I'm having a problem with the thirs case of my program...I'm trying to get a value for y only from two sets of coordinates. This works fine in the first two cases since I'm calling findY from slopeAndIntercept...but when I get into the third case my constructors get mixed up...any help? Code:// THIS IS THE .H FILE #include <iostream> using namespace std; //File: Line.h class Line { public: //constructors Line( double newSlope = 0, double newIntercept = 0);; }; #include "line.cpp"Code://LINE .CPP) { if ( (slope != 0) && (intercept != 0) ){ return (slope*x + intercept); } else { slope = (y2 - y1)/(x2 - x1); intercept = (y1 - slope*(x1)); return (slope*x + intercept); } }thanks, axonthanks, axonCode://MAIN.CPP #include "Line.h" int main() { double x = 0; double y = 0; cout << "\n----------<Case1>---------\n" << endl; Line slopeAndIntercept(1, 2), coordinates(2, 4, 3, -1); slopeAndIntercept.output1(); coordinates.output2(); x = 12; cout << "\nCalculating y-coordinate when x = " << x << endl; y = slopeAndIntercept.findY(x); cout << "When x = " << x << " , y = " << y << endl; cout << "\n----------<Case2>---------\n" << endl; Line slopeAndIntercept1(4, -1), coordinates1(0, 0, 0, 0); slopeAndIntercept1.output1(); x = 7; cout << "\nCalculating y-coordinate when x = " << x << endl; y = slopeAndIntercept1.findY(x); cout << "When x = " << x << " , y = " << y << endl; cout << "\n----------<Case3>---------\n" << endl; Line slopeAndIntercept2(0, 0),coordinates2(0, 0, 1, 1); coordinates2.output2(); x = 2; cout << "\nCalculating y-coordinate when x = " << x << endl; y = coordinates2.findY(x); cout << "When x = " << x << " , y = " << y << endl; return 0; }
https://cboard.cprogramming.com/cplusplus-programming/44654-getting-results-different-constructors.html
CC-MAIN-2017-13
refinedweb
242
51.14
hio77: disable wifi then try again. Looks like your connecting fine but it's prefering wifi. i tried disabling the wifi adapter, lan still won't connect to internet. arcon: hio77: how many devices are connected to the modem? Wifi included. I logged into the modem, it says connected devices: 5. That would be 2 phones, the PC via wireless and my TV, and the only wired lan connection being the PC. hm, that checks out normal. So your getting a IP address, you appear to be routable. Below any conditional values. Have you tried a full reboot of modem + computer? #include <std_disclaimer> Any comments made are personal opinion and do not reflect directly on the position my current or past employers may have. arcon: My Fibremax is working over Wifi with the HG659b anywhere, but not over lan. Lan1 is connected from the modem to my workstation's GbE port & green lit. It can't be a house wiring issue as the modem's internet is working in my room... it must be a PC config issue yes? Seems to me you have access via WifI (i.e.) of the guest network without probs and the LAN network has a different IP space or is set-up without DHCP. Nope, English isn't my mother tongue. But that's why I'm here. PhantomNVD: Scratch that, your dhcp is issueing correctly... does your firewall have a config issue? Disable that for a minute and see what happens? Have tried disabling Firewall, no change. It says no packets received, diagnoses as not having a valid TCPIP config. its all set to auto. Can't be a modem config issue as the other win7 PC runs ok via lan... weird. Btw the ethernet port definitely does work, I frequently transfer files to an old laptop via GbE. It's not an Acer PC is it? We have a few at work, and the most common issue with them is the onboard LAN cards stop receiving packets. Even when configured correctly with a manual IP address. The other thing you could try is in an elevated command prompt (run as administrator): netsh int ip reset log.txt then: netsh winsock reset and reboot the computer. something curious, checking the network connection properties for each connection, some info is different from IPconfig. eg. the subnet mask for the lan seems to be 255.255.0.0, not 255.255.255.0 as is mentioned in IPconfig & wireless. So that's never going to work right? LAN (left), wireless (right) The 169.254.x.x address means that your computer is not getting an IP address via DHCP. That range and subnet are given as the 'I can't see a DHCP server' addresses by Windows. Have you tried the netsh commands above? It does sound as if your computers LAN port is stuffed. Or the cable. You did say you have tried another LAN device on the same cable? Just to get it straight. What diagnostics have you done? Have you tried another Ethernet device (ie., NOT your PC) on that cable? Does it get a valid IP address (not 169.254.x.x) Have you tried another Modem? Have you tried a different cable direct to the Modem? Have you tried a factory reset on the Modem?
https://www.geekzone.co.nz/forums.asp?forumid=39&topicid=230685&page_no=2
CC-MAIN-2018-30
refinedweb
555
78.04
Functional Python Due September 23rd by midnight Objectives Practice functional programming with Python. Write concise and efficient algorithms in Python and package them together as a function library. Description Your job is to code up a library of handy mathematical functions in a module called mathtools. The specifications for all the functions have already been entered in the file mathtools.py in which your code should replace the pass statement in each function body. The first line of each function is a documentation string used by Python's help function. At the interactive prompt try import mathtools and then help(mathtools). One function, a decorator function called accept_sequence, does not and should not have its own documentation string for reasons that will become apparent if you do the extra credit part of this assignment. At the end of the file is a place for your testing code. This code will only execute if mathtools is run as the main program, not when it is imported. Testing your functions thoroughly before you submit them is extremely important and the quality of your test code will affect your score. If you are not already familiar with the Factorial function, Fibonaci numbers, and Pascal's triangle, you should look at these links for the specifications. Other Requirements - Included in the documentation strings are the required time and space complexity of each function. - Single line functions (functions whose name ends with '1'): - Write and test the multiline version first, and only then think about how to shorten it to one line of code which must: - Be less than 80 characters wide - Not include semi-colons - Not call any additional functions you have written (but may call built-ins) unless stated otherwise lambda, reduce, zip, and list comprehensions may all be useful getCombinationsFunc: - Must not perform any multiplications - Must return a function that does not perform any multiplications - Will find Pascal's triangle very helpful - EXTRA CREDIT permuteSetand permuteSetR: - There are n! permutations of a set of size n. 10! = 3628800, so the permutations of a set with as few as 10 elements can use a significant amount of memory if computed with permuteSetR - Writing permuteSetas an iterative generator function avoids this problem but requires a tricky implementation of depth first search. My implementation is 16 lines long and by far the longest function in mathtools - Decorating the powerSetfunctions: - These functions accept a variable number of arguments *args. - The purpose of decorating them is for when they are passed just one argument which happens to be an iterable sequence. The decorator should unpack the sequence so it will be treated as if it were an argument tuple with each element of the sequence becoming a separate argument passed on to the decorated function. - The is_iterablefunction is helpful when writing the accept_sequencedecorator. - (This is a lot easier than writing permuteSetor permuteSetR) Below is an example trace of how the functions in mathtools.py should work when used in an interactive Python session. >>> from mathtools import * >>> fact(0) 1 >>> fact(5) 120 >>> fact(10) 3628800 >>> factR(0) 1 >>> fact2(0) 1 >>> fact1(0) 1 >>> fact >>> fib(5) 5 >>> fib(0) 0 >>> fib(20) 6765 >>> fib(100) 354224848179261915075L >>> fibR(20) 6765 >>> fibR(30) 832040 >>> fib(30) 832040 >>> for i in fibs(10): print i ... 0 1 1 2 3 5 8 13 21 34 55 >>> [i for i in fibs(20)] [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765] >>> rPascal([1]) [1, 1] >>> rPascal1([1,1]) [1, 2, 1] >>> rPascal1([1,2,1]) [1, 3, 3, 1] >>> rPascal([1,3,3,1]) [1, 4, 6, 4, 1] >>> tPascal]] >>> tPascal1]] >>> c = getCombinationsFunc(100) >>> c(0,0) 1 >>> c(1,0) 1 >>> c(1,1) 1 >>> c(5,3) 10 >>> c(12,5) 792 >>> c(60,23) 23385332420868600L >>> c(100,50) 100891344545564193334812497256L >>> permutations(7,5) 2520 >>> permutations(7,1) 7 >>> powerList(1,2,3) [(), (1,), (2,), (1, 2), (3,), (1, 3), (2, 3), (1, 2, 3)] >>> powerList1(1,2,3) [(), (1,), (2,), (1, 2), (3,), (1, 3), (2, 3), (1, 2, 3)] >>> powerList1(1.2, 'hello', False) [(), (1.2,), ('hello',), (1.2, 'hello'), (False,), (1.2, False), ('hello', False), (1.2, 'hello', False)] >>> powerList(1,2,3,4) [(), (1,), (2,), (1, 2), (3,), (1, 3), (2, 3), (1, 2, 3), (4,), (1, 4), (2, 4), (1, 2, 4), (3, 4), (1, 3, 4), (2, 3, 4), (1, 2, 3, 4)] >>> >>> >>> # AND NOW FOR EXTRA CREDIT ... >>> permuteSetR([1,2,3]) [(1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)] >>> [x for x in permuteSet([1,2,3])] [(1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)] >>> permuteSetR([1,2)] >>> for t in permuteSet([1,2,3,4]): ... print) >>> powerList('abcd') [(), ('a',), ('b',), ('a', 'b'), ('c',), ('a', 'c'), ('b', 'c'), ('a', 'b', 'c'), ('d',), ('a', 'd'), ('b', 'd'), ('a', 'b', 'd'), ('c', 'd'), ('a', 'c', 'd'), ('b', 'c', 'd'), ('a', 'b', 'c', 'd')] >>> powerList([1,2,3,4]) [(), (1,), (2,), (1, 2), (3,), (1, 3), (2, 3), (1, 2, 3), (4,), (1, 4), (2, 4), (1, 2, 4), (3, 4), (1, 3, 4), (2, 3, 4), (1, 2, 3, 4)] >>> powerList1(xrange(5)) [(), (0,), (1,), (0, 1), (2,), (0, 2), (1, 2), (0, 1, 2), (3,), (0, 3), (1, 3), (0, 1, 3), (2, 3), (0, 2, 3), (1, 2, 3), (0, 1, 2, 3), (4,), (0, 4), (1, 4), (0, 1, 4), (2, 4), (0, 2, 4), (1, 2, 4), (0, 1, 2, 4), (3, 4), (0, 3, 4), (1, 3, 4), (0, 1, 3, 4), (2, 3, 4), (0, 2, 3, 4), (1, 2, 3, 4), (0, 1, 2, 3, 4)] >>> Put your program in a file called mathtools.py. Advice: Keep an eye on the class discussion board for helpful hints on how to do this assignment! Grading Criteria Your functions will be graded on correctness, efficiency, and brevity of code. You should also attempt to make your code easy to read. Submission Checklist mathtools.py
http://www.cs.utexas.edu/users/julian/index.php?page=Functional%20Python
CC-MAIN-2018-05
refinedweb
1,018
61.5
Perhaps you misunderstood the article. The student code is the five lines listed on the bottom of page one of the article. The have to write a class using the code public class Friend{} and then put a single method in it that has the code public String getName(){ return "Daniel";} . I'm not seeing the complexity in this assignment. Second, I do want them to think of OOP first. They will certainly benefit from having loops and conditionals in their bag of tricks -- but you don't have to begin there. I think a for loop is harder to understand than the method described above. Third, the use of interfaces is an important concept to understand early. It sounds as if the courses you have taken are teaching procedural programing using the Java programming language. Feel free to follow-up with me via this forum or email. Thanks, D.
http://archive.oreilly.com/cs/user/view/cs_msg/11590
CC-MAIN-2017-17
refinedweb
150
73.47
Member 4 Points Contributor 5041 Points Jul 28, 2012 09:56 AM|christiandev|LINK It sounds like your missing an ODBC driver? I havent used mySQL with ASP.NET, but you may need to install the driver or check in Admin Tools, Data Sources. Contributor 5913 Points Jul 28, 2012 12:03 PM|Horizon_Net|LINK Hi, I've worked with the MySQL .NET Connector in the past and it works very well. You can find the installer on the following site - Star 9918 Points Jul 29, 2012 11:26 PM|Allen Li - MSFT|LINK Hi, please import “MySql.Data.Client” namespace and make sure the connection string is correct. There is a similar issue on the following link, you can refer to the codes there: 3 replies Last post Jul 29, 2012 11:26 PM by Allen Li - MSFT
https://forums.asp.net/t/1828630.aspx?mysql+connection
CC-MAIN-2017-51
refinedweb
139
81.02
Breaking Changes Planned for .NET 4.5 The actual version number for .NET 4.5 assemblies is 4.0.30319. If that looks familiar it is because that is also the version number for .NET 4.0 assemblies. Much to the chagrin of developers, Microsoft will be updating core assemblies “in-place”. Aside from making much harder to determine which version of .NET the user actually has installed, it creates numerous pitfalls for developers targeting .NET 4.0 with machines that have 4.5 installed. While Visual Studio is quite capable of keeping developers from accidentally using .NET 3.0 or 3.5 assemblies in projects targeting .NET 2.0 the same cannot be said for individual methods. Within the IDE there is no warning that method in a common library such as mscorlib.dll or System.dll didn’t exist in the older version. Unless a static analysis tool is used to programmatically check for such mistakes, the error will probably not be detected until the code fails at runtime. A good example of this is EventWaitHandle.WaitOne, which has overloads that were added via a service pack that shipped with .NET 3.5. While this is still a theoretical problem at this time, there are also several breaking changes in .NET 4.5 that will need to be accounted for. Unhandled, unobserved exceptions In .NET 2.0 the semantics for unhandled exceptions changed. Prior to this version exceptions on a non-UI thread would simply be discarded and the associated thread terminated. As of .NET 2.0, unhandled exceptions would cause the entire application to crash. While this greatly reduced the possibility of data corruption and undetected errors, it also meant that all calls made on back-ground threads were handled a top-level exception handler. When the Task Parallel Library was introduced it followed this model. If a Task is faulted, its Exception property must be read before the Task is garbage collected. Failing to do so will cause the finalizer to terminate the application on the grounds that it had an unhandled exception. In .NET 4.5 the rule for Tasks will change to be like the pre-2.0 rule for threads. While a global event will be raised for logging purposes, faulted Tasks will no longer crash the application. System.Net.PeerToPeer.Collaboration The namespace System.Net.PeerToPeer.Collaboration is not available and no explanation was given why. These libraries were only available on non-server versions of Windows Vista and Windows 7 and are an extension to the Windows Peer-to-Peer Infrastructure. Being a rarely used subset of a rarely used technology, there is very little information on this namespace and it is unlikely to affect more than a handful of developers. WCF When the maxRequestLength or maxReceivedMessageSize quota is exceeded the HTTP status code has changed from 400 (Bad Request) to 413 (Request Entity Too Large). WPF The default value of TextBoxBase.UndoLimit has been changed from -1 (unlimited) to 100. No explanation was given, but presumably this offers a performance advantage over storing an indefinite number of prior versions of the text box contents. XML/XSLT Validation errors throw by XDocument will now include the line number and position if LoadOptions.SetLineInfo was passed to the Load method. Forward compatibility mode for the System.Xml.Xsl.XslCompiledTransform class has been fixed. Reflection by Rob Eisenberg Re: Reflection by Andrey Kuznetsov channel9.msdn.com/Events/BUILD/BUILD2011/TOOL-930C As I understand this sessions, that was true only for WinRT/Metro. Re: Reflection by Jonathan Allen Why by Mike Gale It seems to fly in the face of why such detailed version number are used. It just looks plain wrong, by definition. Anyone know who made this decision and what their thinking was? Have the decision makers written programs recently? Re: Reflection by Rob Eisenberg Re: Reflection by Andrey Kuznetsov And of course it wasn't change in WinRT, which is native library basically, and doesn't have System.Type in it. If that's not the case, it would by kind of strange 'in-place update' and 'high compatibility target' Re: Reflection by Stefan Dobrev First of all System.Type is not going away from the .NET Framework. The whole framework depends on this type so it will never go away. What they did in .NET 4.5 is to introduce a new type called TypeInfo that will contain only the reflection part the Type class (this is similar to the native IInspectable interface in WinRT). They also introduce handy extension methods on top of System.Type one of them being GetTypeInfo(). Now let's talk what is a .NET framework profile. It's is collection of APIs specifically designed for a target domain where the profile will be used. Each profile represents a set of reference assemblies (there is IL only for signatures and no actual implementation) that are used to drive the development experience. At runtime via some attribute magic the correct runtime types are resoled and everything is working as expected. This concept was introduced in Portable Library Tools and will be extended going forward as we see .NET used on many places - XBox, Silverlight, Phone, Metro, etc. You can watch this Channel9 video for more information. Having laid out the ground let's look at what is .NET Profile For Metro Applications aka .NET 4.5 Core Profile. It is a framework profile that contains subset of .NET API specifically designed for Metro apps. You can think of it as what will .NET Core Framework look like as if Microsoft is designing it right now having learned from the past. Regarding the Type class in this profile they have hidden its reflection members, because you should access them via TypeInfo class and you can get one of those via GetTypeInfo() extension method. -sdobrev
http://www.infoq.com/news/2011/09/Breaking-Changes-45
CC-MAIN-2014-10
refinedweb
977
58.89
Feature Requests item #1465406, was opened at 2006-04-05 23:30 Message generated for change (Comment added) made by ciw42 You can respond by visiting: Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Parser/Compiler Group: None Status: Open Resolution: None Priority: 5 Submitted By: Chris Wilson (ciw42) Assigned to: Nobody/Anonymous (nobody) Summary: Allowing the definition of constants Initial Comment: One of the current optimizations due in v2.5 includes constant folding of expressions, which as it stands serves as a way of somply getting rid of a redundant arithmetic operations and the like. In practice, it's rare a developer would leave an expression such as "2+3" sat in his/her code, but by allowing the declaration of constants within a script, it could make this new feature *much* more useful. As an example, in a recent script I had the following at the top, outside the main loop: SCREEN_WIDTH=640 SCREEN_HEIGHT=480 SCREEN_RATIO=SCREEN_WIDTH/SCREEN_HEIGHT As SCREEN_RATIO is used a number of times during my main loop, it makes sense to pre-calculate it to avoid the extra processing, but if the compiler were aware that SCREEN_WIDTH and SCREEN_HEIGHT were constants, it could optimise out the calculation and I could include the calculation in-place. I frequently make use of "constants" to make my code more readable, and wonder whether there is any performance penalty or lack of optimisation going on due to them in fact being regular variables? ---------------------------------------------------------------------- >Comment By: Chris Wilson (ciw42) Date: 2006-04-28 13:38 Message: Logged In: YES user_id=1018283 I see your point, and it's a good example of why using namespaces is so important, but in practice, with my proposal in place, the code you propose simply wouldn't compile. Assuming the compiler simply substituted the literal "3.1415" for "pi" as I've proposed, you'd end up with "3.1415 = 4", and a syntax error for trying to assign to a literal value. You'd not get as far as running the code, so in practice there'd be no issues with it running incorrectly. Being able to declare constants is important as it allows the compiler to make the sort of optimistations I mentioned previously. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-04-10 13:55 Message: Logged In: YES user_id=21627 The problem is that declaring the value assignment const doesn't help. Consider this: from math import pi def area(r): return r*r*pi pi = 4 print area(10) So even though math.pi might be declared as a constant, hard-coding its value into the function area would break this program - the value of the variable pi might change not change inside math, but it might change where it is imported. ---------------------------------------------------------------------- Comment By: Chris Wilson (ciw42) Date: 2006-04-06 21:59 Message: Logged In: YES user_id=1018283 I've re-opened this, as I don't feel it would be difficult to implement or require any fundamental changes to the parser or runtime. In fact, it would be very easy, and potentially very useful beyond the scope of my initial suggestion. Appologies to rhettinger if this seems rude, but I would ask that you give the following some consideration. The addition of a "const" or similar compiler directive would allow the compiler to simply do an on-the-fly substitution for the specified value/string etc. There would be no code analysis required, and adding this type of functionality would carry no real overheads or further complicate the compilation process. There would be no changes required within the runtime. Once substituted, the already incorporated compiler constant folding features would then come into play. I suppose, that what I'm suggesting is effectively a basic pre-compiler macro feature. This in itself may prove useful in many other situations. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2006-04-05 23:57 Message: Logged In: YES user_id=80475 Python is too dynamic for this kind of optimization to be done automatically. If those "constants" are defined at the module level, they can be changed by code outside the module. Even within the module, it would take a thorough analysis of the code to determine that nothing was trying to alter the value of the global variable. If the "constant" is defined inside a function, it is still a local variable subject to change by later lines in function. Your best bet is to use the bind_consts recipe at ASPN. It will automatically turn some global references into locals: 940 It may be possible to adapt the recipe to go an additional step and fold the globals "constants". ---------------------------------------------------------------------- You can respond by visiting:
https://mail.python.org/pipermail/python-bugs-list/2006-April/033203.html
CC-MAIN-2016-50
refinedweb
803
56.18
In this method, the declaration starts with typedef followed by key word struct. The data members of the structure are placed between a pair of curly braces after struct. The type name is placed after the closing right brace (}). This is explained in the following example: typedef struct { int datamemberl; int datamember2; int datamember3; } type_name ; Program provides an illustration of the typedef method where a structure with type-name vector is declared. Any instance of the structure may be declared as given below. type_name identifier Illustrates typedef method for declaring structures # include <stdio.h> typedef struct { int x; int y; int z; } vector; void main () { vector A= {5,8,7}; vector B = {6} ; vector C = {4,8}; clrscr(); printf("A.x = %d \tA.y = %d \tA.z = %d", A.x, A.y, A.z); printf("\nB.x = %d \tB.y = %d \tB.z = %d", B.x, B.y, B.z); printf("\nC.x = %d \tC.y = %d \tC.z = %d", C.x, C.y, C.z); printf ("\n"); }
https://ecomputernotes.com/what-is-c/structure-and-union/typedef-in-structure
CC-MAIN-2022-05
refinedweb
168
79.87
Low-memory streaming xml parser for node.js. Returns each node as an object. Uses node-expat Streaming parsers are hard to work with, but sometimes you need to parse a really big file. This module gives you the best of both worlds. You give it a specific node to look for, and it will return each of those nodes as an object, one at a time, without loading the whole document into memory at once. Uses node-expat for fast(est) xml processing. npm install xml-object-stream Let's say we have a file, hugePersonDirectory.xml, that looks something like this: <root> <people> <person>...</person> <person>...</person> <person>...</person> <person>...</person> <person>...</person> </people> <root> You want to do something with each person object, but you can't load them all into memory at once. xml = require 'xml-object.stream' fs = require 'fs' readStream = fs.createReadStream 'hugePersonDirectory.xml' parser = xml.parse readStream parser.each 'person', (person) -> # do something with the person! The parser emits some streaming events parser.on 'end', -> parser.on 'error', (err) -> parser.on 'close', -> You can pause and resume it if the xml parser gets too far ahead of your processing parser.pause() # then when you catch back up parser.resume() Since the parser takes any read stream, you can use it to parse urls without saving them to disk. Nodes are converted to objects. For the following xml: <person id="asdf123"> <firstName>Bob</firstName> <lastName>Wilson</lastName> <employee id="asdf123"/> <note author="Joe">Bob is a poor worker</note> <note author="Jim">Bob spends all his time parsing xml</note> </person> You can access attributes with the $ property person.$.id == "asdf123" You can access the last child of a given name by its name. Text is accessed with $text person.firstName.$text == "Bob" Node names are available under the $name property person.$name == "person" Every child node is put into the $children array notes = person.$children.filter (child) -> return (child.$name is "note") notes[0].$.author == "Joe" exports.parse = (nodeReadStream, [options]) -> # returns a Parser class Parser # calls cb each time it finds a node with that name each: (nodeName, cb) -> # bind to 'end', 'error', and 'close' on: (eventName, cb) -> # pause or resume the read stream to let you processor catch up pause: -> resume: -> { # removes all namespace information from the node names stripNamespaces: true }
https://www.npmjs.com/package/xml-object-stream
CC-MAIN-2015-22
refinedweb
389
68.36
Laravel 5 Middleware Tutorial example is today’s main topic. One of the primary requirement of any web application is HTTP request filtering, and we all need to implement that functionality very well. Laravel PHP Framework provides that functionality also and this concept is called as “Laravel Middleware.” If you want to know the really basic CRUD Functionality in Laravel 5.4, then check out my article Laravel 5.4 Crud Example From Scratch Middleware As its name suggests, we need to implement some functionality during the request hit on the particular URI. It is like layers; we need to put in between our request and response. Laravel 5 middleware provides us very flexible API to do that, and we can also implement our custom middleware with in no time. Just need to fire one command and Laravel all set up. We just need to write logic in the particular function and define that in our application. That is it, Folks. Okay so let’s deep dive into it with some of the examples. Step 1: Create a project Laravel 5 Middleware Tutorial. Type the following command in your CMD. composer create-project laravel/laravel LaravelMiddleware --prefer-dist Go to phpMyAdmin and create one database called laravel middleware. You can name it anything. Switch to your editor and edit .env file and put your database credentials in it. Step: 2 Laravel 5 Admin Middleware. Now, To checking that if the current user is administrator or not. So go to your users’ table migration file and add one more field called isAdmin and its data type is boolean. <?php // create_users_migrations public function up() { Schema::create('users', function (Blueprint $table) { $table->increments('id'); $table->string('name'); $table->string('email'); $table->string('password'); $table->boolean('isAdmin')->nullable(); $table->rememberToken(); $table->timestamps(); }); } Now run the following command. php artisan migrate Next step is to create Authentication functionality provided by Laravel. So type following in your terminal. Laravel 5.4 Authentication. php artisan make:auth So auth scaffold will generate successfully. Start the server by typing following command. php artisan serve Now create three users. So go to the following URL: Right now, we have not assigned any users to admin, but we can do it manually. Remember, in the real time web application; you need to provide some interface to give administrative rights. In here I am just showcasing you that, how you can deal with admin middleware after sign in the form. So assign any user’s isAdmin field to value one manually in the database. Step 3: Make one basic Laravel Middleware. Create one middleware by typing following Laravel Command. php artisan make:middleware Admin Navigate to the following directory. app >> Http >> Middleware >> Admin.php You can see there is some boilerplate provided by Laravel. There is mainly one function you have to deal with, and that is handle() // Middleware Admin.php /** * Handle an incoming request. * * @param \Illuminate\Http\Request $request * @param \Closure $next * @return mixed */ public function handle($request, Closure $next) { return $next($request); } We need to write logic in this function so that we can filter the request and if it satisfies then go to destination page otherwise back to login or whatever redirect page, you will provide. I am writing one logic in this function. /** Admin.php * Handle an incoming request. * * @param \Illuminate\Http\Request $request * @param \Closure $next * @return mixed */ public function handle($request, Closure $next) { if(auth()->user()->isAdmin == 1){ return $next($request); } return redirect('home')->with('error','You have not admin access'); } Now, I need to register this route in app >> Http >> Kernel.php You can register this route in two separate ways. - You can register as a global middleware. - You can register as a particular route called route middleware. We are registering as a route middleware so, go to the protected $routeMiddleware property. <?php //::class, ]; Here as you can see, I have added our custom middleware called admin. Now, if we want to assign any route to this middleware admin then this routes now protected and only accessible when authorized user is admin otherwise it will redirect to the home page. Step 4: Admin protected route middleware Create one route, which needs to be admin protected, and if the user is not an admin, then it will redirect to the home page otherwise he can access this page. <?php // web.php Route::get('admin/routes', 'HomeController@admin')->middleware('admin'); Now, we just need to put this link on the home page after the user has signed in. <!-- home.blade.php --> @extends('layouts.app') @section('content') <div class="container"> @if(\Session::has('error')) <div class="alert alert-danger"> {{\Session::get('error')}} </div> @endif <div class="row"> <div class="col-md-8 col-md-offset-2"> <div class="panel panel-default"> <div class="panel-heading">Dashboard</div> <div class="panel-body"> <a href="{{url('admin/routes')}}">Admin</a> </div> </div> </div> </div> </div> @endsection Now, all we need is to code the admin function resides in HomeController. <?php // HomeController.php namespace App\Http\Controllers; use Illuminate\Http\Request; class HomeController extends Controller { /** * Show the application dashboard. * * @return \Illuminate\Http\Response */ public function index() { return view('home'); } public function admin() { return view('admin'); } } Step 5: Make one blade file. Create one view called admin.blade.php in the root of the views folder. <!-- admin.blade.php --> <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>ADMIN PAGE</title> </head> <body> WELCOME TO ADMIN ROUTE </body> </html> Now, go to login page and logged in with the isAdmin field 1 Here is my login page of Laravel 5.4 After logging in this is how my screen looks like this. You are logged in as an admin so; you can see the following page. If you are logged in as a regular user then, you will redirect to the same page like in our case it’s a home page with the following error. Laravel 5 Group Middleware <?php // web.php Route::group(['middleware' => ['admin']], function () { Route::get('admin/routes', 'HomeController@admin'); }); We can use middleware groups to assign one middleware to multiple routes. It is very easy. If I do not want to show the Admin links to the normal user, then I have put a condition like if authenticate the user is an admin then and then we can show him that route otherwise not. But in this example, I have not put anything like that. I just want to show you that if you are not admin and still you will try to access this route, then you will redirect with a proper message. Multiple Middlewares in Single Route We can put two middlewares into one route at a time. Route::get('admin/routes', 'HomeController@admin')->middleware(['admin','auth']); Auth Middleware Provided By Laravel <?php // Kernel, ]; } auth and guest middlewares are related to Authentication of the User. VerifyCSRFToken is the another middleware, which is global middleware and protects us from Cross Site Request Forgery attack in every POST request. That TokenMismatchException is thrown by this middleware if POST request does not contain CSRF Token. That is it for Laravel Middleware Tutorial. Github: Steps to use Github code - Clone the repo in your local. - Go to the root of the project and run command “composer update.“ - Edit .env file and use your MySQL database credentials. - Add one admin field in the user’s table. - Run the command “php artisan migrate.“ - Now, we need to bootstrap Laravel server so run “php artisan serve.“ - If now go to this URL: and create three new users and change one of the user’s admin value to one - Now logged in like three different users and click the admin link in the home page, If the middleware is applied by you in that link’s route then it will work as I have described above. If you still have any doubt in this Laravel Middleware Tutorial Example then ask in a comment below.
https://appdividend.com/2017/07/18/laravel-5-middleware-tutorial/
CC-MAIN-2017-34
refinedweb
1,322
56.05
Mu v0.18.3 is now available by Maureen Elsberry - • - May 30, 2019 - • - mu• scala - | - 1 minute to read. Mu is a purely functional library for building RPC endpoint based services with support for RPC and HTTP/2. This latest release, v.0.18.3 is a minor release. Changes in 0.18.3 include: - Fixes schema-evolution links #595. - Bumps sbt-org-policies and sbt-jmh #596. - Adds seed sample #598. - Allows specifying the namespace and capitalized params #601. - Improves how the params are received in the annotation #602. - Improves rpc metrics naming #607. - Update docs #600. - Fixes metric prefix #609. - Fixes plugin release #610. - Uses updated Skeuomorph version for non-primitive protobuf fields #614. - Fixes compile errors with the generated source code #615. Please visit the official changelog for the complete list of changes. We would like to give special thanks to the following contributors for this version (in no particular order): For a full list of the contributors who have helped Mu get to where it is today, please check out: Mu Contributors. We’re always looking for additional help, if you’re interested in contributing please note that all levels are welcome and we’re happy to offer 1:1 mentoring through Mu’s Gitter channel. Resources: The active development of Mu is proudly sponsored by 47 Degrees, a Functional Programming consultancy with a focus on the Scala, Kotlin, and Swift Programming languages.
https://www.47deg.com/blog/mu-v-0-18-3-released/
CC-MAIN-2020-29
refinedweb
237
60.31
Hello i am abit stuck to how to access the data into an array public function Adddata():void{ Testing.addItem({ProcessName:ProcessNameInput.text , Cost:CostInput.text, ProcessTime:ProTimeIP.text, }); <s:ArrayCollection </s:ArrayCollection> <mx:DataGrid <mx:columns> <mx:DataGridColumn <mx:DataGridColumn <mx:DataGridColumn </mx:columns> </mx:DataGrid> The question is that i need to manipulate the data such as to do cost*processTime , what code do i have to do to access the elements inorder to work with the values achieved ... Thanks a million Thanks It depends on where you want to access it. If you want a 4th column to display Cost * ProcessTime, then you could create an ItemRenderer/GridItemRenderer which takes the data from the column in order to get the values i.e. In the GridItemRenderer override public function set data(value:Object):void { super.data = value; if(value != null) { itemRenderer_Label.text = Number(data["Cost"] * data[""] ).toString(); } } Or, you could just add it when you populate late the array. Thanks for your answer. I want to access the data to display it in another Label seperatly or just access it to insert it in another variable since i will be reusing it in a later stage Another point is that i have more then one data for cost and process time such as Name Cost Process time XXX 24 62s Name Cost Process time XXX 33 86s Name Cost Process time XXX 55 34s How will i be sure that i am using the right cost and process time that i want Then you should add it to the Array when you populate it and make sure you have a unique identifier for the row. If "XXX" is the product name, when you do the insert, you need to add a column for the unique ID. I handle my ArrayCollections by creating an Object class then inserting each Object into an ArrayCollection. I usually access my data from a webservice which provides the XML data to populate the ArrayCollection. Once the ArrayCollection is populated, then I can simply access the row and then the object column to get the data that I need. i.e. Add each item to the Array Collection myArrayCollection.addItem(ProductObjectClass.addItem(ProcessName,Cost, ProcessTime)); When you want to get the information then you can use myArrayCollection.getItemAt(i) to put it back into an ProjectObjectClass and find the correct value within that object; Where i would be the row. If you click on the DataGrid, then you would have the rowIndex value to get that rows data with the unique id. Then when you did an update, you could update based on uid. Or, you could have inline editing on the DataGrid. public class ProjectObjectClass extends Object { public static function addItem(thisProcessName:String, thisCost:String, thisProcessTime:String):Object{ var now:Date = new Date(); // create date time instance in order to use time stampe as unique id var poc:ProjectObjectClass = new ProjectObjectClass(); poc.uid = now.valueOf(); poc.processName = thisProcessName; poc.cost = new Number(thisCost); poc.processTime = new Number(thisProcessTime); poc.processTimeCost = poc.processTime * poc.cost; return poc } public function ProjectObjectClass() { super(); } [Bindable] public var uid:Number [Bindable] public var processName:String; [Bindable] public var cost:Number; [Bindable] public var processTime:Number; [Bindable] public var processTimeCost:Number; }
http://forums.adobe.com/thread/1155508
CC-MAIN-2014-15
refinedweb
543
51.78
tensorflow:: ops:: Conv2D #include <nn_ops.h> Computes a 2-D convolution given 4-D input and filter tensors. Summary Given an input tensor of shape [batch, in_height, in_width, in_channels] and a filter / kernel tensor of shape [filter_height, filter_width, in_channels, out_channels], this op performs the following: - Flattens the filter to a 2-D matrix with shape [filter_height * filter_width * in_channels, output_channels]. - Extracts image patches from the input tensor to form a virtual tensor of shape [batch, out_height, out_width, filter_height * filter_width * in_channels]. - For each patch, right-multiplies the filter matrix and the image patch vector. In detail, with the default NHWC format, output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k] Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertices strides, strides = [1, stride, stride, 1].
https://www.tensorflow.org/versions/r1.15/api_docs/cc/class/tensorflow/ops/conv2-d?authuser=1
CC-MAIN-2020-34
refinedweb
148
51.89
I use ArcGIS 10 and would like to run Matplotlib to create charts within a python script. Therefore I have installed - numpy-1.6.1rc1-win32-superpack-python2.6.exe - matplotlib-1.0.1.win32-py2.6.exe as noted here: If I test a script to check the installation like: import numpy arcpy.AddMessage("NUMPY Version " + numpy.__version__) import matplotlib arcpy.AddMessage("MATPLOTLIB Version " + matplotlib.__version__) I get the following error: Running script Script... NUMPY Version 1.6.1rc1 <type 'exceptions.AttributeError'>: 'module' object has no attribute '__version__' Failed to execute (Script). Also the command - import pylab as pl leads to the error <type 'exceptions.ImportError'>: No module named pylab Failed to execute (Script). So even matplotlib is installed properly in the directory "site-packages" (see screenshot of directory structure attached) it does not work in ArcGIS. By the way if I let it run in iPython (same installation) it works perfect. What still needs to be done to get matplotlib work in ArcGIS 10? Thanks Werner Bookmarks
http://forums.arcgis.com/threads/33772-Matplotlib-for-Python-scripting-in-ArcGIS-10-does-not-work
CC-MAIN-2014-10
refinedweb
169
62.44
Re: Write to file - From: Ben Bacarisse <ben.usenet@xxxxxxxxx> - Date: Wed, 13 Jun 2007 13:40:36 +0100 Bill <bill.warner@xxxxxxxxxxxx> writes: Thanks alot. I'm making progress. No errors now, but the output in the file is screwy. The code: Please trip your replies (and always trim signature lines). #include <stdio.h> #include <ctype.h> #include <string.h> #define DISPLAY 117 /* Length of display line */ #define PAGE_LENGTH 20 /* Lines per page */ int main(int argc, char *argv[]) { FILE *pfile; /* File pointer */ FILE *outfile; char *p; int c; unsigned char buffer[DISPLAY/4 - 1]; /* File input buffer */ int count = 0; /* Count of characters in buffer */ int lines = 0; /* Number of lines displayed */ int i = 0; /* Loop counter */ char string [100]; pfile = fopen ("my.txt" , "r"); while((c = fgetc(pfile)) != EOF) /* Continue until end of file */ { if(count < sizeof buffer) /* If the buffer is not full */ /* Read a character */ buffer[count++] = (unsigned char)fgetc(pfile); I intended you you use c here not read another character! else { /* Now display buffer contents as characters */ for(count = 0; count < sizeof buffer; count++) printf("%c", isprint(buffer[count]) ? buffer[count]:'.'); printf("\n"); /* End the line */ count = 0; /* Reset count */ Your input is long enough to trigger this condition. You reuse the buffer. if(!(++lines%PAGE_LENGTH)) /* End of page? */ if(getchar()=='E') /* Wait for Enter */ return 0; /* E pressed */ } } /* Display last line as characters */ for(i = 0; i < count; i++) { printf("%c",isprint(buffer[i]) ? buffer[i]:'.'); } printf("\n"); fclose(pfile); /* Close the file */ //re-open file and write out the buffer contents outfile = fopen ("myOutput.txt" , "w"); if(outfile==NULL) { return 0; } for(count = 0; count < sizeof buffer; count++) { if(!isprint(buffer[count])) { buffer[count] = '.'; } else { fputc(buffer[count],outfile); } You took a suggestion to modify the buffer and combined it with single character printing to produce something rather odd. It is not wrong, just odd. Why change buffer[count] when is it not a printing character if you don't then print it? //fputc(outfile, isprint(buffer[count]) ? buffer[count]:'.'); } /*for(count = 0; count < sizeof buffer; count++) { (int) fwrite(buffer, 1, isprint(buffer[count]), outfile); }*/ fclose(outfile); getchar(); return 0; } Input file contents: Fred Flinstone 123-456-7890 Barney Rubble 123-789-4561 Output file: B12y7p456r Not what I get, but still. I should point out that although I have only commented on details, the overall design of this program looks a bit wacky. If you intend to process text files, why do you have a buffer of size 117/4-1? Why are you happy to simply reuse the space when you read more than these 28 characters? If the input has 30 characters, your buffer will contain numbers 29 and 30 followed by character numbers 3, 4 and so on. It all looks very odd. You should maybe say what you are actually trying to do. -- Ben. . - Follow-Ups: - Re: Write to file - From: Bill - References: - Write to file - From: Bill - Re: Write to file - From: Ben Bacarisse - Re: Write to file - From: Bill - Prev by Date: Re: undefined references to functions when linking - Next by Date: Re: Write to file - Previous by thread: Re: Write to file - Next by thread: Re: Write to file - Index(es): Relevant Pages - Re: Cannot return values of char variable ... - buffer = ... Since you seem to be trying to return a char pointer ... int id = random; ... content is interpreted as a string. ... (comp.lang.c) - Re: K&R exercise 1-18 ... _without_ the string terminator and has MAXIMUM+1 array elements. ... char lineis in this context equivalent to char *line ... int main ... You lose one character by overwriting it with '\n'. ... (comp.lang.c) - Re: Write to file ... You can read the file one character at a time. ... compared to the current char. ... Once the (lgh - 1)th char ... void initnext(int *next, const char *id, int lgh) ... (comp.lang.c) - [KGDB PATCH][2/7] Serial updates, take 2 ... Also make put_packet look at the char it reads, ... * Empty the receive buffer first, then look at the interface hardware. ... * This is the receiver interrupt routine for the GDB stub. ... -extern volatile int kgdb_connected; ... (Linux-Kernel) - Re: Write to file ... fwrite, outfile); ... This will repeatedly try to print the first character in buffer as ... int main(int argc, char *argv) ... (comp.lang.c)
http://coding.derkeiler.com/Archive/C_CPP/comp.lang.c/2007-06/msg01860.html
crawl-001
refinedweb
718
75.3
IRC log of svg on 2012-06-14 Timestamps are in UTC. 20:58:14 [RRSAgent] RRSAgent has joined #svg 20:58:14 [RRSAgent] logging to 20:58:16 [trackbot] RRSAgent, make logs public 20:58:16 [Zakim] Zakim has joined #svg 20:58:18 [trackbot] Zakim, this will be GA_SVGWG 20:58:18 [Zakim] ok, trackbot, I see GA_SVGWG(SVG1)5:00PM already started 20:58:19 [trackbot] Meeting: SVG Working Group Teleconference 20:58:19 [trackbot] Date: 14 June 2012 20:58:50 [Zakim] +??P11 20:58:58 [ed] Agenda: 20:59:02 [Cyril] zakim, ??P11 is me 20:59:02 [Zakim] +Cyril; got it 20:59:52 [Zakim] -??P6 21:00:06 [Cyril] zakim, ??P6 was birtles 21:00:06 [Zakim] I don't understand '??P6 was birtles', Cyril 21:00:11 [Zakim] +??P6 21:00:18 [Cyril] zakim, ??P6 is birtles 21:00:18 [Zakim] +birtles; got it 21:00:49 [Tav] Tav has joined #svg 21:02:14 [Zakim] + +61.2.980.5.aaaa 21:02:36 [krit] krit has joined #svg 21:02:40 [nikos] Zakim, 61.2.980 is me 21:02:40 [Zakim] sorry, nikos, I do not recognize a party named '61.2.980' 21:02:49 [nikos] Zakim, +61.2.980 is me 21:02:49 [Zakim] +nikos; got it 21:03:15 [Zakim] + +1.415.832.aabb 21:03:46 [krit] Zakim, aabb is me 21:03:46 [Zakim] +krit; got it 21:04:07 [krit] nikos: no party? 21:04:24 [Zakim] + +1.612.789.aacc 21:04:46 [Tav] zakim, +1.612 is me 21:04:46 [Zakim] +Tav; got it 21:04:52 [nikos] heh. It's a pretty exclusive party 21:05:27 [birtles] ed, are you still having trouble joining? 21:07:31 [Tav] zakim, who is here? 21:07:32 [Zakim] On the phone I see Cyril, birtles, nikos, krit, Tav 21:10:03 [Zakim] +??P1 21:10:27 [ed] Zakim, ??P1 is me 21:10:27 [Zakim] +ed; got it 21:11:02 [ChrisL] ChrisL has joined #svg 21:11:12 [ed] scribeNick: ed 21:11:16 [ed] topic: Status of Shepherd integration / Test The Web Forward 21:11:28 [ed] 21:11:43 [ed] DS: heycam wanted to work on that, not sure if there's any progress on it 21:11:46 [Zakim] +ChrisL 21:12:15 [ed] ...the focus for svg is for css transforms, we can do it with the css testsuite at the moment 21:12:23 [ed] ... and then transfer the tests later 21:13:10 [ed] ED: is there any info on how to contribute on the TTWF site? 21:13:24 [ed] DS: there will be a presentation on how to do that 21:14:45 [ed] ED: just making sure the materials will be available online as the event takes place, to enable people participating even though they're not physically there 21:15:23 [ed] DS: I will publish them right after my presention 21:15:40 [ed] s/my/doug's/ 21:16:59 [ed] TB: peter linss is looking at integrating some of the changes I made for converting html/css test to svg 21:17:33 [ed] ... it's perhaps not generic enough, my code was for the submitted adobe tests 21:19:01 [ed] ED: so there was a question about whether a pass on a test (regardless of the format) should be a pass for that feature or not 21:19:32 [ed] TB: i think you have to have separate results, e.g for transforms in svg and for transforms in html/css 21:20:02 [ed] ... I don't think any of the browsers support the new svg things for transforms 21:20:24 [krit] s/new svg things for transforms/new transforms things for svg/ 21:21:05 [ed] ED: anything else needed from us in time for the event? 21:21:17 [ed] TB: would be good to have a couple of approved tests in our repo 21:21:23 [ed] DS: don't think that's necessary 21:21:37 [ed] ... we can use the same process as the csswg uses for review/approval 21:22:00 [ed] ... can=need 21:22:23 [ed] TB: we used to require tests to have a reviewer, are we giving up on that? 21:22:33 [ed] DS: no, css requires that too 21:22:59 [ed] ... you have a creator, a reviewer, and a third person to approve it 21:23:22 [ed] ... we could say reviewer and approver could be the same person if we want 21:23:39 [ed] TB: how does the test become approved? 21:23:56 [ed] DS: the shepherd tool moves the approved tests to another directory 21:24:27 [ed] TB: right now we have nothing in our approved directory 21:24:39 [ed] ... shouldn't we have a few in there? 21:24:49 [ed] DS: I think so yes 21:25:07 [ed] ... only a few people have committed tests so far 21:26:27 [ed] ... I'll look at reviewing and approving some of the tests 21:26:50 [ed] TB: i'll move some of my tests to the submitted folder 21:27:01 [ed] DS: right, only those will be picked up by shepherd 21:27:58 [ed] ACTION: Dirk to review (and approve) Tav's submitted svg2-tests 21:27:59 [trackbot] Created ACTION-3308 - Review (and approve) Tav's submitted svg2-tests [on Dirk Schulze - due 2012-06-21]. 21:28:56 [ed] ED: is this the template to use? 21:29:08 [ed] TB: yes 21:29:18 [ChrisL] 21:29:22 [ed] DS: all tests must be BSD-licensed, right? 21:29:32 [ed] CL: yes 21:29:54 [ChrisL] 21:30:34 [ed] TB: how does it work with linking to spec sections, since the spec is still pretty much in flux? 21:30:34 [ChrisL] template should be the same as 21:30:59 [ed] DS: you should link to the toplevel section 21:31:34 [ed] CL: so the file ED linked to isn't the latest template, should be revised 21:31:46 [ed] TB: I thought we agreed to allow link elements 21:32:16 [ed] DS: I think we have a resolution for that already 21:32:44 [jun] jun has joined #svg 21:33:18 [ed] CL: for link peter said it was easier for him to import if it was in the html namespace 21:33:35 [ed] ... this is all documented in the wikipage I linked to 21:33:54 [ed] ... this is based on discussions with peter last week 21:34:20 [ed] DS: so the wikipage represents what we want, ok 21:35:16 [ed] ACTION: tav to update the svg2 test template to be in sync with the agreed format in 21:35:16 [trackbot] Created ACTION-3309 - Update the svg2 test template to be in sync with the agreed format in [on Tavmjong Bah - due 2012-06-21]. 21:36:30 [ed] BB: did we come to a conclusion on the format for the reference images? 21:36:36 [ed] CL: we did discuss that 21:37:36 [ed] ... if possible we agreed that if it's easy to do solid green for pass for example then that's preferred, but there are cases where that can give false positives and cases where it's very difficult, like filters 21:38:19 [ed] BB: right... another suggestion is to use one standard text string for tests with text 21:38:44 [ed] TB: I have objections to having just green rects 21:39:17 [ed] BB: in gecko we use tens of thousands of tests, it's easy to quickly see pass if the pass images are always green 21:40:35 [ed] DS: if you see green it's passed, if you see red it's failed, basic principle 21:40:44 [Tav] 21:40:58 [ed] TB: here are the transforms tests i wrote a while ago 21:41:04 [ed] ... red indicates failure 21:41:14 [ed] ... and you can tell what's being tested 21:41:28 [ed] BB: how important is it to know what's being tested? 21:41:38 [ed] TB: in inkscape it was useful, to show someone 21:42:11 [krit] 21:42:22 [ed] BB: if inkscape had a testsuite with 10k tests, then it's still not easy 21:42:45 [ed] DS: every test is specified to test one specific thing, it's testing a part of the spec 21:43:00 [krit] 21:43:13 [ed] DS: for that test it's just the matrix value 21:43:26 [ed] ... it has a red rect behind that will show if there's something wrong 21:43:40 [ed] TB: but you can't tell what it's testing just by looking at it 21:43:52 [ed] DS: the filename tells you, and the test metadata 21:44:27 [ed] ... every test should describe what it's testing inside the metadata 21:44:36 [ed] ... and the pass criteria 21:45:26 [ed] BB: one difference is that you should be able to look at a test and see what it's testing, but that's not so important in an automated system 21:45:46 [krit] 21:45:49 [ed] DS: right, because then the automated engine doesn't care what it's testing, just compares the results 21:46:23 [ed] ... it's up to the author to provide the information 21:46:31 [ed] ... in the metadata, but it should be there 21:47:01 [ed] ... I think it's quite clear what it's testing 21:47:21 [ed] TB: what do people think? 21:47:52 [ed] CL: valuable to run automated tests, but when you get the list of failed tests it's useful if a human can quickly tell whats wrong 21:48:07 [ed] ... and then it's useful to know what those tests are testing exactly 21:48:38 [ed] ... this means we should have welldocumented testcases 21:48:49 [ed] DS: that's why we do review on them 21:49:20 [ed] TB: maybe we should have them separate? it's nice to have to some visual tests (like in SVG 1.1) 21:49:45 [ed] DS: we have reftests that can do that, two images have to look the same, otherwise it's a fail 21:50:19 [Zakim] -Tav 21:50:36 [ed] ... I agree that visual tests are good, and that if you see red it's failed 21:50:53 [Zakim] +Tav 21:51:11 [krit] 21:51:23 [ed] DS: here is a test, two rects... 21:52:14 [krit] 21:52:25 [ed] DS: this next test fails in all browsers 21:53:16 [ed] ... anyway, we can also discuss further on the mailinglist 21:53:22 [ChrisL] the 'show reference' link is broken btw 21:53:38 [ed] ... and it means others can follow the discussion 21:53:55 [ed] CL: i'd like to say another thing about the template 21:54:06 [ed] ... the template doesn't use a particular font 21:54:24 [ed] ... which means every implentation may use a different font 21:54:42 [ed] ... suggest we standardize on a particular WOFF font 21:55:03 [ed] DS: but we don't need that with reftests, because it ensures the font is the same on both reference and testcase 21:55:20 [ed] ... AHEM is often used in css tests 21:55:31 [ed] CL: ahem is not always useful though 21:55:55 [ed] DS: I think it's wrong to require a particular font 21:56:03 [ed] CL: why is consistency bad? 21:56:16 [ed] DS: but it doesn't matter, because we have reftest 21:56:57 [ed] CL: what is the problem? we've had unreadable text, and people assuming a particular default font 21:57:11 [ed] ... if you're testing svg it's not going to reflow text for example 21:57:29 [ed] DS: if you add more dependencies then that's an additional thing that can fail 21:57:54 [ed] CL: all browsers support this, explain why the pass criteria would make it fail? 21:58:16 [ed] DS: but you add unnecessary complexity 21:58:26 [ed] CL: so should we also take out the metadata? 21:58:39 [ed] DS: that's different 21:59:40 [Zakim] -ChrisL 22:00:25 [ed] ED: I think it's quite nice to have consistent fonts used throughout the testsuite 22:00:39 [Zakim] +ChrisL 22:01:02 [ed] ... looking back at the SVG testsuite, yes svgfonts/webfonts is additional complexity, but it's also nice to get consistent rendering across platforms 22:01:48 [ed] DS: webfonts are a requirement or just something we require in svg? 22:01:53 [Zakim] -Cyril 22:02:16 [krit] s/we require/we support/ 22:02:17 [ed] TB: inkscape doesn't support webfonts 22:03:09 [ed] DS: the problem is that if webfonts isn't a requirement for svg 22:03:21 [ed] CL: we have resolved that webfonts is a requirement for svg 22:03:47 [ChrisL] s/webfonts is/webfonts, and woff, are/ 22:04:38 [ed] ED: so, can we agree on having a consistent font used when possible? 22:04:59 [ed] DS: don't want to require webfonts for the tests 22:05:45 [ed] TB: dirks tests are simple, don't require labeling, but svg1.1 tests are more visual 22:05:51 [ed] ... with labels and so on 22:06:03 [ed] nikos: it's a risk if the layout obscures the text 22:06:06 [Zakim] -birtles 22:06:16 [ed] ... you don't necessaryly know the output you're going to get 22:06:27 [ed] CL: right, you might get unexpected results 22:06:43 [ed] TB: but the risk is pretty small, but I prefer the svg11 tests though 22:07:24 [ed] ... all it would take is to put in a style section in the template to use a webfont as the default font 22:07:28 [ChrisL] yes it would just take one @font-face rule plus a font family and size on the main group 22:07:39 [ed] TB: inkscape would ignore it 22:09:25 [ed] (discussion on inkscape and testing with references) 22:09:41 [ChrisL] so you would no longer need to make speccial inkscape test versions with all text elements removed 22:10:06 [ed] DS: i'm not strongly opposed to adding WOFF fonts, I just think that we should reduce the tests as much as possible 22:10:56 [ed] CL: that's why I pushed hard for pass criteria, because it doesn't matter if the WOFF is supported or not if the only thing that needs to be done is to render a rect for example 22:11:24 [ed] ... unless the pass criteria says you have to look exactly like a given font 22:11:29 [jet] jet has joined #svg 22:12:21 [ed] DS: for automated tests it's still additional complexity, requirements for passing the test 22:12:57 [ed] CL: but if we have flags, then we should just not add the flag "need webfont" if it's not necessary 22:13:22 [ed] DS: how do you style the elements? 22:13:32 [ed] CL: that's why it should be in the template 22:13:49 [ed] DS: but we should only have that if it's needed for passing the test 22:14:00 [ed] ... don't think we can agree on having this right now 22:14:53 [ed] CL: I don't accept your arguments 22:15:19 [ed] TB: don't care so much either way 22:15:19 [ChrisL] s/arguments/arguments that unflagged tests would fail/ 22:15:43 [ed] nikos: no strong opinion from me, but it should be in the template i think so that it's no risk to be missed 22:16:58 [ed] ED: i'd be fine with having two templates, one for tests that use text in the visual output, and one that doesn't (where the webfont isn't needed) 22:17:05 [ed] CL: that would be ok with me too 22:17:28 [ed] ... would that be fine with you DS? 22:17:42 [ed] DS: yes 22:18:30 [ed] TB: ok, so two templates, one for automated tests (without the webfont), one for visual tests (that have the webfont) 22:18:53 [ed] ... I'll make those templates, what font do we want to use? 22:19:03 [ed] CL: freesans would be good 22:19:45 [ed] DS: do we also want to have other fonts, serif, bold etc? 22:20:12 [ed] CL: probably, but not in the template maybe, but it's good to have a library of fonts that can be used 22:20:16 [ChrisL] <!-- your test suould turn this rect green--><rect x= y= width= height= /> 22:22:30 [ed] topic: strokebbox 22:22:43 [ed] DS: I'd like to add getting the stroked bbox to the spec, is that fine? 22:22:55 [ed] CL: yes, we agreed to do that, should be fine 22:23:37 [ed] DS: other kinds of bboxes too, like markers, filters etc? 22:23:48 [ed] CL: we tried to limit it to stroke I think 22:24:28 [ed] ... anything that affects strokes should say how it affects the strokebbox 22:24:40 [ed] DS: should markers be included? 22:24:50 [ed] CL: possibly 22:25:40 [ed] DS: ok, i'll try to do this next week (don't need an action) 22:25:45 [Zakim] -krit 22:25:48 [Zakim] -ChrisL 22:26:01 [Zakim] -nikos 22:26:02 [Zakim] -Tav 22:26:09 [Zakim] -ed 22:26:11 [Zakim] GA_SVGWG(SVG1)5:00PM has ended 22:26:11 [Zakim] Attendees were Cyril, birtles, +61.2.980.5.aaaa, nikos, +1.415.832.aabb, krit, +1.612.789.aacc, Tav, ed, ChrisL 22:27:55 [shepazu] shepazu has joined #svg 22:28:21 [krit] krit has joined #svg 22:28:29 [krit] krit has left #svg 22:32:55 [ChrisL] rrsagent, make minutes 22:32:55 [RRSAgent] I have made the request to generate ChrisL 22:56:39 [birtles] birtles has joined #svg 23:06:56 [jet] jet has joined #svg
http://www.w3.org/2012/06/14-svg-irc
CC-MAIN-2013-48
refinedweb
3,129
70.97
This page covers Tutorial v2. Elm 0.18. Navigation Next let's add buttons to navigate between views. Routing In src/Routing.elm add two new functions: playersPath : String playersPath = "#players" playerPath : PlayerId -> String playerPath id = "#players/" ++ id Players List The players' list needs to show a button for each player to trigger the ShowPlayer message. In src/Players/List.elm. First import href and Routing : import Html.Attributes exposing (class, href) ... import Routing exposing (playerPath) Add a new function for this button at the end: editBtn : Player -> Html.Html Msg editBtn player = let path = playerPath player.id in a [ class "btn regular" , href path ] [ i [ class "fa fa-pencil mr1" ] [], text "Edit" ] This button is a common a tag, which will change the browser url directly. As we are using hash routing we can just change the location hash and routing will work. And change playerRow to include this button: playerRow : Player -> Html Msg playerRow player = tr [] [ td [] [ text (toString player.id) ] , td [] [ text player.name ] , td [] [ text (toString player.level) ] , td [] [ editBtn player ] ] Player Edit Let's add the navigation button to the edit view. In /src/Players/Edit.elm: Add the imports: import Html.Attributes exposing (class, value, href) import Routing exposing (playersPath) Add a new function at the end for the list button: listBtn : Html Msg listBtn = a [ class "btn regular" , href playersPath ] [ i [ class "fa fa-chevron-left mr1" ] [], text "List" ] And add this button to the list, change the nav function to: nav : Player -> Html Msg nav model = div [ class "clearfix mb2 white bg-black p1" ] [ listBtn ]
https://www.elm-tutorial.org/en/07-routing/10-navigation.html
CC-MAIN-2019-04
refinedweb
263
64.61
Talk:Proposed features/trailblazed marked_trail* usage Usage of marked_trail* in Slovakia is now deprecated in favour of Relation:route. As I understand it, it was indeed originally used for marked trails, but as more tags were needed they were replaced with Relation:route. The remaining old tags are in the process of cleaning up, as per WikiProject_Slovakia/Hiking_routes. AlfonZ 12:39, 10 August 2010 (BST) As marked_trail has abandoned, I support this proposal. Following a current discussion on german mailing list it showed that Relation:route is sometimes to cumbersome just to flag if a path is marked (which is just yes or no). So having a simple yes|no tag is a good idea. -- Fichtennadel 07:11, 23 November 2010 (UTC) Licence problems In France, most of the hiking paths are marked by the national hiking federation. The problem for OSM is that the federation keeps the copyright on the route itself, on its name and ref, and on the symbol. So it seems impossible to add these routes in OSM. But maybe tagging the way with trailblazed=yes wouldn't be a copyright violation... Any advice? Damouns 08:15, 23 August 2010 (BST) - Wow, this looks restrictive indeed, they have the reserved trademark sign even on their painted marks. I don't think the two letters GR fall under copyright, they should be covered by trademark. In which case it is easy to omit them and just take the number. But IANAL. --Ipofanes 13:11, 26 August 2010 (BST) - Sorry for the confusion between copyright and trademark. This problem has been discussed here in French: we should wait before adding anything concerning these pathes (and their name, number, colors, anything). But maybe indicating that "there is a trailblaze here" without any precision could be OK... Damouns 09:20, 30 August 2010 (BST) - Since it's apparently a registered trademark, you should be able to query the registration office for which categories it applies to. Actually, the search seems to be here: INPI . Search with Déposant / titulaire. The ones relevant to this that I found easily were Numéro : 3283810, 1241077, 1294468, 1236674. They have reserved the trademark (the two letter symbol, white or yellow text on red background) to a whole lot of fields of endeavour; didn't bother to check which of them would apply to maps. No mention of numbers, no mention of routes. Maybe someone else will find them, if they exist. As the picture linked above showed (it's gone now), the ® was right after their registered trademark. - You'd have to confirm that the french trademark law has a clause similar to (some?) other countries (free translation): "the exclusive rights do not apply to a part of the registered symbol, if that part of the trademark would not be valid for registration in itself". That means, basically, that even if they did register all individual route signs, the numeric part would not be, as it's not "identifiable" and there's a whole lot of previous uses for short numbers; apparently a longer number might be valid for a trademark protection (for an example, see Cologne 4711). Just stick operator=Fédération française de randonnée and ref=nn on the relation. Alv 11:53, 30 August 2010 (BST) Trailblazed Specification The current trailblazed=yes proposition is not specific enough. On a single way there can be multiple trailblaze signs. I have seen them for hikers, mountainbiker, cyclers and cars. The trailblazed=* could be extended with optional namespace, such as hiking, bicycle or mtb, thus giving for instance trailblazed:hiking=yes. The same values as route=* can be used. This scheme allows me to create maps with trailblazed highways for a certain purpose, without needing defined routes. In such maps one can add the necessary guideposts by adding for instance hiking=yes as a guidepost (information=guidepost) tag. Also in this way it is possible to contribute to possible routes bit by bit, without having to implement the entire route. It also circumvents any trademark issues, by just recording what one sees in the field and not knowing anything about a route. Another option is to add an extra tag to a highway, such as hiking=yes, but this seems to generic. And it is furthermore not clear that it should apply to the trailblaze. --Aleene (talk) 16:10, 15 May 2015 (UTC) - I don't think marking the ways with hiking=yes is a good idea. Would easily be interpreted as an access tag. user:Tractor 21 July 2015 - Good point. hiking should refer to the trailblazed aspect and is not an access= tag. --aleene (talk) 18:59, 21 July 2015 (UTC) - I think your best solution would be to use namespaces, as you already suggest: trailblazed:hiking=yes. We already use piste:trailblazed=yes for (nordic) ski trails (piste:*=* is the namespace for things related to pistes/skiing/ winter activities). Tractor 23 July 2015 The major hiking routes in my area (Toronto, Canada) are blazed with paint or other marks on trees and utility poles. The two with which I am most familiar use white for the main trail and blue for side trails. Marked ski trails often use plastic markers in a variety of colours. I'd like to propose that instead of (Tag|trailblazed|yes) the tag take the form (Tag|blazed|colour). If it is a tag on a route relation or on a Path, do we really need to include "trail"? Also colour would allow users to differentiate between paths that cross or overlap each other. -- User:Greying_Geezer 03:00, 08 Dec 2016 (UTC) Cairns, poles Wikipedia names cairns as an alternative to blazes. As cairns are somehow transient and informal I would not take them as trailblazed=yes. Wikipedia names poles together with cairns in one section. But I regard poles as persistent and formal. Most ski tracks and a lot of hiking trails (e.g. E4 on Crete) are marked with poles. This would clearly apply for trailblazed=yes. Maybe this should be clarified on the main page. --GerdHH (talk) 05:27, 8 May 2016 (UTC)
https://wiki.openstreetmap.org/wiki/Talk:Proposed_features/trailblazed
CC-MAIN-2020-45
refinedweb
1,018
72.05
Interactive Panel: Ask Us Anything! - Date: to - Day 2 - Speakers: Andrei Alexandrescu, Hans Boehm, Chandler Carruth, Stephan T. Lavavej, Bjarne Stroustrup, Herb Sutter, Andrew Sutton - 64,622 views - 20 comments ![if gt IE 8]> <![endif]> Something went wrong getting user information from Channel 9 Something went wrong getting user information from MSDN Something went wrong getting the Visual Studio Achievements Right click “Save as…” To send questions in advance from those not attedning? You'll be able to watch live and submit questions via Twitter: #Ch9Live, #GoingNative C To analyzing C++ code is very hard work. So I think the Standard must consider these kind features for who providing automation tool. What you think? As you know, there are lots of automation tools in Java development env. But C++ didn't have anything perfectly supported. Look at CLang, it provides advanced tools integration. In summary, you can access the AST of your code for code completion and probably also refactoring or whatsoever... This has not much to do with the standard I guess. But you can see their website, it's not enoght to applying real environment. Implementing 80% is easy but 20% ... I think it can't be. @Ruffin:take a look at this: So what was the question "for anybody but Herb"??? @undefined: You can find it on Twitter. Search #GoingNative C @0:31:00: Wow, is there really a debate on what a module is? Just combine the outdated concept of header / source into one file -- that's a module. Now solve all the language problems that crop up. I agree it's a huge change, modules will have to exist side-by-side with the old system for backwards compatibility. "#include file" is similar to "from file import *", always a bad idea. C++ will need fine-grained symbol import/export control for macros, namespaces, templates, classes, functions and variables. Look at Python for inspiration: "from module import symbol as local_symbol" Of course "#include" does more than just import symbols, but the idea is to develop new language to divide its power until it's no longer needed; it's a sledgehammer of a directive and should be deprecated. @ryanb: unsurprisingly, it was the question about thoughts on C++/CX. It's a pity it didn't get answered, although I can imagine it would feel uncomfortable for the speakers to criticise C++/CX in this particular situation... I was the one who raised the issue of build/compile iteration times in the Q&A. I apologize if I came across as critical of the work the committee has done on C++11 -- I think all the new language features are important and useful, and I'm very glad for the work everyone has done (thank you especially for lambdas!) The point I was trying to (poorly) raise, was that I would like to see language features on the agenda that that improve the ability of programmers to iterate quickly on code. C++ suffers from high levels of inter-dependency between compilation units, and if this were not a language problem it would be solvable by compiler/linker developers. It is true that developers can improve build efficiency with techniques such as unity builds, pimpl firewalls, better forward declarations, and distributed parallel build systems, but all these are work that developers must do outside of their fundamental goals of implementing algorithms. I don't know whether this concern is best addressed with modules, partial classes, or something else, but I think it is an important one for the future, especially since the language allows us to create larger and more complex projects. I really enjoyed the talk... STL is so cool ,and Andrei House and Herb Wilson :P are awesome... too bad I hate twitter and that I wasnt there, I always wanted to ask a compiler programmer why move was necessary as a lang feature, aka why compilers cant optimize everything that programmers optimize manually with &&. One of the questions asked had to do with using raw pointers, particular void*. Chandler Carruth said it is easy to demonstrate the compiler will generate less/faster code by avoiding the use of raw pointers, and I would like to get some code sample that illustrates this point. Thanks in advance! @Kometes: "Wow, is there really a debate on what a module is?" Yes. Sort of. The standard's committee has been looking at modules for a long time. Like concepts, the earliest modules proposals fell due to trying to solve every problem. They included stuff with a standardized C++ ABI, DLL/SO interfaces, etc. All kinds of stuff that are interesting certainly, but really muddied the waters as to the purpose of the feature. The current module proposal is much more focused. But the stuff from earlier modules proposals still lingers in their minds. What is needed is for someone big and important to step up and make a presentation that says, "THIS IS WHAT MODULES ARE FOR!" Stroustrup is doing something similar with concepts, taking the whole thing to a minimalist, first-principles approach. That really needs to be the new push with modules as well. They exist for compilation reasons, nothing more, nothing less. ok, so when do we get a clang frontend for MSVC? no joke. -Charles and Microsoft, sincerely THANK-YOU , you brought all my heroes at one talk in one room this GoingNative is the best thing that happened to c++ in a long time. -Also open-source c++ code analyzer, best tech-news on this year. @KikiAlex: You're welcome! We're glad you liked it and we hope to do it again in the future C @piersh: When you build it... C Not sure if this is the right place to dive in, but here goes: The discussion of pass-by-value or pass-by-const-reference troubles me because of the possibility of multiple threads. It seems the desire to avoid the interlocked ref count operations assumes you don't need or want the protection they provide. If your function takes a shared_ptr (or any other type) by const reference, you only guarantee that particular function won't modify the value; in another thread, the very same shared_ptr could be reset(). If your function instead took the shared_ptr by value, the reference counting mechanism assures the shared_ptr points to a valid object for the lifetime of the function call. I don't see how it's safe to blanket suggest that pure observation (of shared_ptr in particular) should always use pass-by-const-reference. I suppose you have to treat the use of a shared_ptr-controlled object in another thread as shared ownership, but since threads are created from functions you'd better make sure to, say, wrap the function (that takes the shared_ptr by const reference) in a lambda that takes the shared_ptr by value. Remove this comment Remove this threadClose
https://channel9.msdn.com/Events/GoingNative/GoingNative-2012/Interactive-Panel-Ask-Us-Anything-?format=smooth
CC-MAIN-2016-22
refinedweb
1,152
61.87
GroupHeadnote and flynoteSearch Summary: Full judgment THE REPUBLIC OF UGANDA IN THE HIGH COURT OF UGANDA CENTRAL CIRCUIT AT NAKAWA MISCELLANEOUS APPLICATION NO 83 OF 2005 (ARISING FROM CS NO. 004/2005) RUKOOGE ENTERPRISES LTD…………………………………………PLAINTIFF VERSUS ENGINEER MUHWEZI T/A EMTEC CONSTRUCTION SERVICE…………………………………. DEFENDANT ASSOCIATED CONSTRUCTION AND ENGINEERING SERVICES LTD……………………………………. RESPONDENT BAGUMA CRESCENT RUSOKE …………………………………….:APPLICANT BEFORE HON. JUSTICE GIDEON TINYINONDI: RULING: This file was allocated to me on 14/02/2005. The Plaintiff obtained judgment under summary procedure in HCCS No. 004/05: RUKOOGE ENTERPRISES LTD VS. ENGINEER MUHWEZI t/a EMTEC CONSTRUCTION SERVICE. A warrant of attachment for the judgment debt of Shs. 21,500,000/= issued forthwith. Several properties including “The Defendant’s wheel loader 930 CAT” were to be attached and sold by public auction 14 days after notice of sale had been advertised if the Defendant had not satisfied the judgment debt by then. According to the letter dated 01/03/2005 by KOSH AUCTIONEERS & COURT BAILIFFS to the Deputy Registrar Nakawa High Court. The advert came out on 17/02/2005. Before the sale could take place court issued an interim order to last 30 days. At the expiration of the order the Bailiffs asked for renewal of the warrant. The first interim order of 28/02/2005 to last up to 7/04/2005 was extended to 15/04/2005. According to the Bailiffs’ letter of 18/04/2005 to the Deputy Registrar the wheel loader (and other properties) was sold, by private treaty on 15/04/2005. On 21/03/2005 Miscellaneous Application No. 65/05 RUKOOGE ENTERPRISES LTD VS. ENGINEER MUHWEZI t/a EMTEC CONTSTRUCTION SERVICE – DEFENDANT AND ASSOCIATED CONSTRUCTION AND ENGINEERING SERVICES LTD – OBJECTOR/APPLICANT was filed seeking an order that “Engineering plant caterpillar – 930 wheel loader Registration No. UXJ 039 be released from attachment”. On 10/05/2005 this application was dismissed for want of prosecution. On 28/04/2005 Miscellaneous Application No. 83/05: RUKOOGE ENTERPRISES LTD VS. ENGINEER MUHWEZI t/a EMTEC CONSTRUCTION SERVICE – (DEFENDANT), ASSOCIATED CONSTRUCTION AND ENGINEERING SERVICES LTD (RESPONDENT) AND BAGUMA CRESENT RUSOKE (APPLICANT) was filed. In this application Baguma Crescent Rusoke the Applicant sought orders, inter alia that: “a). The engineering plant caterpillar 930 wheel loader Registration No. UXJ 039 in possession of police on complaint by the Applicant be released.” The said Baguma Crescent Rusoke swore an affidavit in support, to state: “1. That I am an adult male Ugandan of sound mind. 2. That I am a Managing Director of BCR General Ltd. a company incorporated and carrying on construction business in Uganda. 3. That on the 17th day of February 2005 an advertisement was made in the New Vision news paper advertising for sale a wheel loader by virtue of a warrant of attachment in Civil Suit No. 004/2005 in the High Court of Uganda at Nakawa. For ease of reference find hereto attached a copy of the said newspaper marked Annexture “A”. 4. That after the 14 days I bided and bought the said machine and for ease of reference find hereto attached the Memorandum of sale / sale agreement issued to me by KUSH Court Auctioneers and Bailiffs marked Annexture “B1”. Also hereto attached is a Warrant of Attachment marked Annexture B2. 5. That on payment of the said Shs. 17,000,000/= (Seventeen Million only). I was issued a receipt by the said Auctioneers a copy of which is hereto attached and marked Annexture “C”. 6. That after the sale the said Auctioneers made a return to Court per the copy of the return hereto attached and marked C1. 7. That after paying for the said wheel loader I parked it at my workshop at Wankonko. 8. That on the 20th day of April 2005 armed men invaded my workshop and took the said wheel loader and we reported a police case at Jinja Road Police Station for malicious damages to property and theft of the said machine per SD 50/20/04/05. 9. That the said wheel loader is now parked at Jinja Police station and at the time of reporting the case no one had a claim on the said wheel loader per the police records. 10. That we have requested for the police authorities to have the said wheel loader released but the Office in Charge Jinja Road referred us to the Regional Police Commander Mr. Oyo Nyeko. 11. When I appeared before Nyeko he said that at the time of sale there was a pending objector application No. 65/2005 arising from HCCS No. 4/2005 and as such the sale was improper. 12. That at the time of sale I was not aware of the said objector application Miscellaneous Application 65/2005. The said application from the Regional Police Commander is hereto attached and marked Annexture “D”. 13. That the said obstruction is occasional by the judgment debtor who is also the Director in the Respondent company and without just cause. 14. That I have conducted a search in the company Registry and found that the objectors are the very judgment debtors trading under the different company names and even if the said objector was to be heard it cannot succeed. For case of reference find hereto attached a copy of the memorandum and Articles together with the certificate of incorporation marked Annexture “E”. 15. That I have been advised by my lawyers M/S Bitaguma & Co. Advocates and verily believe their advice to be true that I am a Bonafide purchaser of the said wheel loader and the police’s continued attachment of the said machine sold as a result of court order is contemptuous. 16. That no further court order was relied on to attach the said wheel loader from my workshop. 17. That I depone this affidavit in support of an application to have the said wheel loader released by police to me as the rightful / lawful owner of the said wheel loader and should the Respondent further obstruct my possession I pray that he be detained in Civil prison.” On behalf of the Respondent one Gorrette Kyamazima in an affidavit in reply, deponed as follows: - “1. THAT I am an adult female Ugandan of sound mind, a Director and shareholder in M/S Associated Construction & Engineering Services Ltd. a Limited Liability Company duly incorporated in Uganda and the Respondent in the suit herein. I am authorized by the company to represent it and it is on whose behalf that I swear this affidavit. 2. THAT I have read and understood the affidavit of Baguma Crescent Rusoke the Applicant, in support of the Application and in this affidavit I reply to it. 3. THAT M/S Associated Construction & Engineering Services Ltd (hereafter referred to as “the company”) bought the caterpillar, wheel loader Reg. No. UXJ 039 on 17/04/2002 from M/S MEC International Ltd of P. O. Box 21347 Kampala. The sale agreement was made by Odere & Nalyanya Advocates and solicitors as per Annexture “A” hereto. 4. THAT after purchase the company modernized its efficiency by having it repaired of any defect to put it to efficient construction services/operations which the company does in the whole of Uganda. 5. THAT the caterpillar started operating for the company and in March 2003 the company rented it out to M/S EMTEC Construction Services Ltd. a Limited Liability Company duly incorporated in Uganda and also doing construction work and M/S Associated Construction & Engineering Services Ltd and M/S Emtec Construction Services are closely related legal persons with (some) common Directors. The companies offices are at Bugolobi, Kampala. 6. THAT on 5/03/2005 I was surprised to learn from the Monitor Newspaper (photocopy hereto attached as Annexture “B” that, among others, the caterpillar had been attached and was subject of execution proceedings against Engineer Muhwezi, T/A Emtec Construction Service. 7. THAT I know Engineer Muhwezi and is a Director and shareholder in both Emtec Construction Services Ltd and Associated Construction & Engineering Services Ltd. M/S Emtec Construction Services Ltd. has no business name known as EMTEC CONSTRUCTION SERVICE and as the Director of M/S Associated Construction Engineering Services Ltd. and M/S Emtec Construction Services Ltd I am not aware of Engineer Muhwezi Trading as EMTEC CONSTRUCTION SERVICE. That I do not know that business. That Engineer Muhwezi was mostly in the management of the caterpillar on behalf of M/S Emtec Construction Services Ltd and he is also a Director and shareholder in both companies. That the hire/co-operation status in respect of the caterpillar between M/S Associated Construction & Engineering Services Ltd and M/S Emtec Construction Services Ltd was on an oral/mutual arrangement. 8. THAT after I learnt of the intended sale of the caterpillar in execution of a Decree in a case against ENGINEER MUHWEZI T/A EMTEC CONSTRUCTION SERVICE, I for and on behalf of the company filed an objector Application No. 65 of 2005 contending that the caterpillar did not belong to Eng. Muhwezi T/A Emtec Construction Service but to the company and was not supposed to be attached in a suit against Eng. Muhwezi as a person. 9. THAT I later learnt from Eng. Muhwezi that he was challenging the decree and the execution by sale of all the properties attached, including the caterpillar Eng. Muhwezi secured from this honourable court an Interim order to stay the sale of the properties until his application for setting aside the decree and execution was heard. (The order is annexed hereto as “C” dated 25/02/2005 to last for 30 days (which would end on or around 27/03/2005). 10. THAT I later learnt from Eng. Muhwezi that when the aforestated Interim order expired, it was renewed on 7/04/2005 to expire on 15/04/2005. Annexture “D” is attached. The caterpillar had not been sold. 11. THAT I further learnt from Eng. Muhwezi that the extended order was served on the Court Bailiff in charge of execution on 11/04/2005 as per Annexture “E” that is on court record. 12. THAT on 15/04/2005 at 4.00 p.m. I was shocked upon being informed by Eng. Muhwezi that the caterpillar and his personal Pajero had been sold and that he had not learnt of the names of the purchasers yet. 13. THAT later I, with the assistance of the company\par s lawyers M/S Tashobya, Byarugaba & Co Advocates perused the court record to find out what happened concerning civil suit No. 004 of 2005. 14. THAT on court record we found there a letter dated 1/03/2005 and filed in this honourable court on 3/04/2005 by Kosh Auctioneers & Court Bailiffs explaining to court that the Public Advertisement came out prayed to this honourable court to renew the warrant of execution. (The RED PEPPER advertisement is attached as Annexture “G”). The 1st warrant is hereto attached as “H”. It is not attached to the Applicant’s affidavit, and stated: “….To: Kuboba Henry, Court Bailiff WHEREAS Engineer Muhwezi (herein after referred to as the judgment debtor) was ordered by Decree of this court passed on the …..(date not mentioned) in the above suit to pay the defendant the sum of Ug. Shs. 21,500,000/= …..on of such amount has not been paid and remains owing on account of the said decree together with the sum of Shs. 21,500,000/= ….. as costs of the suit. These are to command you to attach the movable property of the said defendant as set forth in the schedule on the reverse hereof and which shall be appointed out to you by the said defendant and unless the said judgment debtor shall pay to you the said sum of Shs. …. (Not mentioned) only and further interest aforesaid and your fees for the attachment to sell by public auction. …. YOU ARE HEREBY COMMANDED to return this warrant on or before the 14th day of March (year not mentioned) certifying the manner in which has been executed or the reason why it has not been executed…. NOTIFICATION The terms of sale are set out in the High Court circular No…… (Number not mentioned) dated … (date not mentioned) issued to all Court Brokers. The public notice and advertisement …..by way of attachment of the Defendant’s property old motor-vehicle UAE 259C…..Wheel Loader 930 cat. D. ASSISTANT REGISTRAR.” 15. THAT my reading of Ann. “H” stated that “….the public notice and advertisement shall be in the form and manner set out in the above circular…” The properties attached, including the wheel loader did not appear on the reverse side of the order. The wheel loader was not on the reverse but on that page of the order. The wheel loader was not on the reverse but on that page of the order (attached warrant). The attachment warrant ordered the court bailiff to attach the Defendant’s properties showed to him by the DEFENDANT. And Engineer Muhwezi has informed me that the court bailiff never appeared before him or consulted him to find out which properties were his and should be attached and that the wheel loader (with others) were attached on the day when he was out of Kampala, to Mbarara and has further informed me that he could not show the bailiff the wheel loader for it was not his. 16. THAT from the court record and with the assistance of the company’s lawyers I perused a warrant of execution dated 4.04.2005 (here to attached as Annexture “I”) I studied it and discovered that the wheel loader 930 was mentioned therein and my reading of it, it stated and, I quote “….To. Kuloba Henry The Bailiff of the Court WHEREAS Engineer Muhwezi (herein after referred to as the judgment Debtor)” was ordered by decree of this court passed on the 14th day of February 2005 ….. to pay plaintiff sum of Shs. Ug. 21,500,000/=…….whereas the sum of Shs. 21,500,000/= of such amount has not been paid and remains owing of the said decree together with the sum of Shs. 21,500,000/= on account of interest on the decretal amount up to the ….(Not stated) 2005 making all the sum of Ug. Shs. ……. (Not stated) …..of account of costs of the suit making all the amount at the rate of six per centum per annum up to the date of payment, YOU ARE TO COMMANDED to attach the movable property of the said defendant…., to sell by public auction. The sale here by ordered shall not take place before 14 days from the date of which notice for sale has been advertised…. The public notice and advertisement shall be in the form and manner set out in the above circular….” 17. THAT with the assistance of the company’s lawyers aforementioned I got from the court record a return (letter) to this honourable court by Henry Kuloba of Kosh Auctioneers & Court Bailiffs dated 18.04.2005 and filed on 19/04/2005 (attached here to as Annexture “J” and attached to the Applicant’s affidavit as Annexture “C”) stating: “………….Warrant of attachment dated 4th day of April 2005 issued to us …. To sale by private treaty: Since the first warrant had expired without auctioning the property …. We managed to sale of by private treaty the wheel loader 930 CAT ….AT Uganda Shillings 17,0000,000…..The sale took place on the 15th day of April 2005 by private treaty…..” 18. THAT I wondered why the Court Bailiffs had reported to court that he had sold the caterpillar by private treaty when the warrant had mentioned of public auction and the company’s lawyers aforementioned informed me and I believe them to be true that the sale was illegal, invalid and not a sale at all in law in execution and that it could be challenged in this honourable court. 19. THAT later I learnt from Eng. Muhwezi that he, Trading as EMTEC CONSTRUCTION SERVICE instituted proceedings challenging the sale of the properties attached in respect of the suit against him as the Defendant and contending that the sale of the properties is illegal and null and void and that the caterpillar cannot be released to anybody for it is at Jinja Police Station, before the hearing and determination of his proceedings as per the order hereto annexed as “K”. I have also learnt from Engineer Muhwezi that his proceedings are still pending in this court. 20. THAT I have perused the Application in issue herein and the company does not know ROKOOGE ENTERPRISES LTD the plaintiff mentioned in the Application, and on my perusal of the court record to understand what took place, with the assistance of the company’s lawyers, I found out that there is a plaint vide civil suit No. 004 of 2005 filed on 3/01/2005 the parties therein being RUKOOGE ENTERPRISES LTD VERSUS ENGINEER MUHWEZI T/A EMTEC CONSTRUCTION SERVICE (Annexture “L” is attached). 21. THAT I further learnt from the court record that a Decree in the suit was acquired on 14/02/2005 against Engineer Muhwezi T/A & Emtec Construction Service Decree was for Shs. 21,500,000/= and costs of the suit but the company is not party to it, and I have realized that the plaintiff named in the suit is not the plaintiff named in this Application. (A photocopy of the Decree is annexed as “LI”). 22. THAT the affidavit of BAGUMA CRESCENT RUSOKE in support of the Chamber summons is completely false. 23. THAT in particular response to his paragraph 3 that an Advertisement was made in the New Vision Newspaper advertising for the sale of a wheel loader by virtue of a warrant of attachment in civil suit No. 004 of 2005 in the High Court of Uganda at Nakawa, it is not true. The advertisement has never been made in the New Vision. 24. THAT in response to paragraph 4 of his affidavit I state as follows: - (a). that it is false for him to answer that he bided for the buying of the machine after the advertisement of 17/02/2005 and bought it after 14 days of the advertisement and the warrant of attachment annexed on his affidavit as “B2” which he contends is the one whose advertisement he read in the New Vision on 17/02/2005 is dated “the 4th day of April, 2005”. His Annexture “B2” is quoted and is hereto annexed as Annexture “M”. (b). that it is false that he bided and bought the machine (the caterpillar) after 14 days (after the advertisement) as his Annexture “B1” (the agreement of sale) (hereto annexed as Annexture “N” is dated 15/04/2005) (and advertisement not in the New Vision). (c). that as per my averment in paragraph 14 the first warrant issued by this honourable court was dated 14/02/2005 advertised on 17/02/2005 in the Red Pepper (Annexture “G” & “H”) and as per my paragraph 14 and 1st warrant expired before execution and the court bailiff applied for renewal of the warrant to sale the property including the caterpillar on 1/03/2005 (Annexture “f”) and the same was issued on 4/04/2005 (Annexture “I”) to be advertised, which was not advertised, and the Applicant never bought the caterpillar after the due warrant of execution he allegedly read on 17/02/2005. 25. THAT I am informed by the company’s lawyers M/S Tashobya, Byarugaba & Co Advocates and I believe them to be true that the Applicant is fraudulently deceitful. The purchase was illegal and/or negligent. That it cannot stand against Engineer Muhwezi, the Defendant. 26. THAT the company is strongly interested in the caterpillar for it was its own property under the management of Engineer Muhwezi on behalf of Emtec Construction Services Ltd and Associated, Construction & Engineering Services Ltd to which I am a Director and shareholder. We are not party to the suit. 27. THAT the company was not involved in the allegations of the Applicant in his paragraphs 7 – 11 against and/or involving the police of Uganda and in response to his paragraph 12 I state that the company’s objector application (which has been overtaken by events) (Annexture “D” thereto) and Annexture “O” hereto is on court record in this honourable court. 28. THAT the company (M/S Associated Construction & Engineering Services Ltd.) does not understand why it was sued in this application, and is not accused of doing anything in conjunction with the REGIONAL POLICE COMMANDER against the Applicant. 29. THAT in paragraph 13 of his affidavit the Applicant contends that the obstruction is occasioned by the judgment debtor (Eng. Muhwezi T/A Emtec Construction Service) who is also Director in the Respondent company (the company) but the company has not obstructed him in anything and the company’s lawyers aforestated have informed me and I believe their information to be true that the company is a legal person, independent from Engineer Muhwezi T/A Emtec Construction Service and a Natural person, and that although he is a Director and shareholder in the company (Associated Construction & Engineering Services Ltd.) his personal actions cannot be visited on the company to be sued as in the Application herein. 30. THAT in answer to paragraph 14 of the Applicant’s affidavit I state that the company filed an objector application as aforestated and is not among the judgment debtors trading under the different company names. I state that the company does not trade under a different company name or any at all; and on the contention that if the objector was to be heard it cannot succeed, I state that the company is not a Judge and justice to it would be sought in this honourable court and that was why it had filed an objector application, and I agree that Annexture “E” to his affidavit hereto attached as Annexture “P” is true of the particulars of the company whom my lawyers aforestated and I believe them to be true, have informed me that it is an independent legal person from Engineer Muhwezi T/A Emtec Construction Service. 31. THAT in response to the Applicant’s affidavit in paragraph 15 the company’s lawyers aforenamed. M/S Tashobya, Byarugaba & Company Advocates have informed me and I believe their information to be true that if the Applicant’s lawyers M/S Bitaguma & Company Advocates advised him that he is a bonafide purchaser; the lawyers misadvised him. 32. THAT in further response to the applicant’s affidavit (paragraph 15) that his lawyers have advised him that the continued attachment by police of the said machine (wheel loader) as a result of a court order is contemptuous, I state that the company does not control the police and does not advise them and I am in respect of that averment advised by the company’s lawyers aforenamed and I believe their advise to be true that if the Applicant is aggrieved by the action (s) of the police, he should commence an action against the Attorney General of Uganda and not the company. 33. THAT I am informed by the company’s lawyers M/S Tashobya, Byarugaba & Company Advocates and I believe their information to be true that the application herein is vexatious, and misconceived and without any merit as 0.19rr 84 (1) (2) of the CPR concern obstruction to the judgment creditor to take possession of immovable property. Or to the person who has purchased such property it is in my knowledge that the caterpillar in issue is a movable property and the company is not an obstructer and the Applicant is not a judgment – creditor and the judgment – creditor in civil suit No. 004 of 2005 is RUKOOGE ENTERPRISES LTD, and I am further informed by the company’s lawyers that R.85 CPR still concerns immovable property but that if it concerns any property the company has not instigated anybody to take possession of the caterpillar and that it cannot be detained in a civil prison. The company had only filed an objector application to seek justice. 34. THAT again, I aver as per my paragraph 19 herein, (Annexture “K”) that there is a court order attaching the property and ordering it to remain at the police station until the proceedings by Engineer Muhwezi T/A Emtec Construction Service heard are and disposal of Engineer Muhwezi T/A Emtec construction service is not a party to this Application and as stated before his proceeding(s) are not yet disposal of. The company knows EMTEC CNSTRUCTION SERVICES LTD as per Annexture “Q”. 35. THAT what is herein stated is true to the best of my knowledge save paragraph 25, part of paragraph 15, 29, part of paragraph, 30, part of paragraph, 31 part of paragraph 32 and part of paragraph, 33 based on information and/or advice from the company’s lawyers M/S Tashobya, Byarugaba & Company Advocates.” At the hearing of this application Counsel for the Respondent raised three preliminary objections. He went to great length to argue these. Equally lengthy arguments in reply were made by Counsel for the Applicants. In my view both Counsel engaged in an exercise in futility. It took me a lot of time to peruse the whole court record. The main suit is titled: “HCCS. NO 004/05 RUKOOGE ENTERPRISES LTD VS ENGINEER MUHWEZI T/A EMTEC CONSTRUCTION SERVICES.” This suit arises out of a sale and purchase agreement, [“Annexture A” to the plaint refers]. For clarity a certified photocopy thereof is reproduced here below: THE REPUBLIC OF UGANDA IN THE MATTER OF THE CONTRACT ACT CAP 73 LAWS OF UGANDA AND IN THE MATTER OF THE TRAFFIC AND ROAD SAFETY ACT 1998 LAWS OF UGANDA AND IN THE MATTER OF A SAVE AGREEMENT FOR AN ENGINEERING PLANT SALE AGREEMENT This agreement is made this 4th day of August 2004 between RUKOOGE ENTERPRISES (U) LTD. of P. O. Box 456 Lira Municipality Lira and on behalf thereof the Managing Director Mr. Julius Mugisa hereto acts (Hereafter called “the Vendor” on first part). AND Engineer Muhwezi of EMTEC CONSTRUCTION SERVICES LTD, P. O. Box 34176 Kampala (Hereinafter called “The Purchaser” on the second part. WHEREFORE Both parties hereinafter do agree and witness as hereunder: - 1. That this is an agreement for a sale of an Engineering Plant registration number 153 UCJ. LEGAL OWNERSHIP 2. That the Vendor is the legal owner of the said Engineering Plant by the legal fact that the said Vendor bought it from the Registered Proprietor in the names of ASSIST (U) LTD. 12. That this agreement has been made when all parties are of sound mind and without undue influence whatsoever. IN WITNESS THERETO both parties hereto attest and witness these presents as hereunder: Signed by the said Julius Mugisa M. D. For Rukooge Enterprises Ltd. _____________________ VENDOR In the presence of _____________________ WITNESS Signed by the said Engineer Muhwezi T/A EMTEC CONSTRUCTION SERVICES LTD. _____________________ PURCHASER DRAWN & FILED BY: M/S KANYUNYUZI & CO. ADVOCATES, PLOT 35/37 NKRUMAH ROAD, P. O. BOX 1073, KAMPALA To cut the long story short, as they say, there followed: a). a summons in Summary Suit dated 03/01/05 bearing the same title of the plaint. b). affidavit of service by Evary Mujambere sworn on 10/02/2005. When one peruses paragraphs 10 to 14 of the same one notices that the affidavit is so defective that it amounts to no service at all. c). the decree dated 14/02/2005. d). the application for execution dated 10/02/2005. e). the warrant of attachment and sale of movable property directed to the court bailiff dated 14/02/2005. f). the notice of sale in the “Red Pepper” issue of 17/02/2005. g). the return by the court bailiff dated 1/03/2005. h). the notice of motion for an interim injunction dated 28/02/2005 in Miscellaneous Application 45/05. This application was not effectively served either on Counsel for the Plaintiff or the Court bailiff {See the defective affidavit of service by Rowland Mugisha}. To compound the mess on this file a whopping eight (8) applications were born of this main suit {HCCS. No. 004/05}. They are: 1). Miscellaneous Application No. 42/05 2). Miscellaneous Application No. 43/05 3). Miscellaneous Application No. 44/05 4). Miscellaneous Application No. 45/05 5). Miscellaneous Application No. 65/05 6). Miscellaneous Application No. 82/05 7). Miscellaneous Application No. 83/05 and 8). Miscellaneous Application No. 72/06 I do not know how many more applications have been filed since Miscellaneous Application No. 72/06 was filed!! I conclude as follows. Whereas “EMTEC CONSTRUCTION SERVICES LTD” was inserted in the said Sale and Purchase Agreement of 14/05/04 and endorsed upon signature thus making the company the purchaser or one of the purchasers of the engineering plant registration no. 153 UCJ this particular suit ought to have had the company as a party. Short of that there is not, in law, a Defendant in the main “Suit”. I hereby strike out the plaint. Thereby all the steps and applications under it are hereby declared nullities. Each party in the “Suit” and applications shall bear their own costs because none of them acted with diligence. Sgd: Gideon Tinyinondi JUDGE 6/06/2006 06/06/2006: Mr. Byarugaba for Respondent No appearance for Applicant Ms. Kauma Court Clerk. Mr Byarugaba: I spoke to Mr. Bitaguma Counsel for Applicant. He was traveling to Mukono. He told me to receive the ruling and later communicate to him. Sgd: Gideon Tinyinondi JUDGE 06/06/2006. COURT: Ruling delivered in open court at 9.35 a.m. Sgd: Gideon Tinyinondi JUDGE 06/06/2006. Download high-court-2006-20.rtf
https://ulii.org/ug/judgment/high-court/2006/20
CC-MAIN-2018-51
refinedweb
4,990
60.24
115 members 321 members 435 members 155 members 38 members Posted on March 19, 2015 at 4:00am 18 Comments 14 Likes This is really great accomplishment:-).…Continue Posted on December 5, 2013 at 3:00pm 15 Comments 2 Likes HONDA is introducing "Smartphone Case N" that has small six air bag to protect Smartphone in the case. Sorry, the video is Japanese. Followings are some essence; The engineer is start development this case by his loss of his own smartphone, drop and damage. He start drop test and found the critical height of damaging phone…Continue Posted on June 25, 2013 at 5:40am 36 Comments 3 Likes Posted on January 17, 2013 at 2:55pm 16 Comments 1 Like Konbanwa! What do you fly and is your field close to Tokyo? I'm located in Meguro and mainly into UAV aircraft, although just started to fly a KKmulticopter. rgds William I'm living in Atsugi. It looks like lots of space to fly air craft in Atsugi. However, there are very few places to flay. This is my main frustrated issue. You can visit my web site. It is some sort of language translation web site. Jiro, thanks for the suggestions on demonstrating the heli in Tokyo. I might give that a try at the Tokyo Institute of Tech. I guess the make fair is at the ookayama campus. I'll ping you when I'm getting close to being ready.. Hi Jiro, I have an issue regarding the use of xbee with pixhawk, I'm using xbee as follows, but I´m still having problems. I write the next program to try send some data but I don't get any positive results. #include <AP_Common.h> #include <AP_Math.h> #include <AP_Notify.h> #include <AP_Param.h> #include <AP_Progmem.h> #include <AP_InertialSensor_MPU6000.h> #include <AP_InertialSensor.h> #include <AP_ADC_AnalogSource.h> #include <AP_ADC.h> #include <GCS_MAVLink.h> // Rediculous dependency to AP_InertialSensor_MPU6000 #include <DataFlash.h> // Rediculous dependency to AP_InertialSensor_MPU6000 #include <AP_GPS.h> #include <AP_HAL.h> #include <AP_HAL_AVR.h> #include <AP_HAL_AVR_SITL.h> #include <AP_HAL_PX4.h> #include <AP_HAL_Empty.h> const AP_HAL::HAL& hal = AP_HAL_BOARD_DRIVER; AP_HAL::AnalogSource* ch; AP_InertialSensor_MPU6000 _INERT; // MPU6050 accel/gyro chip void setup (void) { hal.uartA->begin(115200); // USB hal.uartC->begin(57600); // RADIO _INERT.init(AP_InertialSensor::COLD_START, AP_InertialSensor::RATE_100HZ); } void loop (void) { hal.uartA->write("start \n"); hal.uartC->write("m"); hal.scheduler->delay(20); } AP_HAL_MAIN(); Do you have an idea of what is happening? Thank you in advance. Regards. Hi Gary Do you have good communication through Mission Planner by your setup? Please separate an issue for hardware or your software. regards,
http://diydrones.com/profile/JiroHattori?xg_source=activity
CC-MAIN-2015-27
refinedweb
432
62.75
How do I handle #include files or other nested input streams? Created Sep 3, 1999 There are a number of ways to handle include files. - At the parser level. Detecting the directive in the grammar. I then extract the filename and instantiate a new lexer and a new parser to parse that file. Finally, I call the getAST() method to return the syntax tree generated from the new parser and stitch that into the current syntax tree. includeFile : "#include" LESS_THAN s1:STRING_LITERAL GREATER_THAN { myLexer lexer = new myLexer(new FileInputStream(s1.getText())); myParser parser = new myParser(lexer); parser.codestart(); } ; - At the char input stream level. Another approach is to filter the input character stream, maintaining a stack of streams and changing the input state object for the lexer. When your lexer sees a #include or whatever, it pushes the current input state, sets the lexer's input stream to be the include file, set the token type to be Token.SKIP, and let the rule return (the one that matched the #include). - In between the parser and lexer. Another way is to use TokenStream objects. See the includeFile example. The TokenStreamSelector object: /** A token stream MUX (multiplexor) knows about n token streams * and can multiplex them onto the same channel for use by token * stream consumer like a parser. This is a way to have multiple * lexers break up the same input stream for a single parser. */ lets you switch between multiple input streams. When you see an #include, create a new lexer just like you've been doing (no parser), and then notify the TokenStreamSelector (push state and point at new lexer). At the close of the included stream, tell the selector to pop it's state. The parser has no idea that all of this is going on. It attaches to the selector not the lexer :) The parser sees one stream of tokens. You cannot do multiple includes in the lexer itself because the parser pulls tokens out of a lexer, a single token at a time. How could a lexer rule return a stream of tokens from a sublexer? It can't. You need to do it in the parser or in between the two as I'll explain in a second. You can try having the parser make a new lexer/parser that would go grab the files. This should work unless you have lots of class variables that should be instance variables in the parser or lexer. However, this does not let you do includes that can appear anywhere (you'd have to have a test for #include everywhere in your grammar...yuck). So, the real answer, if you don't like handling the next char stream thing yourself (I understand that concept ;)), is to use the new token stream capabilities. What you want is to create a new lexer for each included file and one for the original and then have a TokenStreamSelector (a multiplexor) handle flipping between the lexers in a stack fashion. The beauty of this is that the parser only sees a single stream of tokens and is none the wiser. You create the first lexer and parser connected via the MUX/selector like this: // open a simple stream to the input DataInputStream input = new DataInputStream(System.in); // attach java lexer to the input stream, mainLexer = new PLexer(input); // notify selector about starting lexer; name for convenience selector.addInputStream(mainLexer, "main"); selector.select("main"); // start with main P lexer // Create parser attached to selector parser = new PParser(selector); // Parse the input language: P parser.setFilename(" which looks like: Parser - selector - mainLexer - sublexer for first include - subsublexer for nested include [normally, you only have "Parser - Lexer" for most problems] When the mainLexer sees an #include, it makes a sublexer, pushes it onto the selector's stack and then does an "abort current token and try again", which is "selector.retry()". This call throws an exception that blows out of the current lexer and forces the selector to get another token, which it does from the newly-pushed sublexer! Cool, eh? All you've done is tell the selector to start pulling tokens from the sublexer. :) The complete code is in examples/includeFile of 2.7.0 release.
http://www.jguru.com/faq/view.jsp?EID=101
CC-MAIN-2013-20
refinedweb
704
62.48
Tweens¶ Introduction¶ Tweens are a light-weight framework component that sits between the web server and the app. It’s very similar to a WSGI middleware, except that a tween has access to the Morepath API and is therefore less low-level. Tweens can be used to implement transaction handling, logging, error handling and the like. signature of a handler¶ Morepath has an internal publish function that takes a single morepath.Request argument, and returns a morepath.Response as a result: def publish(request): ... return response Tweens have the same signature. We call such functions handlers. Under and over¶ Given a handler, we can create a factory that creates a tween that wraps around it: def make_tween(app, handler): def my_tween(request): print "Enter" response = handler(request) print "Exit" return response return my_tween We say that my_tween is over the handler argument, and conversely that handler is under my_tween. The application constructs a chain of tween over tween, ultimately reaching the request handler. Requests arrive in the outermost tween and descend down the chain into the underlying tweens, and finally into the Morepath publish handler itself. What can a tween do?¶ A tween can: - amend or replace the request before it goes in to the handler under it. - amend or replace the response before it goes back out to the handler over it. - inspect the request and completely take over response generation for some requests. - catch and handle exceptions raised by the handler under it. - do things before and after the request is handled: this can be logging, or commit or abort a database transaction. Creating a tween factory¶ To have a tween, we need to add a tween factory to the app. The tween factory is a function that given a handler constructs a tween. You can register a tween factory using the App.tween_factory() directive: @App.tween_factory() def make_tween(app, handler): def my_tween(request): print "Enter" response = handler(request) print "Exit" return response return my_tween The tween chain is now: my_tween -> publish It can be useful to control the order of the tween chain. You can do this by passing under or over to tween_factory: @App.tween_factory(over=make_tween) def make_another_tween(app, handler): def another_tween(request): print "Another" return handler(request) return another_tween The tween chain is now: another_tween -> my_tween -> publish If instead you used under: @App.tween_factory(under=make_tween) def make_another_tween(app, handler): def another_tween(request): print "Another" return handler(request) return another_tween Then the tween chain is: my_tween -> another_tween -> publish Tweens and settings¶ A tween factory may need access to some application settings in order to construct its tweens. A logging tween for instance needs access to a setting that indicates the path of the logfile. The tween factory gets two arguments: the app and the handler. You can then access the app’s settings using app.registry.settings. See also the Settings section. Tweens and apps¶ You can register different tween factories in different Morepath apps. A tween factory only has an effect when the app under which it is registered is being run directly as a WSGI app. A tween factory has no effect if its app is mounted under another app. Only the tweens of the outer app are in effect at that point, and they are also in effect for any apps mounted into it. This means that if you install a logging tween in an app, and you run this app with a WSGI server, the logging takes place for that app and any other app that may be mounted into it, directly or indirectly. more.transaction¶ If you need to integrate SQLAlchemy or the ZODB into Morepath, Morepath offers a special app you can extend that includes a transaction tween that interfaces with the transaction package. The morepath_sqlalchemy demo project gives an example of what that looks like with SQLAlchemy.
https://morepath.readthedocs.io/en/latest/tweens.html
CC-MAIN-2019-18
refinedweb
640
62.78
Operating systems, development tools, and professional services for connected embedded systems for connected embedded systems seteuid() Set the effective user ID Synopsis: #include <unistd.h> int seteuid( uid_t uid ); is the superuser, the seteuid() function sets the effective user ID to uid. - If the process isn't the superuser, and uid is equal to the real user ID or saved set-user ID, seteuid() sets the effective user ID to uid. The real and saved user IDs aren't changed. The "superuser" is defined as any process with an effective user ID of 0, or an effective user ID of root. Returns: Errors: - EINVAL - The value of uid is out of range. - EPERM - The process isn't the superuser,: See also: errno, geteuid(), setegid(), setuid(), setgid()
http://www.qnx.com/developers/docs/6.4.0/neutrino/lib_ref/s/seteuid.html
crawl-003
refinedweb
126
53.92
May 21, 2012 02:19 PM|greatbear|LINK There are several posts on the Internet that suggest implementing a custom JSON serializer to return JSON from an IEnumerable<T> action in MVC4. I am 100% sure that I got a very basic ApiController working that was returning JSON *without* any custom serialization. The type in IEnumberable was a ADO.NET Entity Framework class generated from a SQL Server table. BUT! I did something, and now I am getting this error: The type 'sometype' cannot be serialized to JSON because its IsReference setting is 'True'. The JSON format does not support references because there is no standardized format for representing references. To enable serialization, disable the IsReference setting on the type or an appropriate parent class of the type. I had it working before, beyond doubt. All I remember doing is renaming the out of the box Values controller .cs file and refactoring all references to that name. Is there any way I can get back to returning JSON without altering IsReference property or doing custom JSON formatting? Contributor 5162 Points May 23, 2012 01:50 PM|krokonoster|LINK Contributor 5162 Points May 23, 2012 06:30 PM|krokonoster|LINK greatbearhat assmebly/namespace does the Json method require? JsonResult? It's in System.Web.Mvc May 27, 2012 07:47 AM|imran_ku07|LINK In the coming RC version(and current source), JSON.NET is the default serializer. I recommend make your application with nightly build dll's. May 29, 2012 03:06 PM|greatbear|LINK imran_ku07 In the coming RC version(and current source), JSON.NET is the default serializer. I recommend make your application with nightly build dll's. imran, thanks for your post. Is there any way to test if my solution is using the dll that using JSON.NET as default serializer? May 29, 2012 06:45 PM|imran_ku07|LINK greatbear. Is there any way to test if my solution is using the dll that using JSON.NET as default serializer? There may be various way. One simple way is to check whether your web api project have this class avilable System.Net.Http.Formatting.IKeyValueModel System.Net.Http.Formatting.IKeyValueModel class has been removed from current source. May 30, 2012 03:01 AM|imran_ku07|LINK greatbear Yes, I do have that class, with Keys as a property of IEnumerable<string> type. So how do I get the current source? May 30, 2012 09:46 PM|greatbear|LINK thanks again imran. i'm getting an error when i try to install the System.Web.Mvc.dll in GAC like this: C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin>gacutil -i D:\Projects\L ITWebAPI\packages\AspNetMvc.4.0.20126.16343\lib\net40\System.Web.Mvc.dll Failure adding assembly to the cache: This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded. May 31, 2012 02:55 AM|imran_ku07|LINK See the instruction carefully, This will only work for Visual Studio 2010. May 31, 2012 03:02 PM|greatbear|LINK That's what I'm using - VS 2010. This seems like last peice of the puzzle. Please help me!! This is the about dialog in VS. : Microsoft Visual Studio 2010 Version 10.0.40219.1 SP1Rel Microsoft .NET Framework Version 4.5.50131 SP1Rel Installed Version: Premium Microsoft Visual C# 2010 01021-532-2002467-70359 Microsoft Visual C# 2010 Microsoft Visual F# 2010 01021-532-2002467-70359 Microsoft Visual F# 2010 Microsoft Visual Studio 2010 Code Analysis Spell Checker 01021-532-2002467-70359 Microsoft Visual Studio 2010 2010 Team Explorer 01021-532-2002467-70359 Microsoft Visual Studio 2010 Team Explorer Microsoft Visual Web Developer 2010 01021-532-2002467-70359 Microsoft Visual Web Developer 2010 Microsoft Windows Phone Developer Tools - ENU 01021-532-2002467-70359 Microsoft Windows Phone Developer Tools - ENU Crystal Reports Templates for Microsoft Visual Studio 2010 Crystal Reports Templates for Microsoft Visual Studio 2010 Hotfix for Microsoft Visual Studio 2010 Premium - ENU (KB2581019) KB2581019 This hotfix is for Microsoft Visual Studio 2010 Premium - ENU. If you later install a more recent service pack, this hotfix will be uninstalled automatically. For more information, visit. Hotfix for Microsoft Visual Studio 2010 Premium - ENU (KB2591016) KB2591016 This hotfix is for Microsoft Visual Studio 2010 Premium - ENU. If you later install a more recent service pack, this hotfix will be uninstalled automatically. For more information, visit. Microsoft Visual Studio 2010 Premium - ENU Service Pack 1 (KB983509) KB983509 This service pack is for Microsoft Visual Studio 2010 Premium -.20823.0 NuGet Package Manager 1.6.30117.9648 NuGet Package Manager in Visual Studio. For more information about NuGet, visit. Oracle Developer Tools for Visual Studio 11.2.0.2.30 Oracle Developer Tools for Visual Studio Copyright (c) 2005, 2011 Telerik MVC VSExtensions 2011.02.712.0 Telerik Extensions for ASP.NET MVC VSExtensions Package May 31, 2012 05:21 PM|greatbear|LINK Now if I recompile, I get this error, which I was not getting before: Error 18 The type 'System.Web.Http.RouteParameter' exists in both 'd:\Projects\LITWebAPI\packages\Microsoft.AspNet.WebApi.Core.4.0.20530.0\lib\net40\System.Web.Http.dll' and 'd:\Projects\LITWebAPI\packages\System.Web.Http.Common.4.0.20126.16343\lib\net40\System.Web.Http.Common.dll' d:\Projects\LITWebAPI\LITWebAPI\Global.asax.cs 34 30 LITWebAPI Is it OK to use UrlParameter instead? routes.MapHttpRoute( name: "ScheduleApiCourse", routeTemplate: "api/{controller}/{subject}/{number}/{section}/{term}", defaults: new { controller = "schedule", subject = "engl", number = RouteParameter.Optional, section = RouteParameter.Optional, term = RouteParameter.Optional } ); May 31, 2012 10:08 PM|greatbear|LINK Ok, I am using only UrlParameter instead of RouteParameter. But I'm hitting a runtime exception: Could not load type 'System.Net.Http.HttpMessageInvoker' from assembly 'System.Net.Http, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. Jun 04, 2012 12:24 PM|imran_ku07|LINK greatbearCould not load type 'System.Net.Http.HttpMessageInvoker' from assembly 'System.Net.Http, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. It is better to try VS 2012 RC now. See this Jun 06, 2012 03:49 PM|greatbear|LINK Thanks imran. I ended up uninstalling each and every SDK, framework, API etc. that had anything to do with Visual Studio, ASP.NET, MVC, C++, SQL Server and anything development. Restarted my machine. Installed VS 2012 RC. Now I'm getting no errors and expected JSON result without any custom serialization. Thanks! Jun 07, 2012 04:32 PM|mkamoski2|LINK All -- This is a follow-up regarding "Could not load type 'System.Net.Http.HttpMessageInvoker'". Well, I found the answer here. The error message that I finally found was... Could not load type 'System.Net.Http.HttpMessageInvoker' from assembly 'System.Net.Http, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' ...and I only found it when I accessed my Rest service directly from a browser. Well, according the post linked above... " marcind Developer Apr 9 at 6:39 PM. " ...so that means, AFAICT, that... ...the 2012-Jun-01 release of MVC (Web API, etc) is NOT compatible with .NET 4.5 Beta... ...once I removed .NET 4.5 Beta, things started working again. HTH. Thanks. -- Mark Kamoski 18 replies Last post Jun 07, 2012 04:32 PM by mkamoski2
http://forums.asp.net/t/1805918.aspx?MVC4+IEnumerable+JSON
CC-MAIN-2014-10
refinedweb
1,212
52.15
This C# Program Searches an Element in an Array. Here an array is declared and an element is searched and if found and displayed. Here is source code of the C# Program to Search an Element in an Array. The C# program is successfully compiled and executed with Microsoft Visual Studio. The program output is also shown below. /* * C# Program to Search an Element in an Array */ using System; using System.IO; class Program { static void Main() { string[] array1 = { "cat", "dogs", "donkey", "camel" }; string v1 = Array.Find(array1, element => element.StartsWith("cam", StringComparison.Ordinal)); string v2 = Array.Find(array1, element => element.Length == 3); Console.WriteLine("The Elemnt that Starts with 'Cam' is : " +v1); Console.WriteLine("3 Letter word in the Array is : " +v2); Console.ReadLine(); } } Here is the output of the C# Program: The Element that Starts With 'Cam' is : Camel 3 Letter Word in the Array is : cat Sanfoundry Global Education & Learning Series – 1000 C# Programs. If you wish to look at all C# Programming examples, go to 1000 C# Programs.
http://www.sanfoundry.com/csharp-program-search-element-array/
CC-MAIN-2017-26
refinedweb
172
60.82
20 Python Gem Libraries Buried In the Installation Waiting To Be Found Get to know Python’s standard libraries like never before Introduction Most people think Python’s mass dominance is due to its powerful packages like NumPy, Pandas, Sklearn, XGBoost, etc. These are third-party packages written by professional developers, often with the help of other faster programming languages like C, Java, or C++. So, one of the feeble arguments haters might throw against Python is that it won’t be as popular once you strip away all the glory these third-party packages bring. I am here to say otherwise and show that standard Python is already powerful enough to give a serious run for any language’s money. I bring to your attention 20 lightweight packages that come built-in with your Python installation and are only a single line away from being unleashed. 1️. contextlib Handling external sources like database connections, open files, or anything that requires manual open/close operations can become a giant pain in the neck. Context managers solve this issue elegantly. Context managers are a defining feature of Python, not present in other languages, and highly sought after. You’ve probably seen the with keyword used with the open function, and you might not know that you can create functions that work as context managers. Below, you can see a context manager that serves as a timer: Wrapping a function written with special syntax under a contextmanager decorator from contextlib, converts it to a manager you can use with the with keyword. You can read more about custom context managers in my separate article. 2️. functools Want more powerful, shorter, and multi-functional functions? Then, functools has got you covered. This built-in library contains many methods and decorators you can wrap around existing ones to add additional features. One of them is the partial, which can be used to clone functions while preserving some of their arguments with custom values. Below, we are copying the read_csv from Pandas so that we won’t have to repeat passing the same arguments to read some particular CSV files: Another one of my favorites is a caching decorator. Once wrapped, cache remembers every output that maps to inputs so that the results are instantly available when the same arguments are passed to the function. The streamlit library greatly takes advantage of such a function. 3️. itertools If you ever find yourself in a situation where you are writing nested loops or complicated functions to iterate through more than one iterable, check if there is already a function in itertools library. Maybe, you don’t have to reinvent the wheel – Python thought of your every need. Below are some handy iteration functions from the library: 4️. glob For users who love Unix-style pattern matching, the glob library should feel right at home: glob contains all the relevant functions to work with multiple files simultaneously without headaches (or using a mouse). 5️. pathlib The Python os module, to put it nicely, sucks… Fortunately, core Python developers heard the cries of millions and introduced the pathlib library in Python 3.4. It brings the convenient object-oriented approach to systems paths. It also tries very hard to solve all the issues related to (put in the adjective) Windows path system: 6️. sqlite3 To the delight of data scientists and engineers, Python comes with built-in support for databases and SQL through the sqlite3 package. Just hook up to any database (or create one) using a connection object and fire away SQL queries. The package performs obediently. 7️. hashlib Python has spawned deep, deep roots in the sphere of cybersecurity, not just in AI and ML. An example of this is the hashlib library that contains your most common (and secure) cryptographic hash functions like SHA256, SHA512, and so on. 8️. secrets I love mystery novels. Have you ever read The Hound of Baskervilles? It is fantastic, go read it. While it might be immense fun to implement your own message encoding functions, they won’t probably be up to the same standards as the battle-tested functions in the secrets library. There, you will find everything you need to generate random numbers and characters for the hairiest of passwords, security tokens, and related secrets: 9️. argparse Are you good at the command line? Then, you are one of the few. Also, you will love the argparse library. You can make your static Python scripts accept user input through CLI keyword arguments. The library is rich in functionality, enough to create complex CLI applications for your script or even a package. I highly recommend checking out the RealPython article for a comprehensive overview of the library. 10. random There are no coincidences in this world — Oogway. Maybe that’s why scientists use pseudorandomness, that pure randomness doesn’t exist. Anyway, the random module in Python should be more than enough to simulate basic chance events: 1️1. pickle Just as dataset sizes are getting larger and larger, so are our needs to store them faster and more efficiently. One of the alternatives to flat CSV files that come natively with your Python installation is pickle file format. In fact, it is 80 times faster than CSVs at IO and occupies smaller memory. Here is an example that pickles a dataset and loads it back: 💻 Comparison article by Dario Radecic: link 1️2. shutil The shutil library, standing for shell utilities, is a module for advanced file operations. With shutil, you can copy, move, delete, archive, or do any file operation that you would typically perform in the file explorer or on the terminal: 13. statistics Who even needs NumPy or SciPy when there is the statistics module? (Actually, everyone does – I just wanted to write a dramatic sentence). This module can come in handy to perform standard statistical computations on pure Python arrays. There is no need to install third-party packages if all you need is to make a simple calculation. 14. gc Python really pulls out all the stops. It comes with everything — from package managers right up to garbage collectors. Yeah, you heard (/read) it right. The gc module serves as a garbage collector in Python programs once enabled. In lower-level languages, this irksome task is left to the developer, who has to allocate and release chunks of memory required in the program manually. The collect function returns the number of unreachable objects found and cleaned within the namespace. In simple terms, the function releases the memory slot of unused objects. You can read more about memory management of Python below. 💻Memory management in Python — link 15. pprint Some outputs coming from certain operations are just too horrific to look at. Do your eyes a favor and use the pprint package for intelligent indentations and pretty outputs: For even more complex outputs and custom printing options, you can create printer objects with pprint and use them multiple times over. Details are in the docs. 16. pydoc Code is more often read than written — Guido Van Rossum. Guess what? I love documentation and writing it for my own code (don’t be surprised; I am a bit of an OCD). Hate it or love it — documenting your code is a necessary evil. It becomes essentially important for larger projects. In such cases, you can use the pydoc CLI library to automatically generate docs on the browser or save it to HTML using the docstrings of your classes and functions. It can serve as a preliminary overview tool before deploying your docs to other services like Read the Docs. 17. calendar What the HECK was going on during this September? Apparently, there were 19 days in September 1752 in the UK. Where did 3, 4, … 13 go? Well, it is all about the giant mess about switching from the Julian Calendar to Gregorian, which the UK was very stubborn about till the 1750s. You can watch it here. This was the case only in the UK. The rest of the world had sense and was following through the correct course of time, as can be seen using the calendar module: Python takes time seriously. 18. webbrowser Imagine jumping straight to StackOverflow from your Jupyter Notebook or your Python script. Why would you even do that? Well, because you CAN… with the webbrowser module. 19. logging One of the signs that you are looking at a seasoned developer is the lack of print statements in their code. The vanilla logging. This module lets you log messages with different priorities and custom formatted timestamps. Here is the one I use daily: 💻 Excellent tutorial on logging in Python: Real Python 20. concurrent.futures I have left something juicy for the end. This library is about executing operations concurrently, as in multithreading. Below, I send 100 GET requests to a URL and get back the response. The process is slow and tedious as the interpreter waits until each request comes back, and that’s what you get when you use loops. A much smarter approach is to use concurrency and use all the cores on your machine. The concurrent.futures package enables you to do this. Here is the basic syntax: The runtime decreased 12 times, as concurrency allowed sending multiple requests simultaneously using all the cores. You can read more about concurrency in the below tutorial. 💻 Demo tutorial: Article by Dario Radecic Conclusion There is no need to overcomplicate things. If you don’t need them, there is no need to saturate your virtual environment with heavy packages. Having a few built-in packages up your sleeve might just be enough. Remember, “Simple is better than complex” — the Zen of Python. Reach out to me on LinkedIn or Twitter for a friendly chat about all things data. Or you can just read another story from me. How about these: AI/ML Trending AI/ML Article Identified & Digested via Granola by Ramsey Elbasheer; a Machine-Driven RSS Bot
https://ramseyelbasheer.io/2022/08/04/20-python-gem-libraries-buried-in-the-installation-waiting-to-be-found/
CC-MAIN-2022-33
refinedweb
1,669
63.49
michael ernst2,218 Points Just trying to figure out where I went wrong.....I've rewatched the video several times. Where am I going wrong? I feel as if I have written the same code as in the video. I definitely miss the hint button in this course. But I must be missing something somewhere because it's not coming up a pass. Anybody able to steer me in the right direction on what I must not be seeing? from flask import Flask from flask import request app = Flask(__name__) @app.route('/') def index(name="Treehouse"): return "Hello from {}".format(name) 1 Answer Rick Gleitz41,874 Points The title of the challenge is Request Args, and it had you import requests (which you did). So you need to use request and args in a line before the return statement: name = request.args.get("name", name) You also need to return the string the challenge wants: "Hello {}" versus "Hello from {}" (that would work in real life, but it won't pass the challenge). Hope this helps! michael ernst2,218 Points michael ernst2,218 Points thanks! ill try it out and see how it goes
https://teamtreehouse.com/community/just-trying-to-figure-out-where-i-went-wrongive-rewatched-the-video-several-times-where-am-i-going-wrong
CC-MAIN-2021-25
refinedweb
193
84.88
In a pair of related announcements, Zend Technologies says it is releasing new versions of its Zend Framework and Zend Studio. Zend Framework 1.9 features PHP 5.3 support, along with enhanced community-built features for web application development, while the Zend Studio 7.0 IDE now features support for PHP 5.3 as well as tight integration with Zend Server and Zend Framework. New features in Zend Framework 1.9 include: - Support for PHP 5.3. - REST-based web services made easier through automated routing/detection - Message queues which are useful for offload processing (credit card transactions, media uploads, and more), cross-platform communication, and chat functionality. - LDAP support for Microsoft ActiveDirectory and Novell, plus searching, filtering, and tree features - RSS and Atom support - DBUnit support for unit testing Built on Eclipse PHP Development Tools (PDT) 2.1, Zend Studio 7.0 provides support for PHP 5.3 features such as such as namespaces and closures. Additionally Version 7.0 provides new syntax highlighting and point-and-click creation of Zend Framework applications. Zend Studio 7.0 also supports Zend Server -- Zend's Web application server for deploying, managing and monitoring PHP Web applications. According to Zend's Andi Gutmans, "Zend Studio, together with Zend Framework and Zend Server, provides a PHP solution that addresses the entire Web application lifecycle, from development to production."
http://www.drdobbs.com/open-source/zend-releases-new-versions-of-framework/218800080
CC-MAIN-2015-35
refinedweb
226
51.34
1.1.3.6. Primary view customization¶ The ‘primary’ view (i.e. any view with the identifier set to ‘primary’) is the one used to display all the information about a single entity. The standard primary view is one of the most sophisticated views of all. It has several customisation points, but its power comes with uicfg, allowing you to control it without having to subclass it. However this is a bit off-topic for this first tutorial. Let’s say we simply want a custom primary view for my Community entity type, using directly the view interface without trying to benefit from the default implementation (you should do that though if you’re rewriting reusable cubes; everything is described in more details in The Primary View). So... Some code! That we’ll put again in the module views of our cube. from cubicweb.predicates import is_instance from cubicweb.web.views import primary class CommunityPrimaryView(primary.PrimaryView): __select__ = is_instance('Community') def cell_call(self, row, col): entity = self.cw_rset.get_entity(row, col) self.w(u'<h1>Welcome to the "%s" community</h1>' % entity.printable_value('name')) if entity.description: self.w(u'<p>%s</p>' % entity.printable_value('description')) What’s going on here? - Our class inherits from the default primary view, here mainly to get the correct view identifier, since we don’t use any of its features. - We set on it a selector telling that it only applies when trying to display some entity of the Community type. This is enough to get an higher score than the default view for entities of this type. - View applying to entities usually have to define cell_call as entry point, and are given row and col arguments tell to which entity in the result set the view is applied. We can then get this entity from the result set (self.cw_rset) by using the get_entity method. - To ease thing, we access our entity’s attribute for display using its printable_value method, which will handle formatting and escaping when necessary. As you can see, you can also access attributes by their name on the entity to get the raw value. You can now reload the page of the community we just created and see the changes. We’ve seen here a lot of thing you’ll have to deal with to write views in CubicWeb. The good news is that this is almost everything that is used to build higher level layers. Note As things get complicated and the volume of code in your cube increases, you can of course still split your views module into a python package with subpackages. You can find more details about views and selectors in Principles.
http://docs.cubicweb.org/tutorials/base/customizing-the-application.html
CC-MAIN-2014-42
refinedweb
447
56.15
This article describes CppSQLite, a very thin C++ wrapper around the public domain SQLite database library. A description of how to link applications with SQLite is provided, then an example program using CppSQLite is presented, and finally the CppSQLite classes are documented. To set the scene, here is a quote from the SQLite author.... I am always on the lookout for simple yet powerful software development tools and ideas, and SQLite definitely falls into this category. In fact, the "Lite" name is a bit misleading, as it implements a large subset of the SQL standard, including transactions, and when projects such as PHP start to bundle it as standard instead of MySQL, you have to take a look. I thought it would be fun to write a thin wrapper around the C interface to make it C++ friendly. There are already a number of C++ wrappers listed on the SQLite website, but one is commercial, another seemed a bit complex, and another is specific to the wxWidgets framework. After all, the author of SQLite looks to have gone to pains to keep things simple, so I thought a C++ wrapper for it should keep things simple as well. SQLite is provided in 2 packages on the Windows platform, as a compiled DLL, and also in source form. Even if you only wish to use the DLL, you will still need to get the source code, as this contains the required header file. If desired, the SQLite source could be compiled into a library (.lib) file for statically linking with your application, but this is not covered in this article. Compilation instructions can be found on the SQLite web site. Linking dynamically still requires that a .lib file is built for linking with your application. This can be done using Microsoft's LIB command. On my system, this is located at D:\Program Files\Microsoft Visual Studio\VC98\Bin\lib.exe. Unzip sqlite.zip which contains sqlite.dll and sqlite.def, and execute the following command to produce the lib file. c:\>lib /def:sqlite.def sqlite.h needs to be visible to your application at compile time, as does sqlite.lib. sqlite.dll needs to be available to your application at runtime. The following code demonstrates how to use the main features of SQLite via CppSQLite, with comments inline. #include "CppSQLite.h" #include <ctime> #include <iostream> using namespace std; const char* gszFile = "C:\\test.db"; int main(int argc, char** argv) { try { int i, fld; time_t tmStart, tmEnd; CppSQLiteDB db; cout << "SQLite Version: " << db.SQLiteVersion() << endl; remove(gszFile); db.open(gszFile); cout << endl << "Creating emp table" << endl; db.execDML("create table emp(empno int, empname char(20));"); /////////////////////////////////////////////////////////////// // Execute some DML, and print number of rows affected by each one /////////////////////////////////////////////////////////////// cout << endl << "DML tests" << endl; int nRows = db.execDML("insert into emp values (7, 'David Beckham');"); cout << nRows << " rows inserted" << endl; nRows = db.execDML( "update emp set empname = 'Christiano Ronaldo' where empno = 7;"); cout << nRows << " rows updated" << endl; nRows = db.execDML("delete from emp where empno = 7;"); cout << nRows << " rows deleted" << endl; ///////////////////////////////////////////////////////////////// // Transaction Demo // The transaction could just as easily have been rolled back ///////////////////////////////////////////////////////////////// int nRowsToCreate(50000); cout << endl << "Transaction test, creating " << nRowsToCreate; cout << " rows please wait..." << endl; tmStart = time(0); db.execDML("begin transaction;"); for (i = 0; i < nRowsToCreate; i++) { char buf[128]; sprintf(buf, "insert into emp values (%d, 'Empname%06d');", i, i); db.execDML(buf); } db.execDML("commit transaction;"); tmEnd = time(0); //////////////////////////////////////////////////////////////// // Demonstrate CppSQLiteDB::execScalar() //////////////////////////////////////////////////////////////// cout << db.execScalar("select count(*) from emp;") << " rows in emp table in "; cout << tmEnd-tmStart << " seconds (that was fast!)" << endl; //////////////////////////////////////////////////////////////// // Re-create emp table with auto-increment field //////////////////////////////////////////////////////////////// cout << endl << "Auto increment test" << endl; db.execDML("drop table emp;"); db.execDML( "create table emp(empno integer primary key, empname char(20));"); cout << nRows << " rows deleted" << endl; for (i = 0; i < 5; i++) { char buf[128]; sprintf(buf, "insert into emp (empname) values ('Empname%06d');", i+1); db.execDML(buf); cout << " primary key: " << db.lastRowId() << endl; } /////////////////////////////////////////////////////////////////// // Query data and also show results of inserts into auto-increment field ////////////////////////////////////////////////////////////////// cout << endl << "Select statement test" << endl; CppSQLiteQuery q = db.execQuery("select * from emp order by 1;"); for (fld = 0; fld < q.numFields(); fld++) { cout << q.fieldName(fld) << "(" << q.fieldType(fld) << ")|"; } cout << endl; while (!q.eof()) { cout << q.fieldValue(0) << "|"; cout << q.fieldValue(1) << "|" << endl; q.nextRow(); } /////////////////////////////////////////////////////////////// // SQLite's printf() functionality. Handles embedded quotes and NULLs //////////////////////////////////////////////////////////////// cout << endl << "SQLite sprintf test" << endl; CppSQLiteBuffer bufSQL; bufSQL.format("insert into emp (empname) values (%Q);", "He's bad"); cout << (const char*)bufSQL << endl; db.execDML(bufSQL); bufSQL.format("insert into emp (empname) values (%Q);", NULL); cout << (const char*)bufSQL << endl; db.execDML(bufSQL); //////////////////////////////////////////////////////////////////// // Fetch table at once, and also show how to // use CppSQLiteTable::setRow() method ////////////////////////////////////////////////////////////////// cout << endl << "getTable() test" << endl; CppSQLiteTable t = db.getTable("select * from emp order by 1;"); for (fld = 0; fld < t.numFields(); fld++) { cout << t.fieldName(fld) << "|"; } cout << endl; for (int row = 0; row < t.numRows(); row++) { t.setRow(row); for (int fld = 0; fld < t.numFields(); fld++) { if (!t.fieldIsNull(fld)) cout << t.fieldValue(fld) << "|"; else cout << "NULL" << "|"; } cout << endl; } //////////////////////////////////////////////////////////////////// // Test CppSQLiteBinary by storing/retrieving some binary data, checking // it afterwards to make sure it is the same ////////////////////////////////////////////////////////////////// cout << endl << "Binary data test" << endl; db.execDML("create table bindata(desc char(10), data blob);"); unsigned char bin[256]; CppSQLiteBinary blob; for (i = 0; i < sizeof bin; i++) { bin[i] = i; } blob.setBinary(bin, sizeof bin); bufSQL.format( "insert into bindata values ('testing', %Q);", blob.getEncoded()); db.execDML(bufSQL); cout << "Stored binary Length: " << sizeof bin << endl; q = db.execQuery("select data from bindata where desc = 'testing';"); if (!q.eof()) { blob.setEncoded((unsigned char*)q.fieldValue("data")); cout << "Retrieved binary Length: " << blob.getBinaryLength() << endl; } const unsigned char* pbin = blob.getBinary(); for (i = 0; i < sizeof bin; i++) { if (pbin[i] != i) { cout << "Problem: i: ," << i << " bin[i]: " << pbin[i] << endl; } } ///////////////////////////////////////////////////////// // Pre-compiled Statements Demo ///////////////////////////////////////////////////////////// cout << endl << "Transaction test, creating " << nRowsToCreate; cout << " rows please wait..." << endl; db.execDML("drop table emp;"); db.execDML("create table emp(empno int, empname char(20));"); tmStart = time(0); db.execDML("begin transaction;"); CppSQLiteStatement stmt = db.compileStatement( "insert into emp values (?, ?);"); for (i = 0; i < nRowsToCreate; i++) { char buf[16]; sprintf(buf, "EmpName%06d", i); stmt.bind(1, i); stmt.bind(2, buf); stmt.execDML(); stmt.reset(); } db.execDML("commit transaction;"); tmEnd = time(0); cout << db.execScalar("select count(*) from emp;") << " rows in emp table in "; cout << tmEnd-tmStart << " seconds (that was even faster!)" << endl; cout << endl << "End of tests" << endl; } catch (CppSQLiteException& e) { cerr << e.errorCode() << ":" << e.errorMessage() << endl; } //////////////////////////////////////////////////////////////// // Loop until user enters q or Q /////////////////////////////////////////////////////////// char c(' '); while (c != 'q' && c != 'Q') { cout << "Press q then enter to quit: "; cin >> c; } return 0; } The following simple classes are defined to encapsulate the functionality of SQLite. All the CppSQLite classes are contained in 2 files CppSQLite.h and CppSQLite.cpp, which will need to be added to your application. Encapsulates a SQLite error code and message. Nothing complicated here, and this class could easily be incorporated into an existing exception hierarchy, if required. Error messages returned by SQLite need to be sqlite_freemem()'d by the programmer, and this class takes on that responsibility. Note that for error messages generated by CppSQLite, we don't want to free the memory, so there is an optional trailing parameter that dictates whether CppSQLiteException frees the memory. class CppSQLiteException { public: CppSQLiteException(const int nErrCode, char* szErrMess, bool bDeleteMsg=true); CppSQLiteException(const CppSQLiteException& e); virtual ~CppSQLiteException(); const int errorCode() { return mnErrCode; } const char* errorMessage() { return mpszErrMess; } static const char* errorCodeAsString(int nErrCode); private: int mnErrCode; char* mpszErrMess; }; Encapsulates a SQLite database file. class CppSQLiteDB { public: enum CppSQLiteDBOpenMode { openExisting, createNew, openOrCreate }; CppSQLiteDB(); virtual ~CppSQLiteDB(); void open(const char* szFile); void close(); int execDML(const char* szSQL); CppSQLiteQuery execQuery(const char* szSQL); int execScalar(const char* szSQL); CppSQLiteTable getTable(const char* szSQL); CppSQLiteStatement compileStatement(const char* szSQL); int lastRowId(); void interrupt() { sqlite_interrupt(mpDB); } void setBusyTimeout(int nMillisecs); static const char* SQLiteVersion() { return SQLITE_VERSION; } private: CppSQLiteDB(const CppSQLiteDB& db); CppSQLiteDB& operator=(const CppSQLiteDB& db); sqlite_vm* compile(const char* szSQL); void checkDB(); sqlite* mpDB; int mnBusyTimeoutMs; }; open() and close() methods are self explanatory. SQLite does provide a mode argument to sqlite_open() but this is documented as having no effect, so is not provided for in CppSQLite. execDML() is used to execute Data Manipulation Language (DML) commands such as create/ drop/ insert/ update/ delete statements. It returns the number of rows affected. Multiple SQL statements separated by semi-colons can be submitted and executed all at once. Note: there is a potential problem with the way that CppSQLite returns the number of rows affected. If there are any other un-finalized() operations in progress the number of rows affected will be cumulative and include those from previous statements. So if this feature is important to you, you have to make sure that any CppSQLiteQuery and CppSQLiteStatement objects that have not destructed yet have finalize() called on them before you execDML(). execQuery() is used to execute queries. The CppSQLiteQuery object is returned by value, as this frees the programmer from having to delete it. execScalar() is an idea I got from ADO.NET. It is a shortcut for when you need to run a simple aggregate function, for example, " select count(*) from emp" or " select max(empno) from emp". It returns the value of the first field in the first row of the query result. Other columns and rows are ignored. getTable() allows for the SQLite feature which can fetch a whole table in a single operation, rather than having to fetch one row at a time as with a query. Actually, subsets of table rows can be fetched by specifying a query with a where clause, but the whole result set is returned at once. Again, the CppSQLiteTable object is returned by value for convenience. compileStatement() allows for the experimental SQLite pre-compiled SQL feature. See CppSQLiteStatement below. SQLite is typeless, which means all fields are stored as strings. The one exception to this is the INTEGER PRIMARY KEY type, which allows an auto increment field, much like the SQL Server's identity columns. The lastRowId() function is used to determine the value of the primary key from the last row inserted. interrupt() is useful when multithreading, and allows one thread to interrupt an operation in progress on another thread. setBusyTimeout() can also be useful when multithreading, and allows the programmer to dictate how long SQLite waits before returning SQLITE_BUSY if another thread has a lock on the database. The default value is 60 seconds, set when the database is opened. The copy constructor and operator=() are made private, as it does not make sense to copy a CppSQLiteDB object. Finally, the static method SQLiteVersion() returns the version number of the underlying SQLite DLL. Encapsulates a SQLite query result set. class CppSQLiteQuery { public: CppSQLiteQuery(); CppSQLiteQuery(const CppSQLiteQuery& rQuery); CppSQLiteQuery(sqlite_vm* pVM, bool bEof, int nCols, const char** paszValues, const char** paszColNames, bool bOwnVM=true); CppSQLiteQuery& operator=(const CppSQLiteQuery& rQuery); virtual ~CppSQLiteQuery(); int numFields(); const char* fieldName(int nCol); const char* fieldType); bool eof(); void nextRow(); void finalize(); private: void checkVM(); sqlite_vm* mpVM; bool mbEof; int mnCols; const char** mpaszValues; const char** mpaszColNames; bool mbOwnVM; }; nextRow() and eof() allow iteration of the query results. numFields(), fieldValue(), fieldName(), fieldType() and fieldIsNull() allow the programmer to determine the number of fields, their names, values, types and whether they contain a SQL NULL. There are overloaded versions allowing the required field to be either specified by index or name. getIntField(), getFloatField() and getStringField() provide a slightly easier to program way of getting field values, by never returning a NULL pointer for SQL NULL, and there is a default 2nd parameter that allows the programmer to specify which value to return instead. It is not possible to iterate backwards through the results. The reason for this is that CppSQLite is a thin wrapper and does not cache any returned row data. If this is required, CppSQLiteDB::getTable() should be used, or the application could inherit from this class. finalize() frees the memory associated with the query, but the destructor automatically calls this. SQLite provides a method to obtain a complete table's contents in a single block of memory, CppSQLiteTable encapsulates this functionality. class CppSQLiteTable { public: CppSQLiteTable(); CppSQLiteTable(const CppSQLiteTable& rTable); CppSQLiteTable(char** paszResults, int nRows, int nCols); virtual ~CppSQLiteTable(); CppSQLiteTable& operator=(const CppSQLiteTable& rTable); int numFields(); int numRows(); const char* fieldName); void setRow(int nRow); void finalize(); private: void checkResults(); int mnCols; int mnRows; int mnCurrentRow; char** mpaszResults; }; setRow() provides a random access method for movement between rows, and can be used in conjunction with numRows() to iterate the table. This design decision was made for simplicity, as following the same model as for CppSQLiteQuery, would have required functions for bof(), eof(), first(), next() and prev(). numFields(), fieldValue(), fieldName(), fieldIsNull(), getIntField(), getFloatField(), getStringField(), close() and operator=() provide the same functionality as for CppSQLiteQuery. Encapsulates SQLite " sprintf" functionality. SQLite provides a function sqlite_mprintf() which is like the C runtime sprintf() except there is no possibility of overrunning the buffer supplied, as sqlite_mprintf() uses malloc to allocate enough memory. The other benefit over sprintf() is the %Q tag, which works like %s except that it will massage apostrophes so that they don't mess up the SQL string being built, and also translate NULL pointers into SQL NULL values. class CppSQLiteBuffer { public: CppSQLiteBuffer(); ~CppSQLiteBuffer(); const char* format(const char* szFormat, ...); operator const char*() { return mpBuf; } void clear(); private: char* mpBuf; }; operator const char*() allows the programmer to pass an instance of this object to the functions defined on CppSQLiteDB. Because SQLite stores all data as NULL terminated strings, it is not possible to store binary data if it has embedded NULLs. SQLite provides 2 functions sqlite_encode_binary() and sqlite_decode_binary() that can be used to allow storage and retrieval of binary data. CppSQLiteBinary encapsulates these 2 functions. These 2 functions are not currently provided as part of the pre-compiled DLL, so I have copied the entire contents of SQLite's encode.c file into the CppSQLite.cpp file. Should these functions be provided in the DLL at some future point, they can easily be removed from CppSQLite.cpp. class CppSQLiteBinary { public: CppSQLiteBinary(); ~CppSQLiteBinary(); void setBinary(const unsigned char* pBuf, int nLen); void setEncoded(const unsigned char* pBuf); const unsigned char* getEncoded(); const unsigned char* getBinary(); int getBinaryLength(); unsigned char* allocBuffer(int nLen); void clear(); private: unsigned char* mpBuf; int mnBinaryLen; int mnBufferLen; int mnEncodedLen; bool mbEncoded; }; CppSQLiteBinary can accept data in either encoded or binary form using the setEncoded() and setBinary() functions. Whichever is used, enough memory is always allocated to store the encoded version, which is usually longer as nulls and single quotes have to be escaped. Data is retrieved using the getEncoded() and getBinary() functions. Depending on which form the data is currently in within the class, it may need to be converted. getBinaryLength() returns the length of the binary data stored, again converting the held format from encoded to binary, if required. allocBuffer() can be used to prevent data having to be cycled via a temporary buffer like in the example code at the start of this article. This function could be used as in the following example where data is read straight from a file into a CppSQLiteBinary object. int f = open(gszJpgFile, O_RDONLY|O_BINARY); int nFileLen = filelength(f); read(f, blob.allocBuffer(nFileLen), nFileLen); SQLite provides some experimental functionality for working with pre-compiled SQL. When the same SQL is being executed over and over again with different values, a significant performance improvement can be had by only compiling the SQL once, and executing it multiple times, each time with different values. CppSQLiteStatement encapsulates this functionality. class CppSQLiteStatement { public: CppSQLiteStatement(); CppSQLiteStatement(const CppSQLiteStatement& rStatement); CppSQLiteStatement(sqlite* pDB, sqlite_vm* pVM); virtual ~CppSQLiteStatement(); CppSQLiteStatement& operator=(const CppSQLiteStatement& rStatement); int execDML(); CppSQLiteQuery execQuery(); void bind(int nParam, const char* szValue); void bind(int nParam, const int nValue); void bind(int nParam, const double dwValue); void bindNull(int nParam); void reset(); void finalize(); private: void checkDB(); void checkVM(); sqlite* mpDB; sqlite_vm* mpVM; }; A CppSQLiteStatement object is obtained by calling CppSQLiteDB::compileStatement() with a SQL statement containing placeholders, as follows: CppSQLiteStatement stmt = db.compileStatement("insert into emp values (?, ?);"); stmt.bind(1, 1); stmt.bind(2, "Emp Name"); stmt.execDML(); stmt.reset(); The CppSQLiteStatement::bind() methods are then used to set the values of the placeholders, before calling either execDML() or execQuery() as appropriate. After the programmer has finished with the result from either execDML() or execQuery(), the reset() method can be called to put the statement back to a compiled state. The CppSQLiteStatement::bind() methods can then be used again, followed by execDML() or execQuery(). A typical use would be in a loop as demonstrated in the CppSQLiteDemo program. SQLite is compiled as thread-safe on Windows by default, and CppSQLite makes use of some SQLite features to help with multithreaded use. Included in the source code accompanying this article is a 2nd demo program called CppSQLiteDemoMT, which demonstrates these features. Each thread wishing to utilize CppSQLite on the same database file at the same time must have its own CppSQLiteDB object, and call open(). To put this another way, it is an error for more than 1 thread to call into a CppSQLiteDB object at the same time. The one exception to this is CppSQLiteDB::interrupt(), which can be used from one thread to interrupt the work of another thread. The other change to CppSQLite for multithreaded use is to make use of the sqlite_busy_timeout() function which causes SQLite to wait up to the specified number of milliseconds before returning SQLITE_BUSY. By default, CppSQLite sets this to 60,000 (60 seconds), but this can be changed using CppSQLiteDB::setBusyTimeout() as required. Various examples of doing this are shown in the CppSQLiteDemoMT program. SQLite provides a mechanism that allows the application developer to define stored procedures and aggregate functions that can be called from SQL statements. These stored procedures are written in C by the application developer, and made known to SQLite via function pointers. This is how the SQL built in functions are implemented by SQLite, but this functionality is not currently catered for in CppSQLite. SQLite provides some other variations on the functions wrapped, and the reader is encouraged to study the SQLite documentation. It is possible to compile SQLite and CppSQLite into a managed C++ program, It Just Works (IJW). You will need to set the CppSQLite.cpp file so that it does not use pre-compiled headers and also not to use Managed extensions, i.e. don't use /clr. There is a Managed C++ demo included with the CppSQLite downloads. At the time of writing, SQLite version 3 is in beta. See for further details. I have produced a port of CppSQLite to SQLite version 3, and the following notes explain the differences. There are a new set of classes with the prefix CppSQLite3, for example CppSQLite3Exception. This allows programs to link with both versions of CppSQLite, as is possible with both versions of SQLite itself. There is not support for UTF-16 initially, as it is not something I have experience of, and wouldn't know how to test. This can be added later with another set of classes, called for example CppSQLite3Exception16 etc. Note that some sqlite3 stuff such as sqlite3_exec() and sqlite3_get_table() do not appear to have UTF-16 versions, also sqlite3_vmprintf(), used by CppSQLiteBuffer. Error messages are now returned by sqlite3_errmsg() and do not need to be freed. To keep consistency between CppSQLite and CppSQLite3 the code that throws exceptions with messages returned from SQLite version 3 has been changed so that it passes DONT_DELETE_MSG as the final parameter to CppSQLite3Exception. The exception to this is the messages returned by sqlite3_exec() and sqlite3_get_table(). SQLite version 3 now has direct support for BLOB data, and therefore no need to encode or decode it, and there would seem to be no job for CppSQLiteBinary. However, the SQLite version 3 change means that the only way to work with BLOB data would seem to be using prepared statements ( CppSQLiteStatement). Not really a problem, but up until now, CppSQLiteBinary had allowed use of (encoded) binary data in calls to CppSQLiteDB::execQuery(), CppSQLiteDB::execDML() and on data returned from CppSQLiteDB::getTable(). sqlite_encode_binary() and sqlite_decode_binary() are still included in the SQLite version 3 source distribution, although it is not clear whether this is an error as they do not have the sqlite3 prefix, nor are they exported from the DLL. CppSQLite3 replicates the source to these 2 functions. This used to be the case with CppSQlite up to version 1.3 as up until version 2.8.15 of SQLite, they were not exported from the DLL. CppSQLite3Binary is an exact copy of CppSQLiteBinary, bundled with the source to sqlite_encode_binary() and sqlite_decode_binary(). This will allow easy porting between CppSQLite and CppSQLite3. Programs wishing to use sqlite3 BLOBs and their reduced storage space will not need to use CppSQLite3Binary, and will need to be rewritten anyway. SQLite version 3 introduces changes to the data typing system used. See . For this reason, CppSQLiteQuery::FieldType() has been replaced with 2 functions: CppSQLiteQuery::FieldDeclType() which returns the declared data type for the column as a string, and and CppSQLiteQuery::FieldDataType() whhich returns the actual type of the data stored in that column for the current row as one of the SQLite version 3 #defined vallues. The demo programs have been changed slightly to demonstrate the new features, and also to account for SQLite version 3's different locking behaviour. See. Note that SQLite version 3.0.5 introduced a compile time option which changes locking behaviour, see for more details. The SQLite version 3 is available as a separate download at the top of this article. I may add support for the remaining SQLite features to CppSQLite. At the moment, this means stored procedures and aggregate functions. Since version 1.2 of CppSQLite, I have tried hard not to do anything which is Microsoft specific, and have successfully compiled and run the demo programs on mingw32, as well as with Visual C++. As mingw32 is based on GCC, there should be no major problems on Linux/Unix, although the multi threaded demo program CppSQLiteDemoMT uses the _beginthread() call, which will obviously not work. This can probably be easily fixed, using pthreads for example. Thanks to fellow Code Project members for suggestions and buf fixes for CppSQLite, and also to Mateusz Loskot for acting as a reviewer. CppSQLite makes SQLite easier to use within a C++ program, yet doesn't provide significantly less power or efficiency than the flat C interface. If nothing else, writing CppSQLite has provided the author with an insight into the power and simplicity of SQLite. It is hoped that readers of this article also benefit in some way. CppSQLiteException::errorMess()to CppSQLiteException::errorMessage(). CppSQLiteException(). CppSQLiteException. sqlite_finalize()immediately to get error details after problems with sqlite_step(). CppSQLiteBinaryclass. NULLpointers sqlite_busy_timeout()and sqlite_interrupt()to help with multithreaded use CppSQLiteQuery CppSQLiteDB::execScalar() getIntField(), getStringField(), getFloatField() CppSQLiteDB::ExecDML()implemented with sqlite_exec()so multiple statements can be executed at once. CppSQLiteDB::execDML() sqlite_decode_binary()as there are now exported from the SQLite DLL General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/database/CppSQLite.aspx
crawl-002
refinedweb
3,868
54.52
Activate two MouseAreas overlapping Hi, I know overlapping two MouseAreas is often a sensitive subject. But I have a behavior I'd like to reproduce that needs me to have those. What I want |----------------------| | Rect + MouseArea | | | | |------------------| | | | Text + MouseArea | | | |------------------| | |----------------------| When hovering over Rect, its background should have a different color (to show the user is hovering this particular item inside a list of item) and when hovering over Text, a tip tool should pop up with a text. But when hovering Text, Rectis still hovered even if only one MouseArea is considered hovered (the one with the higher z-index, so I would like the Rectangle's background to be colored too. Any clues on how to do so ? Thx, MoaMoaK Hi! See example below: import QtQuick 2.7 import QtQuick.Controls 2.0 import QtQuick.Layouts 1.3 ApplicationWindow { visible: true width: 640 height: 480 title: qsTr("Hello World") Rectangle { id: rect width: 300 height: 300 color: ma_rect.containsMouse || ma_tx.containsMouse ? "orange" : "grey" MouseArea { id: ma_rect anchors.fill: parent hoverEnabled: true } Text { id: tx anchors.centerIn: parent text: "Some Text" font.bold: ma_tx.containsMouse MouseArea { id: ma_tx anchors.fill: parent hoverEnabled: true } } } }
https://forum.qt.io/topic/85486/activate-two-mouseareas-overlapping
CC-MAIN-2018-39
refinedweb
192
59.6
! Very cool challenge, Eric! I too have put together my own algorithm-oriented developer challenge while we were in "interview / hiring" mode a few months back. I was never satisfied with the approach of asking insanely technical questions and getting back canned answers, so I wanted to present a unique, non-standard way of gauging interviewee skills where one simply cannot fake it. I have not subjected any candidate to this test but it's something I have on the back-burner. I would be very interested in your thoughts on this exercise, sir! Here's my first stab at it. Admittedly it does some extra work because it regenerates the "indent" each time, but on the plus side, it's aware of where it is in the tree so it could do additional things with that information. I opted for an iterative approach because I think it's easier to debug and so forth. static class Dumper { const string BarSep = " "; const string BranchSep = "─"; const string Bar = "│"; const string Branch = "├"; const string LastBranch = "└"; struct NodeAndLevel { public Node Node; public List<bool> Level; } static public string Dump(Node root) { Stack<NodeAndLevel> stack = new Stack<NodeAndLevel>(); stack.Push(new NodeAndLevel { Node = root, Level = new List<bool>() }); StringBuilder builder = new StringBuilder(); while(stack.Count > 0) { NodeAndLevel l = stack.Pop(); foreach (var graphic in l.Level.SelectCheckLast<bool, string>(DumpSingleLevel)) { builder.Append(graphic); } builder.AppendLine(l.Node.Text); foreach (var nodeAndLevel in l.Node.Children.SelectCheckLast((isLast, child) => new NodeAndLevel { Node = child, Level = new List<bool>(l.Level) { isLast } }).Reverse()) { stack.Push(nodeAndLevel); } return builder.ToString(); static string DumpSingleLevel(bool isBranch, bool isLast) { if(isBranch) return (isLast ? LastBranch : Branch) + BranchSep; return (isLast ? BarSep : Bar) + BarSep; static IEnumerable<TOut> SelectCheckLast<TIn, TOut>(this IList<TIn> source, Func<bool, TIn, TOut> selector) { if(source.Count == 0) yield break; for (int i = 0; i < source.Count - 1; i++) yield return selector(false, source[i]); yield return selector(true, source[source.Count - 1]); } Why not have the Dump method return an IEnumerable<string> to imply a line-by-line solution, intended for generic output. Is this intended for console output or file output? Do we have a console maximum column width with which to impose word-wrapping logic upon? That would certainly allow for "prettier" formatting line-by-line so you can describe the output in terms of word-wrapped lines where each line would be able to start with the appropriate "box drawing" characters to indicate proper tree depth rather than an assumed count of spaces, or worse, no spaces and relying on the default console line wrapping. Good questions. This is actually not just an idle exercise; I wrote this challenge because I wrote this code myself for a real purpose. I am writing a code analysis tool in C# that requires that I build up a number of large, complicated trees in memory. I wanted a way to be able to dump a whole or partial tree as a string at once into the debugger "text viewer" window, so that I could rapidly ensure that the tree was the shape I expected. The trees will typically be shallow and broad, so I am not too worried about word-wrap. - Eric My attempt. I haven't looked at your code at all yet, so I'm pretty curious where it differs. My intent was to go for obvious correctness. Since recursion naturally drives you to mostly ignore everything except the current node and its children, I chose to make each node responsible for printing its own name and all its descendents, nothing more. However, I allowed myself to stray from that (a bit more cleverness here, instead of obviousness), by having each node also handle indentation that doesn't really have anything to do with it. This is so that I can keep appending characters left-to-right, instead of using a far more complicated system to insert characters at arbitrary positions. I didn't miss having parent pointers at all. I do a depth-first recursion, and whenever I do that I carry any information from the parent just by passing it through the method parameters. My other design considerations are hopefully apparent from my code comments. sealed class Dumper { public static string Dump(Node root) { // A StringBuilder is more convenient than manual concatenation. StringBuilder sb = new StringBuilder(); Dump(root, sb, ""); return sb.ToString(); } // We're taking a depth-first recursive solution, because that naturally follows the // structure of both the Node class and the desired output string. If it does not perform // well, or blows the stack, it would be easy enough to convert it to an iterative form. // We have the recursive method append its results to the StringBuilder we pass it, so that // we don't allocate an arbitrarily large amount of StringBuilders. // // The indentation string functions as a stack of characters to add on each line. // We do not otherwise have enough information to tell if we should print '│' characters, // because it depends on the amount of children our ancestors have. // The immutability of the String class makes it ideal for this purpose, since we do not // have to worry about popping anything off the stack when returning to lower levels in the // recursion. private static void Dump(Node node, StringBuilder builder, string indentation) var children = node.Children; builder.AppendLine(node.Text); // If we have no children at all, we're done. if (children.Count == 0) return; for (int i = 0; i < children.Count - 1; i++) { // Indent appropriately to the depth of the current Node. builder.Append(indentation); // For every child that is not the last, print "├─" . builder.Append("├─"); // Then print the child, increasing the indentation by "│ ". Dump(children[i], builder, indentation + "│ "); // The child will entirely take care of all the lines that contain its own // children, so the current child has now been entirely handled. } // Indent appropriately to the depth of the current Node. builder.Append(indentation); // For the last child, print "└─" instead of "├─". builder.Append("└─"); // We already have a line of │ connecting all our children. We have no children left, // so now we indent with only spaces. Dump(children[children.Count - 1], builder, indentation + " "); } Recursive because it's short and simple. static public string Dump(Node root) StringBuilder sb = new StringBuilder(); DoDump(sb, "", "", root); return sb.ToString(); static private void DoDump(StringBuilder sb, string prefixRoot, string prefixChild, Node root) sb.Append(prefixRoot); sb.Append(root.Text); sb.Append('\n'); for (int i = 0; i != root.Children.Count; ++i) if (i == root.Children.Count - 1) // Final child DoDump(sb, prefixChild + "└─", prefixChild + " ", root.Children[i]); else // Not final child DoDump(sb, prefixChild + "├─", prefixChild + "│ ", root.Children[i]); quick and dirty. sealed class Dumper{ static public string Dump(Node root) { TextWriter writer = new StringWriter(); Action<Node> requestParentWrite = n => {}; // no-op DFS(root, requestParentWrite, writer); return writer.ToString(); }/* ... */ private static void DFS(Node n, Action<Node> requestParentWrite, TextWriter writer) { requestParentWrite(n); writer.WriteLine(n.Text); string nonDirectChildren = "│ "; Action<Node> newRequestParentWrite = (actual) => { requestParentWrite(actual); if (n.Children.Contains(actual)) { if (n.Children.Last() == actual) { writer.Write("└"); nonDirectChildren = " "; } else { writer.Write("├"); } writer.Write("─"); } else { writer.Write(nonDirectChildren); } }; for (int i = 0; i < n.Children.Count; i++) DFS(n.Children[i], newRequestParentWrite, writer); }} I note two assumptions here. First, that repeatedly searching the child list for a particular node is efficient; if the tree is very broad and shallow then this becomes a quadratic algorithm. And second, that nodes are not re-used. In immutable trees it is commonplace to re-use nodes; what happens if the same node is referred to in both the first and second positions of a parent with two children? - Eric I'm surprised no-one has posted the shortest meets-the-literal-specification solution: static public string Dump(Node root){ return "a\n├─b\n│ ├─c\n│ │ └─d\n│ └─e\n│ └─f\n└─g\n ├─h\n │ └─i\n └─j\n";} Do you by any chance write video card drivers? - Eric Here's mine: I decided that I didn't want to pass any information to lower levels, so each level of recursion indents the whole subtree that was returned, simply because the correct tree "growing" out of the simple single indents seems the most elegant to me. I used a trailing loop to make the code a bit more flexible: we have an IList, but I wanted to make sure it would work if it was IEnumerable instead. And finally, I used iterator blocks because they allow me to write this recursive code that makes sense to me, but which in the end gets turned into (effectively) a single loop over the nodes. static public string Dump(Node root) foreach (string s in DumpLines(root)) sb.AppendLine(s); static public IEnumerable<string> DumpLines(Node root) yield return root.Text; IEnumerable<string> last = null; foreach (Node node in root.Children) if (last != null) { foreach (string line in Indent(last, "├─", "│ ")) { yield return line; } } last = DumpLines(node); if (last != null) foreach (string line in Indent(last, "└─", " ")) yield return line; private static IEnumerable<string> Indent(IEnumerable<string> lines, string first, string rest) bool isFirst = true; foreach (string line in lines) if (isFirst) yield return first + line; isFirst = false; else yield return rest + line; } Straightforward DFS recursive solution. In order to maintain state, I pass along a boolean array "isLastPath", which contains an entry for every node in the current path (excluding the root) - true if that ancestor is the last child of its parent, false otherwise. I wrote this. (Sorry that the language is not C#, though translation should be obvious.) def show(text, blist) if blist.size == 0 puts text else s = blist[0..-2].map{|b| b ? "| " : " "}.join puts(s + (blist[-1] ? "+" : "\\") + "-" + text) end end def dump(node, blist = []) show(node.text, blist) blist.push true for n in node.children blist[-1] = n != node.children.last dump(n, blist) blist.pop Ah, I am ashamed, for your solution is simpler. String being an immutable value type beats using a list of bool. Also, I notice that you did not separate the iterating and printing logic, but there's no point to it in such a small example. Here's the rest of the code, for testing. class Node attr_accessor :text, :children def initialize(text, *children) @text, @children = text, children end n = Node.new("a", Node.new("b", Node.new("c", Node.new("d")), Node.new("e", Node.new("f"))), Node.new("g", Node.new("h", Node.new("i")), Node.new("j"))) dump(n) I actually wrote this very thing a few months ago for a project that I was working on. (I also wrote something similar last year, which was just for binary trees, and had the parent on the middle left and the children above right and below right.) Here's my version adapted to this excersize: static public string Dump(Node root) StringBuilder sb = new StringBuilder(); DumpCore(root, sb, string.Empty, string.Empty); return sb.ToString(); static private void DumpCore(Node node, StringBuilder sb, string initialPrefix, string followingPrefix) sb.Append(initialPrefix); sb.Append(node.Text); sb.AppendLine(); if (node.Children.Count == 0) return; string nextInitialPrefix = followingPrefix + "├─"; string nextFollowingPrefix = followingPrefix + "│ "; string lastInitialPrefix = followingPrefix + "└─"; string lastFollowingPrefix = followingPrefix + " "; for (int childIndex = 0; childIndex < node.Children.Count; childIndex++) if (childIndex < node.Children.Count - 1) DumpCore(node.Children[childIndex], sb, nextInitialPrefix, nextFollowingPrefix); else DumpCore(node.Children[childIndex], sb, lastInitialPrefix, lastFollowingPrefix); I went with the recursive solution, because it was the most obvious. I was just going for simplicity. I never even considered parent pointers. My approach was basically to draw a tree in notepad and then figure out the essense of the problem and create the simplist design I could that embodied that essense. I should read my code before I post it! I should definitely have taken the if-statement out of the loop. I'm guessing that is an unfortunate artifact of an early unsuccessful design. I tried to come up with some ideas for how you could use a parent pointer, but I couldn't think of anything useful to do with it off the top of my head. I was able to come up with a purely functional approach to the problem, though (with a bit of augmentation to LINQ): static class Dumper const string vbar = "│ ", hbar = "─", branch = "├" + hbar, corner = "└" + hbar, blank = " "; static public string Dump(Node root) return Dump(root, Enumerable.Empty<bool>()); static public string Dump(Node root, IEnumerable<bool> isLastPath) return string.Join("", // draw vertical bars for parent nodes isLastPath.Take(isLastPath.Count() - 1).Select(isLast => isLast ? blank : vbar) // draw connector for current node .Concat(isLastPath.Any() ? (isLastPath.Last() ? corner : branch) : "") // text for this node .Concat(root.Text) // new line .Concat(Environment.NewLine) // recurse for child nodes .Concat(root.Children.Select((node, index) => Dump(node, isLastPath.Concat(index == root.Children.Count - 1))))); // sadly, LINQ doesn't include the "return" part of the IEnumerable monad, so we make a Concat that accepts a scalar public static IEnumerable<T> Concat<T>(this IEnumerable<T> list, T item) foreach (T i in list) yield return i; yield return item; Coincidentally, the other Gabe came up with a similar solution (using isLast) while I was writing mine. Here's mine, without looking at yours or any of the others just yet (posting it on PasteBin because I am afraid of what your blog is going to do to the formatting): My design criteria: * It should be a small amount of code. As much as I love coding, I hate code. * It should not take me long to write. (I considered using LINQ to objects, but I ruled it out because I was not confident that I could do it quickly without hitting a snag - in particular I was worried about whether it would be easy to treat the last child specially without having to write an entire extra method, and about how I would assemble the string. I think both are possible, but I could have easily seen me wasting 20 minutes looking stuff up.) I'm not sure I did very well on these counts. The only thing worth mentioning there is that my first attempt was wrong and printed extraneous vertical bars to the left of children of a last child. When I fixed that I ended up with an if statement in the middle of the loop to check whether we were at the last child and if so tweak our prefix and children's prefixes. I didn't like the extra indentation and the assignments in different branches. After some humming and hawing I decided that two uses of the conditional operator were preferable to the if statement, since it reduced the number of assignments and the indentation of the code. I find it's slightly nicer to read, but that might be very subjective. Hm no iterative BFS so far? Here you are. static public string Dump(Node root) StringBuilder output = new StringBuilder(); foreach(string line in subTreePicture(root)) output.Append(line); output.Append('\n'); return output.ToString(); private struct NodesAndPrefix public LinkedListNode<string> listNodeToAddAfter; public Node treeNode; public string prefix; public NodesAndPrefix(LinkedListNode<string> listNodeToAddAfter, Node treeNode, string prefix) this.listNodeToAddAfter = listNodeToAddAfter; this.treeNode = treeNode; this.prefix = prefix; static private IEnumerable<string> subTreePicture(Node root) LinkedList<string> thePicture = new LinkedList<string>(); Queue<NodesAndPrefix> queueOfNodesToProcess = new Queue<NodesAndPrefix>(); LinkedListNode<string> listNode = thePicture.AddLast(root.Text); queueOfNodesToProcess.Enqueue(new NodesAndPrefix(listNode, root, "")); while(queueOfNodesToProcess.Count > 0) NodesAndPrefix nextItem = queueOfNodesToProcess.Dequeue(); LinkedListNode<string> nodeToAddAfter = nextItem.listNodeToAddAfter; IList<Node> children = nextItem.treeNode.Children; int lastIndex = children.Count - 1; for(int i = 0; i < lastIndex; ++i) nodeToAddAfter = thePicture.AddAfter(nodeToAddAfter, nextItem.prefix + "├─" + children[i].Text); queueOfNodesToProcess.Enqueue(new NodesAndPrefix(nodeToAddAfter, children[i], nextItem.prefix + "│ ")); if(lastIndex >= 0) nextItem.prefix + "└─" + children[lastIndex].Text); children[lastIndex], nextItem.prefix + " ")); return thePicture;
http://blogs.msdn.com/b/ericlippert/archive/2010/09/09/old-school-tree-display.aspx?Redirected=true
CC-MAIN-2015-35
refinedweb
2,618
57.06
Even better Source Maps with C++, WebAssembly and Cheerp A few months ago I saw this blog post by Mozilla, which is about rewriting part of their source-map library in rust code compiled to WebAssembly. I highly recommend reading it, as it shows in details a practical example of using WebAssembly to improve the performance of a JavaScript library. I also noticed that the current limitations of WebAssembly lead to what I think are some sub-optimal design choices, and I thought that it could be an interesting experiment to write an implementation in C++ using the Cheerp compiler. Cheerp features summary In order to explain why I think Cheerp is particularly fit to solve some of the problems that I saw in the Rust version, I will briefly summarize what makes Cheerp unique as a compiler for the web. Cheerp 2.0 (currently in RC2) supports two different memory models, that can coexist in a single codebase: - The first one is the traditional memory model of Cheerp 1.x, that maps C++ objects to garbage collected JavaScript objects, and compile C++ functions to regular JavaScript functions. The main strength of this model is the seamless interoperability with handwritten JavaScript code. A function or a class that follows this memory model is tagged with the attribute [[cheerp::genericjs]]. - The second one is the linear memory model introduced with Cheerp 2.0: This is the model of traditional architectures like x86 and Arm, and the one used by Emscripten. The code in this case is compiled to Asm.js or WebAssembly. The main strength of this model is the better performance and the better support for type-unsafe operations (pointer arithmetic, arbitrary casting, unions). Functions and structs/classes that follow this memory model is tagged with the attribute [[cheerp::wasm]]. With Cheerp, functions with different tags can call each other, but some restrictions apply: for example it is not allowed for functions with the [[cheerp::wasm]] attribute to have parameters (or return values) of types with the [[cheerp::genericjs]] attribute. The reason is that genericjs objects are compiled to garbage collected JavaScript objects, and currently it is not possible to handle them from WebAssembly. Handling WebAssembly objects from a genericjs function is allowed, since the memory of the WebAssembly module can be exported, and freely read and written from JavaScript functions. genericjs functions can not only handle WebAssembly objects through pointers, but they can also allocate values both on the stack and on the heap. Library Architecture The architecture of the source-map library is based on the strengths of both memory models: - The core algorithms and data structures are compiled to WebAssembly, in order to gain the maximum performance benefits. They are exposed by the RawMappingsclass in raw_mappings.hand raw_mappings.cpp. - The API of the library, designed to be called from handwritten JavaScript, is compiled to JavaScript, so we can directly use JavaScript types (like Arrayand String) as arguments and return values. The API is provided by the Mappingsclass in mappings.hand mappings.cpp. In the Rust implementation, the API is fairly low level, and a manually written JavaScript wrapper is responsible for exposing a nicer interface to the users. In the Cheerp implementation, I tried to make the C++ API directly consumable by users, replacing the manual JavaScript. The original JavaScript API is still there for compatibility with the test suite and the benchmark, but it mostly just forwards arguments. In the following section, when I compare the Cheerp and the Rust implementation APIs, I am considering the JavaScript wrapper as a “user” of the library, since my point is about Cheerp and Rust interoperability with JavaScript. Improving the Interoperability Most of the code in raw_mappings.cpp is just a direct translation from Rust to C++. What I think is more interesting is the code in mappings.cpp, because it showcases some of the interoperability capabilities of Cheerp. In particular, I want to focus on three design choices that I believe are suboptimal in the original Rust implementation, and on how I implemented them with Cheerp. 1. Allocating memory for the mappings string In the Rust implementation, a user must perform the following steps in order to pass the encoded mappings to the library: - Call allocate_mappings(size: usize) -> *mut u8in order to receive a pointer to a buffer allocated in the linear memory: Since only numbers can be returned to JavaScript (and this includes pointers to the WebAssembly linear memory), the Rust code needs to manually encode the information needed to reassemble the Vec in the returned raw memory buffer. 2. Copy the JavaScript string to the buffer : The JavaScript code needs to wrap the buffer in a typed array, and write all the chars from the string into it 3. Call the actual parse_mappings function: Now that the buffer is filled, we can finally call the parsing function. This function manually reconstructs the Vec from the raw pointer, and calls the real implementation function. This whole procedure is quite convoluted and error-prone. It is hidden from the actual users of the library by some handwritten JavaScript code, but it is still a burden for the library author to write all this glue code. In the Cheerp version, I wanted to spare the manually written JavaScript from manually handling memory and pointers. The signature of the entry point is: It directly takes a JavaScript String as argument (we also take the list of sources and names, because we are directly going to return that info as strings instead of numeric indexes) and it returns an instance of a JavaScript class, and all the query operations of the library are exposed as methods of this class: The create function is pretty simple: It first converts the JS string to std::string. This internally does the same loop as the manual JS code above, but it is written in C++, so we can avoid manually handling memory from JS. Then it passes the std::string to the RawMappings creation function, that will parse it in a vector of RawMapping objects and sort it (The code for this is pretty similar to the Rust version). Even though we can lift the user from the burden of manually allocating and passing around memory buffers and pointers in JavaScript, we still require to call the destroy method when the Mappings are not needed anymore, since Javascript does not support destructors or finalizers. 2- Error handling The Rust version handles parsing errors with a global variable last_error, that is set from Rust and read from JavaScript in order to detect if the parsing succeeded or not. The JavaScript wrapper around the Rust API throws an Error with a human readable message in case of failure, based on the error code stored in last_error. The error codes are defined both in Rust and in JavaScript, and they must be kept in sync. Ironically, I found that in the JavaScript side the VlqOverflow error (code 5) is missing. This is the JavaScript code that calls parse_mappings and checks for errors: In the Cheerp version, I defined the error codes in C++, and threw the error directly from there. There is no need for the global error variable, and within the C++ code errors are passed around as return values: The __asm__ statement works with the same syntax as the normal one, just with JavaScript instead of real Assembly. The full code of the Mappings::create function is then: Note that the throw_error function does not throw a real C++ exception (they are not supported in Cheerp yet), and so it does not guarantee that destructors will be called. It should be used carefully and manual resource cleanup may be needed before calling it. 3- Returning objects to JavaScript This is the most painful drawback of the Rust implementation: since the only types that can actually be passed from and to js are integers and floating point numbers, a neat hack is used in the code in order to be able to populate a JavaScript object from a WebAssembly function: the function calls a JavaScript callback function with all the Mapping fields as arguments. The callback constructs a JavaScript object from the arguments and calls another JavaScript function provided by the user with the object as an argument. This is the declaration of the callback on the Rust side: There is an utility function used to call the callback with a Rust Mapping object: This function is then called every time that a Mapping object should be returned from Rust to JavaScript, for example: In this case, the callback is responsible for populating a JavaScript Array with the elements that are passed in the loop, but in other cases only one element needs to be returned. The callback function that is linked as imported in the WebAssembly module is just a dispatcher of the real callback function, that is set from JavaScript just before calling the Rust function, so that the actual behavior match what is expected. The dispatcher function keeps a stack of callbacks to avoid reentrancy issues: Every time that some JavaScript code needs some results from Rust, it needs to push a callback to the stack, call the Rust function, and then pop the callback. Also, the actual callback needs to be defined. Here is the matching callback of the for loop example seen above: While this method for returning complex values is pretty clever and flexible, it involves a lot of boilerplate for every data type that needs to be returned. It also crosses the boundary between WebAssembly and JavaScript, which is expensive. In Cheerp we have first class integration with the JavaScript world, so the solution to this problem is very simple: The functions that return values to the users are tagged with [[cheerp::genericjs]], and directly return JavaScript objects. This is the corresponding code snippet using Cheerp: We fill a JavaScript Array in a for loop that iterates through the raw mappings objects (which resides in the linear memory), and populate a plain JavaScript object with the help of the CHEERP_OBJECT macro. There are many ways to handle JavaScript objects from Cheerp (like using the [[cheerp::jsexported]] attribute on a class, or declaring a class in the special client namespace), but this is an easy way of creating a simple object literal initialized with a few properties. By just returning objects, we can avoid a convoluted control flow, with no performance penalties. In fact, it should be faster to directly populate an object in this way than using the callback trick. Benchmarks I ran the same benchmarks as the original Rust version (look here for a detailed description of each benchmark and test source map). All tests were performed on a MacBook Pro (Retina, 13-inch, Early 2015) with a 2.9GHz dual-core Intel Core i5 processor and 8GB of 1866MHz LPDDR3 onboard memory. The tests were performed in Chrome 67, Firefox 60.01, and Safari 11.0.3. The Rust code was compiled from this version with Rust 1.25. The Cheerp code was compiled from this version of my repository and this version of the Cheerp compiler. The JavaScript code containing the wrapper library and the benchmarks was built from this version of my fork of mozilla/source-map. Performance The Rust and the Cheerp versions have pretty similar performance, with some important exceptions in favour of Cheerp. After all, they are both low level languages, and both use llvm as a backend. However, there are a few differences in the generated code. Some of the code generated by Cheerp is JavaScript code, and not WebAssembly. An example is the Mappings::all_generated_location method, which compiles to JavaScript so we can return a JS Array directly from C++. Most of the time is still spent in the sorting function, which is compiled to WebAssembly, so this should have no performance impact. Indeed, if we look at the set.first.breakpoint benchmark, the two are almost the same. Another interesting difference is the implementation of the Mappings::each_mapping method (benchmarks parse.and.iterate and iterate.already.parsed). It just iterates through all the mappings (the first also parse them first), calling a user provided callback for each one. Compiling it in WebAssembly is inefficient, as the Mozilla blog post also points out: This benchmark was the one we worried about: it involves the most JavaScript ↔ WebAssembly boundary crossing, an FFI call for every mapping in the source map. For all other benchmarks tested, our design minimized the number of such FFI calls. The Rust implementation still performs better than the handwritten JavaScript one (except for Safari, in which it performs the same), but here Cheerp is a clear winner: in the iterate.already.parsed benchmark with the Scala.js source map, C++ compiled with Cheerp is 1.80x faster in Firefox, 1.99x faster in Chrome and 2.05x faster in Safari, compared to Rust. In the Cheerp implementation the method is tagged with the [[cheerp::genericjs]] attribute and compiled to JavaScript, so there is no costly FFI boundary crossing for each iteration anymore. In the other benchmarks, the FFI overhead of the callbacks is indeed not noticeable. The standard deviations on all the benchmarks seem to be higher for Cheerp than for Rust, but I am not exactly sure why. It is possible that having some code compiled to JavaScript put more pressure on the GC. Anyway, it is still well below the pure JavaScript version. Code Size Comparing code size fairly is not as simple as it seems. The Mozilla blog post compares the size of the entire library before and after the Rust rewrite of the BasicSourceMapConsumer component. The whole library though contains also the IndexedSourceMapConsumer and SourceMapGenerator components, and some utility code that is used by both. The result is that even considering only the manually written JavaScript code, the Rust version is bigger. A similar situation exists for the Cheerp implementation: I was able to remove most of the manual JavaScript code of the BasicSourceMapConsumer, and implement it in C++, but the external API of the module is still there, mostly to ensure compliance with the test suite. I also moved to C++ some of the utility code, but I could not remove it from the JavaScript codebase because it is used also elsewhere. In the end, in the Cheerp implementation the manually written JavaScript size is slightly reduced from the original, but the total size is bigger. Compared to the Rust implementation, the total size in C++ with Cheerp is 0.69x. If we compare only the WebAssembly size, the Cheerp version size is 0.41x the Rust one, and if we combine the generated JavaScript and WebAssembly for Cheerp, it is still 0.60x. In order to achieve the size shown here, the Rust .wasm binary is passed through the following external programs (the original size is above 110KB): - wasm-gc: removes all the code that is not reachable from any exported function - wasm-snip: replaces a function with an unreachablestatement. Used to remove functions that the author knows will never be called - wasm-opt: part of the binaryen project, runs some wasm-specific optimizations The JavaScript code is also post-processed by the Google Closure Compiler. Cheerp does not need any external tool (and using Closure for the JavaScript part results in performance penalties, for a very modest size reduction), since the dead code elimination, the JavaScript minification, and some WebAssembly size optimizations are all performed directly by the compiler. Conclusions and future improvements C++ and Rust are both viable options to improve the performance and maintainability of JavaScript libraries. I think that right now Cheerp offers better interoperability with JavaScript and smaller code size. There are plans to improve Rust in these regards: the wasm-bindgen project seems promising, and hopefully some of the tooling necessary for the post-processing of the wasm file will be integrated in the compiler. It would also be nice to have a way to automatically generate the boilerplate for loading the WebAssembly module (which Cheerp has). Cheerp will also keep improving: for example right now there is no native support for 64 bit integers. This means for example that memcpy only copies 32 bits at a time, and this was a bottleneck in one of the benchmarks. There are currently some limitations in how code and data with different memory models can interact, and some of them can probably be removed, resulting in better integration of JavaScript and WebAssembly code.
https://medium.com/leaningtech/even-better-source-maps-with-c-webassembly-and-cheerp-d872276b7d3c?source=collection_home---5------0----------------
CC-MAIN-2018-30
refinedweb
2,756
56.39
Thanks for your comment, Mike. > DM device creation and deletion are done in terms of load (.ctr) and > remove (.dtr). In dm-lc, there are two actors. One is lc_device which is truly a DM device. It is first created as a linear-like device simply backed by a backing store and later attached to a lc_cache for caching to get started. I/Os from the upper layer like filesystems are submitted to lc_device. The another is lc_cache which is NOT a DM device. It is just the context of a cache device. resume_cache routine called by lc-resume-cache command written in Python reads metadata regions on a cache device medium and build an in-memory structure. That is lc_cache. I am sorry to puzzle you. I will make a slide to explain how these structures are built and related. I already made a slide to explain how writes to lc_device are processed but I don't think that is enough for people who want to know how dm-lc is initialized either. > And all these sysfs files are excessive too. You'll notice that DM devices > don't expose discrete sysfs files on a per device basis. All per-device > info is exposed via .status dm-lc gives ID numbers to both lc_device and lc_cache and then manages sysfs under /sys/module/dm_lc like root Hercules:/sys/module/dm_lc# tree devices/ caches/ devices/ └── 5 ├── cache_id ├── dev ├── device -> ../../../../devices/virtual/block/dm-0 ├── migrate_threshold └── nr_dirty_caches caches/ └── 3 ├── allow_migrate ├── barrier_deadline_ms ├── commit_super_block ├── commit_super_block_interval ├── device -> ../../../../devices/virtual/block/dm-1 ├── flush_current_buffer ├── flush_current_buffer_interval ├── force_migrate ├── last_flushed_segment_id ├── last_migrated_segment_id ├── nr_max_batched_migration └── update_interval In the case above, lc_device with ID 5 and lc_cache with ID 3 are built on memory and the lc_device uses the lc_cache. root Hercules:/sys/module/dm_lc# cat devices/5/cache_id 3 I know that device-mapper can not establish a sysfs for a DM device and that's why I elaborated this workaround. All the sysfs are placed in a subtree looks manageable. .status is used. The commands below can show the status, such as static information like memory consumption and cache statistics of the cache device. In my architecture, sysfs is used for variables needed to control the module behavior and status is used for otherwise. root Hercules:/sys/module/dm_lc# dmsetup message lc-mgr 0 switch_to 3 root Hercules:/sys/module/dm_lc# dmsetup status lc-mgr 0 lc-mgr: 0 1 lc-mgr current cache_id_ptr: 3 static RAM(approx.): 37056 (byte) allow_migrate: 1 nr_segments: 3 last_migrated_segment_id: 403 last_flushed_segment_id: 403 current segment id: 404 cursor: 255 write? hit? on_buffer? fullsize? 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 1 0 0 1 0 1 0 0 0 1 1 0 0 1 1 1 0 0 0 0 0 1 83 1 0 0 1 0 0 1 0 1 0 1 1 0 1 0 0 0 1 1 0 1 0 1 1 0 0 1 1 1 0 > All said, I'll try to make time for a more formal review in the coming > weeks. Please feel free to ask more questions in the mean time. Thanks, I will do my best to help your code review. I will make the said slide on this weekend and upload to my repo. I am really looking forward to go through the code review. Akira On 8/1/13 12:57 AM, Mike Snitzer wrote: > On Wed, Jul 31 2013 at 9:04am -0400, > Akira Hayakawa <ruby wktk gmail com> wrote: > >> Thanks, Kumar. >> Your patch is applied. >> >> resume_cache, >> a routine to build in-memory data structures >> by reading metadata on cache device, >> is so complicated in the code and the logic >> to thoroughly implement the error checks. >> >> I am wondering how I should face this problem. >> Only caring about lines >> that allocates large-sized memories >> and forget about anything else >> is what I am thinking now. >> But it is clear that >> it is not a way kernel module should be. >> >> Do you guys have some thoughts on this problem? > > I had a quick look at "resume_cache". I was thinking you were referring > to the .resume method of the target. The .resume method must not > allocate _any_ memory (DM convention requires all memory allocations to > be done in terms of preallocated memory or more commonly as part of the > DM table load, via .ctr)... anyway ignoring that for now. > > I was very surprised to see that you're managing devices in terms of DM > messages like "resume_cache" (which I assume your dm-lc userspace tools > initiate). This seems wrong -- I'm also not seeing proper locking in > the code either (which you get for free if with DM if you use the > traditional DM hooks via target_type ops). But I haven't been able to > do a proper review. > > DM device creation and deletion are done in terms of load (.ctr) and > remove (.dtr). > > And all these sysfs files are excessive too. You'll notice that DM devices > don't expose discrete sysfs files on a per device basis. All per-device > info is exposed via .status > > We _could_ take steps to establish a per-DM-device sysfs namespace; but > that would need to be something wired up to the DM core. So I'd prefer > dm-lc use traditional .status (info and table). > > All said, I'll try to make time for a more formal review in the coming > weeks. Please feel free to ask more questions in the mean time. > > Thanks, > Mike >
https://www.redhat.com/archives/dm-devel/2013-August/msg00000.html
CC-MAIN-2015-11
refinedweb
926
72.97
Version 1.0 is out now! Be sure to give it a try, feedback is appreciated! First release of the new remastered version of one of the most popular mods for star wars battlefront II: The Old Republic made by Delta-1035 This is a brand new remake of the mod; it adds tons of new units, weapons and heroes from the old republic era! Be sure to check out the readme file in order to install this mod. Only registered members can share their thoughts. So come on! Join the community today (totally free - or sign in with your social account on the right) and join in the conversation. very good mod but I need vehileces and more heroes there I liked , only vehicles and for me .. this is great¡¡¡¡ Good Work I have mixed feelings about this. I loved your original TOR mod, and I love this one. Your original one had more playable units, which I liked, but I love having heroes in your new version. The updated skins and graphics are great in this version as well. I just want to combine the great aspects of both together! These are probably my favorite mods out there tbh. Keep up the good work. This comment is currently awaiting admin approval, join now to view. Apart from the overall fine quality of your mod: "... one of the most popular mods ..." "... adds tons of new units, weapons and heroes ..." Do you really mean those two serious? ;) They sound like some promotional set phrase we're used to from publishers, which always let us play ********-bingo on them, because they always overstate their promises, to get their products selling better ... do you really think, that we need this kind of promotion here on ModDB? OK, your mod is relatively well known but there are other 'classic' mods (eg. BFX, Conversion Pack, Mass Effect: Unification and so on ...), which are much more popular, since they existed for a much longer period of time ... "one of the MOST POPULAR mods"? ... not yet. OK, you reworked/overworked/remastered the whole mod (including all units and its gunplay) and you did in fact a pretty good job at it :) ... but have you actually ADDED "TONS OF new units, weapons"? ... no, you altered them. As already stated: I like how you overhauled the mod and I liked it before as well as I still do, but I found it kinda inappropriate how you decriped this remaster of your mod ;) Looks like you don't know english very well, brah. "I hope you'll enjoy this remastered version of - probably - MY most succesful mod" All I said was that the old TOR mod was MY most popular mod, not the most popular out of all bf2 mods. And yes, I HAVE ADDED TONS OF NEW THINGS, you have clearly no mod knowledge, that's why you can not understand all the work that I did. Oh, by the way, I don't get money for this kind of stuff, so there would be no point on me writing misleading articles or whatever. I know, it's just kinda about the principle ;) First of all: I really appreciate your work and I also really like your mod AND I already said that in the comment your are referring to, so you don't have to feel offended :) I love this platform and all the awesome projects which are available here ... including yours! Now to the point: I only read the description of this file and there you wrote (copy pasted from there): "First release of the new remastered version of one of the most popular mods for star wars battlefront II: The Old Republic made by Delta-1035" ... check for yourself ;) But I noticed that you wrote "my" in the article you posted related to this remaster of your mod, so I'm sorry for not-reading this article before ... But there's no need for devaluating my language skills due to your injured pride (for which I am sorry for as already stated) because I would say, that I am capable enough to understand most things, which are written in English ... and I think you are also able to judge those right according to my grammar and vocabulary ;) There is no injured pride, it is just frustrating to see, after all the work that I put in to this, people saying that I have JUST ALTERED some things here and there and bitching about a simple phrase. And I do not care if you write "oh I liked the mod" in a 100 plus words comment of complaint. I hope that comment made you feel better that day. But the way you react(ed) testifies a injured pride ;) As already stated: I didn't want to offend you in any way, it was just the impression I got, when I read the description. Ok, have a nice day. you really a triggered lil bitch huh Actually, yes the TOR mod is one of the most popular (not the first one of course). But everybody know the old republic mod! One point for Delta. Thank you, man. If I sort all Battlefront mods like this: Moddb.com ... you can see what the most popular mods overall ;) I said that it's of course well known, but it's in fact not one of the most popular (not even on other SWBFII modding sites like SWBFGamer and Gametoast) it's one of the better known ones indeed and of course a high quality one ... but no one of the most popular overall. Steamcommunity.com Lonebullet.com Pcgamer.com From just the first google search result page. Looks pretty popular. I have not released the first version of the tor mod on moddb, so that is why it is not listed there. You seem very passionate about talking trash. Playground.ru Dailymotion.com Isolaillyon.it Gamefront.online Makeagif.com Techtudo.com.br I could go on, but I will not. Looks like I made it in a lot of "all time top 10s" and the downloads are a lot. Not the greatest by any means, but one of the most popular, just like I wrote. Now you can go bother somebody else ;) Not even on Gametoast? LOL that is priceless! I've been an active gametoast user for more than a decade, that forum is the place where the old tor mod was born and where it was released on the first place. Dude, get out of here with your ********. @Delta: I'm not sure if he was already playing swbf 2 when the mod was released some years ago, maybe that's the reason...? OffTopic: I just miss the old republic sniper rifle model from the previous version (a cycler rifle i think?...I loved the idea of a loooong rifle :) @Sulac: Sadly all the good mods weren't released at Moddb,don't take this scale. This comment is currently awaiting admin approval, join now to view. Nice Mod! Can you make this playable in the Galactic Conquest? would be really nice. This is very high quality mod, very enjoyable. I do however, have two requests or rather suggestions. Firstly, BF1 map support, and secondly, Old Republic maps and vehicles to go along with this. This comment is currently awaiting admin approval, join now to view. Can you make Jedi Kight as one of the Republic class ? does this mod need any addition downloads or patches to run it? the skins work fine on the default battlefront maps, but how do i get your maps to work?
https://www.moddb.com/downloads/the-old-republic-remastered-v1
CC-MAIN-2022-33
refinedweb
1,270
79.7
Technical Articles Tip: How To Execute C# or VB.NET Seamlessly in ABAP A few days ago I published here the possibility to execute C# of VB.NET seamlessly in IRPA. On the same way is it possible to execute C# or VB.NET seamlessly in ABAP. Therefor I added a wrapper method in the COM library which delivers as return type a string. This method called run_str. It is very interesting to develop a component which can be used in different scenarios. This approach uses the COM interface of the SAP GUI for Windows. Without the SAP GUI for Windows this approach can not be used. It is also not possible to use it with background processes. Let us begin with the wrapper class for the COM library. It contains six public methods and the two most important are: - add_Assembly = Adds references to other dotNET assemblies - run_str = Executes VB.NET or C# code The run_str method has six parameters - Language as string, allowed are CS for C# or VB for VB.NET. - Code as string. - Instance of the class as string. - Method to call as string. - Parameters as string, optional, default value empty string. Parameters are always strings, so it is necessary to convert it in the code in the right format. - Separator for the parameters as string, optional, default value comma The VB.NET or C# source code can be stored as include object inside the SAP system. With the method read_incl_as_string it can be loaded into a string variable. "-Begin----------------------------------------------------------------- CLASS z_cl_dotnetrunner DEFINITION PUBLIC FINAL CREATE PUBLIC . PUBLIC SECTION. "! Loads the dotNETRunner library "! "! @parameter rv_result | 1 for success, otherwise 0 METHODS load_lib RETURNING VALUE(rv_result) TYPE i . "! Frees the dotNETRunner library METHODS free_lib . "! Executes stored OLE activities METHODS flush . "! Adds an assembly "! "! @parameter iv_AssemblyName | Name of the Assembly METHODS add_Assembly IMPORTING VALUE(iv_AssemblyName) TYPE string. "! Executes C# or VB.NET code "! "! @parameter iv_Language | CS for CSharp or VB for VB.NET "! @parameter iv_Code | Code "! @parameter iv_Instance | Instance "! @parameter iv_Method | Method "! @parameter iv_Parameters | Parameters "! @parameter iv_Separator | Separator of the parameters "! "! @parameter rv_result | Value as string METHODS run_str IMPORTING VALUE(iv_Language) TYPE string VALUE(iv_Code) TYPE string VALUE(iv_Instance) TYPE string VALUE(iv_Method) TYPE string VALUE(iv_Parameters) TYPE string DEFAULT '' VALUE(iv_Separator) TYPE string DEFAULT ',' RETURNING VALUE(rv_result) TYPE string. "! Reads an include as string "! "! @parameter iv_incl_name | Name of the include "! "! @parameter rv_str_incl | Include as string METHODS read_incl_as_string IMPORTING VALUE(iv_incl_name) TYPE sobj_name RETURNING VALUE(rv_str_incl) TYPE string . PROTECTED SECTION. PRIVATE SECTION. METHODS isactivex EXPORTING ev_result TYPE i. DATA olib TYPE ole2_object. ENDCLASS. CLASS z_cl_dotnetrunner IMPLEMENTATION. METHOD load_lib."----------------------------------------------------- DATA rc TYPE i VALUE 0. rv_result = 0. CALL METHOD me->isactivex IMPORTING ev_result = rc. CHECK rc = 1. CREATE OBJECT olib 'dotNET.Runner'. CHECK sy-subrc = 0 AND olib-handle <> 0 AND olib-type = 'OLE2'. rv_result = 1. ENDMETHOD. METHOD isactivex."---------------------------------------------------- DATA hasactivex(32) TYPE c. ev_result = 0. CALL FUNCTION 'GUI_HAS_OBJECTS' EXPORTING object_model = 'ACTX' IMPORTING return = hasactivex EXCEPTIONS invalid_object_model = 1 OTHERS = 2. CHECK sy-subrc = 0 AND hasactivex = 'X'. ev_result = 1. ENDMETHOD. METHOD free_lib."----------------------------------------------------- FREE OBJECT olib. ENDMETHOD. METHOD flush."-------------------------------------------------------- CALL METHOD cl_gui_cfw=>flush. ENDMETHOD. METHOD add_assembly."------------------------------------------------- SET PROPERTY OF olib 'Assembly' = iv_assemblyname. ENDMETHOD. METHOD run_str."------------------------------------------------------ CALL METHOD OF olib 'run_str' = rv_result EXPORTING #1 = iv_language #2 = iv_code #3 = iv_instance #4 = iv_method #5 = iv_parameters #6 = iv_separator. ENDMETHOD. METHOD read_incl_as_string."------------------------------------------ DATA: lt_trdir TYPE trdir, lt_incl TYPE TABLE OF string, lv_inclline TYPE string, lv_len_line TYPE i, lv_retincl TYPE string. SELECT SINGLE * FROM trdir INTO lt_trdir WHERE name = iv_incl_name AND subc = 'I' AND appl = space. CHECK sy-subrc = 0. READ REPORT iv_incl_name INTO lt_incl. CHECK sy-subrc = 0. LOOP AT lt_incl INTO lv_inclline. IF strlen( lv_inclline ) > 0. IF lv_inclline+0(1) = '*'. lv_len_line = strlen( lv_inclline ) - 1. lv_inclline = lv_inclline+1(lv_len_line). ENDIF. ENDIF. lv_retincl = lv_retincl && lv_inclline && cl_abap_char_utilities=>cr_lf. CLEAR lv_inclline. ENDLOOP. rv_str_incl = lv_retincl. ENDMETHOD. ENDCLASS. "-End------------------------------------------------------------------- Here now to include examples, the first with VB.NET and the second with C# code. The VB.NET code uses Win32 API functions and both uses Windows.Forms. *Imports System.Windows.Forms *Imports System.Runtime.InteropServices * *Namespace Foo * * Public Class Bar * * <DllImport("user32.dll", EntryPoint:="MessageBox", SetLastError:=True)> _ * Public Shared Function MBox(ByVal hWnd As Integer, ByVal txt As String, _ * ByVal caption As String, ByVal Typ As Integer) As Integer * End Function * * Public Function SayHelloFunc() As String * SayHelloFunc = "Hello World from VB.NET" * End Function * * Public Function Say42Func() As Integer * Say42Func = 42 * End Function * * Public Function Say166Func() As Double * Say166Func = 166.0 * End Function * * Public Function Add(val1 As String, val2 As String) As Integer * Add = CInt(val1) + CInt(val2) * End Function * * Public Function Yepp(val1 As String, val2 As String) As String * Yepp = val1 & val2 * End Function * * Public Sub Yell() * MessageBox.Show("Hello World with native dotNET", "VB.NET") * MBox(0, "Hello World with native Win32 call", "user32.dll", 0) * End Sub * * End Class * *End Namespace *using System.Windows.Forms; * *namespace Foo { * * public class Bar { * * public string SayHelloFunc() { * return "Hello World from CSharp"; * } * * public int Say42Func() { * return 42; * } * * public double Say166Func() { * return 166.0; * } * * public int add(string v1, string v2) { * int val1 = System.Convert.ToInt32(v1); * int val2 = System.Convert.ToInt32(v2); * int res = val1 + val2; * return res; * } * * public void yell() { * MessageBox.Show("Hello World with native dotNET", "C#"); * } * * } * *} Here now a report to use the class and the include objects. At first it is necessary to add the assemblies we need and to read the VB.NET from the include. Then each VB.NET method will be called step by step. Then follows the same procedure with the C# code. "-Begin----------------------------------------------------------------- REPORT z_dotnetrunner_test. DATA: lo_dotNETRunner TYPE REF TO Z_CL_DOTNETRUNNER, lv_vbcode TYPE string, lv_cscode TYPE string, lv_result TYPE string . CREATE OBJECT lo_dotNETRunner. CHECK lo_dotnetrunner->load_lib( ) = 1. lo_dotnetrunner->add_assembly( iv_assemblyname = 'System.Windows.Forms.dll' ). lo_dotnetrunner->add_assembly( iv_assemblyname = 'System.Runtime.InteropServices.dll' ). "-VB.NET code--------------------------------------------------------- lv_vbcode = lo_dotnetrunner->read_incl_as_string('Z_DOTNET_VB_TEST'). lv_result = lo_dotnetrunner->run_str( iv_language = 'VB' iv_code = lv_vbcode iv_instance = 'Foo.Bar' iv_method = 'SayHelloFunc' ). WRITE: / `VB.NET - Return: ` && lv_result. lv_result = lo_dotnetrunner->run_str( iv_language = 'VB' iv_code = lv_vbcode iv_instance = 'Foo.Bar' iv_method = 'Say42Func' ). WRITE: / `VB.NET - Return: ` && lv_result. lv_result = lo_dotnetrunner->run_str( iv_language = 'VB' iv_code = lv_vbcode iv_instance = 'Foo.Bar' iv_method = 'Say166Func' ). WRITE: / `VB.NET - Return: ` && lv_result. lv_result = lo_dotnetrunner->run_str( iv_language = 'VB' iv_code = lv_vbcode iv_instance = 'Foo.Bar' iv_method = 'Add' iv_parameters = '20,22' ). WRITE: / `VB.NET - Return: ` && lv_result. lv_result = lo_dotnetrunner->run_str( iv_language = 'VB' iv_code = lv_vbcode iv_instance = 'Foo.Bar' iv_method = 'Yepp' iv_parameters = 'Hello, Stefan' ). WRITE: / `VB.NET - Return: ` && lv_result. lv_result = lo_dotnetrunner->run_str( iv_language = 'VB' iv_code = lv_vbcode iv_instance = 'Foo.Bar' iv_method = 'Yell' ). "-C# code------------------------------------------------------------- lv_cscode = lo_dotnetrunner->read_incl_as_string('Z_DOTNET_CSHARP_TEST'). lv_result = lo_dotnetrunner->run_str( iv_language = 'CS' iv_code = lv_cscode iv_instance = 'Foo.Bar' iv_method = 'SayHelloFunc' ). WRITE: / `C# - Return: ` && lv_result. lv_result = lo_dotnetrunner->run_str( iv_language = 'CS' iv_code = lv_cscode iv_instance = 'Foo.Bar' iv_method = 'Say42Func' ). WRITE: / `C# - Return: ` && lv_result. lv_result = lo_dotnetrunner->run_str( iv_language = 'CS' iv_code = lv_cscode iv_instance = 'Foo.Bar' iv_method = 'Say166Func' ). WRITE: / `C# - Return: ` && lv_result. lv_result = lo_dotnetrunner->run_str( iv_language = 'CS' iv_code = lv_cscode iv_instance = 'Foo.Bar' iv_method = 'add' iv_parameters = '20,22' ). WRITE: / `C# - Return: ` && lv_result. lv_result = lo_dotnetrunner->run_str( iv_language = 'CS' iv_code = lv_cscode iv_instance = 'Foo.Bar' iv_method = 'yell' ). lo_dotnetrunner->free_lib( ). "-End------------------------------------------------------------------- And now the result of the report as a sequence of images. At first pops up the Windows.Forms dialog of the VB.NET call. Then it pops up the Win32 API dialog. And then the Windows.Forms dialog of the C# code comes up. Last but not least all the return values. The same approach as for IRPA can also be applied to ABAP. Great, VB.NET and C# seamlessly in ABAP and also the possiblity to use Win32 API calls and also the possibility to use Windows.Forms. You can find the COM library dotNETRunner at my homepage. Hi Stefan, I like your idea of calling dotnet from SAP GUI, though I have no current application. However I am wondering what kind of license there is behind our dotnetRunner DLL, and wether you would share its source code. Cheers, Peter Hello Peter, thanks for your comment. I have not even thought about licensing issues. My libraries are all free, you can use it wherever you like. Once the development is complete I can provide the source, that's no secret. Give me a little bit more time and you will find it in the package. Best regards Stefan Hi Stefan! Great work! I currently don't have a use for it either, but you've shown people some new options and that is always a good thing. Since your code has very high reuse potential, instead of hosting your code as a zip on your website, you should consider putting it inside a dedicated github repository. Your code will get more exposure and maybe even potential contributors. Peter is right about asking for the license. In a corporate environment, people do need to ensure their dependencies have licenses and there might be caveats in it for you as well if you don't: ." () It is also very easy to add a license to the repository - there are some standard license templates to choose from. Both people and tools know where to look for the license on github, so this will save you a few questions down the road. If you have any questions, feel free to ask! Since it's using SAPGui for OLE, it won't work in background, obviously. Nonetheless, I really like it! Do you have a use case? An idea could be to create graphics based on data that a report has read from the database. Best regards, Andre Hi Stefan, This is a very useful code for me. However, I did execute the dotNetRunner command as suggested but even after doing it, the code throws an exception SY-SUBRC = 2. Please help in rectifying the error. Regards, Saurabh. Saurabh Banerjee Hello Saurabh, it is necessary to register the dotNET Runner library first via regasm. When you have done that, please take a look at your security configuration to see if instantiation is allowed. Best regards Stefan Stefan Schnell Hi Stefan, This is very helpful blog. but when i register that dll using regsvr32, IIt is giving me error.. Attached screenshot of error for Ref:- Regards, Himanshu Kawatra Himanshu Kawatra Hello Himanshu, dotNETRunner is a dotNET library, it is not possible to register it with regsvr32. Try RegAsm instead. Best regards Stefan Himanshu Kawatra Hello Himanshu, sounds great. To your questions: Best regards Stefan Stefan Schnell Hello Stefan, It is working for GUI successfully.... Now i got another requirement is that... Client want to execute that using UI(CRM) .... We have made transaction launcher.. but i this as per your blog.. it is only available by GUI.... So can you help me to execute that using CRM.... Regards, Himanshu Kawatra Himanshu Kawatra Hello Himanshu, I assume the CRM is a web UI and runs in a browser. You can't use this approach without SAP GUI for Windows. Maybe you can solve your requirement with WebAssembly. Best regards Stefan Hi Stefan, your idea is great and helpfull in my scenario. I need to run a c# class that i've already wrote in sap, but your example doesn't work in my case. From your dotnetrunner.com package i copied the class, the includes and the program report. i copied the files from dotnetrunner.com into C:\Projects\SAP\IRPA\dotNETRunner\1.3.1 and i run the dotNETRunner_Register.reg when i'm launch the program z_dotnetrunner_test the gui ask me to use the file on my pc, i click on "consent" and then nothing, i don't have any output. Only "C# - return:" without any value. Am i doing something wrong? Thank you Hello Patryk, thank you for your reply. Do you execute the reg file in admin mode? This is necessary to set registry entries. Do you try the VBScript programs from the example folder? On this way you can see whether it works at all. Let me know your results. Best regards Stefan Thx Stefan, you helped me! i think that my problem was executing the reg files not as administrator. Now your example works great, but i have a new problem now. I would like to use C# class with udp client so i load the assembly in this way: Next i wrote the C# class with the using "*using System.Net.Sockets;" and if i run the program everything is correct. If i add the variable udpClient (like you can see in the image) everything stop working. i check and the System.Net.Sockets.dll exists, maybe i forgot something? if i debug lv_result in compile_code i can see the error that can't find the udpclient type I'm very appreciate your help, Patryk Hello I have exactly the same problem i execute the reg file in admin mode multiply time But it doesn't work Please help Sina Rahemi Hello Sina, the reg files contains path information like You have to adjust this beforehand. Best regards Stefan Sina Rahemi Hello Sina, as far as I know is regsvr32 a program for registering and unregistering DLLs and ActiveX controls, but not for assemblies. Use regsam instead. Best regards Stefan Thank you for your consideration I faced a problem when registering in regsam “RegAsm : warning RA0000 : No types were registered” what can i do ? Is there a way use dll without register in regsam ? Sina Rahemi Hello Sina, I am not sure that I understand your problem correctly. You want to use an additional dotNET DLL in the code which is executed in the context of dotNETRunner? If yes, use the method AddAssembly. Best regards Stefan Thank you Hello Patryk, runs your code in your development environement? Best regards Stefan yes, my code work if i run it in my enviroment on Microsoft Visual Studio Hello Patryk, I try this C# code: I call the C# code with this VBScript test program: And it works, it delivers success. Add only the assembly System.dll, the UdpClient type is only in this library. Best regards Stefan Hi Stefan, you're great! Your VBS is working well, but i have problem when i'm running the ABAP code. Let's check my code: and the Z_CSHARP_TEST: If i run this code i have a compilation error. The output of the show_xml is this: if i try to run the command with my prompt i don't have the last 3hny32e2.0.cs file. if i run the report with the line commented: * //UdpClient client = new UdpClient(); everything is ok and the C# code run fine. Do you have some idea? i really don't know what to do. Thank you very much and have a nice weekend! Pat Hello Pat, change to Best regards Stefan it works, thank you man, you helped me a lot! I have to offer you a beer, at least! Pat Great to hear that bro. Last weekend I found a few tins of strong beer. I will drink that this evening and think about you.
https://blogs.sap.com/2019/09/17/tip-how-to-execute-c-or-vb.net-seamlessly-in-abap/
CC-MAIN-2022-27
refinedweb
2,470
60.92
Simple Extractor on Sheets or SEOS wraps extraction on Google Sheets Project description Seos Simple Extractor on Sheets (or SEOS) is an extraction tool focused on Google Sheet data scraping. It uses Google's Python API client to access those data; this allows the library to access on a lower level functions defined by Google without the need of using another Sheets abstraction. Features Seos features are put below and their status if they're well-tested using PyTest. Installation # if using poetry # highly recommended poetry add seos # also works with standard pip pip install seos Getting Started Seos uses APIs defined by Google to access Sheets data; but the idea is that developing with Seos should be understandable when connecting to a data and changing contexts; e.g. change in Sheet Name or change in scope. The initial step would be to pass a credentials file and the sheet ID as an entrypoint to the data. It assumes that you have a credentials file taken from Google Cloud in JSON format. from seos import Seos extractor = Seos( credentials_file="./credentials.json" spreadsheet_id="<SPREADSHEET-ID>" ) Once an extractor context is created, we can then define the sheet name and scope then executing extract if you're happy with the parameters. extractor.sheet_name = "Report - June 1, 1752" extractor.scope = "A1:D1" extractor.extract() With this, changing the scope and the sheet name will act as a cursor for your sheet data. We can get anything from the sheet just by changing the scope. extractor.sheet_name = "Report - June 1, 1752" extractor.scope = "A1:D" # get all from A1 until end of column D extractor.extract() We can even do sheet switching if necessary for data that contains several contexts. extractor.sheet_name = "Report - June 1, 1752" extractor.scope = "A1:D" extractor.extract() extractor.sheet_name = "Report - June 2, 1752" extractor.scope = "B5:G5" extractor.extract() Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/seos/
CC-MAIN-2020-16
refinedweb
333
57.27
This article shows creation of a static library and how to use a static library using Visual Studio. The sample projects in this article were created using Visual Studio 2010. A static library consists of object files that are linked together with an exe file. Object files are the output of compilers of unmanaged code and consists of functions that can only be used by unmanaged code. A static library is linked with the code that uses (calls) it by the link editor. If you are not familiar with link editors then the concept and purposes of link editors probably seems strange to you. The important thing is that a static library is combined with the other code such that everything is put into one executable file. A static library can be used by multiple programs and when it is, it is copied into every executable file it is used in. A static library cannot be used by managed (.Net) code directly therefore they are useless for most C# programmers. One practical use of a static library is to split a very large unmanaged project into two or more smaller projects. If that is done then the static library is likely used in just the one project and the static library project would probably be created after the project that uses it. The static libray might be tested using the project that uses it or another project could be created just for testing purposes. Another practical use of a static library is as a general-purpose library used by multiple unmanaged applications. It would then likely exist in a project created for it and a test application would be created for it. In this article, I am creating a static library as if it will be a general-purpose library used by multiple applications. Therefore we will begin by creating the project for the static library, then we will create a console application to test it. The console application will show the details of how to use a static library. It is possible to create a static library that can be called by non-managed languages other than C++ but if that is to be done then the functions must be linked with "C" linkage as described in this article. Static libraries are not defined by the C++ language; they are a Windows thing. Each operating system implements static libraries differently, or might not implement them at all. Therefore, to create a static library project, we must create a Win32 project. To create the static library project, start by creating a "Win32 Project". When you do, the first Wizard page will be the "Welcome to the Win32 Application Wizard" as in the following: In the next wizard page, change the "Application type" radio button to "Static library". Leave the others with default values. You can leave the "Precompiled header" option on. The Win32 Application Wizard will look something as in: When the project was created, there will be source code files that were generated for the project named stdafx.h, stdafx.cpp and targetver.h. We will not change them. We do however need to add a header (StaticLibrarySample.h) and implementation (StaticLibrarySample.cpp) file. In the Solution Explorer, right-click on the project and select "Add" | "New Item...". Then select "Header File (.h)" as in: Be sure to give the file a name. Then do the same to create a cpp file (C++ File (.cpp)), as in: For most programs, you would add #includes to the stdafx.h file but the details of that are outside the scope of this article. Now modify the header to be as: #pragma once extern "C" { int Test(int a, int b); } Note that the extern "C" makes the function callable by C and by other languages as well. The disadvantage of extern "C" is that it prevents use of classes and other C++ features with any functions exposed for use by callers of the static library. Modify the implementation file to be: #include "stdafx.h" #include "StaticLibrarySample.h" extern "C" { int Test(int a, int b) { return a + b; } } You can now build the project. The build will look something as: 1>------ Build started: Project: StaticLibrarySample, Configuration: Debug Win32 ------ 1> stdafx.cpp 1> StaticLibrarySample.cpp 1> StaticLibrarySample.vcxproj -> C:\Users\Sam\Documents\Visual Studio 2010\Projects\C-SharpCorner Articles\StaticLibrarySample\Debug\StaticLibrarySample.lib ========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ========== To create a program that uses the library, go to the Solution Explorer and right-click on the Solution. Select "Add" | "New Project...". Create a Visual C++ Win32 Console Application. In the Application Settings window, keep the defaults; we do not want an empty project and we do want pre-compiled headers. The "Add New Project" window will look something as: When the project has been generated, set it as the Startup project. We need to add the include directory to the project. Go to the Solution Explorer and right-click on the test project, then select "Properties". In the left side, under "Configuration properties" expand the "C/C++" node, then select "General". In the top-right is "Configuration"; change it to "All Configurations". Then in "Additional Include Directories" add the directory of the static library project where the static library's header (StaticLibrarySample.h) is at. The window will look something like: Next we need to specify the library to be used. In the project properties, and with the configuration set for All Configurations, go to the "Input" node of the "Linker" properties. In the "Additional Dependencies" add the name of the static library; just the filename and extension, but not the directory. The properties window will look something like: If you click in the box for entering the Additional Dependencies then you will see an arrow at the right of that. Click the arrow and select "<Edit...>". You will then get a dialog for editing the dependencies that looks like: Next we need to specify the directory of the library. That is done in the project properties, but this time we will specify different directories for each configuration. So with the configuration set for "Active(Debug)", go to the "General" node of the "Linker" properties. Specify the directory where the Debug configuraton of the library is at. The property page will look something like: Do the same for the Release configuration. Then in the test program's cpp file, after the include for "stdafx.h", add an #include for "StaticLibrarySample.h". Then in the main function, add the line: _tprintf(_T("%d"), Test(1, 9)); One more thing worth doing is to ensure that the solution knows that the test project depends on the static library. Go to the Solution Explorer again but right-click on the Solution then choose Properties. Then in the left side click on "Project Dependencies" under the "Common Properties" node. Ensure that the test project has the checkbox checked for the static library. That property page looks like: Build and test the program. View All
http://www.c-sharpcorner.com/UploadFile/SamTomato/the-basics-of-creating-a-static-library-using-visual-cpp/
CC-MAIN-2017-26
refinedweb
1,174
64.41
Im having difficulties on the first problem in this pset1. Specifically pennies.c. So far I have done the first two requirements in the pset but Im stuck on the part where I have to double the pennies with the number of days the user inputs. I know it has to be a for loop and I also think is a combination of the while loop as well but I am not certain. I have been stuck for this problem for numerous days! I have tried working on this myself for while and Im finally asking for help on this one... Here is my code, I know asking for the days and pennies are right. The pennies pset is on pages 8 and 9<16. Thanks to all. - Code: Select all #include <cs50.h> #include <stdio.h> int main (void) { /* Gets the number of days in a month. */ long long days; do { printf ("How many days in a month: "); days = GetInt(); } while ((28>days) || (31 < days)); /* Checks if the users input is valid. */ /* Gets the number of pennies in the first day */ long long pens; do { printf ("How many pennies will you receive the first day: "); pens = GetInt(); } while (pens <= 0); /* Checks if the users input is valid. */ for (days = 1; days <= 31; days *=2); long long total1; { for (pens = 0; pens <=31; pens *=2); total1 = pens * days; } long long total2; { total2 = pens *= days * total1; printf ("$%.2lld", total 2); } }
http://www.hackthissite.org/forums/viewtopic.php?f=102&t=8208&p=63440
CC-MAIN-2016-26
refinedweb
239
89.48
Pytorch is a deep learning framework; a set of functions and libraries which allow you to do higher-order programming designed for Python language, based on Torch. Torch is an open-source machine learning package based on the programming language Lua. It is primarily developed by Facebook’s artificial-intelligence research group and Uber’s Pyro probabilistic programming language software is built on it. PyTorch is more “pythonic” and has a more consistent API. It also has native ONNX model exports which can be used to speed up inference. PyTorch shares many commands with numpy, which helps in learning the framework with ease. At its core, PyTorch provides two main features: - An n-dimensional Tensor, similar to Numpy but can run on GPUs - Automatic differentiation for building and training neural networks If you’re using anaconda distribution, you can install the Pytorch by running the below command in the anaconda prompt. conda install pytorch-cpu torchvision-cpu -c pytorch The rest of the article is structured as follows: - What is Colab, Anyway? - Setting up GPU in Colab - Pytorch Tensors - Simple Tensor Operations - Pytorch to Numpy Bridge - CUDA Support - Automatic Differentiation - Conclusion If you want to skip the theory part and get into the code right away, Niranjankumar-c/DeepLearning-PadhAI Colab — Colaboratory Google Colab is a research tool for machine learning education and research. It’s a Jupyter notebook environment that requires no setup to use. Colab offers a free GPU cloud service hosted by Google to encourage collaboration in the field of Machine Learning, without worrying about the hardware requirements. Colab was released to the public by Google in October 2017. Getting Started with Colab - Create a new notebook via File -> New Python 3 notebook or New Python 2 notebook You can also create a notebook in Colab via Google Drive - Go to Google Drive - Create a folder of any name in the drive to save the project - Create a new notebook via Right click > More > Colaboratory To rename the notebook, just click on the file name present at the top of the notebook. Setting up GPU in Colab In Colab, you will get 12 hours of execution time but the session will be disconnected if you are idle for more than 60 minutes. It means that for every 12 hours Disk, RAM, CPU Cache and the Data that is on our allocated virtual machine will get erased. To enable GPU hardware accelerator, just go to Runtime -> Change runtime type -> Hardware accelerator -> GPU Pytorch — Tensors Numpy based operations are not optimized to utilize GPUs to accelerate its numerical computations. For modern deep neural networks, GPUs often provide speedups of 50x or greater. So, unfortunately, numpy won’t be enough for modern deep learning. This where Pytorch introduces the concept of Tensor. A Pytorch Tensor is conceptually identical to an n-dimensional numpy array. Unlike the numpy, PyTorch Tensors can utilize GPUs to accelerate their numeric computations Let’s see how you can create a Pytorch Tensor. First, we will import the required libraries. Remember that torch, numpy and matplotlib are pre-installed in Colab’s virtual machine. import torch import numpy import matplotlib.pyplot as plt The default tensor type in PyTorch is a float tensor defined as torch.FloatTensor. We can create tensors by using the inbuilt functions present inside the torch package. ## creating a tensor of 3 rows and 2 columns consisting of ones >> x = torch.ones(3,2) >> print(x) tensor([[1., 1.], [1., 1.], [1., 1.]]) ## creating a tensor of 3 rows and 2 columns consisting of zeros >> x = torch.zeros(3,2) >> print(x) tensor([[0., 0.], [0., 0.], [0., 0.]]) Creating a tensor by random initialization To increase the reproducibility, we often set the random seed to a specific value first. >> torch.manual_seed(2) #generating tensor randomly >> x = torch.rand(3, 2) >> print(x) #generating tensor randomly from normal distribution >> x = torch.randn(3,3) >> print(x) Simple Tensor Operations Slicing of Tensors You can slice PyTorch tensors the same way you slice ndarrays #create a tensor >> x = torch.tensor([[1, 2], [3, 4], [5, 6]]) >> print(x[:, 1]) # Every row, only the last column >> print(x[0, :]) # Every column in first row >> y = x[1, 1] # take the element in first row and first column and create a another tensor >> print(y) Reshape Tensor Reshape a Tensor to a different shape >> x = torch.tensor([[1, 2], [3, 4], [5, 6]]) #(3 rows and 2 columns) >> y = x.view(2, 3) #reshaping to 2 rows and 3 columns Use of -1 to reshape the tensors -1 indicates that the shape will be inferred from previous dimensions. In the below code snippet x. >> x = torch.tensor([[1, 2], [3, 4], [5, 6]]) #(3 rows and 2 columns) >> y = x.view(6,-1) #y shape will be 6x1 Mathematical Operations #Create two tensors >> x = torch.ones([3, 2]) >> y = torch.ones([3, 2]) #adding two tensors >> z = x + y #method 1 >> z = torch.add(x,y) #method 2 #subtracting two tensors >> z = x - y #method 1 >> torch.sub(x,y) #method 2 In-place Operations In Pytorch all operations on the tensor that operate in-place on it will have an _ postfix. For example, add is the out-of-place version, and add_ is the in-place version. >> y.add_(x) #tensor y added with x and result will be stored in y Pytorch to Numpy Bridge Converting a Pytorch tensor to a numpy ndarray is very useful sometimes. By using .numpy() on a tensor, we can easily convert tensor to ndarray. >> x = torch.linspace(0 , 1, steps = 5) #creating a tensor using linspace >> x_np = x.numpy() #convert tensor to numpy >> print(type(x), type(x_np)) <class 'torch.Tensor'> <class 'numpy.ndarray'> To convert numpy ndarray to pytorch tensor, we can use .from_numpy() to convert ndarray to tensor. >> a = np.random.randn(5) #generate a random numpy array >> a_pt = torch.from_numpy(a) #convert numpy array to a tensor >> print(type(a), type(a_pt)) <class 'numpy.ndarray'> <class 'torch.Tensor'> During the conversion, Pytorch tensor and numpy ndarray will share their underlying memory locations and changing one will change the other. CUDA Support To check how many CUDA supported GPU’s are connected to the machine, you can use the code snippet below. If you are executing the code in Colab you will get 1, that means that the Colab virtual machine is connected to one GPU. is used to set up and run CUDA operations. It keeps track of the currently selected GPU. >> print(torch.cuda.device_count()) 1 If you want to get the name of the GPU Card connected to the machine: >> print(torch.cuda.get_device_name(0)) Tesla T4 The important thing to note is that we can reference this CUDA supported GPU card to a variable and use this variable for any Pytorch Operations. All CUDA tensors you allocate will be created on that device. The selected GPU device can be changed with a torch.cuda.device context manager. #Assign cuda GPU located at location '0' to a variable >> cuda0 = torch.device('cuda:0') #Performing the addition on GPU >> a = torch.ones(3, 2, device=cuda0) #creating a tensor 'a' on GPU >> b = torch.ones(3, 2, device=cuda0) #creating a tensor 'b' on GPU >> c = a + b >> print(c) tensor([[2., 2.], [2., 2.], [2., 2.]], device='cuda:0') As you can see from the above code snippet the tensors are created on GPU and any operation you do on these tensors will be done on GPU. If you want to move the result to CPU you just have to do .cpu() #moving the result to cpu >> c = c.cpu() >> print(c) tensor([[2., 2.], [2., 2.], [2., 2.]]) Automatic Differentiation In this section, we will discuss the important package called automatic differentiation or autograd in Pytorch. The autograd package gives us the ability to perform automatic differentiation or automatic gradient computation for all operations on tensors. It is a define-by-run framework, which means that your back-propagation is defined by how your code is run. Let’s see how to perform automatic differentiation by using a simple example. First, we create a tensor with requires_grad parameter set to True because we want to track all the operations performing on that tensor. #create a tensor with requires_grad = True >> x = torch.ones([3,2], requires_grad = True) >> print(x) tensor([[1., 1.], [1., 1.], [1., 1.]], requires_grad=True) Perform a simple tensor addition operation. >> y = x + 5 #tensor addition >> print(y) #check the result tensor([[6., 6.], [6., 6.], [6., 6.]], grad_fn=<AddBackward0>) Because y was created as a result of an operation on x, so it has a grad_fn. Perform more operations on y and create a new tensor z. >> z = y*y + 1 >> print(z) tensor([[37., 37.], [37., 37.], [37., 37.]], grad_fn=<AddBackward0>) >> t = torch.sum(z) #adding all the values in z >> print(t) tensor(222., grad_fn=<SumBackward0>) Back-Propagation To perform back-propagation, you can just call t.backward() >> t.backward() #peform backpropagation but pytorch will not print any output. Print gradients d(t)/dx. >> print(x.grad) tensor([[12., 12.], [12., 12.], [12., 12.]]) x.grad will give you the partial derivative of t with respect to x. If you are able to figure out how we got a tensor with all the values equal to 12, then you have understood the automatic differentiation. If not don't worry just follow along, when we execute t.backward() we are calculating the partial derivate of t with respect to x. Remember that t is a function of z, which in turn is a function of x. d(t)/dx = 2y * 1 at x = 1 and y = 6, where y = x + 5 The important point to note is that the value of the derivative is calculated at the point where we initialized the tensor x. Since we initialized x at a value equal to one, we get an output tensor with all the values equal to 12. The entire code discussed in the article is present in the Kaggle Kernel. Feel free to fork it or download it. The best part is that you can directly run the code in Kaggle Kernel once you fork it, don’t need to worry about installing the packages. GettingStartedWithPytorch_GPU If Colab is your jam, click here to execute the code directly and get your hands dirty. Niranjankumar-c/DeepLearning-PadhAI Conclusion In this post, we briefly looked at the Pytorch & Google Colab and we also saw how to enable GPU hardware accelerator in Colab. Then we have seen how to create tensors in Pytorch and perform some basic operations on those tensors by utilizing CUDA supported GPU. After that, we discussed the Pytorch autograd package which gives us the ability to perform automatic gradient computation on tensors by taking a simple example. If you have any issues or doubts while implementing the above code, feel free to ask them in the comment section below or send me a message in LinkedIn citing this article. Recommended Reading - Deep Learning Best Practices: Activation Functions & Weight Initialization Methods — Part 1 - Demystifying Different Variants of Gradient Descent Optimization Algorithm In my next post, we will discuss how to implement the feedforward neural network using Pytorch (nn.Functional, nn.Parameters). So make sure you follow me on medium to get notified as soon as it drops. Until then Peace :) NK. Author Bio Niranjan Kumar is Retail Risk Analyst Intern at HSBC Analytics division. He is passionate about Deep learning and Artificial Intelligence. He is one of the top writers at Medium in Artificial Intelligence. You can find all of Niranjan’s blog here. You can connect with Niranjan on LinkedIn, Twitter and GitHub to stay up to date with his latest blog posts. I am looking for opportunities either full-time or freelance projects, in the field of Machine Learning and Deep Learning. If there are any relevant opportunities, feel free to drop me a message on LinkedIn or you can reach me through email as well. I would love to discuss. Originally published at on June 9, 2019.
https://hackernoon.com/getting-started-with-pytorch-in-google-collab-with-free-gpu-61a5c70b86a
CC-MAIN-2019-35
refinedweb
2,024
55.34
A jQuery UI-Based Date Picker for ASP.NET MVC 5 by Brady Kelly Introduction This article provides guidance on how to use the jQuery UI Datepicker widget for date fields in MVC forms. I assume you have a working knowledge of Visual Studio, C#, ASP.NET MVC, and Razor. You should know to add a controller and view to a new or existing project, and how to run that project to observe changes made to it. The accompanying sample code in this article was written using Visual Studio 2013 with Update 4, but it should run in most other modern versions of Visual Studio. Just What Is an Editor Template? To begin, you should understand what an editor template is and why you should want one. I'll start by providing you a very plain vanilla example of date input. For this example, let's look at an edit field for a date value. We will use my tiny PersonViewModel class, which is good only for demonstration purposes: namespace SimpleTemplates.ViewModels { public class PersonViewModel { public string LastName { get; set; } public string FirstName { get; set; } public DateTime? DateOfBirth { get; set; } } } The DateOfBirth property is nullable because we don't want to require people to divulge more personal detail than absolutely necessary. Now, to add an edit view for Person, you should, for this example at least, delegate most of the work to Visual Studio's Add View Wizard, as in Figure 1. Figure 1: Add an Edit View Adding a view in this manner causes Visual Studio to scaffold the view for you; in other words, automatically generate appropriate Razor mark-up for an Edit view for the PersonViewModel. Open the Edit view, and you can see that Visual Studio has generated a call to the EditorFor HTML Helper method. @Html.EditorFor(model => model.DateOfBirth, new { htmlAttributes = new { @class = "form-control" } }) The EditorFor HTML helper method is what is known as a templated helper. This means the MVC runtime will dynamically choose a template to determine how it should render the HTML required for a field. More comprehensive coverage of template helpers and editor templates is beyond the scope of this article, but there are several walkthroughs on this subject Without being told otherwise, and for web UI purposes, the MVC framework treats DateTime properties as text, meaning the HTML output by the helper method ends up along the following lines: <input class="form-control text-box single-line input-validation-error" data- This is rather unfortunate, because a DateTime value is not formatted, and the end user is free to type what they want, regardless of whatever format we have chosen for DateTime values, as in Figure 2. Figure 2: Default Edit Person Although the built-in jQuery validation catches obvious errors, this application will still be vulnerable to, for example, a very common issue with date values. The confusion of day and month can occur, because for some locales, the standard is dd/MM/yyyy, and for others it is MM/dd/yyyy, and for days of the month less than or equal to 12, there is no way to discern between day and month. HTML 5 Input Type : Date Fortunately, the HTML 5 standard provides some relief from this date format chaos, in that when a compatible browser finds an input element with type date, it renders a Date Picker 'dialogue'. Now, we are stuck on the question of how to get Razor to output such an input element for the DateOfBirth property, versus the text type input it outputs by default. The EditorFor helper method will render HTML mark-up based on a few factors, being: - The .NET data type of the source property - View model metadata for the source property - A custom editor template for the source property First, the data type: Just to be clear, this "data type" is the .NET DateTime data type, not the data annotations 'data type'. For our DateOfBirth property, the Razor engine will generate an input element with type="datetime". However, it seems browsers don't currently deal very well with this new-fangled input element, so they simply treat this input element the same as a raw text input, as seen in Figure 2. I tried Chrome, IE11, and FireFox, to no avail. This StackOverflow answer suggests browsers have little confidence in this type. Then, the metadata: In ASP.NET MVC, we can attach metadata to viewmodel (or model) properties by using data annotation attributes; for example: public class PersonViewModel { public string LastName { get; set; } public string FirstName { get; set; } [DataType(DataType.Date)] [DisplayFormat(DataFormatString = "{0:yyyy-MM-dd}", ApplyFormatInEditMode = true)] public DateTime? DateOfBirth { get; set; } } Here, you are telling MVC to treat the DateOfBirth property as a Date value, and thus render an input element with type="datetime". You also tell MVC what format you want the date in. The browser handles this input type better, and it renders a date picker: Figure 3: Chrome's Date Picker As nice as Chrome's date picker is, it is quite the black box, and it allows hardly any customisation regarding date format, scheming, what it allows the user to change, and more. For this kind of configuration, we have to turn to third party date pickers, and for its cost (zero), the jQuery UI Datepicker is a shining example. The jQuery UI Date Picker You can see a demo of the jQuery UI Datepicker. Figure 4 shows the demo page for this widget, configured to open when its calendar icon is clicked: Figure 4: Datepicker Demo This demo page also includes other examples of common configurations and styles for this cool little widget. You can click the "view source" link, at the bottom of the page, to see how simple it is to use this control. You can view more technical reference information by clicking the "API documentation" link. Adding jQuery UI to the Project While there are many different ways of setting an MVC project up to include jQuery UI, I have opted for, in my opinion, the simplest method, which is installing the necessary files via Nuget: Step 1 In Visual Studio's Package Manager Console, enter the following command: Install-Package jQuery.UI.Combined Step 2 Your project should now have the jQuery UI JavaScript and style sheet files, as seen in Figures 5 and 6: Figure 5: jQuery UI Stylesheets Figure 6: jQuery UI Scripts All the CSS files under the 'Content/themes' folder belong to jQuery UI, as do the two 'jquery-ui-1.11.4' files under the 'Scripts' folder. Step 3 - Include these files in the web app's styles and scripts bundles. - Reference these files in views where they are required. Step 4 To include required files in the app's bundles, edit the BundleConfig.cs file: Figure 7: BundleConfig Location Step 5 Add the following two code snippets to the RegisterBundles method in BundleConfig: bundles.Add(new ScriptBundle("~/bundles/jqueryui") .Include("~/Scripts/jquery-ui-{version}.js")); bundles.Add(new StyleBundle("~/Content/jqueryui") .Include("~/Content/themes/base/all.css")); Step 6 Now, to reference the required files in views. Because you are aiming at providing the same user experience wherever a date-time value is edited, you can reference the jQuery UI files in the master layout for our application; for example: Figure 8: _Layout Location Step 7 At the top of _Layout.cshtml, in the head element, add the line shown in bold: <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>@ViewBag.Title - My ASP.NET Application</title> @Styles.Render("~/Content/css") @Styles.Render("~/Content/jqueryui") @Scripts.Render("~/bundles/modernizr") Step 8 At the bottom of _Layout.cshtml, add the line shown in bold: @Scripts.Render("~/bundles/jquery") @Scripts.Render("~/bundles/bootstrap") @Scripts.Render("~/bundles/jqueryui") @RenderSection("scripts", required: false) Step 9 Now, you can get the long awaited Datepicker widget working in your edit view. To do this, call the datepicker method on all input elements with a CSS class of datepicker. Add the highlighted code, near the bottom of _Layout.cshtml: Figure 9: Datepicker JavaScript Step 10 This code tells the Datepicker widget to: - Provide all elements that have the CSS class datepicker with Datepicker functionality. - Use the date format "dd/mm/yy". This translates to a .NET format of "dd/MM/yyyy". - Allow users to change the selected year. By default, the user is only allowed to change the selected month and day. - Show the calendar dropdown when the user clicks the "button". Step 11 The "button" functionality of this widget is rather convoluted. In its raw form, we developers must provide a relative URL to an image file for the button icon, but the jQuery UI library provides no such icon. Step 12 All icons in the jQuery UI library are provided by CSS sprites, where multiple icons are grouped together in large image files. To access a calendar icon in one of these sprites, we call upon the jQuery UI button widget, which allows us to configure its icons property with a reference into the icon's sprite. Step 13 Now, we need to add the datepicker CSS class to the input element for our DateOfBirth property. In fact, we rather need to tell the Razor view engine that all properties decorated with the [DataType(DataType.Date)] attribute should include this CSS class. We do this by defining an Editor Template for such properties. Step 14 Start by adding a new folder and file to our project, under the built-in 'Views/Shared' folder: Figure 10: Adding the Date template Step 15 Please note that exact naming is critical here. The folder under 'Shared' must be called 'EditorTemplates', and the template file must be called 'Date.cshtml'. This file name corresponds to the value of Date used in the DataType attribute, in our view model. Step 16 Add the following code, without alteration of any kind, to the Date.cshtml file: @model DateTime? @Html.TextBoxFor(model => Model, "{0:dd-MM-yyyy}", new { @class = "form-control jqueryui-marker-datepicker" }) Step 17 I use a nullable DateTime for the model, so this template applies to both nullable and non-nullable DateTime properties, and I include a format string, to ensure that the initial format of the date input matches the format of the Datepicker widget. Step 18 If you run our project now, and navigate to the Person/Edit view, you should be able to see the new widget in action, like Figure 11: Figure 11: End result About the Author Brady Kelly is a software developer in Johannesburg, South Africa. He specialises in C# and ASP.NET MVC, with SQL Server, with special fondness for MVC and jQuery. He has been in this business for about eighteen years, and is currently trying to master Angular and WCF, and somehow find a way to strengthen his creative faculties. TechPosted by Ven on 09/27/2017 04:53pm Detailed explanation! We are building HTML helpers for all controls to include layouts. This is a good reference.Reply Deployment jquery-uiPosted by LYAS on 08/31/2017 02:00pm Hello, This is very helpful article. I just have a problem, everything works fine on development, but when the applictaion is published and the files are sent to production server, the datepicker is not showed correctly, it looks without format, just text over the other html elements. The content folder and EditorTemplate are included during deployment. What is the right way to deploy jquery-ui? Thanks in advanceReply Senior DeveloperPosted by Jim on 07/28/2017 08:21pm Great article! The other postings had left our you step 5. Mine worked when I finally did it.Reply localizationPosted by zulurl on 11/16/2016 07:08pm Great job! It works for me How can i change the language and character codes?Reply DatePicker generates an error if it is disabledPosted by Boris Misic on 08/24/2016 08:28pm How to avoid JavaScript error "'$' is undefined" when datepicker control is disabled? Thanks.Reply ThanksPosted by Louis on 04/20/2016 07:08am Great article.Reply Initial ValuePosted by Mary Flaws on 03/07/2016 11:56pm How do I set an initial value from my model. Using the example here, how do I populate it with the existing value for the DOB? I have tried a million things that do not work.Reply Thank YouPosted by lionel on 03/03/2016 01:11pm you are the best, after been going in circles. Only your tutorials helped me, thank you.Reply Excellent tutorial!Posted by Ben on 02/08/2016 05:27pm Excellent tutorial! I have not tried the whole but I got it to work finally after 5 min! The steps are clear and well explained. Great jobb BradyReply MehPosted by AJI on 01/20/2016 05:35pm Didn't work. Followed to a t. This was a heck of a lot easier to do using php without all these bundles and whatnot.Reply
https://www.codeguru.com/csharp/.net/net_asp/a-jquery-ui-based-date-picker-for-asp.net-mvc-5.html
CC-MAIN-2019-26
refinedweb
2,173
61.26
Publish DHT11 Sensor Data To Adafruit IO Platform using ESP8266 In this tutorial I will show you how send temperature and humidity data from DHT11 sensor to Adafruit IO (AIO) platform via MQTT protocol. I will develop a sensor node which acts as an MQTT client and publishes the data. On AIO platform the published data will be displayed using graphs. Components Required - NodeMCU X 1 - DTH11 Sensor X 1 - Jumper Wires X 3 (Male-To-Male) - Breadboard X 1 In this tutorial I am using DHT11 sensor which has three pins, DATA, VCC and GND as shown in following figure. You can also use the other DHT11 sensor which has 4 pins. Circuit Connect the DATA pin of DHT11 sensor to NodeMCU D6 pin and VCC, GND to Vin and GND pins respectively of NodeMCU. The circuit diagram is shown below. Arduino Code #include <ESP8266WiFi.h> #include "Adafruit_MQTT.h" #include "Adafruit_MQTT_Client.h" #include "DHT.h" /************************* WiFi Access Point *********************************/ #define WLAN_SSID "YOUR_WIFI_SSID" #define WLAN_PASS "YOUR_WIFI_PASSWORD" /************************* Adafruit.io Setup *********************************/ #define AIO_SERVER "io.adafruit.com" #define AIO_SERVERPORT 1883 // use 8883 for SSL #define AIO_USERNAME "YOUR_AIO_USERNAME" #define AIO_KEY "YOUR_AIO_KEY" /************* DHT11 Setup ********************************************/ #define DHTPIN D6 #define DHTTYPE DHT11.begin(); delay(10);()); }); } // Function to connect and reconnect as necessary to the MQTT server. // Should be called in the loop function and it will take care if connecting.!"); } Point to Remember Please keep noted that I have added 1 minute delay in code (i.e. in line 93) above while publishing the values to AIO platform. Don't upload values very frequently otherwise your account may be blocked for few seconds! Build User Interface Our next step is to build the user interface (UI) where we can display the data published by sensor node. To create UI, logon to your Adafruit IO account. After you login, you will see AIO_Controller dashboard from previous tutorial. If you have not created the dashboard then please follow the instructions in previous tutorial. The first thing we need is the Feeds where our data will be published. The Feeds are the MQTT topics. These topics are then bind to our UI which will display data graphically. In this tutorial I'll show another method to create feeds. For creating feeds goto to Feeds->View All as shown in following figure. You will see already created feeds 'brightness' and 'onoff' from our previous article as shown below. Don't bother if you do not see anything, just create the feeds as instructed in this article. To do this click on Actions->Create a New Feed as shown below. You will see Create new feed dialogbox as shown in following figure. Name the feed as 'temp'. Repeat the action and create another feed 'hum' as shown below. Now you will have two feeds 'temp' and 'hum' to store temperature and humidity data as shown below. Now go to Dashboard->AIO_Controller And click the '+' button to add new block (i.e. Line Chart UI) as shown in following image. Click on Line Chart UI and select the 'temp' feed from the list of feeds in the dialog box. Click next and specify block settings Similarly add another Line Chart block by selecting 'hum' feed as shown in following two images You will now see dashboard with two line chart blocks. Now connect your device and you will see temperature and humidity values published on the UI. Is this article helpful? Popular
https://hubpages.com/technology/Publish-DHT11-Sensor-Data-To-Adafruit-IO-Platform-using-ESP8266
CC-MAIN-2020-24
refinedweb
568
66.33
From: Thorsten Ottosen (nesotto_at_[hidden]) Date: 2004-04-29 07:53:08 Hi Pavel, Thanks for your comments. They are very thoughrough as usual. | It would be useful to have many more examples in documentation, | showing how to use it together with other Boost libraries. Can you give an example of what you have in mind? | _______________________________________________________ | 1. config.hpp: [snip proper code] I haven't studied the config very much, so thanks for your tutorial. | The macros as BOOST_CT_NO_ARRAY_SUPPORT may be better | named BOOST_COLLECTION_TRAITS_NO_ARRAY_SUPPORT. | | The CT thing is rather nonintuitive and may clash. agree. | BOOST_ARRAY_REF: should it be BOOST_COLLECTION_TRAITS_ARRAY_REF? that seems reasonable. | _______________________________________________________ | 2. docs: shouldn't there be index.html somewhere? There should be one. Hartmut said it should be in the "root" ie. libs/collection_traits/index.html. | The *.php files are maybe not needed for users to see. he he, no. | _______________________________________________________ | 3. Collection.html: | | - "associated" should be explained yeah. I guess what it says it that reference_type_of<T>::type need only be convertible to T&. I am inclined to remove that requirement because I fail to see why it is useful. | - more models can be added any container, since it is weaker than the container requirement | - the email at the bottom may be better obfuscated yeah :-) I would think it wouldn't work anymore. | _______________________________________________________ | 4. external_concepts.html: the docs doesn't say _where_ | the free standing function should be, aboyt their | visibility. that should be added. 1. they can be anywhere since ADL will find the right version 2. It only makes sense for a public interface | It should be also noted whether ADL lookup may kick in here | or not. it must ... which implies the implementation for different types can reside in different namespace as long as client code uses *unqualified* calls to the functions. this means that I need to explain how to add support for more types. Eg, begin() should be overloaded in namespace boost. _______________________________________________________ | 5. docs: maybe instead of showing 'raw' source code in examples | syntax colorized HTML can be shown (both 'inline' and external | examples). yes. | _______________________________________________________ | 6. examples in docs: each example should have at the end | expected output as comment. yes. | _______________________________________________________ | 7. iterator_of name: wouldn't just iterator be sufficient? | | Dtto size_type_of etc. people have expressed concern for those short names when they were class in namespace boost, so I changed it. but I'm sure the review manager will record your opinion. | _______________________________________________________ | 8. collection_traits.html, first example: | | typename boost::const_iterator_of<EC>::type found = find( c, value ); | *found = replacement; | | Will it always work with const iterator? And even if | it will, shouldn't there be non-const iterator | for peace in mind? its a doc error. The actual example uses iterator_of<> as it should. | What if nothing is found in the example? | | my_generic_replace ==>> my_replace_first yeah, that should be fixed... if( found != boost::end( c ) ) ... else ; | _______________________________________________________ | 9. collection_traits.html, Reference section: | are slist/hash_XXX/rope supported? | | Will boost::unordered_map/set be supported? | | boost::bitset, boost::array, spirit iterators? | | circular_buffer and maybe multi_index_container? | | ptr_containers? all standard compliant containers work without problems. That's the first bullet in the reference section. I guess I should stress that the container need not be part of the standard. So that include AFAICT slist, robe, array, unordered_map/set, circular_buffer, ptr_container bitsets don't have iterators, so they are not included. multi_index_container? I can't remember if it fulfills the standard container requirements. spirit iterators...only if they inherit from std::iterator. (I can see my iterator implementation is actually broken :-() _______________________________________________________ | 10. collection_traits.html, Semantics section, tables: | it doesn't make sense to me what middle column contains. ok, would a table with --------------Abbreviations----------- SC = std container T = the type used in arrays P = std::pair etc ? | _______________________________________________________ | 11. examples/iterator.cpp: | | const char* this_file = "iterator.cpp"; | | ==>> | | const char* this_file = __FILE__; | | Some comments in the file would be useful. ok. | _______________________________________________________ | 12. collection_traits.html, Portability section: | | bcc6 ==>> BCB 6.4 I'm just curious, did you compile the test with that compiler? | _______________________________________________________ | 13. collection_traits.html: I would definitely welcome | many more small code snippets as examples. ok. If any other reviewer wants to donate examples, please do. | _______________________________________________________ | 14. collection_traits.html: there are many <br> at the | end of HTML text. It feels as if something was cut | from the file. maybe <br><i>EOF</i> can be there. ok. | _______________________________________________________ | 15. I hope string algorithms library will re-use this | library before it gets into official Boost distribution. I'm working with Pavol on that as we speak. But I think it is boost policy not to include a library accepted too close to the new version, so maybe Pavol needs to use an internal version anyway. | _______________________________________________________ | 16. I once thought about having typedef boost::end | and containers with overload of operator[]. | | It would allow to write: | | a_container[boost::end - 1] | | to access last element of container. | | a_container[boost::end - 2] would be one before | | the last one. Python has such feature. | | | Is there some way to have such support in Container | Traits? so you're describing a search facility for [] containers. it will work a bit strange with map, wouldn't it? we can't use boost::end, but maybe boost::last. In some sense it correponds to typename reverse_iterator_of<T>::type i; i[0] = .. i[1] = .. or rbegin()[0] = ...; rbegin()[1] = ...; Maybe that could justify having reverse_iterator_of<> and const_reverse_iterator_of<> + rbegin(), rend(). ? | Maybe the library can provide function as | boost::last(c, int n): | | int x = boost::last(vec, -2); I guess you would want this to work with any collection. And it seems to be another way to search. Anyway, would'nt we need to versions: n_before_end( 2, vec ); n_after_begin( 2, vec ); ? | _______________________________________________________ | 17. collection_traits.hpp: the name of macro guard | BOOST_CONTAINER_TRAITS_HPP should be changed. | | Dtto elsewhere. yep. | _______________________________________________________ | 18. value_type.hpp: comments as | | | ////////////////////////////////////////////////////////////////////////// | // pair | | ////////////////////////////////////////////////////////////////////////// | | are not particularly valuable. | If necessary, it should be complete sentence. they are mostly a way to seperate code in blocks. | _______________________________________________________ | 20. value_type.hpp: do overloads for volatile make sense | in this library? (I ask only for completeness and prefere | not to clutter library with them.) I'm not sure, but I think they don't make sense. What will happen is that you will iterate over volatile vector<int> vec; using normal iterators. As usual--we don't have volatile_iterator vector<T>::begin() volatile either. | _______________________________________________________ | 21. value_type.hpp: | shouldn't there be #include <iosfwd> or so | for istream definition? it should be enough with <iterator> when I fix the implementation. | OTOH why is #include <cstddef> there? (Dtto elsewhere.) std::size_t | _______________________________________________________ | 22. sizer.hpp: is the | | template< std::size_t sz > | class container_traits_size | { | char give_size[sz]; | }; | | immune against padding/alignement? no. It's not part of the official interface. The real version should be something like char sizer( T (&array)[sz] )[sz] but I decided not to include it. I failed to see it as particular important. a macro a la BOOST_ARRAY_SIZE( a ) \ sizeof((a)) / sizeof((a[0])) \ seems to be far more portable. | The file sizer may be moved into details and | names size_calculator.hpp or so. yeah. | _______________________________________________________ | 23. headers: in guard like | | #if defined(_MSC_VER) && (_MSC_VER >= 1020) | # pragma once | #endif | | sometimes value 1020 and sometimes value 1200 is used. | Both are OK from practical point of view but | it would feel better if the same. I would like to know which one to use. I think the "right" one is 1200. | _______________________________________________________ | 24. docs: there should be step-by-step example how to add | support for new container (e.g. scattered array consisting | of few subarrays). ok | _______________________________________________________ | 25. end.hpp: what is #undef BOOST_END_IMPL doing here? | Why such dangerous name is used? to be removed. | _______________________________________________________ | 26. testing Unicode string for being empty: I read | somewhere Unicode files have two characters at the | beginning to indicate endianess. I do not know if | this applies to strings in memory as well. | | If so, then maybe Container Traits or String Algos | should take care about this. | | (I could talk complete nonsense on this topic.) my understanding is that both char and wchar_t has a trailing null and nothing else. Unicode charactors is not supported right now. I have about zero understanding about unicode chars and what to expect for them...eg...will std::char_traits exists for them etc. | _______________________________________________________ | 27. functions.hpp: what is BOOST_STRING_TYPENAME and where | does it come from? A leftover from Pavol. To be removed. | _______________________________________________________ | 28. implementation_help.hpp: | | strlen() should be used instead of | for( ; s[0] != end; ) | ++s; | | strlen/wstrlen may be better optimized. ok. how? | In | inline T* array_end( T BOOST_ARRAY_REF[sz], | char_or_wchar_t_array_tag ) | { | return array + sz - 1; | } | | | there should be static assert sz > 0. ok. | GCC allows zero size arrays. | does not? | _______________________________________________________ | 29. sfinae.hpp: isn't the file name misleading? you tell me.:-) It's basically sfinae for use in my type-traits. | _______________________________________________________ | 30. result_iterator.hpp: what does the "give up" | note there mean? I can't implement that. | _______________________________________________________ | 31. functions.hpp\: some commenst should be added. | The source has 21 kB. The MPL expressions here | are quite complex... isn't that overkill? | _______________________________________________________ | 32. functions.hpp: in list of types | | char, | signed char, | unsigned char, | signed short, | unsigned short, | signed int, | unsigned int, | signed long, | | shouldn't floats and long long/__int64 be there as well? hm...I think you're looking in a file that is depricated.. detail/function.hpp is not officially part of the dist. | _______________________________________________________ | 33. each header should have short comment on the top what | is it for, especially the ones in detail/ ok. | _______________________________________________________ | 34. functions.hpp: is the reference e.g. in | | template< typename P > | static result_iterator begin( P& p ) | { | return p; | } | | necessary? I assume 'p' is always pointer. yeah. It should be changed. | _______________________________________________________ | 35. naming conventions: maybe names as iterator_ should | be replaced with something else. Underscore is easy | to miss and is quite unusual. _impl ? br Thorsten Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2004/04/64779.php
CC-MAIN-2021-31
refinedweb
1,684
68.97
I've had a lingering frustration with the recent blowup due to the solr-commons-csv Maven artifact (SOLR-3204). We've worked around the particular issue (we now hide the dependency)... But in thinking it over, and avoiding rehashing all the technical details, what bothers me most about what happened is that the Lucene PMC is/was held accountable for "having released code in another project's namespace", yet, none of us realized we had done so. We are (or at least I am) generally ignorant of the consequences of deploying artifacts and dependencies into Maven. I think that's bad: I'm not comfortable being held accountable for something I don't understand. I am very sorry (to the Apache Commons project) that we "released" their sources like that; I had no idea we were doing so. Ignorance is not an excuse and really the Lucene PMC is/was negligent... I think, to fix this, we (the Apache Lucene PMC) should stop officially posting artifacts to Maven ourselves. I think it's great if Steve (and/or others) continue to do so, just outside of the Apache Lucene project and outside of the Lucene PMC. This would mean the Maven build/deploy is fully downstream from our official releases, just like how the numerous other package managers/repositories (yum, apt, pkg, etc.) distribute our release artifacts. This can be beneficial for users who consume our artifacts via Maven as well: for the 3.4.0 release, the Maven artifacts were broken, but we couldn't change the already-released bits (SOLR-2770). Had Maven been downstream, this should have been easily resolved since it'd just be a re-spin of Maven's artifacts, not Lucene/Solr's. All of this being said, I'm very very grateful to Steve for working so hard to understand Maven, and build/deploy the Lucene/Solr Maven artifacts, for our releases. I know it's a huge amount of work, and in general rather thankless around here, being stuck between people who love Maven and people who don't, yet Steve has done an amazing job at it. Mike McCandless --------------------------------------------------------------------- To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org For additional commands, e-mail: dev-help@lucene.apache.org
http://mail-archives.apache.org/mod_mbox/lucene-dev/201204.mbox/%3CCAL8PwkaMqw87E97gk49wqV7Jao5whpu+5yA2p754+2N8GcBksQ@mail.gmail.com%3E
CC-MAIN-2014-52
refinedweb
378
60.85
Many times new developers stumble upon a very simple task, “sending emails” over the .NET framework. Well, I am not going to cover any specific language here (C# or VB.NET, Visual C++ you're also included), I will talk generally about the .NET framework and the assemblies exposed by .NET framework for sending the emails, using your own SMTP server's settings, such as username/word combination, port number and (most specially) the hostname for your SMTP server.BackgroundEmails stands for electronic mail and they are widely used on a regular basis for communication. You can send email for sharing text data or you can send your albums over emails easily. Email has been a part of the internet entertainment for a great time and people use a variety of email clients, some love online clients for example Gmail, Yahoo! and so on and some prefer an offline version of their email clients that use an internet connection to download the emails from a server, such as Thunderbird, Outlook and so on.But the fact is that all of them use the same protocol for transferring the emails over the internet network. In this article, I will talk about sending the emails over the network, downloading the emails is a totally separate topic and would have a separate protocol working in the back end to download the emails from the server.Sending the emailsEmails are sent using the SMTP protocol, over the internet. It is similar to the Hypertext protocol (not in the manner of communication, but in a way that is a protocol for the communication). For more on SMTP you can find yourself glad to read the Wikipedia page, I am not going in depth of the protocol here, instead I will just elaborate the methods to send the emails over the .NET framework.What .NET framework offersThe .NET Framework (who is oblivious to that?) has many cool assemblies for us to work with, using our favorite languages, from C# to C++ and the assemblies in the .NET framework allow us to focus on the quality of the application and the logic, leaving the rest of the low-level coding to the framework itself, including and most specially the garbage collection like stuff and memory management..NET framework has a namespace, known as System.Net. This namespace is responsible for the network communication for the .NET applications. But we will be more concerned about the System.Net.Mail namespace, for working with the mail protocol that exposes the SmtpClient,MailMessage classes for us to easily just our data to the objects and send the email using the .NET framework.Creating the module for sending emailSince the .NET framework exposes many frameworks to create your applications over, starting from something as basic as a Console application, to as much user-friendly as Windows Presentation Foundation. The interesting thing is that in the .NET framework the same code can be used on the back-end of a Console app and the WPF application. So, the code that would be used to send the email in a Console application is just the same as you would be using for the WPF application. That is why I am not going to specify any framework, instead I will use a Console application for our project, for being simpler to be understood and to focus more on the code instead. You can (in your own IDE) create any kind of application you want, from Windows Forms, to WPF to a web application (using ASP.NET).Once your application has been created, you can create a simple module (function; not to be confused with the VB.NET's Module). Inside that you can write the following code, don't worry I will explain the code in the future section of the article. Since SmtpClient is disposable we're using it inside a using statement. We're about to send an email, for that we create an object called MailMessage and we our data to it. MailMessage object can set From, To, Subject and Body fields of an email message and then can be sent. You can see in our example, we're using the constructor to create the MailMessage object that would hold the data for our From, To, Subject and Body fields.Finally, we're sending the email using the Send() function. The interesting thing is, in a GUI framework such as WPF or Windows Forms, we should be using SendAsync for the sake of asynchrony in our application that would help us to create a fluid GUI for our application, otherwise the application would stay stuck until the email has been sent and the control continues from this line of code. To learn more on asynchronous programming, please move to the MSDN link and learn more from there, they've got a great content for beginners like you.A few errors in programmingGenerally, there are always errors that developers miss and then they become about “Where did I miss it?”. Similarly, in sending the email and establishing a secure connection, there are usually many problems, some are syntax, some are logically, but I would talk about the connection errors that might be raised. I tried to raise some exceptions myself to share them with you here, for you to understand when these exceptions might cause a problems for your environment.Usually, the exceptions in the connection are raised only at the Send, or SendAsync method when the SmtpClient is not able to send your email successfully. It can be due to the connection problem, authentication problem or any other problem.Problems with SMTP hostnameA general problem can be the hostname that you're ing to the client to connect to, it must be correct and without the “http://“. You might stumble upon such a problem.Hostname could not be resolved, because it is having “http://” in it. Just the smtp.gmail.com, if you're using Gmail as your SMTP server. Otherwise you should contact the SMTP developers for their SMTP hostname.This would be resolved, by making sure that the hostname is correct. Every SMTP provider has its own setting for its server. Make sure you're using the correct ones. This is the first problem you would stumble upon if you're going to get any error. Failure to send mail can also be raised if the Firewall is blocking the network. Another problem with the SmtpClient is, if you're not using the correct port number, then the connection might not be establishes and the worst thing is that there won't be any exception raised. For example, use port number 295. The command would continue to execute without any success message or exception. Make sure you're using the correct port number, otherwise use the default TCP port number; 25. Port number 25 always works for me.Errors authenticating the userWhereas servers require the correct authentication, it is necessary that you the correct and required authentication details to the server. The first stage is to enable the SSL over your connection. Usually, servers close the connection if the connection isn't over SSL. Recall the code in this article and see the enable SSL command as in the following: View All
http://www.c-sharpcorner.com/UploadFile/201fc1/sending-emails-over-net-framework-and-general-problems-usi/
CC-MAIN-2017-43
refinedweb
1,213
61.06
For my internet radio streaming project I had to modify the music module source code (added an option to play from streams). When I’m ready to share my code, what should I do with the music module source code? At this point I didn’t change the namespace and class name. I just add the file to my project and remove the default reference to the music module. Or should I put the music module source code under my own namespace and write some code to detect the music module. Then users will not be able to use the designer to attach the music module to the mainboard.
https://forums.ghielectronics.com/t/publish-modified-ghi-source-code/8309
CC-MAIN-2020-40
refinedweb
109
78.79
: DTMIterator.java,v 1.7 2004/02/16 23:03:44 minchau Exp $18 */19 package org.apache.xml.dtm;20 21 /**22 23 * <code>DTMIterators</code> are used to step through a (possibly24 * filtered) set of nodes. Their API is modeled largely after the DOM25 * NodeIterator.26 * 27 * <p>A DTMIterator is a somewhat unusual type of iterator, in that it 28 * can serve both single node iteration and random access.</p>29 * 30 * <p>The DTMIterator's traversal semantics, i.e. how it walks the tree,31 * are specified when it is created, possibly and probably by an XPath32 * <a HREF=">LocationPath</a> or 33 * a <a HREF="">UnionExpr</a>.</p>34 * 35 * <p>A DTMIterator is meant to be created once as a master static object, and 36 * then cloned many times for runtime use. Or the master object itself may 37 * be used for simpler use cases.</p>38 *39 * <p>At this time, we do not expect DTMIterator to emulate40 * NodeIterator's "maintain relative position" semantics under41 * document mutation. It's likely to respond more like the42 * TreeWalker's "current node" semantics. However, since the base DTM43 * is immutable, this issue currently makes no practical44 * difference.</p>45 *46 * <p>State: In progress!!</p> */47 public interface DTMIterator48 {49 50 // Constants returned by acceptNode, borrowed from the DOM Traversal chapter51 // %REVIEW% Should we explicitly initialize them from, eg,52 // org.w3c.dom.traversal.NodeFilter.FILTER_ACCEPT?53 54 /**55 * Accept the node.56 */57 public static final short FILTER_ACCEPT = 1;58 59 /**60 * Reject the node. Same behavior as FILTER_SKIP. (In the DOM these61 * differ when applied to a TreeWalker but have the same result when62 * applied to a NodeIterator).63 */64 public static final short FILTER_REJECT = 2;65 66 /**67 * Skip this single node. 68 */69 public static final short FILTER_SKIP = 3;70 71 /**72 * Get an instance of a DTM that "owns" a node handle. Since a node 73 * iterator may be passed without a DTMManager, this allows the 74 * caller to easily get the DTM using just the iterator.75 *76 * @param nodeHandle the nodeHandle.77 *78 * @return a non-null DTM reference.79 */80 public DTM getDTM(int nodeHandle);81 82 /**83 * Get an instance of the DTMManager. Since a node 84 * iterator may be passed without a DTMManager, this allows the 85 * caller to easily get the DTMManager using just the iterator.86 *87 * @return a non-null DTMManager reference.88 */89 public DTMManager getDTMManager();90 91 /**92 * The root node of the <code>DTMIterator</code>, as specified when it93 * was created. Note the root node is not the root node of the 94 * document tree, but the context node from where the iteration 95 * begins and ends.96 *97 * @return nodeHandle int Handle of the context node.98 */99 public int getRoot();100 101 /**102 * Reset the root node of the <code>DTMIterator</code>, overriding103 * the value specified when it was created. Note the root node is104 * not the root node of the document tree, but the context node from105 * where the iteration begins.106 *107 * @param nodeHandle int Handle of the context node.108 * @param environment The environment object. 109 * The environment in which this iterator operates, which should provide:110 * <ul>111 * <li>a node (the context node... same value as "root" defined below) </li>112 * <li>a pair of non-zero positive integers (the context position and the context size) </li>113 * <li>a set of variable bindings </li>114 * <li>a function library </li>115 * <li>the set of namespace declarations in scope for the expression.</li>116 * <ul>117 * 118 * <p>At this time the exact implementation of this environment is application 119 * dependent. Probably a proper interface will be created fairly soon.</p>120 * 121 */122 public void setRoot(int nodeHandle, Object environment);123 124 /**125 * Reset the iterator to the start. After resetting, the next node returned126 * will be the root node -- or, if that's filtered out, the first node127 * within the root's subtree which is _not_ skipped by the filters.128 */129 public void reset();130 131 /**132 * This attribute determines which node types are presented via the133 * iterator. The available set of constants is defined above. 134 * Nodes not accepted by135 * <code>whatToShow</code> will be skipped, but their children may still136 * be considered.137 *138 * @return one of the SHOW_XXX constants, or several ORed together.139 */140 public int getWhatToShow();141 142 /**143 * <p>The value of this flag determines whether the children of entity144 * reference nodes are visible to the iterator. If false, they and145 * their descendants will be rejected. Note that this rejection takes146 * precedence over <code>whatToShow</code> and the filter. </p>147 * 148 * <p> To produce a view of the document that has entity references149 * expanded and does not expose the entity reference node itself, use150 * the <code>whatToShow</code> flags to hide the entity reference node151 * and set <code>expandEntityReferences</code> to true when creating the152 * iterator. To produce a view of the document that has entity reference153 * nodes but no entity expansion, use the <code>whatToShow</code> flags154 * to show the entity reference node and set155 * <code>expandEntityReferences</code> to false.</p>156 *157 * <p>NOTE: In Xalan's use of DTM we will generally have fully expanded158 * entity references when the document tree was built, and thus this159 * flag will have no effect.</p>160 *161 * @return true if entity references will be expanded. */162 public boolean getExpandEntityReferences();163 164 /**165 * Returns the next node in the set and advances the position of the166 * iterator in the set. After a <code>DTMIterator</code> has setRoot called,167 * the first call to <code>nextNode()</code> returns that root or (if it168 * is rejected by the filters) the first node within its subtree which is169 * not filtered out.170 * @return The next node handle in the set being iterated over, or171 * <code>DTM.NULL</code> if there are no more members in that set.172 */173 public int nextNode();174 175 /**176 * Returns the previous node in the set and moves the position of the177 * <code>DTMIterator</code> backwards in the set.178 * @return The previous node handle in the set being iterated over,179 * or <code>DTM.NULL</code> if there are no more members in that set.180 */181 public int previousNode();182 183 /**184 * Detaches the <code>DTMIterator</code> from the set which it iterated185 * over, releasing any computational resources and placing the iterator186 * in the INVALID state. After <code>detach</code> has been invoked,187 * calls to <code>nextNode</code> or <code>previousNode</code> will188 * raise a runtime exception.189 */190 public void detach();191 192 /**193 * Specify if it's OK for detach to release the iterator for reuse.194 * 195 * @param allowRelease true if it is OK for detach to release this iterator 196 * for pooling.197 */198 public void allowDetachToRelease(boolean allowRelease);199 200 /**201 * Get the current node in the iterator. Note that this differs from202 * the DOM's NodeIterator, where the current position lies between two203 * nodes (as part of the maintain-relative-position semantic).204 *205 * @return The current node handle, or -1.206 */207 public int getCurrentNode();208 209 /**210 * Tells if this NodeSetDTM is "fresh", in other words, if211 * the first nextNode() that is called will return the212 * first node in the set.213 *214 * @return true if the iteration of this list has not yet begun.215 */216 public boolean isFresh();217 218 //========= Random Access ==========219 220 /**221 * If setShouldCacheNodes(true) is called, then nodes will222 * be cached, enabling random access, and giving the ability to do 223 * sorts and the like. They are not cached by default.224 *225 * %REVIEW% Shouldn't the other random-access methods throw an exception226 * if they're called on a DTMIterator with this flag set false?227 *228 * @param b true if the nodes should be cached.229 */230 public void setShouldCacheNodes(boolean b);231 232 /**233 * Tells if this iterator can have nodes added to it or set via 234 * the <code>setItem(int node, int index)</code> method.235 * 236 * @return True if the nodelist can be mutated.237 */238 public boolean isMutable();239 240 /** Get the current position within the cached list, which is one241 * less than the next nextNode() call will retrieve. i.e. if you242 * call getCurrentPos() and the return is 0, the next fetch will243 * take place at index 1.244 *245 * @return The position of the iteration.246 */247 public int getCurrentPos();248 249 /**250 * If an index is requested, NodeSetDTM will call this method251 * to run the iterator to the index. By default this sets252 * m_next to the index. If the index argument is -1, this253 * signals that the iterator should be run to the end and254 * completely fill the cache.255 *256 * @param index The index to run to, or -1 if the iterator should be run257 * to the end.258 */259 public void runTo(int index);260 261 /**262 * Set the current position in the node set.263 * 264 * @param i Must be a valid index.265 */266 public void setCurrentPos(int i);267 268 /**269 * Returns the <code>node handle</code> of an item in the collection. If270 * <code>index</code> is greater than or equal to the number of nodes in271 * the list, this returns <code>null</code>.272 *273 * @param index of the item.274 * @return The node handle at the <code>index</code>th position in the275 * <code>DTMIterator</code>, or <code>-1</code> if that is not a valid276 * index.277 */278 public int item(int index);279 280 /**281 * Sets the node at the specified index of this vector to be the282 * specified node. The previous component at that position is discarded.283 *284 * <p>The index must be a value greater than or equal to 0 and less285 * than the current size of the vector. 286 * The iterator must be in cached mode.</p>287 * 288 * <p>Meant to be used for sorted iterators.</p>289 *290 * @param node Node to set291 * @param index Index of where to set the node292 */293 public void setItem(int node, int index);294 295 /**296 * The number of nodes in the list. The range of valid child node indices297 * is 0 to <code>length-1</code> inclusive. Note that this requires running298 * the iterator to completion, and presumably filling the cache.299 *300 * @return The number of nodes in the list.301 */302 public int getLength();303 304 //=========== Cloning operations. ============305 306 /**307 * Get a cloned Iterator that is reset to the start of the iteration.308 *309 * @return A clone of this iteration that has been reset.310 *311 * @throws CloneNotSupportedException312 */313 public DTMIterator cloneWithReset() throws CloneNotSupportedException ;314 315 /**316 * Get a clone of this iterator, but don't reset the iteration in the 317 * process, so that it may be used from the current position.318 *319 * @return A clone of this object.320 *321 * @throws CloneNotSupportedException322 */323 public Object clone() throws CloneNotSupportedException ;324 325 /**326 * Returns true if all the nodes in the iteration well be returned in document 327 * order.328 * 329 * @return true if all the nodes in the iteration well be returned in document 330 * order.331 */332 public boolean isDocOrdered();333 334 /**335 * Returns the axis being iterated, if it is known.336 * 337 * @return Axis.CHILD, etc., or -1 if the axis is not known or is of multiple 338 * types.339 */340 public int getAxis();341 342 }343 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/apache/xml/dtm/DTMIterator.java.htm
CC-MAIN-2016-50
refinedweb
1,943
54.93
Welcome Folks, in this module we are going to discuss the Variables and Constants in C Programming, Cheers to all of you for making up till here, we are yet to climb more stairs, till now we have seen the basic structure of C Programming and also about the input-output function, in this we will see the one step higher. So, are you all ready, let’s dive into the depth of this lecture. Variables and Constants in C Programming In C Programming, variables are like a storage area or we can say it is like a container that is used to store data and that data can be changed also depending upon the requirement. Here the container means in which you store some materials in daily life. For example, you drink water and that water is stored in the water bottle, so here water is the data, and the water bottles are the container. Similarly, in programming data is stored in a variable, the constant is nothing but the type of variable only but you can’t change the value of that variable declared as constants throughout the whole program. Variables in C Programming Till now, we have written a code that only gives the simple sentence written on the screen. However, Programming is not limited to here only, we can also write code that performs some useful tasks. A variable is nothing but a name given to a storage area or container that our programs use to store the values and also can manipulate whenever required. Each variable in C has a specific data type, which determines the type of value the particular variable is going to hold, and also the size and layout of the variable’s memory, and the set of operations that can be applied to that particular variable. There are some rules for declaring the variable i.e., the name of a variable can contain the letters, digits, and the underscore character. It must begin with either an underscore or with some letter. Upper and lowercase letters are considered as different letters because C is case-sensitive. A variable is a name, in which some value is assigned and that can be changed. For example, int num= 2; here in this example the variable name is num, which is associated with the value 2, int is a data type which represents that this variable can hold the integer values. We will cover the data types in the next module. The syntax for declaring a variable is: Data_type variable_name = value; In the above syntax you all can notice the Data_type, so don’t worry about that we will cover this topic in the upcoming module, it simply talks about what type or kind of value you are going to store in the variable like integer type, decimal type, etc. Let’s see an example to get a clearer idea. #include <stdio.h> int main() { int a; int b; char ch; float f; a = 20; b = 30; ch = ‘A’; f = 20.97; return 0; } Here in the example, the int, char, and float are the data types that tell us the nature of the value which the variable is going to hold and a, b, ch, and f are the variable names that are storing a particular value, like variable a is storing a value 20 in it and a variable b is storing a value 30 in it, variable ch is storing a character A in it, and variable f is storing the value 20.97 in it. So, these are the simple ways through which you can declare variables and assign the different values to I as per requirements. Constants in C programming Suppose you are writing a program and want some value to be fixed throughout the program i.e., the value should not be altered anywhere in the middle of the program for that we have constant in C Programming, with the help of this you can make the value fixed throughout the program. Constants are something that is used for representing a fixed value. It is also called Literals. It is used to depict a particular value throughout the program. The constants can be defined in 2 ways: By using the const keyword Syntax: const data_type constant_name = value; Example: const float PI = 3.14; Here, PI is the variable name that is assigned with the value 3.14, and also the const keyword has been used, so the value of the variable PI has become constant throughout the program. By using the #define pre-processor Syntax: #define constant_name value Example: #define PI 3.14 Here, we have used a preprocessor directive to define the constant. It is declared outside of the main function with the header files. Difference between the Variables and Constants So, in this module we have covered a lot about the variables and constants in C Programming, hope you all liked it and learned a lot. Must be excited about the next module. Until then, keep practicing, analyzing, Happy Learning!
https://usemynotes.com/variables-and-constants-in-c-programming/
CC-MAIN-2021-43
refinedweb
843
65.66
Announcements Crazy_VaseyMembers Content count940 Joined Last visited Community Reputation100 Neutral About Crazy_Vasey - RankAdvanced Member Worst Book Ever Awards Crazy_Vasey replied to no hit wonder's topic in GDNet LoungeBattlefield Earth. There are no words in the English language quite up to the task of describing how bad that book is. I only paid a quid or so for it and felt ripped off - it's just that bad. What are YOUR pc specs? Crazy_Vasey replied to nullsquared's topic in GDNet LoungeMy PC (Bender): Athlon64 3200+ 1x1024MB DDR-400 RAM ATI Radeon x800GT 256MB Nforce4 based motherboard 1x200GB SATA HDD My brother's PC (SHODAN): AthlonXP 2800+ 2x512MB DDR-333 RAM ATI Radeon 9800 Pro 128MB Via 4-in-1 based motherboard 1x120GB IDE HDD, 1x40GB IDE HDD There's also Skynet but that PC stopped working ages ago and has had its hard disc and RAM recycled into SHODAN and its graphics card given away to one of my brother's friends. I'm in a good place financially right now so I could upgrade but I can't see a point when I don't play many games anymore. I'd still be using SHODAN if Skynet hadn't imploded for no apparent reason and refused to start working properly again for no reason that I could determine. All my family's machines use Windows XP Pro, though Bender also has an Ubuntu install on a secondary partition. How tall are you and how tall are your parents? Crazy_Vasey replied to ManaStone's topic in GDNet LoungeI'm 5'6 and my parents are both about the same height. My brother's well over six foot tall though. Sony possibly licensing games instead of selling them?? Crazy_Vasey replied to chollida1's topic in GDNet LoungeQuote:Original post by Eelcoi dont know the first thing about eula's, but i do know that exchanging money for stuff doesnt automaticly mean you own said stuff. there are more possible transactions than exchange of ownership, and i dont see why they couldnt legally apply to videogames. Most transactions that don't actually give you ownership of what you pay for are a bit more explicit about it than a few lines squirelled away in a license you don't see till long after the money's already changed hands in my experience. And the only reason for this sort of approach would be to keep computer game prices artificially high, which is not something that makes me happy. Sony possibly licensing games instead of selling them?? Crazy_Vasey replied to chollida1's topic in GDNet LoungeFound that article. It's a British company and they have Microsoft's blessing to resell licenses from 'insolvent or downsizing companies'. Not quite as I remembered it. Oh well. Sony possibly licensing games instead of selling them?? Crazy_Vasey replied to chollida1's topic in GDNet LoungeQuote:most transactions take place without an explicit contract. Most transactions don't come with half as many gotchas as buying computer software too. Quote:in an ideal world, a drop in value would be the expected market response. except that people who want to sell their game still would anyway (whos going to stop them?), so the value wouldnt drop, nor would the company sell more. hence why nothing changes if sony would choose to do this, except pissing off some people. hence why it isnt likely they will try this anyway. Well it does seem to have been untrue anyway. I just didn't like the principle of it. It's one thing to say no making duplicates and selling them on, it's another thing entirely to say that you're not allowed to sell on your copy of the software. And are you quite sure about the EULAs? I distinctly remember reading about a company that made its money from recycling Windows licenses and that doesn't seem particularly compatible with Microsoft licensing. Sony possibly licensing games instead of selling them?? Crazy_Vasey replied to chollida1's topic in GDNet LoungeQuote:Original post by Eelco what was the last time you asked for it? and ofcource, if you dont like it, dont buy it. Asked for it? Since when has the default state of a contract been ' you don't get to see it'? Anyway, the don't buy it suits me fine: I've bought three games in the last eight months and none of them have held my attention for more than a few hours. It's more the principle of the matter than any practicality for me. If they want us to stop having ownership of games we buy, then prices should be dropped in accordance with this drop in status. Quote:Again, Crazy, like I said they could just put it on the outside of the box where it's easily seen. The EULA is how long and box is how big? Maybe if you used a microscopic font. You'd never ever be able to fit it all on with the usual screenshots and text, not even in the old days with the gigantic cardboard boxes. Sony possibly licensing games instead of selling them?? Crazy_Vasey replied to chollida1's topic in GDNet LoungeI managed to find some more information and apparently the enforceability of EULAs depends on which court you're being heard in. Some US courts have upheld them, some have rejected them. What a bizarre situation that is. Still, I utterly loathe the idea of a contract that you don't see till after the transaction has been made. That way lays madness. Sony possibly licensing games instead of selling them?? Crazy_Vasey replied to chollida1's topic in GDNet LoungeQuote:you made a certain transaction which has a certain legal meaning. Bullshit. There is no contract or license, written or otherwise, agreed when you buy a game. How you can claim that there is with a straight face is entirely beyond me. The only time we get to see the EULA is after the transaction has been made and the product opened, at which point mant places in the USA will simply not accept a return (thankfully that is not the situation where I live - we still have consumer rights that are worth a damn). There's no way that a contract only seen by one side after the deal is made can be regarded as fair. Or even legal, really. Has an EULA ever been succesfully enforced? I can't recall any cases and a quick google search shows nothing. And the idea of computer games being rented when you buy them is bloody absurd. There's a hell of a difference between paying a few hundred quid to rent a car worth thousands of pounds for a while and paying £35 for a piece of plastic that cost about 15p to manufacture. if they want me to accept that games are just rented then they'll have to drop prices considerably because this way they're having it all ways and the consumer's just lubing up and bending over. Thankfully, this appears to be nothing more than an alarmist rumour. any good 2d fps? Crazy_Vasey replied to tldalton1622's topic in GDNet LoungeDowload Operation Wolf and an emulator. That's the stuff. New Physics Chips for video games Crazy_Vasey replied to IronGryphon's topic in GDNet LoungeI don't feel particularly enthused about this development myself. The graphics and sound cards are enough specialised processors in the PC for my tastes. I could do without another expensive card to purchase when putting together a decent gaming PC, thank you very much, and what happened with 3D accelerators doesn't fill me with confidence on the doing without front if these things take off. Seagate Announces 750GB Barracuda Hard Drives Crazy_Vasey replied to OpenGL_Guru's topic in GDNet LoungeQuote:Original post by frob It's actually fairly common (another one listing 9 sources of memory bloat and ways to deal with them) and there are many ways that it happens. By default, it sets it's maximum RAM cache size to the amount of physical memory you have at startup. There is the session history issue where fully-rendered pages are kept in memory. There are many major memory leaks that certain people tend to encounter, some big ones were fixed in the very recent 1.5.0.2 release. Yet another problem is buggy plug-ins. Another issue (historically) was the way that it kept multiple tabs in memory; poor design and implementation could cause a single tab to consume 10+MB. There are more, but that should be enough to tell you how it happens. Ah. Sounds like one of those problems that isn't very likely to show up for me because I tend to switch my computer off when I go to do something else instead of just leaving it on. Seagate Announces 750GB Barracuda Hard Drives Crazy_Vasey replied to OpenGL_Guru's topic in GDNet LoungeQuote:Sure, its fine of the OS takes 100 RAM of memory. And Office takes another 300, but that's why we bought 1GB of ram. Oh, and then we have Firefox taking another 400. Then Java taking 100 or so for Zuma and other web games. Plus your IM program you can't live without -- Some of those can take 50MB or more. Oh, and the other little desktop toys, services, and spyware... How the bloody hell did you get Firefox to take up 400MB of RAM? Seriously, that's way, way more than I've ever seen any Gecko-based browser use and I've been using them for donkey's years now. How do you 'read' fantasy? Crazy_Vasey replied to Doggan's topic in GDNet LoungeQuote:Original post by tstrimp So, I guess I'm the only one who thoguht LoTR was too boring to finish? I didn't even finish the first chapter. Something about the writing style just turned me off completely. What is worse then big rigs? Crazy_Vasey replied to The C modest god's topic in GDNet LoungeQuote:Original post by Cold_Steel I don't consider them bugs. I consider them nostalgia. Brings me back to the days of playing Fallout 2. Good times, buggy game. The Fallout 2 patch even invalidated saved games. Oh, those were the days. Say, doesn't Bethesda own the rights to Fallout now? Unfortunately, yes. They're in the early stages of developing Fallout 3 now, I think.
https://www.gamedev.net/profile/1783-crazy_vasey/
CC-MAIN-2017-34
refinedweb
1,751
71.34
A while ago, React introduced React Hooks. Since version 0.59, you can also use them in React Native. What are React Hooks? React Hooks are a way to use stateful functions inside a functional component. Functional components are components written as a function, so they take some input (props) and return a React Element. Before React Hooks, you would need to write a JSX (JavaScript and XML) class that extends from the React Component, in order to access state or lifecycle related code. So in short: with React Hooks you can write leaner code, reuse code (functions) and make everything more maintainable and testable. In the end, the more time you spend on making your code maintainable, the less time someone else will lose by trying to understand your code. Also, if you reuse code rather than have duplicates, you save time on refactoring and have potentially less sources of bugs (fix something in one place means you also have to fix it at all other places). A breakout on the Lifecycle of a React Component I don’t want to go deep on the lifecycle, since React already has a great documentation on that. There are some important things that you should know: rendering, component mounting, the constructor as well as props and state. For a good lifecycle overview you can have a look at the following: These are the most important things you should know about a React Component and its lifecycle: Props Props are input of a component, so it is something you put into a component when you create it. Per definition props cannot change, but you can add a function to the props that do that for you (could be confusing). State State is something that can dynamically change (like a text input) and is always bound to something (a component for example). You can change the state by using the setState() function, which only notifies the component about a state change. Take a look at the following example and common pitfall with React and setState: // not so good console.log(this.state.test); // 5 this.setState({ test: 12 }); console.log(this.state.test); // might be 5 or 12 // good this.setState({ test: 42 }, () => { console.log(this.state.test); // 42 }); Constructor The constructor is not always necessary to have. However, there are some uses cases: initializing state and binding methods to this. What you definitely should not do there is invoking long-running methods, since this may slow down your initial rendering (see the diagram above). So a common component and constructor could look like the following: class MyComponents extends React.Component { constructor(props) { super(props); this.state = { test: 42 }; this.renderSomeText = this.renderSomeText.bind(this); } // you could also do this, so no constructor needed state = { test: 42, } renderSomeText() { return <Text>this.state.test</Text> } } If you don’t bind method in the constructor and only initialize the state, you don’t even need a constructor (save code). See my article about React Performance here, if you don’t know why you should bind certain methods. This also has some valueable code examples. Component did mount and will unmount The componentDidMount lifecycle method is invoked only once after the component was rendered for the first time. This could be the place where you do requests or register event listeners, for example. Apart from that, the componentWillUnmount lifecycle method is invoked before the component is getting “destroyed”. This should be the place where you cancel eventually running requests (so they don’t try to change the state of an unmounted component or something), as well as unregister any event listener you use. Lastly will prevent you from having memory leaks in your app (memory that is not being used is not released). A problem probably many (and also I) ran into was exactly what I described in the last paragraph. If you use the Window setTimeout function to execute some code in a delayed manner, you should take care of using clearTimeout to cancel this timer if the component unmounts. Other lifecycle methods The componentWillReceiveProps(nextProps) or from React Version 16.3 getDerivedStateFromProps(props, state) lifecycle method is being used to change the state of a component when its props changed. Since this is a more complex topic and you probably use (and should use) it rarely, you can read about it here. Difference between Component and PureComponent: You might have heard about React’s PureComponent already. To understand its difference, you need to know that shouldComponentUpdate(nextProps, nextState) is used/called to determine wether the change in props and state should trigger a re-rendering of the component. The normal React.Component always re-renders, on any change (so it always returns true). The React.PureComponent does a shallow comparison on props and state, so it only re-renders if any of them have changed. Keep in mind that if you change deeply nested objects (you mutate them), shallow compare might not detect it. Where do Hooks fit into the Component Lifecycle? If you ask yourself where hooks fit into this lifecycle, the answer is pretty easy. One of the most important hooks is useEffect. You pass a function to useEffect, which will run after the render call. So in essence, it is equal to componentDidUpdate. If you return a function from the useEffect’s passed function, you can handle the componentWillUnmount code. Since useEffect runs after every render (which might not always make sense), you can limit it to being closer to componentDidMount and componentWillUnmount by passing [] as a second argument. This tells React that this useEffect should only be called when a certain state has changed (in this case [], which means only once). The most interesting hook is useState. It’s usage is pretty simple: You pass an initial state and get a pair of values (array) in return, where the first element is the current state and the second a function that updates it (like setState()). If you want to read more about hooks, check out the React documentation. Lastly, I want to present a simple example of a React Native component with React Hooks. It contains a View with a Text and Button component. By clicking the button, you increase the counter by 1. If the counter reaches value 42 or greater, it stays at 42. You can argue if it makes sense or not. Especially since the value will shortly be increased to 43, then render once, then the useEffect will set it back to 42. import React, { useState, useEffect } from 'react'; import { View, Text, Button } from 'react-native'; export const Example = () => { const [foo, setFoo] = useState(30); useEffect(() => { if (foo >= 42) { setFoo(42); } }, [foo]) return ( <View> <Text>Foo is {foo}.</Text> <Button onPress={() => setFoo(foo + 1)} </View> ) } Conclusion React Hooks are a great way to write even cleaner React components. Its natural ability to create reusable code (you can combine your hooks) makes it even greater. The fact that cleaning side effects (subscriptions, requests) happen for every render by default helps avoid bugs (you may forget unsubscribe), as stated here.
https://mariusreimer.com/2019/05/hooks-in-react-native/
CC-MAIN-2020-05
refinedweb
1,181
63.09
#include <scriptingplugin.h> Detailed Description The ScriptingPlugin class loads additional actions stored in rc files with the KrossScripting format: The 'name' attribute in collection element will be used to match the menu object name. If no menu already exists with this name, a new one is created. In this example, the user will see a menu item with the text "Dummy Script" in "File" menu, which will execute the dummy_script.py script. By default it tries to find kross rc files in APPDATA%/scripts directory. Clients of this class can use slotEditScriptActions() as a way to override and/or extend the default script actions (if they exist at all). You may create multiple instances of ScriptingPlugin by using alternative c'tor. Definition at line 61 of file scriptingplugin.h. Constructor & Destructor Documentation Constructor. - Parameters - Definition at line 65 of file scriptingplugin.cpp. Allows having actions defined in a custom location, eg for project-specific actions. - Parameters - Definition at line 73 of file scriptingplugin.cpp. Destructor. Definition at line 82 of file scriptingplugin.cpp. Member Function Documentation Add a QObject to the list of children. The object will be published to the scripting code. - Parameters - Definition at line 109 of file scriptingplugin.cpp. - Deprecated: - use another addObject overload Definition at line 103 of file scriptingplugin.cpp. Re-implement in order to load additional kross scripting rc files. Reimplemented from KXMLGUIClient. Definition at line 97 of file scriptingplugin.cpp. This slot will open/create a scriptactions.rc file in XDG_DATA_HOME/application/scripts/ which will overide other kross rc files. This allows a user to extend existing menus with new actions. Definition at line 258 of file scriptingplugin.cpp. Deletes the user rc file, which has the effect of falling back to the default script actions (if any). Definition at line 271 of file scriptingplugin.cpp. The documentation for this class was generated from the following files: Documentation copyright © 1996-2020 The KDE developers. Generated on Fri Jan 17 2020 04:16:46 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006 KDE's Doxygen guidelines are available online.
https://api.kde.org/frameworks/kross/html/classKross_1_1ScriptingPlugin.html
CC-MAIN-2020-05
refinedweb
349
51.34
0 * I meant pointer parameters This function that has two parameters, both of type reference and return reference to pointer. Do I need to fix anything? I have been stumped for days and dunno what I should fix here is my code: #include <iostream> using namespace std; double *ComputeMaximum( const double *num1a, const double *num2a); int main() { double num1; double num2; cout << "Please enter two numbers: "; cin >> num1 >> num2; cout << "The greatest value is: " << ComputeMaximum(num1, num2) << " and its pointer is: " << *ComputeMaximum(num1, num2); return 0; } double *ComputeMaximum( const double *num1a, const double *num2a) { return (double *)(num1a > num2a ? num1a : num2a); } Edited by potato4610: n/a
https://www.daniweb.com/programming/software-development/threads/303562/did-i-do-this-right-function-returning-two-reference-parameters
CC-MAIN-2017-26
refinedweb
105
54.05
There (to or) in "Edit.cshtml" and "Create.cshtml" views, you get the following error: Now, if you open the "Edit.cshtml" and "Create.cshtml" views, you will notice something missing there. So, why is this issue occuring? As a developer we expect to run a template (empty, internet, intranet etc) at the very beginning, I guess you are thinking the same. From my point of view, the MVC product team forgot to pack a pre-release version of the "Optimization" pack with the "Empty" template; other templates work fine. How do you fix this? Look, there are some even worse alternatives: First One: You can delete the above codes (in red rectangular box) and then running your application will work. But the jQuery validations of any new entry or editing will no longer be available for you on the view. Second One: You can use other templates (Internet, Intranet etc) that have everything already and will work for you. And I'm not going to use any of them, what I'm going to do is install the "Optimization" pack and then add namespaces in the config file. Step 1: Install from NuGet Open the power console and install this package using the following command: PM> Install-Package Microsoft.Web.Optimization -Pre You can find this over here []. Step 2: Add a namespace reference in the config files Open both Web.config files (the first one is in the root and the second one is in the view folder) and add this new namespace. <add namespace="System.Web.Optimization" /> Now you are all set to run, the bug will no longer occur. I hope this helps. Thanks. MVC Bind Attributes When Database Doesn't Match Conventions or Using OnModelCreating Feature
http://www.c-sharpcorner.com/UploadFile/abhikumarvatsa/the-name-scripts-does-not-exist-in-the-current-context/
CC-MAIN-2013-20
refinedweb
291
74.39
On 02/16/2009 05:30 PM, Michael Niedermayer wrote: > On Mon, Feb 16, 2009 at 05:27:08PM +0100, Benoit Fouet wrote: > >> On 02/16/2009 05:20 PM, Michael Niedermayer wrote: >> >>> On Mon, Feb 16, 2009 at 05:13:15PM +0100, Benoit Fouet wrote: >>> >>> >>>> Hi, >>>> >>>> On 02/16/2009 05:00 PM, Michael Niedermayer wrote: >>>> >>>> >>>>> On Mon, Feb 16, 2009 at 02:35:23PM +0100, Benoit Fouet wrote: >>>>> >>>>> >>> [...] >>> >>> >>>>>> Index: libavformat/avidec.c >>>>>> =================================================================== >>>>>> --- libavformat/avidec.c (revision 17366) >>>>>> +++ libavformat/avidec.c (working copy) >>>>>> @@ -87,7 +87,7 @@ static void print_tag(const char *str, u >>>>>> } >>>>>> #endif >>>>>> >>>>>> -static int get_riff(AVIContext *avi, ByteIOContext *pb) >>>>>> +static int get_riff(AVFormatContext *s, AVIContext *avi, ByteIOContext *pb) >>>>>> >>>>>> >>>>>> >>>>> this seems redundant >>>>> >>>>> >>>>> >>>>> >>>>> >>>> I'm not sure I understand. Is there an av_log()-friendly context already >>>> available in one of the two parameters ? >>>> >>>> >>> AVIContext *avi = s->priv_data; >>> so pasing AVIContext seems redundant >>> >>> >>> [...] >>> >>> >>> >> of course, silly me... new patch attached. >> > > ok > > [...] > > applied
http://ffmpeg.org/pipermail/ffmpeg-devel/2009-February/065123.html
CC-MAIN-2015-35
refinedweb
153
61.97
Hello Guys, Today In This post, I am going to share with you my walk through experience of Exploit Exercise Proto Star Final1 Level. Before Starting Our Walkthrough Let's Take a Look At Hints And Details. Today In This post, I am going to share with you my walk through experience of Exploit Exercise Proto Star Final1 Level. Before Starting Our Walkthrough Let's Take a Look At Hints And Details. Note: I want to highlight Few Points. - I'm not the creator of protostar war game. I am just a player. - Here, I am Just providing you hints and reference so, that if you feel stuck anywhere. Take a Look Here. - Understand all previous levels before starting this one. - Do some research on Assembly, C/C++ and Gdb - Do Some Research About Heap overflow exploitation. - All Credit Related To Exploit Exercise War Games Goes To exploit-exercises.com. let's Start After That, hmm, check below. Source Codes #include "../common/common.c" #include <syslog.h> #define NAME "final1" #define UID 0 #define GID 0 #define PORT 2994 char username[128]; char hostname[64]; void logit(char *pw) { char buf[512]; snprintf(buf, sizeof(buf), "Login from %s as [%s] with password [%s]\n", hostname, username, pw); syslog(LOG_USER|LOG_DEBUG, buf); } void trim(char *str) { char *q; q = strchr(str, '\r'); if(q) *q = 0; q = strchr(str, '\n'); if(q) *q = 0; } void parser() { char line[128]; printf("[final1] $ "); while(fgets(line, sizeof(line)-1, stdin)) { trim(line); if(strncmp(line, "username ", 9) == 0) { strcpy(username, line+9); } else if(strncmp(line, "login ", 6) == 0) { if(username[0] == 0) { printf("invalid protocol\n"); } else { logit(line + 6); printf("login failed\n"); } } printf("[final1] $ "); } } void getipport() { int l; struct sockaddr_in sin; l = sizeof(struct sockaddr_in); if(getpeername(0, &sin, &l) == -1) { err(1, "you don't exist"); } sprintf(hostname, "%s:%d", inet_ntoa(sin.sin_addr), ntohs(sin.sin_port)); } int main(int argc, char **argv, char **envp) { int fd; char *username; /* Run the process as a daemon */ background_process(NAME, UID, GID); /* Wait for socket activity and return */ fd = serve_forever(PORT); /* Set the client socket to STDIN, STDOUT, and STDERR */ set_io(fd); getipport(); parser(); } Hint Vulnerable Function PlanningHonestly, friends after understanding all functions and source code. I was little bit confused because In first look, My mind doesn't recognize any format vulnerable function but as hint line points, format string vulnerability. So, I decided to check all functions Opcodes and also tried various type of input and boom! Error Occur In syslog function. After That, hmm, check below. Exploit #!/usr/bin/python import struct import socket # Configurations # # hex(68+108+79+46829) # Overwrite With 0xb7ecffb0 [system] got = 0x0804a1a8 # Where We want to right [strncmp] payload = 'AAA\xa8\xa1\x04\x08%x%21$x' payload = 'AAA' payload += struct.pack("i", got) payload += struct.pack("i", got+1) payload += struct.pack("i", got+2) # 000000b0 payload += "%108x" payload += "%21$n" # 0000ffb0 payload += "%79x" #payload += "%22$n" # b7ecffb0 payload += "%46829x" #payload += "%23$n" print [payload], len(payload) ### Testing ######### # # Found format string vulnerability # # After Inserting Username # # login AAABBBB%21$x [BBBB is our targeted Address ] # AAA\xa8\xa1\x04\x08 # # Strncmp GOT Entry Address : 0804a1a8 # # # Username variable Starting Point 0x804a240 # Victim Configuration PORT = 2994 HOST = "192.168.56.101" # Create Socket s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # Connect Socket s.connect((HOST, PORT)) s.settimeout(2.0) print s.recv(1024) s.send("username suraj\n") print s.recv(1024) s.send("login "+str(payload)+"\n") try: while True: print s.recv(1024), s.send(raw_input(':~# ')+'\n') except Exception as E: print "Error ", E s.close() print "Closing socket" s.close()
https://www.bitforestinfo.com/2018/07/binary-exploitation-protostar-final1.html
CC-MAIN-2020-34
refinedweb
607
56.55
Opened 5 years ago Closed 4 years ago #672 closed defect (fixed) compiler crash in redefinition Description import time def time(self): return time.time() Make sure the right error is thrown as well. Change History (3) comment:1 Changed 5 years ago by haoyu comment:2 Changed 4 years ago by scoder - Milestone changed from wishlist to 0.15.1 - Owner changed from somebody to vitja Compiles for me in latest master branch, assuming it was fixed as part of Vitja's function definition fixes. comment:3 Changed 4 years ago by scoder - Resolution set to fixed - Status changed from new to closed Note: See TracTickets for help on using tickets. Can't reproduce compiler crash on git Cython.
http://trac.cython.org/ticket/672
CC-MAIN-2016-07
refinedweb
120
69.62
I didn’t know that libtommath has such a convoluted building process; more than half a dozen files need a touch to add a single file to the library! So for now: to something different. Mise en Place It is seen as one of the fundamentals of every good kitchen to have everything right at hand and well prepared when service starts. The very same view holds in programming, too. The final program—the calulator— needs something to hold some settings some-body might seem worth to keep and, more so wants to change. That’s what the so called configuration files are made for. We could use one of the many libraries available but we don’t. Who would have thought. The two simple reasons: I was not able to find a library with a fitting license and most of these libraries are way to complex, not flexible enough or lacked some functions needed. We don’t need much, just a simple “key = value” list where neither “key” nor “value” exceeds the length of some dozen bytes. A simple task one is tempted to think and a simple task it is. In some way. Or the other. Reading such a file is simple. The following file shall be our example file. me@home:~/PARSER/$ cat test.ini # Example config key_one=This is a test # stringvalue key_two = 1234 # number value # key = value gooood_entry = 4 notherkey = ----------- when_the_value_left_its_key = # no pairs over multiple lines, should throw an error for this lines # when commented out another = 1 another = 2 undetected error = 4 nested_comments = 4#5#6 should give only "4" #commented = out # commented = out, too I put two errors in it. The first one is commented out: a multi-line value. We do not need multi-line values. BTW: most famous last words start with “We don’t need no…”. The second one will not get detected by the parser by design: the white space in the key “undetected error”. Those errors will get their proper treatment by a different function because they are not a technical error. The delimiter between key and value is an equal sign and the delimiter between the individual pairs is a new-line, that’s all of the grammar of our configure-file-parser there is with the only extra rule that neither keys nor values may start and/or end with a white-space. So the following two entries are equal (with the underbars denoting white-space): key=value ___key________=_________________value_____________ The reading part suffers a bit from bit-juggling but should nevertheless be legible. #include <stdio.h> #include <stdlib.h> #include <ctype.h> #include <string.h> #include <errno.h> /* This should be enough for everyone */ #define MAX_LINE 256 int read_config(FILE* file){ char line[MAX_LINE]; char *start,*end, *name, *value; int lineno=0,error=0; while (fgets(line, MAX_LINE, file) != NULL) { /* We work on copy */ start = line; /* Line numbering starts at 1 because the first line should be the 1st */ lineno++; /* Early out here because expected length is at most a quarter of MAX_LINE */ if(strlen(line)+1 == MAX_LINE){ return lineno; } /* Get rid of leading space */ while (*start && isspace((int)(*start)))start++; /* Single line comment: skip the whole line */ if(*start == '#'){ continue; } /* Rubbish? Skip line */ else if(strlen(line) < 3){ continue; } else if(*start){ /* Set a pointer to the first occurence of the equal sign if any*/ end = strchr(start,'='); if(end != NULL){ /* Overwrite the eqal sign with a zero, marking "end of string" */ *end = '\0'; /* Trim white-space from beginning and end */ name = trim(start); /* Step pointer one further to the part behind the equal sign*/ end++; /* value might be empty but that is not our problem */ value = remove_comment(trim(end)); /* Do what ever needs to done with the tuple */ printf("\"%s\" = \"%s\"\n",name,value); } else{ /* A lonely key at line lineno. Or a lonely value, who knows */ return lineno; } } /* something unexpected happened at line lineno */ else if(!feof(file)){ error = lineno; } } return error; } The two helper function did ot get the attention they deserve, I’m afraid, so here they are: char * trim(char *s){ size_t size; char *end; size = strlen(s); if (!size)return s; end = s + size - 1; while (end >= s && isspace(*end))end--; *(end + 1) = '\0'; while (*s && isspace(*s))s++; return s; } char * remove_comment(char *s){ char *end = strchr(s,'#'); if(end != NULL) *end = '\0'; return s; } The only interesting thing is, that the trim() function trims the end first. It does it that way to be able to replace the leftmost white_space with a zero, such that the next loop can stop there. Getting rid of the comments is easily done by replacing the hash with a zero. Both functions work on the original. Writing the configure file back is a bit more work if we want to keep the original comments. The file will be very small which makes it possible to work with an in-memory copy. One little problem with that approach: we need to know the file size to avoid trying to read a very large file. It would work without, we could just reallocate as needed but i wanted to show what happens when you sail out of the save haven that is standard ISO-C. To use the POSIX variant we need to add a bit to the preliminaries. #include <stdio.h> #include <stdlib.h> #include <ctype.h> #include <string.h> #include <errno.h> #if (defined _POSIX_C_SOURCE) && (_POSIX_C_SOURCE >= 200112L) #include <sys/types.h> #include <sys/stat.h> #include <limits.h> #endif And in the code itself: /* Both ways seems to change the position of the FILE pointer */ rewind(file); Both have their advantages and their disadvantages, too. The POSIX way can measure the size of binary files but has trouble to do so with some device files and vice versa. We have only a small little textfile and could just happily go with the ISO-C way. int write_config(FILE* file){ char line[MAX_LINE]; char *start,*end, *name, *value; char *config, *end_config, temp_config[MAX_LINE]; int lineno=0,error=0; long file_size=0, mem_size, available_mem_size; size_t line_length; errno = 0; rewind(file); /* allocate memory for the in-memory copy */ config = malloc(file_size + SOME_EXTRA_MEM); if(config == NULL){ return errno; } /* keep the number for further use*/ available_mem_size = file_size + SOME_EXTRA_MEM; /* We need some pointers for the juggling */ end_config = config; /* The memory already used */ mem_size = 0; while (fgets(line, MAX_LINE, file) != NULL) { start = line; lineno++; while (*start && isspace((unsigned char)(*start))){ /* Basically the same as in reading but put every ws in memory*/ /* This is also the single place assuming preallocated memory. Placing a reallocation subroutine here would make the hassle with detecting the size of the file come to an end. */ *end_config = *start; mem_size++; end_config++; start++; } if(*start == '#'){ /* Keep the comments */ line_length = strlen(line); /* memory allocated might not be enough */ while((mem_size + (long)line_length +1) > available_mem_size){ config = realloc(config,available_mem_size*2); if(config == NULL){ return errno; } available_mem_size *= 2; } /* We know that we have enough memory, so strcat will do */ config = strcat(config,line); end_config = config + strlen(config); /* Keep ledger up-to-date */ mem_size += line_length +1; continue; } /* Yes, we keep everything, even the rubbish */ else if(strlen(line) < 3){ while((mem_size +(long)strlen(line) +1 ) > available_mem_size){ config = realloc(config,available_mem_size*2); if(config == NULL){ return errno; } available_mem_size *= 2; } config = strcat(config,line); end_config = config + strlen(config); mem_size += strlen(line); continue; } else if(*start){ end = strchr(start,'='); if(end != NULL){ *end = '\0'; name = trim(start); end++; value = remove_comment(trim(end)); /* The same as in reading, so we get both, key and value */ if(strcmp(name,"gooood_entry") == 0){ value = "The value for the key \"gooood_entry\" has been changed"; } /* additional three bytes for " = " and two for line-end and zero*/ /* make that three for Windows! */ line_length = strlen(name) + 3 + strlen(value) + 2; while((mem_size + (long)line_length) > available_mem_size){ config = realloc(config,available_mem_size*2); if(config == NULL){ return errno; } available_mem_size *= 2; } if(snprintf(temp_config,line_length,"%s = %s\n",name,value) < 0){ /* set errno to a value to be able to detect where it came from */ return errno; } config = strcat(config,temp_config); end_config = config + strlen(config); mem_size += line_length; } } else if(!feof(file)){ /* Set something externally to make clear that the following is not a value of errno. */ error = lineno; } } /* Let's add one more tuple, because we can */ name = "additional_key"; value = "additional value: 3628800"; line_length = strlen(name) + 3 + strlen(value) + 2; if(snprintf(temp_config,line_length,"%s = %s\n",name,value) < 0){ /* set errno to a value to be able to detect where it came from */ return errno; } while((mem_size + (long)line_length) > available_mem_size){ config = realloc(config,available_mem_size*2); if(config == NULL){ return errno; } available_mem_size *= 2; } config = strcat(config,temp_config); /* Print to a file, here: stdout */ printf("%s",config); /* Give memory back to OS */ free(config); return errno; } The main function does not check for every error. int main(int argc, char **argv){ FILE* file; int error; if(argc < 2){ fprintf(stderr,"Usage: %s filename\n",argv[0]); exit(EXIT_FAILURE); } file = fopen(argv[1], "r"); if (!file){ fprintf(stderr,"Opening file: \"%s\" failed\n",argv[1]); exit(EXIT_FAILURE); } error = read_config(file); printf("\n\tChanging one entry and adding one at the end\n\n"); if(error){ fprintf(stderr,"Error in reading at line: %d\n",error); exit(EXIT_FAILURE); } error = write_config(file); fclose(file); if(error){ fprintf(stderr,"Error in writing. errno: %d\n",error); exit(EXIT_FAILURE); } exit(EXIT_SUCCESS); }
https://deamentiaemundi.wordpress.com/2013/10/13/adventures-of-a-programmer-parser-writing-peril-iv/
CC-MAIN-2017-30
refinedweb
1,552
50.57
dropdownlist twitter Reddit Topics No topic found Content Filter Articles Videos Blogs News Complexity Level Beginner Intermediate Advanced Refine by Author [Clear] Nilesh Jadav (6) Gowtham K (6) Saineshwar Bageri (5) Rohatash Kumar (5) Abhimanyu K Vatsa (5) Satyapriya Nayak (5) Vithal Wadje (4) Rahul Saxena (4) Debendra Dash (3) Suthish Nair (2) Nimit Joshi (2) Bryian Tan (2) Sivaraman Dhamodaran (2) Sandeep Singh Shekhawat (2) Vishal Gilbile (2) Vijay Prativadi (2) Debasis Saha (2) Sagar Pardeshi (2) Khushbu Saini (1) Joginder Banger (1) Mahak Gupta (1) Bharath Reddy (1) Sushila Patel (1) Gnanavel Sekar (1) Atul Rawat (1) Ankit Sharma (1) Manoj Kalla (1) Jose Carlos Macoratti (1) Rupesh Kahane (1) Ramakrishna Basagalla (1) Ehsan Sajjad (1) Rohit Mane (1) Sthitaprajnya Debasis Nayak (1) Sandeep Kumar (1) Rakesh Kalluri (1) Mukesh Kumar (1) Ajay Gandhi (1) Aniket Narvankar (1) Vinodh Narayanan (1) suryakant nayak (1) Jaipal Reddy (1) Safayat Zisan (1) Hemant Panchal (1) Shridhar Sharma (1) Vincent Maverick Durano (1) Priti Kumari (1) Sourabh Mishra (1) Deepak Verma (1) Gagan Bansal (1) Mudita Rathore (1) Vasanth (1) S Ravindran (1) Neha Sharma (1) Shirsendu Nandi (1) Sapna (1) Amit M (1) Alok Pandey (1) Related resources for dropdownlist No resource found Filter Grid With Cascading Dropdownlist In MVC Using Razor 11/23/2020 9:14:29 AM. This article shows how to filter Grid records using a cascading Dropdownlist in MVC. Cascading A DropDownList With Another DropDownList in ASP.Net 11/10/2020 3:04:20 AM. This article shows how to cascade a DropDownList with another DropDownList in ASP.NET using C#. Multi-Select Checkbox Dropdown List - Create And Edit In .NET Core 3.1 9/21/2020 11:06:48 PM. In this article, you will learn how to create a multi-select checkbox dropdown list in .NET core 3.1 and edit it. Creating Simple Cascading DropDownList In ASP.NET Core MVC With New Tag Helpers 5/6/2020 2:24:47 AM. In this article, we are going to learn how to create a simple cascading dropdown list in ASP.NET Core MVC using Entity Framework Core in a step-by-step way. Data Validation Using JavaScript 5/1/2020 4:16:42 AM. This article shows how to do validation of textbox, dropdownlist and fileuploader in JavaScript. Validating RadioButtonList and DropDownList Using JavaScript 4/13/2020 4:30:45 AM. In this article we learn how to validate a RadioButtonList and a DropDownList using JavaScript in ASP.NET. Create a Simple DropDownListBox In Javascript 4/11/2020 1:48:44 PM. The example in this article shows how to create a simple DropDownListBox in JavaScript. Cascading Dropdown List In MVC Using LINQ to SQL 1/31/2019 3:05:29 AM. In this article we will learn how to bind data from a table to a DropDownList in MVC using LINQ to SQL.. Cascading DropDownList In Blazor Using EF Core 5/15/2018 10:07:30 AM. We will be creating a cascading dropdown list in Blazor using Entity Framework Core database first approach with the help of Visual Studio 2017 and SQL Server 2014. Working With DropDownList in MVC 5 Using jQuery 1/31/2018 5:11:35 AM. This article describes how to work with the Cascading DropDownList in an ASP.NET MVC Application. DropDownList In ASP.NET MVC 12/6/2017 5:15:28 PM. In this article you will come know how to bind data and get dropdownlist selected value in Asp.Net MVC in Client Side and Server Side.. DropDownList With Country, State And City In ASP. NET 6/16/2017 7:45:06 AM. This article shows when you select a country then its automatically shows the states name of that country in next DropDownList. Then when you select a state, the cities DropDownList will fetch the rel Binding Dropdownlist With Database In MVC 5/31/2017 4:14:49 AM. This article shows how to bind a dropdownlist in various ways with a database. Creating Simple Cascading DropDownList In MVC 4 Using Razor 5/25/2017 6:43:37 PM. This article shows how to create a Cascading Dropdownlist. ASP.NET Multiple Selection DropDownList with AJAX HoverMenuExtender 1/5/2017 1:20:35 AM. An article on how to build a Multiple Selection DropDownList with AJAX HoverMenuExtender. Change Kendo Grid DataSource Based On Selection In DropDownList 10/17/2016 10:48:53 AM. In this article, you will learn how to change the Kendo Grid DataSource, based on selection in Kendo DropDownlist. Xamarin Android - Generating A Dropdownlist Using Widget Spinner 10/14/2016 1:07:58 AM. In this article, you will learn how to generate a dropdownlist , using widget Spinner in Xamarin Android. Dynamically Bind The DropDownList On Change Event In ASP.NET MVC 5 9/30/2016 3:18:15 PM. In this article, we will cover how to dynamically bind the dropdownlist on change event in ASP.NET MVC 5. Cascading DropDownList In jqGrid Using MVC 9/19/2016 11:53:56 AM. In this article, you will learn about cascading dropdownlist in jqGrid using MVC. Asp.Net Databound DropDown List with Custom Query 7/24/2016 6:36:56 AM. This video shows supplying data to Asp.net DropDownList Control by making use of SqlDataSource Control.. Creating Cascading DropDownList In MVC Using Entity Framework And ADO.NET 7/16/2016 5:32:41 PM. In this article, you will learn how to create cascading drop down lists in MVC, using Entity framework. Creating a DropDownList For Enums in ASP.Net MVC 7/15/2016 12:41:56 AM. This article explains how to populate a DropDownList by enum type in ASP.NET MVC. Bind Enum To DropDownList In ASP.NET MVC 7/3/2016 6:00:45 AM. In this article you will learn how to bind Enum to DropDownList in ASP.NET MVC. Remote Bind The Kendo DropDownlist Using AngularJS And ASP.NET Web API 4/13/2016 12:54:12 AM. In this article you will learn how to remote bind the Kendo dropdownlist using AngularJS and ASP.NET Web API. Cascading DropDownList In ASP.NET MVC 3/30/2016 12:54:07 PM. In this article, I explain how to populate items in DropDownList on the basis of another DropDownList value. Populating Kendo DropDownLists With Multiple JSON Objects Using ASP.NET WEB API 2/24/2016 9:32:00 AM. From this article you will learn how to populate Kendo dropdownlist with multiple JSON objects using ASP.NET Remote Binding of Kendo DropDownlist with MVVM Pattern Using WEB API 2/22/2016 9:27:15 AM. From this article you will learn how to populate the Kendo dropdownlist with MVVM Pattern using WEB API. Working with Multiple Dropdown Lists in ASP.NET 1/29/2016 8:46:02 AM. In this article I am going to explain how to handle multiple drop-down lists in a registration/signup form. How To Bind DropDownList From Back End To Front End 12/30/2015 12:12:05 PM. In this article we will be learn how to bind Dropdownlist from back end to front end to perform save operation in web form. Bind Strongly Typed DropDownList Using JSON In ASP.NET MVC 5 12/28/2015 1:58:58 PM. In this article we will learn how to bind strongly typed DropDownList using JSON In ASP.NET MVC 5. Different Ways To Bind DropDownList In MVC 12/22/2015 1:28:43 AM. In this article I am going to discuss about how to bind DropDownList in MVC in different Ways. Cascading DropDownList In ASP.NET 11/30/2015 2:09:53 AM. In this article, you will learn how to create cascading DropDownList in ASP.NET. Binding Image In DropDownList In ASP.NET 11/5/2015 2:01:39 AM. In this article you will learn how to bind Image in DropDownList in ASP.NET. Various Ways To Use DropdownList Control In MVC Application 11/3/2015 2:19:19 AM. In this article I am showing you how to use DropDownList control in MVC and how to bind the data to DropDown control and list control. Bind All DropDownLists On A Page Using A Common Method 10/18/2015 12:20:58 PM. In this article you will learn how to bind all drop down lists on page using a common function. CRUD Operation Of DropDownList Inside GridView In ASP.NET 10/16/2015 6:57:04 AM. In this article we will learn about how to use DropDownList inside GridView. Creating Multiselect DropDownList Control In ASP.NET 4.0 Using Bootstrap: Part 2 10/15/2015 11:43:57 AM. In this article we will learn about how to create MultiselectDropDownList Control in ASP.NET 4.0 Using Bootstrap. Apply Checkbox And RadioButton Inside DropDownList 10/14/2015 1:40:26 AM. In this article you will learn how to apply Checkbox and RadioButton Inside DropDownList. Use DropDownlist In GridView In ASP.NET Using C# 10/9/2015 1:37:08 AM. In this article I will show you how to use DropDownlist in Gridview in ASP.NET using C#. Bind Dropdownlist In SharePoint Using C# 10/5/2015 3:57:25 PM. In this article I have explained how to bind the SharePoint list columns into the Dropdownlist using C#. How To Bind DropDownList In MVC Using C# 9/28/2015 11:52:28 AM. In this article we will learn how to bind dropdown list in mvc using c#. Creating Multiselect DropDownList Control in ASP.NET 4.0 Using Bootstrap 9/7/2015 2:02:13 PM. In this article you will learn how to create Multiselect DropDownList Control in ASP.NET 4.0 using Bootstrap. Implementing HTML Optgroup in DropDownList Using ASP.NET MVC 5 and Entity Framework 8/26/2015 1:54:56 PM. This article shows how to implement the HTML optgroup in a DropDownList using ASP.NET MVC and Entity Framework. Use DropDownList Dynamically Using Ajax 8/14/2015 11:30:35 PM. In this article we will learn how to use dropdownlist dynamically using Ajax. Highlight DropDownList Item Color in ASP.Net 7/24/2015 4:12:56 PM. In this article we will learn how to dynamically bind DropDownList from SQL Server database table. Show Images in DropDownList Using jQuery 7/21/2015 3:19:52 PM. This article explains how to show images in a DropDownList in ASP.NET using jQuery. Filter WebGrid With Cascading Dropdownlist Along With Paging in MVC 7/7/2015 8:12:20 PM. This article shows how to filter Grid records using a cascading Dropdownlist along with paging in MVC. Save Selected Data of DropDownList to Database in ASP.Net Using C# 6/26/2015 5:29:23 PM. In this article I’ll show you how to save the selected data of a DropDownList to the database table in ASP.Net Using C#.. JQuery: Fill DropDown and Show Records in GridView Format in ASP.Net 6/10/2015 10:27:11 PM. 6/9/2015 7:47:15 PM. This article shows how to bind a dropdown list from the database and on selecting any record from this dropdown how to fill in respective records in a GridView. Insert Data Into Access Database and Retrieve Them Into DropDownList in ASP.Net 6/9/2015 5:04:23 PM. In this article you will learn how to use an Access database introduction and getting data into DropDownList using OleDbDataReader. Bind Data To DropDownList in ASP.NET C# 6/6/2015 12:46:18 AM. This article shows how to bind a Dropdown in ASP.Net C#. In this tutorial I'll show you how data is bound to a dropdown list in ASP.Net using C#. Populate Kendo DropDownList Dynamically Using ASP.Net Web API 6/3/2015 6:04:22 PM. This article explains how to populate the Kendo dropdown using the ASP.NET Web API REST full service. How to Use DropDownList Helper in MVC Application 5/26/2015 1:36:08 PM. This article shows how to use the dropdownlist helper in MVC applications. DropDownList Helper to Handle HttpPost in MVC 5/26/2015 12:18:22 PM. This article shows how to use a dropdownlist helper to handle HttpPost in MVC applications. Various Ways To Populate Dropdownlist in MVC 5/25/2015 4:14:04 PM. In this article we will learn various ways to populate a Dropdownlist in MVC. Working With DropDownList SelectedIndexChanged Event 5/22/2015 6:40:30 AM. In this article we will learn how to bind data with DropDownList thereby working with the SelectedIndexChanged Event ASP.Net GridView Implementing Cascading DropDownList on Edit Mode 4/28/2015 12:43:38 AM. In this article we will learn about the ASP.NET GridView and implementing a Cascading DropDownList. SSRS Requirements in SQL Server 4/6/2015 8:59:43 PM. This article explains something we sometimes need to do and maybe this will be helpful sometime. Generate Dropdownlist With Custom Attribute Column Using Kendo UI and jQuery 1/14/2015 2:32:02 PM. In this article you will learn how to Generate Dropdownlist with Custom Attribute Column using Kendo UI and Jquery. Cascading DropDownList in ASP.Net MVC 1/3/2015 11:21:59 PM. In this article I explain how to make a cascading DropDownList in ASP.Net MVC. Cascading DropDownList in MVC 5 Using Web API 2 1/2/2015 9:09:13 PM. This article describes how to update a second dropdownlist based upon a change of the first one. Populate a DropDownList in Change of Another DropDownList 12/16/2014 1:34:31 PM. This article shows how to populate a dropdownlist on a selection change of another dropdownlist depending on the selected value. Cascading DropDownList in ASP.NET MVC 10/6/2014 11:23:16 PM. In this blog post you will learn how to create cascading DropDownList in ASP.NET MVC. I will take you through step by step approach so, let’s begin. Bind DropDownList With Month Name without DataBase Using Asp.net 9/29/2014 1:40:03 AM. In this article I have explained how to bind the DropDownList with month name using Globalization namespace without using any database or connection. Cascading Drop-Down List in SharePoint 2010 List Using jQuery 8/5/2014 2:23:49 PM. This article explains how to implement a Cascading Drop-Down List in a SharePoint 2010 List using jQuery and Sp services. Page Navigation Using a DropDownList in SharePoint List Using jQuery 5/3/2014 12:12:24 PM. In this article I will demonstrate how to do the navigation to a link just by clicking a selected text of the dropdown list item. Sum of DropDownList Selected Values Inside GridView Using jQuery 10/16/2013 2:01:29 PM. This article demonstrates the use of jQuery with ASP.Net to calculate the sum of DropDownList selected values and display the total. Add Countries in Your DropDownList Using WebService 10/8/2013 11:28:28 AM. Here is the code to add countries in your dropdownlist using WebService. Create DropDownList Using DropDownList Helper in Web API 8/18/2013 10:08:00 PM. This article explains how to use the DropDrownList helper for creating a DropDrownList. Excel DropDownlist Change Event Using VBA 7/5/2013 12:17:18 AM. This article describes how to handle the worksheet change event using VBA. The Visual Basic Editor is a program within Excel that allows you to communicate with Excel. Cascading DropDownList Box in ASP.Net MVC4 Using Json, jQuery 6/25/2013 2:36:08 PM. This article explains the usage of jQuery and CSS in MVC 4 an application to invoke server-side code. DropDownList in ASP.Net MVC 6/12/2013 11:59:17 AM. This article explains that how can create a dropdownlist means select element of HTML in asp.net mvc application. We create a subject list which will be show in Dropdownlist. ASP.NET: Bind Dropdownlist With Images 5/20/2013 4:47:26 PM. This article explains how to bind images with text in a dropdownlist from your table. Choose Country State Corresponding City List Appears in PHP 2/17/2013 11:12:38 PM. In this article we will know when we are choosing a selected country from DropDownList. DropDownList Helper Data Binding in MVC 2/7/2013 8:09:21 PM. In this post you will learn how to bind the data to DropDownList helper in MVC. We will try binding data to DropDownList from List<SelectListItem>, List<Model> and also from database. Paging in a DataGrid Using DropDownList 12/1/2012 6:05:43 AM. In this article we learn how to do paging in a DataGrid using a DropDownList. DropDownList Control in ASP.NET 12/1/2012 4:13:26 AM. In this article we will discuss how to use Drop Down List Control in ASP.NET. Related data from one dropdownlist to another in VB.NET 11/9/2012 10:44:28 AM. Generally while doing registration in a website. When we choose any city dropdown list, its related states, country and zip code bind in respective dropdownlist.and textboxes. Choose Country State Corresponding City List Appears without Refreshing the Page in VB.NET 11/9/2012 7:36:07 AM. In this article we will know when we are choosing a selected country from dropdown list its related state appears in another dropdown and after selecting the state its related city list appears in other dropdown list, without refreshing the webpage. Filter Records in MVC 10/28/2012 4:17:34 PM. In this quick article you will learn various ways to filter records in MVC. Fill ASP.Net Dropdown List From Database Table Using ASP.NET C# 10/25/2012 4:05:50 PM. In this article I will explain how to fill the Drop Down List from database table. DropDownList in ASP.Net MVC 3 Razor with Entity Framework 10/20/2012 6:17:17 AM. In this application I will developed a small Blog post application where, in the home page, all the blog pages will be bound to a DropDownList. Automatic Binding of Days, Months, Years to a DropDownList Control 9/29/2012 5:33:53 AM. This blog shows a simple snippet of binding days, months and years to a DropDownList. ASP.NET Select DropDownList Item Using Tab Key 7/2/2012 2:54:01 AM. In this article you will learn how to select a drop-down list item with jQuery plug-in in ASP.NET. Data Binding to DropDownList and ListBox in ASP.NET 6/18/2012 9:54:09 PM. In this quick article you will learn how to bind the data to a DropDownList and ListBox controls in ASP.NET. Display Text and Value of selected DropDownlist item using jQuery 5/15/2012 3:19:56 PM. In this article we will explore how to display the selected value using jQuery. How to Submit Form When User Press Enter Key on DropDownList Box 5/15/2012 2:30:44 PM. This small codes written in JavaScript will help you to submit form when user press enter key on dropdownlist. Changing Color of Text Using DropDownList in C# 3/25/2012 3:19:57 PM. Today, I have provided an article showing you how to change the color of text using a DropDownList in ASP. NET. Paging With DropDownList in ASP.NET 2/29/2012 7:29:12 PM. In this article we will learn about Paging with a DropDownList in ASP.NET. How to Show Records In a GridView Using DropDownList in Web Application 2/1/2012 2:05:15 PM. In this article you will learn to show records in a GridView using DropDownList in an ASP.NET Web Application.
https://www.c-sharpcorner.com/topics/dropdownlist
CC-MAIN-2020-50
refinedweb
3,331
68.26
Creating a Tag Cloud using ASP.NET MVC and the Entity Framework One of the reasons for the change is that I feel I have been mis-using "Categories". Generally, an item should really only be classified within one category, whereas applying more that one Tag to an item is de rigeur within Web 2.0 social networking and bookmarking sites. I've been "filing" articles under more than one category since I started the site, so it's about time I sorted that out. If you haven't already seen the object model I am working to behind this site (which was featured in a previous article) here's a refresher: I am using the ADO.NET Entity Framework as my Object Relation Mapping (ORM) tool. From the screenshot of the edmx file above, you should be able to infer that there is a one-to-many relationship between Articles and ArticleTypes (Article, Snippet, Cheat Sheet etc), but a many-to-many relationship between Articles and Categories. This means that each Article can be of one ArticleType, but it can belong to many Categories. SIDE NOTE: The actual database schema contains an Articles table, a Categories table and a bridging table to take care of the many-to-many relationship (ArticleCategories). If I was using LINQ to SQL instead of the Entity Framework, the bridging table would be exposed in the diagram, but EF works with a much higher level object model than that. As far as EF is concerned, each Article has a collection of Categories, and conversely, each Category has a collection of related Articles. The logic behind that approach, once you discover it, is undeniable. The real difference here between the designer diagrams are that LINQ to SQL is more representative of the underlying database schema, whereas EF is modelling the entities. Back to the task at hand - a Tag Cloud, as mentioned earlier - is a visual depiction of the relative frequency that each Tag is used. Accordingly, some calulations are required to establish how many articles are tagged with a specific Category relative to the total of articles. This is done simply by working out the percentage of Articles within each Tag. Clearly, I don't want to have to manually recalculate as articles are added or deleted from the site, so the calculation has to be done dynamically. For that to work, I need the total number of articles, and the number of articles by tag. So I created a new class called MenuCategory, which extends the Category class already generated by EF, and has the following properties: namespace MikesDotnetting.Models { public class MenuCategory : Category { public int CountOfCategory { get; set; } public int TotalArticles { get; set; } } } What's needed now is a method that will create the collection of MenuCategory objects that will form the Tag Cloud: public IEnumerable<MenuCategory> GetMenuCategories() { var totalArticles = de.ArticleSet.Count(); return (from c in de.CategorySet.Include("Articles") orderby c.CategoryName select new MenuCategory { CategoryID = c.CategoryID, CategoryName = c.CategoryName, CountOfCategory = c.Articles.Count(), TotalArticles = totalArticles }); } This appears in ArticleRepository.cs, which is my main data access point for the application. You will notice one thing - the ObjectQuery<T>.Include() method. This forces related entities to be returned as part of the result from the database (Eager Loading). It's a different approach to the one used by LINQ To SQL, where you need to set the options for eager loading at the DataContext level, through the DataLoadOptions. With the Entity Framework, eager loading is defined at the query level. So, I have my TagCloud object which will contain a collection of MenuCategory items. I have a method in the Repository that will create the collection. Now all I need is a way to get this collection to the MasterPage. Since the Tag Cloud will appear in the Master Page of the site, the data will need to be pulled every time there is a page request. So which controller should be used? The answer is one which all other controllers will inherit from. This means that whenever a Controller that inherits from BaseController is invoked, it will cause the parent Controller to instantiate, along with any properties and methods it contains. In my case, I have unimaginatively called this BaseController: using System.Web.Mvc; using MikesDotnetting.Models; namespace MikesDotnetting.Controllers { public abstract class BaseController : Controller { private readonly IArticleRepository repository; protected BaseController(): this(new ArticleRepository()) { } private BaseController(IArticleRepository rep) { repository = rep; } } } To keep things relatively simple during thie exercise, I will be using untyped ViewData to pass the Tag Cloud data to the MasterPage. So, within the BaseController constructor, I add the following: ViewData["TagCloud"] = repository.GetCategoriesForTagCloud(); And that's all it takes to add the data to the ViewDataDictionery. All I need now is to vary the font size of each Category within the MasterPage. For that, I need a little helper class and some CSS. the helper class takes two ints representing the number of articles in a category, and the total number of articles. It returns a string based on what the first number represents as a percentage of the second:6"; return result <= 50 ? "tag7" : ""; } The strings refer to classes within the CSS file, where the size of the font progressively increases: /* ------------------------------ Tag Cloud ------------------------------ */ .tag1{font-size: 0.8 em} .tag2{font-size: 0.9em} .tag3{font-size: 1em} .tag4{font-size: 1.2em} .tag5{font-size: 1.4em} .tag6{font-size: 1.7em} .tag7{font-size: 2.0em} Within the MasterPage, each category is assessed using the GetTagClass method, and the css style applied to it dynamically: <h2>Tags</h2> <div id="tags"> <% foreach (var t in (IEnumerable<MenuCategory>) ViewData["TagCloud"]) {%> <%=Html.RouteLink( t.CategoryName, "Category", new { controller = "Category", action = "Listing", id = t.CategoryID }, new { @class = Utils.GetTagClass(t.CountOfCategory,t.TotalArticles)} )%> <% }%> </div> Currently rated 4.27 by 26 people Rate Now! Date Posted: Monday, June 1, 2009 7:26 AM Last Updated: Friday, July 17, 2009 4:26 PM Posted by: Mikesdotnetting Total Views to date: 395. Tuesday, June 9, 2009 11:42 AM from Kelvin Your articles are always clearly written and easy to follow. Keep up the good work! Tuesday, June 9, 2009 9:09 PM from Adam Great article! "If you haven't already seen the object model I am working to behind this site (which was featured in a previous article) here's a refresher:" Could you provide a link to this article? Also, I would like to follow your articles - is there an RSS feed available? Thanks! Tuesday, June 9, 2009 10:38 PM from Mike @Adam, I've linked to the previous article now. If you check the third paragraph, you should find it. Alternatively, click the ASP.NET MVC link in the Categories menu and you will find an article on PartialViews. But the image is the same one used in that article. There's also an RSS feed link (the orange square) in the left hand column under the MVP logo. Tuesday, June 9, 2009 11:27 PM from Adam Got it - thanks! Thursday, June 11, 2009 7:49 PM from Tim Exelent article and I'm glad to see it using EF it's something I've been wanting to start using. But the down side is that I've been working on the exact same idea for a submission to the ASP.NET Daily Articles. Guess I better come up with another idea. Thursday, June 11, 2009 8:06 PM from Mike @Tim Oops. Sorry ;o) Friday, June 12, 2009 7:07 AM from jeeva What is MVC? How to use MVC.Please explain clearly thank ypu. Friday, June 12, 2009 7:54 AM from Mike @Jeeva Use a search engine. Or buy a book. Friday, June 12, 2009 1:23 PM from Bob I have been enjoying reading these articles. Just wondering about caching the results? With this being your base controller each time the site page is refreshed it will hit the database for the article counts? The counts would only change as articles are submitted or removed. Would this be a candidate for caching the result somewhere? Is this something you plan to cover in a later article? Friday, June 12, 2009 8:24 PM from Mike @Bob, Yes, it would be a candidate for caching by all that's right. The data changes just 3 or 4 times a month maximum. I haven't bothered implementing caching for 2 reasons - first I don't want to confuse the main thrust of the article with additional concepts outside of the main ones to get the tag cloud to work, and secondly, I haven't implemented caching at all in my site. No need - it's one of only 5 sites living on a massive Quad Core box :o) The odd visit that this site gets is just about all that SQL Server gets to remind itself that it exists.... But I may well look at caching in another article (once I've eventually found the time to finish the MVC migration...) Friday, July 10, 2009 4:17 PM from Jayaram Krshnaswamy Excellent and well written. Friday, July 17, 2009 3:28 PM from totalNewbie Thank you very much for a clear article. your navigational property should be called ArticleType(S) missing the S... there is an error in numbering (6..7) in GetTagClass, and I would use switch instead of ifs... question, isn't there an easier way to count how many times particular category name is present in the Category table? (without the need for the eager load another table) storing values somewhere (collumn)? Monday, July 27, 2009 9:42 PM from johnny Very nice article. I am wondering is there any link for the sample code? Wednesday, August 26, 2009 12:39 PM from Ravinder hi This Article make my job vary easy. thanks Warm Regards Saturday, June 11, 2011 9:32 AM from outletGucci I lilke your article,it is very clear for me and help me solve many problems,thank you very much. Thursday, February 21, 2013 11:31 AM from Atul Sharma Hi, Thanks for giving understanding on tag cloud, can you also please help in sharing link of downloadable example sample application? Thanks in Advance.
http://www.mikesdotnetting.com/Article/107/Creating-a-Tag-Cloud-using-ASP.NET-MVC-and-the-Entity-Framework
CC-MAIN-2014-41
refinedweb
1,711
55.64
Text on Layers - NoneNone94 hi i was trying to blend a text with an image and use it as an image of a layer. i opened the background image, wrote a text on it using ImageDraw.text and loaded it as layer's image with load_pil_image. but it seems that quality of the text is poorer that a text directly drawn on the scene using scene.text. 1- is there a solution for that? 2- while i was searching for a solution i found the TextLayer command in the Cascade example, for which no documentation seems to be available. can i use this to solve my problem? thnx When you use ImageDraw on a retina device, you might need to double the width and the height of your image to make it looks crisp. The scene module uses a "point" coordinate system (on retina 1 point = 2 pixels), while ImageDraw uses normal pixels. The TextLayer class is currently undocumented, but pretty simple to use. It's basically just a layer that uses the result from render_text() as its image. There isn't much more to it than what you see in the examples, in fact, this is the entire source code of the class: <pre> class TextLayer (Layer): def init(self, text, font, font_size): Layer.init(self) img, size = render_text(text, font, font_size) self.image = img self.frame = Rect(0, 0, size.w, size.h) </pre> To combine it with an image, you could add a TextLayer as a sublayer of an image layer, using the add_layer() method. You can simply set the <code>tint</code> property of the TextLayer to set it to a different color, e.g. for red text: <pre>text_layer = ... text_layer.tint = Color(1, 0, 0)</pre>
https://forum.omz-software.com/topic/28/text-on-layers/3
CC-MAIN-2022-27
refinedweb
291
65.52
This tutorial provides an introduction to the development tools provided by Micron Technology for the Automata Processor (AP). This is not meant to be a comprehensive overview of the tools. Rather, this should provide inquiring minds with a solid foundation for further inquiry and exploration. The following topics will be covered in this tutorial: Note: This tutorial assumes the use of Linux; however, the SDK also supports Windows operating systems. The AP SDK is currently available by request at micronautomata.com. Verify that your system will support the SDK. Requirements are given here. Request the tools from micronautomata.com. Once your request has been approved, download and install the SDK. Note that on Linux, there are several packages that must be installed to have the full set of SDK tools. Register the SDK using the provided key using sudo apsdk_activate. The Automata Processor executes non-deterministic finite automata (NFAs) directly in hardware using a DRAM array and a reconfigurable routing matrix. Consequently, programming the AP consists of specifying one or more NFAs to be executed. There are three primary programming languages and associated programming models for the AP: Another way to think about the AP is as a regular expression accelerator. Input is streamed to the AP, and the device checks the input against one or more regular expressions. Any matches are reported back to the host system. The AP compiler provides direct support for compilation of PCRE. We will use update rules from Brill Tagging to understand the use of the compiler. A single regular expression can be provided on the command line to the compiler. apcompile -f single_regex.fsm "/ right/JJ to/[^\s]+ /" Multiple regular expressions can be provided in a file to the compiler. Each regular expression is provided on a separate line. Below is the content of regex.txt. / right/JJ to/[^\s]+ / / the/[^\s]+ back/RB / / [^/]+/DT longer/[^\s]+ / / ,/[^\s]+ have/VB / / [^/]+/VBD by/[^\s]+ / / her/PRP\$ ,/[^\s]+ / / [^/]+/DT right/[^\s]+ / / [^/]+/IN 's/[^\s]+ / / [^/]+/RBS of/[^\s]+ / / had/[^\s]+ had/VBD / apcompile -f multiple_regex.fsm regex.txt The binary .fsm file generate by apcompile can be loaded onto the AP and executed using the runtime API. This falls outside the scope of the current tutorial. The remainder of this tutorial will focus on ANML programming. ANML is a language based on XML for definite state machines. It is not recommended to program directly in ANML (at least initially!). For an introduction to ANML programming, this tutorial will begin by using APWorkbench, a graphical tool for laying out automata designs. An STE accepts a given symbol set (character, PCRE character class, wildcard) Each STE is identified by a unique ID STEs can be active for the first input character, all input, or when activated by an incoming edge A latched STE will, once activated, remain activated An STE can report, triggering a report event on the AP, which is sent back to the host code Simple thresholded up-counter Has a threshold and a unique ID Other elements can connect to count and reset ports Can latch (output remains high), pulse (output once), or roll (reset) when the threshold is reached Can report, triggering a report event on the AP Combine activation signals Inverter, AND, NAND, OR, NOR, PoS, NPoS, SoP, NSoP Can report, triggering a report event on the AP Can optionally activate only on EOD signal Open APWorkbench and create a new project donut Drag out a total of eight (8) STEs from the palette on the right side of the window For the starting STE (Dd), choose Start: All Input For the final STE (t), chick Reporting For each STE, set the symbol in the Element Properties pane on the right side of the window Hover the mouse over an STE to create an edge. Drag from the outgoing arrow to another state to create an edge Create input.txt in the project directory This text was adapted from Wikipedia Choose Simulation > Start Simulation from the menu bar Choose input.txt as the symbol file Use the playback controls to step through the simulation Inactive states are grey Matching states are green Non-matching states are red Reports show up as a number below symbols in the input stream Double-click on a number above a symbol in the stream to jump to that symbol cycle As a first step, we are going to export ANML from example 1. This is a fairly common task for an AP developer (allowing for the design to be compiled). With the donut project open, choose File > Export Project Save the file as donut.anml Close the old project (small X next to palette) Create a new project named donut2 Check the box for Initialize Project with ANML File Choose donunt.anml from within the donut project directory This imports the design from the previous project Disable reporting on the (t) state Drag out a new counter from the palette Set the counter target to 5 in the element properties pane Enable reporting on the counter Connect the (t) state to the (c) port on the counter Create the same input.txt as above and simulate The simulation will report on symbol 577 (indicating that we have seen five do[ugh]nuts) While reporting immediately when a pattern is matched may seem convenient, there can be implications when multiple patterns are being searched in parallel. Low cycles per report (CPR) will cause the processor to stall while reports are copied off of the AP. It is therefore beneficial to stage reports and cause multiple patterns to report on the same clock cycle. Below is an optimization to the counting do(ugh)nuts example to demonstrate this. Disable reporting on the counter Change the counter mode to latch Add an STE that matches on '\xFF' and reports Connect the counter's output to the new STE Add an \xFF character to the end of input.txt What is going on here? The counter will continuously output a signal after counting five do(ugh)nuts. Whenever we want to check the count, we inject an \xFF character. In this case, we do it at the end of the input stream. This allows us to control when output occurs. Let's see how boolean logic can help in our designs. We only wish to see a report if there are five do(ugh)nuts and an even number of characters in the input stream. Create a new project donut3 and import ANML from donut2 Disable reporting on the STE after the counter Add two '*' STEs One should be set to start on "Start of Data" Connect these STEs in both directions Drag an AND gate from the palette Connect the non-starting '*' STE and the '\xFF' STE to the AND gate Set the AND gate to report Simulate this against input.txt from donut2 Macros allow for the reuse of a design. They can be thought of as rubber stamps that save the developer time and abstract away design details. In this example, we will create a Hamming distance macro by hand. Create a new project named macro_example Choose File > New Macro Name the macro Hamming Create the following design: The small boxes are ports, which we will use to connect macros The symbols on the STEs allow us to set parameters (fill in the actual values later) Select 'Hamming Parameters' In the element properties pane, add the following parameters: Back in the main project tab, create the following design: In the element properties pane, set the following parameters for the Hamming macro: Simulate using the following text: AATCGTCGAGGCGTCG Double-click on the macro to view simulation inside the macro Some designs will not run efficiently on the AP. Often, this will occur when there are chains of boolean elements. Consider the example in the verify project. The chaining of the boolean elements will result in the clock cycle of the AP being reduced to accommodate signal propagation. Here is an alternate design that will not have this problem: By placing the additional STEs between the logic, we can run the AP at full clock speed. The only drawback is that the report will come one cycle after we expect, but this is easy to account for in post-processing! Programming with the Workbench can quickly become tedious as designs grow in size. The SDK also provides programming language bindings to help generate automata. We will use Python bindings in this tutorial. import argparse, os from micronap.sdk import * def main(): parser = argparse.ArgumentParser() parser.add_argument("pcre_file", help="the pcre file") parser.add_argument("outfile", help="the anml file") args = parser.parse_args() # strip the name name = os.path.splitext(os.path.split(args.outfile)[1])[0] # make the ANML workspace A = Anml() AN = A.CreateAutomataNetwork(anmlId=name) # open the regex file f = open(args.pcre_file, 'r') regex = f.readlines() f.close #use this to keep track of the pcre rule i = 0 for r in regex: i += 1 id = "_" + str(i) # get rid of whitespace r = r.strip() try: AN.AddRegex(r, startType=AnmlDefs.ALL_INPUT, reportCode=i, anmlId=id, match=True ) except ApError as e: print "error:", e, r AN.ExportAnml(args.outfile) if __name__ == '__main__': main() from micronap.sdk import * def main(): A = Anml() M = A.CreateMacroDef(anmlId='hamming') # the number of mismatches d = 3 # length of string to compare against string_length = 7 ste_list = dict() #set up param ste lists positive_param = [] negative_param = [] for i in range(string_length): positive_param.append([]) negative_param.append([]) # generate automata structure for i in range(2*d+1): for j in range(i/2,string_length): ste_id = '_'+str(i)+'_'+str(j) # set default values # we will replace these when instantiating if i%2 == 0: pattern = 'a' else: pattern = '[^a]' # starting states if j == 0: start = AnmlDefs.ALL_INPUT else: start = AnmlDefs.NO_START # if we reach the end, report if j == string_length-1: report = True else: report = False ste_list[ste_id] = M.AddSTE(pattern,anmlId=ste_id,startType=start,match=report) # connect sequences of matching characters if i%2 == 0 and j>i/2: old_id = '_'+str(i)+'_'+str(j-1) M.AddAnmlEdge(ste_list[old_id], ste_list[ste_id]) # diagonal mismatch transitions if i>0 and j>(i-1)/2: old_id = '_'+str(i-1)+'_'+str(j-1) M.AddAnmlEdge(ste_list[old_id], ste_list[ste_id]) # two mismatches in a row if i>1 and i%2 == 1: old_id = '_'+str(i-2)+'_'+str(j-1) M.AddAnmlEdge(ste_list[old_id], ste_list[ste_id]) # add ste to parameter list if i%2 == 0: positive_param[j].append(ste_list[ste_id]) else: negative_param[j].append(ste_list[ste_id]) # set parameter information for i in range(string_length): M.AddMacroParam(paramName='%'+str(i+1),elementRefs=positive_param[i]) M.AddMacroParam(paramName='%n'+str(i+1),elementRefs=negative_param[i]) M.ExportAnml('hamming'+str(string_length)+'_'+str(d)+'.anml') if __name__ == '__main__': main() from micronap.sdk import * def main(): # create workspace A = Anml() AN = A.CreateAutomataNetwork(anmlId='hamming_reads') # load the macro hamming = A.LoadAnmlMacro('hamming7_3.anml') # read in the DNA sequences f = open ('reads.txt', 'r') reads = f.readlines() f.close() for r in reads: r = r.strip() k = AN.AddMacroRef(hamming,anmlId=r) for i in range(len(r)): # get a handle to the parameter ref = hamming.GetMacroParamFromName('%'+str(i+1)) # get a handle to sub the parameter sub = hamming.GetMacroParamSubstitutionHolder(ref) # set the value sub.ste.new_symbols = r[i] # write this out AN.SetMacroParamSubstitution(k,sub) # do the same for negation # get a handle to the parameter ref = hamming.GetMacroParamFromName('%n'+str(i+1)) # get a handle to sub the parameter sub = hamming.GetMacroParamSubstitutionHolder(ref) # set the value sub.ste.new_symbols = "[^" + r[i] + "]" # write this out AN.SetMacroParamSubstitution(k,sub) # write out the file AN.ExportAnml('hamming_reads.anml') if __name__ == '__main__': main() The final step is to compile the generated ANML file into a format that can be run on the AP. This is very similar to compiling for PCRE. We use the additional -A flag to specify that ANML is being compiled and to provide a file name for the element map. This stores STE ID information for reconstruction of report events. apcompile -Abrill.emap -f brill.fsm brill.anml Compiling in verbose mode provides information about the final design statistics. This can be helpful for estimating space utilization on the AP. The important numbers are STE Utilization and Total Rectangular Blocks. apcompile -v -Abrill.emap -f brill.fsm brill.anml To speed things up, we can also turn on multi-threading in the compiler. apcompile -v -MT -Abrill.emap -f brill.fsm brill.anml Macros can also be pre-compiled. This allows for faster compilation of the final design. apcompile -v -MT -A -f hamming7_3.fsm hamming7_3.anml ANML files can be simulated using batchSim: batchSim -v brill.anml brill_input.txt Compiled designs can be emulated using apemulate: apemulate -m brill.emap brill.fsm brill_input.txt AP Portal. Accessed 2016-03-03. AP Programmers Reference. Accessed 2016-03-03. Brill: Trainable Part of Speech Tagger. Accessed 2016-03-03. Doughnut. Accessed 2016-03-03. Linux® SDK Activation. Accessed 2016-03-03. I. Roy and S. Aluru. Finding Motifs in Biological Sequences Using the Micron Automata Processor. In Proceedings of the 28th IEEE International Parallel and Distributed Processing Symposium, pages 415–424, 2014. Last modified: 2016-03-04
https://web.eecs.umich.edu/~angstadt/research/ap/getting_started.html
CC-MAIN-2017-43
refinedweb
2,210
56.15
any input on this would be nice......originally I was unable to use the command "sslstrip -l 10000" and then I think I made a mistake somewhere and I don't know what happend from there [IMG][/IMG] idk if that pic will show up so here is the text error root@bt:/pentest/web/sslstrip# sslstrip -l 10000 Traceback (most recent call last): File "/usr/local/bin/sslstrip", line 30, in <module> from sslstrip.StrippingProxy import StrippingProxy File "/pentest/web/sslstrip.py", line 30, in <module> from sslstrip.StrippingProxy import StrippingProxy ImportError: No module named StrippingProxy root@bt:/pentest/web/sslstrip# ls build COPYING lock.ico README setup.py sslstrip sslstrip.py root@bt:/pentest/web/sslstrip# any help would be greatly accepted
http://www.backtrack-linux.org/forums/printthread.php?t=61951&pp=10&page=1
CC-MAIN-2015-22
refinedweb
124
58.99
At Manifestly, we’re investing in the Slack platform. In this article, I’ll show you exactly how we enabled sign in with Slack. First, we added the button to our login page: # apps/views/devise/shared/_links.html.erb ... ... We generated the initial html of the button using Slack’s sign in button generator. We extracted this to a helper: # app/helpers/slack_helper.rb module SlackHelper def sign_in_with_slack_button %Q(<a href="{slack_login_url}&scope=identity.basic,identity.email,identity.team,identity.avatar&client_id=#{Chamber.slack.client_id}"><img alt="Sign in with Slack" height="40" width="172" src=""" data- 2x" /></a>).html_safe end end The helper depends on some environment-specific settings (we use the Chamber gem) and a route: # config/routes.rb namespace :slack do get 'login', to: 'sessions#create' end The route depends on a controller, where most of the magic happens. First, I’ll show you the simplicity of the create action: def create end All of the preconditions of ensuring that user is a legitimate user are handled with before actions: - Ensure the oauth_access method had no errors. - Ensure the oauth_access method is ok. - Ensure there is a Manifestly account corresponding to the user’s Slack team. - Ensure the Manifestly account still has a valid token. - Find the user by email in the Manifestly account. Here’s the controller and its tests, which I’ve shared in a public gist: Dependencies We use the slack-ruby-client gem (among others) provided by Daniel Doubrovkine. He’s written a number of useful Slack gems. The other dependencies are internal and have to do with how we manage accounts and users in Manifestly: - An Account has many Users through a Membership model. - The User model is the authentication resource. - When someone adds our Slack app, they are adding it to the Account, which has one SlackToken. Interested in Checklists? We write about checklists on Medium at Manifestly ❤ Checklists. Does your team have recurring processes? (Most teams do.) Find out why teams love Manifestly by giving our service a try!
https://medium.com/@m5rk/sign-in-with-slack-2bc20d735b17
CC-MAIN-2017-39
refinedweb
340
58.89
KDE 3.x Splash Screens 244 comments so the installation works fine without errors? if so, can u go to the splashscreen settings and launch the "test" mode? - Oct 12 2006 you need at least the devel packages for kdebase - Jul 06 2006 there you could be able to find the neccessary options. - Jul 03 2006 sorry for the bad described theme options file. - Jul 03 2006 but please send me both for testing or maybe i'll add an option for slow machines (would be fair) thank you! just contact me: moodwrod@web.de (please use a BIG, SIGNIFICANT subject, my inbox is growing like never before :)) - Jan 16 2006 no special ideas. simply a nice funny tux could be the solution :) and remember that its has NOT to be rectangluar. think of the tux with a translucent background or something. i hope u can imagine what i mean. - Jan 14 2006 i'll do that this evening. - Jan 03 2006 make sure the theme contains a Background entry with only the filename given (without any path element) - Dec 08 2005 some adjustments to version 0.4.3 would be neccessary to make it stable. but it would include MNG animations and progressbar support. additionally i'm working on a style engine. at least, my job also takes time. maybe you get it at xmas or as a present for 2k6 :) - Dec 05 2005 i forgot to disable the "announce update" option. sorry for that - Dec 04 2005 ./configure --with-qt-includes=/usr/include/qt3 (if this is our path) and/or install qt3 dev packages (if not already done) - Sep 21 2005 There's a lot of crap in the current versions, so please be patient. - Aug 30 2005 btw: i've never tested to compile with qt4. tell me if it works - Aug 28 2005 have you looked at config.log? - Aug 22 2005 do you have dev files (kdebase-dev) installed? if so, mail me your complete error log to moodwrod@web.de. so i can have a detailed look at it. - Aug 13 2005 deb ../project/experimental main to your sources.list (as i did) and you'll get the latest kde 3.4.1 - Jul 18 2005 Bootsplash Various 18 comments that would be great! cya - Oct 03 2006 KDE 3.x Splash Screens 1 comment here is what i get: moodwrod@tibo (0) 01:20:29 PM 13:20:29 [~] superkaramba ^sys.path.insert(0, '/home/moodwrod/download/karamba/aero_aio.skz') Traceback (most recent call last): File "/home/moodwrod/download/karamba/aero_aio.skz/aero_aio.py", line 119, in ? File "/home/moodwrod/download/karamba/aero_aio.skz/aero_aio.py", line 101, in __globalimport__ File "", line 2, in ? File "/home/moodwrod/.aero_aio/ps_aio.py", line 1, in ? import karamba, re, time, traceback, subprocess ImportError: No module named subprocess best regards, christian - Jan 15 2006 Karamba & Superkaramba 66 comments [lastname], [firstname] so you split by " " this causes my contactlist only to show maybe u can fix this :) btw: clicking on a nick opens a chat window would be great, too greets - Aug 15 2005 KDE 3.x Splash Screens 33 comments Background = Background.jpg in the theme file. this would set it to a fix file name. sorry for the bad (not really existing documentation). next version is coming soon. i was very busy the last time. btw: great theme - Sep 23 2005 Various Stuff 24 comments sometimes kicker needs a restart to show all the stuff as u expect, so just invoke: dcop kicker kicker restart on the commmand line i have no theme at all, but i can make one for you. just email me. thats no problem cya - Aug 29 2005 my windows look like every other windows with plastik windec and style. - Aug 28 2005 forgot to remove - Jun 30 2005 KDE 3.x Splash Screens 36 comments look forward to moodin 0.4.3! More effects will be available - Jul 26 2005 KDE 3.x Splash Screens 8 comments apart from that i don't know if you use 0.4.1 or 0.4.2 of moodin engine but in 0.4.2 (or it depends on my current development version :)) the theme defaults fail. well, the following two lines in Theme.rc fix that (only 0.4.2): Background = Background.png Label2 = ML:USER:loginname - Jul 22 2005 KDM3 Themes 53 comments feel free to "moodify" the theme itself :) greets - Jul 21 2005 last two updates, it was tricky to find out - Jul 21 2005 KDE 3.x Splash Screens 12 comments KDE 3.x Splash Screens 20 comments which theme are you using? is it the same with all themes? - Jul 04 2005 KDE 3.x Splash Screens 2 comments i think i get it back some day. just ask google for the sources -.- - Nov 07 2008
https://www.pling.com/u/moodwrod/
CC-MAIN-2019-51
refinedweb
808
84.88
Hi Nick, Using the latest Jackrabbit 1.5 Snapshot with this .cnd file: Testnodes.cnd -------------------- < < [sp:processMarker] mixin - sp:inProcess (boolean) = 'false' autocreated [sp:folder] > nt:folder,sp:processMarker [sp:resource] > nt:resource,sp:processMarker -------------------- And the following code: -------------------- Session session = getSession(); Workspace wsp = session.getWorkspace(); JackrabbitNodeTypeManager ntMgr = (JackrabbitNodeTypeManager) wsp.getNodeTypeManager(); // Register nodetypes if (!ntMgr.hasNodeType("sp:processMarker")) { ntMgr.registerNodeTypes(new FileInputStream(<testnodes.cnd file>), JackrabbitNodeTypeManager.TEXT_X_JCR_CND); } session.save(); I could add the nodetypes without problems. So maybe you like to test again with the latest version of Jackrabbit or compare your code/cnd with this one. Best regards, Markus -----Original Message----- From: Nick Stuart [mailto:nstuart@speranzasystems.com] Sent: Thursday, June 19, 2008 9:24 PM To: users@jackrabbit.apache.org Subject: Re: Cant create nodetypes Yes, thats what the < is (or at least that what I thought it was for). All the examples I have seen look almost the same as what i am doing, but it doesn't work. Again, my cnd file looks like this, the first line being the namespace: < < [sp:processMarker] mixin - sp:inProcess (boolean) = 'false' autocreated How is this different then say the example from, and yet I get the exception saying "something_here is not a valid namespace uri' (have tried other rand http addresses as well and it doesn't work either) On Thu, Jun 19, 2008 at 3:15 PM, Pulla Venkat <pcsri1956@gmail.com> wrote: > You need to add the namespace information at the begining of CND file. > Just like in xml file , namespace is specified before using , cnd file also > should be similar. > > On Thu, Jun 19, 2008 at 2:59 PM, Nick Stuart <nstuart@speranzasystems.com> > wrote: > > > Hi all, I've looked around for this problem, but all I see related is > > JCR-888, but I'm not using the IBM jdk as listed there. My jave version > is: > > java version "1.6.0_06" > > Java(TM) SE Runtime Environment (build 1.6.0_06-b02) > > Java HotSpot(TM) 64-Bit Server VM (build 10.0-b22, mixed mode) > > > > I am trying to create a fairly simple custom node type with the following > > cnd: > > < > > < > > > > [sp:processMarker] > > mixin > > - sp:inProcess (boolean) > > = 'false' > > autocreated > > > > [sp:folder] > nt:folder,sp:processMarker > > > > [sp:resource] > nt:resource,sp:processMarker > > > > > > I have two issues. > > First, if i leave < as it is, I get the exception > shown > > in JCR-888 saying its not a registered namespace uri. Not helpful. > > Second, if I try < then I just get sp: is not a registered > namespace > > whenever I try to use sp:folder. > > Either way I am at an roadblock here. The rest of the process has gone > > great > > and is doing everything I need it to, but I would love to be able to add > > some custom logic/info here to make things fit better. > > > > Any ideas on what I could be doing wrong? I am using the 1.4.5 release > from > > maven. > > Thanks! > > -Nick > > >
http://mail-archives.apache.org/mod_mbox/jackrabbit-users/200806.mbox/%3C000601c8d2af$07190890$29b2a8c0@spiritlink.de%3E
CC-MAIN-2015-14
refinedweb
487
56.86
I have some objects in the following configuration: · A has the many-to-many relationship with B. (B has inverse="true") · B has the many-to-one relationship with C. (I have cascade set to "save-update") · C is the kind of type or category table. Also primary keys are generated by the database on the save. But with my data sometimes I run into problems such as A has the set of different B objects and now these B objects refer to the same C object. When I try to call session.saveOrUpdate(myAObject) I get following hibernate error which says: "a different object with the same identifier value was already associated with the session: C". I can understand that hibernate cannot insert/update/delete the same object twice in the same session, and so looking for some workaround on this? I don’t think this is the uncommon situation. One thing I want to mention here is that for architectural reasons beyond my control each read or write needs to be done in a separate session. Does anybody has any solution on my issue? You just need to Run session_object.clear() and after that just save the new object. This will clear the session for you and will remove the offending duplicate object from your session. OR You can just add the annotation @GeneratedValue to the bean you are inserting. Its because the B objects are not referring to the same Java C object instance. They are referring to the same row in the database (i.e. the same primary key) but they're different copies of it. So what is happening is that the Hibernate session, which is managing the entities would be keeping track of which Java object corresponds to the row with the same primary key. One option would be to make sure that the Entities of objects B that refer to the same row are actually referring to the same object instance of C. Alternatively turn off cascading for that member variable. This way when B is persisted C is not. You will have to save C manually separately though. If C is a type/category table, then it probably makes sense to be that way. You only need to do one thing. Run session_object.clear() and then save the new object. This will clear the session (as aptly named) and remove the offending duplicate object from your session. session_object.clear() @Test public void testSavePerson() { try (Session session = sessionFactory.openSession()) { Transaction tx = session.beginTransaction(); Person person1 = new Person(); Person person2 = new Person(); person1.setName("222"); person2.setName("111"); session.save(person1); session.save(person2); tx.commit(); } } public class Person { private int id; private String name; @Id @Column(name = "id") public int getId() { return id; } public void setId(int id) { this.id = id; } @Basic @Column(name = "name") public String getName() { return name; } public void setName(String name) { this.name = name; } } @GeneratedValue(strategy = GenerationType.AUTO) <generator class="native"/> hashcode() getHibernateTemplate().flush();
https://kodlogs.com/34175/different-object-with-the-same-identifier-value-was-already-associated-with-the-session
CC-MAIN-2021-21
refinedweb
495
57.87
... Wednesday, November 29, 2006 Liking Vista, mostly I've got no sound, Visual Studio 2003 is a bit unsure of its new environment, it took me an hour to fix up my source control installation due to Vista clobbering file permissions, UAC is crap (I have turned it off and turned off the warning telling me to turn it back on) and SonicWall VPN Client doesn't work but... Vista is actually pretty good. Search works quickly for the first time ever in Windows (and searches emails as well!), the new UI is actually quite beautiful (something that could never be said about the XP Fisher-Price look), IIS 7 is a damn fine upgrade (although it took a while to get my ASP.NET 1.1 apps up and running) and performance seems to be pretty much the same as XP, which is quite remarkable given all the visual additions and extra services that have been added. No doubt the Mac did it better two years ago but the Mac doesn't have any apps I need. My Mac OSX box sits unloved in the corner with nothing to do... Sunday, November 26, 2006 Vista headaches I'm a bit of a fool when it comes to new software, if I can get a copy of something cool for free I'll probably install it. So, since the company I work for now has an MSDN license I thought I'd give Vista a spin. So I pretty much ignored the warnings the installer gave me about the problems I might have and went ahead and installed it. And the warnings were pretty much spot on. I now have no sound and I can't play videos. I have no idea why but I guess it's a driver problem. Hopefully Dell will fix the problem PDQ because sound is kind of important for me since I use Skype all the time... Other than that Vista seems to work fine. I'm not sure at this point whether the upgrade was worth it, nothing has jumped out at me as an aboslutely must have feature. Saturday, November 18, 2006 FireFox and IE differences Often when I'm reading up on support for particular HTML or CSS features, I'll see something along the lines of "Most modern browsers support this feature, Internet Explorer doesn't". A loose translation of this could be "if you use this feature it won't work for 80% of your users". I do most of my testing in IE then go over to FireFox to make sure it looks OK and probably because of this it always seems to be the other way round, FireFox doesn't support a lot of stuff that IE does. FireFox doesn't have a way to render text vertically. OK, there's no official way to do this, so IE has come up with it's own CSS attribute, but I'm kind of surprised that FireFox hasn't come up with some way of doing it, it seems like quite a common thing to want to do. Another problem is hiding and showing rows in a table. FireFox kind of supports using 'display:block' but the row gets taller and taller with each hide and show. So I have to use 'display:table-row', which doesn't work in IE. Not sure which is standards-compliant but 'display:block' seems like the more consistent approach. Friday, November 17, 2006 XSLT not completely insane I've played with XSLT before and never quite got to grips with it, but over the last few days I've finally got a basic understanding of what it's all about. It still seems like it was designed by somebody who was an XML/HTML addict and could only think in terms of tags when developing a programming language. Like the old saying goes, when the only tool you have is a hammer, every job looks like a nail. But the thing is if you have an XML document and you want to transform it to some other flavour of XML or HTML then XSLT is a fine choice. I'm still not sure if my XSLT is any good or not, I've not seen any coding standards for it anywhere. Should I be splitting things out into multiple templates? Should I be doing anything in particular to make my XSLT more maintainable? Dunnow. But I've got 1000 lines of it and it produces quite a nice HTML document so that's good enough for me at the moment. Currently listening to Love Less by New Order from the album Technique Wednesday, November 15, 2006 Well done Ken Chelsea Tractors are going to be hit by another tax soon, as the London Congestion Charge will be increased to £25 for band G vehicles. Well done Ken! Tuesday, November 14, 2006 Breaking news: No-one buys Borland's IDE tools However they try to spin it, it's fairly obvious that Borland were unable to find a buyer for their IDE business. So my prediction was right. Is it good or bad news for Delphi and their other tools? It doesn't look great but at least they can hopefully stand on their own two feet and won't have their income sucked away to finance the ALM business. Currently listening to The Only One by Billy Bragg from the album Workers Playtime Crude awakening - Peak Oil Found a couple of interesting videos about peak oil and some of the possible solutions Saturday, November 04, 2006 IE7 take-up IE7 hasn't been released as part of Windows Update yet, but its usage is up to 6% of all IE users on the Random Pub Finder. Saying that, FireFox 2 is being used by 38% of all FireFox users. I reckon that is down to the techy nature of most FireFox users. Funnily enough 20% of our visitors are now using FireFox, which is pretty astounding. I'd guess we've had quite a few techy people visit recently due to the coverage we've had. Thursday, November 02, 2006 Build it and they will come After five years, the Random Pub Finder has finally become an overnight success. Google seems to have started re-indexing pages, although there's still a random nature to its indexing. We were up to 300 pages indexed, now we're back down to about 150. But the main reason for the sudden surge in traffic has been appearing at Programmable Web. Thanks to that, we've been covered on a couple of other sites. We've never had so many hits. Now I'm actually getting traffic here as well, thanks to this post. Seems I'm one of the top results on Google if you search for directmailchat. Clearly I'm not the only one suffering from this spam. Tuesday, October 31, 2006 I am a sex god I. Monday, October 30, 2006 Hurrah for Richmond Council Richmond Council are introducing increased parking charges for high-polluting vehicles. This brought all the usual rubbish from the 4x4 drivers. Lets go through the points one by one. 4x4s are efficient If your 4x4 is efficient then it won't be charged very highly. The charge is based on CO2 emissions, not car size. I need a big car, I've got 4 kids Presumably it was your decision to have 4 kids. Having kids is expensive, you have to feed and clothe them, house them. You wouldn't expect these to be subsidised would you? It's not like we have a shortage of kids on the planet. Climate change is just a theory Even though the vast majority of scientists agree that temperatures are rising and we are at least partly to blame, this line still gets trotted out. Can someone explain why I've still got bees flying round my garden at the end of October? This isn't normal. But let's assume climate change isn't in fact happening. Cars produce pollution at a local level, are noisy and generally reduce quality of life, particularly in cities like London where there isn't enough space to accomodate them. Not only that, but fossil fuels will run out, some time. So it seems like a good idea to reduce our dependence on them. It's undemocratic The motoring dinosaur Jeremy Clarkson claimed the whole thing is undemocratic. How's that then? The council were voted for by the people of Richmond. If the residents don't like it, they can vote them out again. What I like about this scheme is that it shows we can do something about climate change at the local level, we don't have to wait for the government or, even worse, the international community to do something. Since we live in the neighbouring borough and our council is also run by the Lib-Dems, I'm going to be lobbying them to introduce the same scheme. Currently listening to Have A Day / Celebratory by The Polyphonic Spree from the album The Beginning Stages Of... [UK] Friday, October 27, 2006 iTunes and Windows Live Writer love-in For the past few months I've been using my Toshiba Gigabeat to listen to music in my office, but something has been bugging me about it for a while. The random play functionality is completely whacked out. It seems to be random based on the artist, so if you've got one track by an artist, that artist will get as much coverage as an artist with 100 tracks. So I decided to move my MP3 collection on to my new PC and use iTunes instead. That in itself was a bit of a pain. The Gigabeat converts MP3s into a file with a SAT extension so you can't just copy them back across, since nothing but the Gigabeat knows what a SAT file is. But if you plug it in right, Windows recognises it as a media player and knows what to do with the files and does the conversion on the fly. All seems a bit odd really, but I could now listen to my MP3s through iTunes. And now I've found a very nice plugin (well two actually) that lets you insert the track you're currently listening to into your blog post. Perhaps it should be called 'Completely Pointless Vanity Plug-In' but I like it. Couple of things to note. The Apple website doesn't seem to be correct when it tells you where to put plug-ins for iTunes to pick up, it says they should go under 'My Documents' somewhere but I had to put it in C:\Program Files\iTunes\Plug-ins. The Live Writer plug-in goes in C:\Program Files\Windows Live Writer\Plugins but by default the plug-in will insert 'Nothing playing', you need to update the HTML template to something like <P>Currently listening to %t% by %a% from the album %al%</P> to get it to work. Currently listening to Carrion by British Sea Power from the album The Decline of British Sea Power Tuesday, October 24, 2006 Disabling the Dell malware When I purchased my new PC it came with a whole host of crapware that I didn't want. Most of this could be removed but one piece of software remained, an extension for IE that brought up a Dell search page whenever a website failed to load. I've never tried to get rid of this because it's never caused too much harm but today it began interfering with my work so it was time to get rid of it. IE7 makes it pretty easy to disable pointless or dodgy add-ins. If you have the same problem as me, go to Tools/Manage Add-ons/Enable or Disable Add-ons and disable CBrowserHelperObject object. No more 'helpful' Dell search page, yippee! Sunday, October 22, 2006 Google Sitemaps improvements Saturday, October 21, 2006 So how quickly will IE7 be adopted? Thursday, October 19, 2006 Somebody listened OK, probably not to me, but the silly error message no longer appears in the final released version of IE 7 Wednesday, October 18, 2006 IE7 Stupidity Internet Explorer 7 is mostly a good experience. Yeh, it's ripped off some of Firefox's features, but that's what Microsoft do well. But what on earth is this dialog box about? I tried to paste into a field in Google Maps and it popped up. I understand that Microsoft is getting a little paranoid about security these days, but of course I want to allow the webpage to access my clipboard, I just hit CTRL-V! Oddly enough this doesn't happen on all websites, so perhaps Google Maps is doing some odd scripting when I do a paste. Who knows, but please someone fix it! Exciting new website In their ongoing attempts at SEO, BetterDeal have got a new site (New Car Showroom) and they want some inbound links. Where better to get them than from here, with my huge number of visitors and massive PageRank? Monday, October 09, 2006 Hello World for ZX Spectrum lovers When I was a young lad, there was nothing I liked more than going down to my local Dixons after school and typing my very own Hello World application into the nearest ZX Spectrum. But the Hello World apps I see today just don't match up to the beauty and elegance of that original version, so I've written one for C# using System; namespace HelloWorld { class App { [STAThread] static void Main(string[] args) { line10: Console.WriteLine("Doogal is cool!"); line20: goto line10; } } } Who says goto is evil? I think I might adopt this style for all my applications... Sunday, October 08, 2006 Bid on my birthday present My brother has given everybody the opportunity to bid on my birthday present. Looks like he'll send it to me, even if somebody else outbids me, but any proceeds go to charity, so get bidding! Thursday, October 05, 2006 Google have cocked up again It looks like Google has cocked up its data update again. The PageRank updates it started a few days ago look like they are being rolled back, so the Random Pub Finder is back to PR 0. More worryingly, if I do a site: search, all URLs bar three are now shown as supplemental. But If I do a search for particular keywords, other pages on the site are returned and aren't marked as supplemental. Go figure. Wednesday, October 04, 2006 How to have a reasonably successful website I can't pretend to know a lot about making a website a huge success, none of the sites I've been involved with have been massively successful, but I've kind of worked out how to get a reasonable number of visitors to a site without any kind of massive outlay. Here are a few of my thoughts. Don't spend a lot of cash Websites are pretty cheap to run. Even if your website is your business, you can probably keep it running with no visitors and no income for a pretty long time. Blowing any money you do have on advertising before you're even sure your site is up to scratch is a sure fire recipe for disaster. Better to get just a few visitors in who'll give you a good idea if you're heading in the rifgr direction and you can fix any problems you have before too many people spot them. I think advertising in the old media is pretty much a waste, it's very expensive and it's hard to see if it's been a success. You can advertise using AdWords for next to nothing, giving you the chance to see if it works at all and also trying out different advertising approaches. Don't ignore SEO Google is fond of saying that you should optimise for the end user, not the search engine. I can understand where they are coming from but essentially this is pretty much untrue. If end users can't find your site, whether the site has been optimised for users or not is pretty much irrelevant, they'll never see it. Chances are search engines will be the biggest source of traffic to your site, so of course you need to optimise for them. I managed to triple the number of visitors to the Random Pub Finder just by applying some simple SEO, all completely above board and not impacting on the end user at all. I found this free download a very useful read. Design isn't too important There are many examples of butt ugly sites being pretty damn successful (craigslist being the most obvious example) and plenty of examples of beautiful sites that have crashed and burned (boo.com comes immediately to mind). The fact is if the idea behind your site is appealling, people will come back. If the idea doesn't appeal, it doesn't matter how good looking it is, people will go elsewhere. One company I worked for redesigned their site about 4 times in one year, in an attempt to get more conversions. I've no idea how much they spent in total but the effect on conversions was pretty minimal, so it was pretty much wasted cash. Every redesign brought a host of new problems that needed fixing, along with pages stored in search engines that no longer existed. Not only that but time spent on those redesigns could have been spent refining and improving what was already there. Keep focussed I've never really stuck to this advice (this site covers whatever I can be arsed to write about) but if you want to get people coming back to your site it sure helps if your site covers a single topic, or a few related topics. Iterate and watch I come from a software development background and I've always practised a kind of iterative development technique. And I use the same approach when developing websites. Make a small change, see if it works, move on. Unlike software development, when developing websites the 'see if it works' stage doesn't just mean 'make sure it runs OK'. it also requires looking to see how it has affected traffic, so the iterative cycle can be somewhat longer. and to see if it works, you need to have some data. The cheapest tools (i.e free) for this job are Google Analytics and Google Sitemaps, which tell you how many visitors you're getting and what keywords are driving them there, along with heaps of other information. Take advantage of free advertising There are plenty of opportunities to get more hits on your site that cost nothing. When I post to a forum and I'm given the opportunity to provide a URL I always do. Of course, posting randomly to forums just to get a link to your site is generally considered as spam and will likely cause more harm than good. Sites like Technorati and digg can drag in more readers and directories related to your site can help pull in yet more eyeballs. Wait, and wait some more I have no knowledge of the search engine algorithms but I'm pretty sure, all other things being equal, a site that has been around for 5 years will rank higher than one that's been around for 1 month. So if you just hang around doing nothing, your site should start to get some more hits. I've never really done anything to get hits to doogal.co.uk, but I now get about 200 visitors a week. Monday, October 02, 2006 PageRank update I've just noticed that Google seems to have updated PageRanks across my various sites. And finally The Random Pub Finder has got some PageRank! It's been PR0 for about a year, now we are up to PR3. My home page and this blog are both up, wahey! Wednesday, September 27, 2006 The Great Australian Survey are spammers I. Thursday, September 21, 2006 Working from home is different . Thursday, August 31, 2006 Metastorm e-Work 7 and DEP I've already written about Metastorm e-Work 7, so I was quite excited when I got hold of the CD and immediately started to install it on a test machine. That is until I read the Installation Guide which said "Before attempting to install Metastorm BPM ensure the Data Execution Prevention (DEP) feature is disabled.". Er, excuse me? I read on and was dumbfounded to see that I had to turn off DEP for the whole machine before I could install version 7. I ignored the advice and tried to install anyway but the Installation guide was telling the truth. I couldn't install because DEP was enabled. What is DEP and why should you worry? Applications on your computer have a code section and a data section. Executable code is contained within the code section and any other data required by the application is within the data section. Most applications will only ever execute code from the code section but until recently nothing stopped you from writing an application that ran code from the data section. This could be potentially useful if you need to write self-modifying code, but there is a very small minority of applications that would ever need to do this. In fact the overwhelming number of applications that do execute code in the data section are viruses, trojans and other malware. So Microsoft have recently introduced DEP that will stop applications executing code in the data section. There are two flavours of DEP, software and hardware. The hardware version only runs on the most recent processors but other than that I'm not sure of the differences. But one thing is for sure, DEP is a good thing! It is just one line of protection for your PC but an important one. Disabling it is asking for trouble. So why does e-Work 7 require me to disable DEP? To be blunt, the software has a bug. Clearly Metastorm thought disabling DEP was an easier option than fixing the bug. What I do find odd is the fact that it is possible to disable DEP on a per-application basis, so I don't understand why they didn't go down this route. I would certainly consider installing the software if this was how it was set up. As it is, e-Work 7 will be remaining on my shelf. If I was Metastorm I'd be somewhat concerned about the possibilities of a lawsuit caused by a server being compromised due to DEP being disabled... Wednesday, August 30, 2006 Delphi the best tool for .NET? Er, no Nick Hodges is doing a fine job of evangelizing Delphi which makes me think Delphi may have some kind of future but in his latest post, he suggests that Delphi is the best choice for .NET development. Delphi is certainly still a great choice for native Win32 application development but it just can't compete in the .NET world. There still isn't a version that supports .NET 2 so Delphi developers don't get to play with all the new features. Delphi is always likely to be playing catch-up in this respect. Also, almost all .NET code examples on the web are in C# or VB.NET which would make life more difficult for anyone developing in Delphi for .NET. There are situations where using Delphi in a .NET environment makes sense. If you have a large Delphi code base that you want to get to .NET, rewriting in C# or VB.NET isn't likely to be the most cost-effective solution. If for some reason you need to target Win32 and .NET, Delphi would also be a good solution, in fact probably the only solution. Finally, looking round at new applications coming out, we are now starting to see .NET managed desktop apps appearing. How many of them are written in Delphi? I don't know of one. In the world of ASP.NET I guess it's harder to know what language is being used for the development but I'd guess the vast majority of them are in C# or VB.NET. Compare this to the world of Win32 development where there are quite a few high quality apps being actively developed in Delphi and it's clear that .NET developers also don't think Delphi is the best tool for the job. Sunday, August 27, 2006 Getting into MySql with PHP Our host for the Random Pub Finder has been providing MySql integration for a while. I've been holding off using it for a while because I was thinking of converting the RPF to ASP.NET. But I eventually decided that was going to be too much like hard work and would break any links in. OK, I could probably do some URL rewriting to get round that but again it seemed like a lot of bother and PHP does pretty much everything we need. Anyway, I signed up for the MySql integration and I've been pleasantly surprised with how good the phpMyAdmin web interface is. RPF has been running off several text files for the last five years so I had to make quite a few changes to our code but getting the data in to MySql was pretty straightforward (phpMyAdmin will suck in any kind of delimited text file). Setting up phpMyAdmin on my local machine was pretty straightforward. OK, it's not MS easy, but then this is a free bit of software developed by people in their spare time. So I had to tweak a few text config files to get it working and I couldn't get it to play with PHP 5 but PHP 4 is fine for our needs. And now it's working I'm presuming I won't need to touch it again. Updating the RPF live site is now much simpler, I just insert a new row in a table and a new pub appears on the site. Previously I had to update one of our text files and email it to myself, run to my other half's laptop, download the email and FTP it to our host via a phone modem, because they won't let me connect to the FTP site via our broadband connection. I guess that's because their business model is based on getting revenue from phone calls... So, in conclusion, a thumbs up for PHP, MySql and phpMyAdmin. Perhaps I'll stop being an MS fanboy and become a open-source zealot. Time to grow a beard... Programmable Web Just found this site, which looks like a useful resource for knocking together Web 2.0 (God I hate that term) mashups (God I hate that term) Thursday, August 24, 2006 What is the point of Second Life? Apparently it's very popular with the young uns but I'm failing to understand why. I've been wandering around for a while and there just doesn't seem a great deal to do, except check out some butt ugly buildings. If I want to live in a fantasy world I want to be able to do some fantastic things, like wield big guns and blow up things, not just socialise... Worst of all, I seem to have to download a new version every time I log in. Wednesday, August 23, 2006 Google Maps and the Random Pub Finder OK, I got bored waiting for a response to my queries about ripping off people's code so the Random Pub Finder London Map has gone live. Trying not to be too big headed, I think it's superb. None of it would have been possible without phpcoord and overplot. If programming is an art form then the old adage 'good artists borrow, but great artists steal' must be true... Solving the Google Maps performance problem Google Maps has a nice API to create your own map interfaces on a website. Unfortunately it has some serious performances problems when you add a large number of markers. In my case, I have about 500 markers and both IE and Firefox grind to a halt on my page. I've seen various suggested solutions, the most common being to reduce the number of markers by only showing one marker when there are several clustered together when zoomed out. I'm sure that would work but I wasn't too keen on the idea because it just sounded like too much work. Fortunately somebody has come up with a nice solution that works well for me. thanks to the wonderful open source nature of the web, I can happily rip off his code. Just need to get approval from him before I show off my new page to the world... Sunday, August 20, 2006 Pompous rock star is hypocrite shocker Thursday, August 17, 2006 More Windows Live Writer A comment to my previous post from Spike Washburn (of the Windows Live Writer team no less!) suggests image uploading won't be supported until Blogger improve their API, which is a shame. So Blogger, get your finger out! And apologies if it appeared that I was suggesting Windows Live Writer deliberately wasn't supporting image uploads to Blogger. A couple of other points about Writer. First, I'm very pleased to see it is a managed application. Great to see MS dogfooding the .NET Framework and proving it's possible to produce professional apps using WinForms. Second, it's great to see developers from MS responding to blog posts from little old me. Fact is I'm a Z-list blogger that no-one reads but I've had two lots of feedback from MS people. That kind of communication will really help build a good feeling towards MS in the IT community. And yes, I'm using Writer to write this post. Windows Live Writer Just playing around with Windows Live Writer and it's mostly pretty cool. The first problem I've encountered is the lack of image support on Blogger, which is pretty much a showstopper. Odd really, because Blogger does support images, but I guess this is a beta... More fun with <noscript> tags The guys at BetterDeal wanted to know why their website didn't rank as highly as some of their competitors. Short answer is inbound links, which seems to be the prime ranking decider used by Google. To illustrate I did a link:www<dot>carbroker<dot>com<dot>au search (replace the <dot> with ., don't want them getting anymore inbound links), which is one of their competitors. The odd thing was that one of their inbound links came from a penis enlargement site. Penis enlargement doesn't have a lot to do with new cars, other than the obvious psychological enlargement for men who buy big red Ferraris. Looking at the source of the penis site (purely for research of course) showed up the reason. The link was hidden away in a <noscript> tag. Car Broker are also the guys who have <noscript>betterdeal</noscript> hidden away in one of their pages slagging off reverse auctions, which is why they rank pretty highly for a search on "BetterDeal reverse auction". Although Google know about this, they haven't done anything about it and their algorithm still doesn't seem to handle <noscript> tags too well... Tuesday, August 08, 2006 Borland goes Turbo Monday, August 07, 2006 Land Rover's misplaced marketing Sunday, August 06, 2006 Richmond Park Thursday, August 03, 2006 Lots of new posts DesignerSerializationVisibility attribute Use the tool, tool part 8 Use the tool, tool part 7 Use the tool, tool part 6 Use the tool, tool part 5 Use the tool, tool part 4 Use the tool, tool part 3 Use the tool, tool part 2 Use the tool, tool part 1 Wednesday, August 02, 2006 BPM software comparison Monday, July 31, 2006 House price crash? I hope so Friday, July 28, 2006 Windows Live Local vs Google Maps part 2 Wednesday, July 26, 2006 Windows Live Local vs Google Maps Cunts are still running the world Tuesday, July 18, 2006 An unobjective review of Metastorm BPM 7 Friday, July 14, 2006 MMC RIP? Friday, July 07, 2006 How to choose colours for a website Wednesday, July 05, 2006 Ajax.NET Professional Sunday, July 02, 2006 England suffer completely predictable World cup exit Friday, June 30, 2006 How to rescue a Delphi project Tuesday, June 20, 2006 What is System.RuntimeType? _5<< Thursday, June 15, 2006 The story of an SL-1210 Saturday, June 10, 2006 HTTPS and stupid error messages Wednesday, June 07, 2006 Server-side view state Monday, June 05, 2006 Handling errors in ASP.NET Tuesday, May 30, 2006 DCOM from .NET Type type = Type.GetTypeFromProgID(progId, serverName); if (type == null) throw new Exception("Unable to find the object, ensure the type library is registered"); Object objTest = Activator.CreateInstance(type); ComObject comObject = (ComObject)objTest;
https://doogalbellend.blogspot.com/2006/
CC-MAIN-2018-47
refinedweb
5,560
69.01
: def call(env) connection = DB.checkout_connection env["db.connection"] = connection @app.call(env) ensure DB.checkin_connection connection end: def call(env) connection = DB.checkout_connection env["db.connection"] = connection response = @app.call(env) ResponseProxy.new(response).on_close do DB.checkin_connection connection end end: def call(request, response) connection = DB.checkout_connection request.env["db.connection"] = connection response.on_close { DB.checkin_connection(connection) } @app.call(request, response) end. Yes, it was clear that current Rack API is not suitable for streaming, websockets, etc. But are there any concrete plans? Rewrite Rack and drop compatibility with current middlewares? I use Rack::Static, :urls=>{…} to identify cases where I just need a file delivered statically—perhaps there should be something like Rack::Stream, :urls=>{…} Placing streaming and action-based webapps in the same basket is dangerous. You can’t just “enable streaming” and hope it will work for all apps at the same time. Building a streaming/always connected solution requires a massive change to the way the apps are built, just look at NodeJS in JavaScript, BlueEyes/Spray in Scala, they are built in a completely different way and that’s how it should be done. Hiding the details and pros and cons of building a streaming webapp by a “common platform” using an action-based framework is just going to create new problems for developers. If you want streaming, you neeed to build your app from the ground up with that. Also, on common APIs, the Java Servlet API just added support for streaming/websockets apps using specific (and a different) request cycle, separated form the common request/response cycle of the common Java Servlet. Jose and Aaron are spot on that this needs to change, the sooner the better. It is a huge hole in Rails concurrency. As mentioned in the post, Java does a lot right here, but so does node.js If you look at how connect implements middleware () it has 2 key features that make anything on the radar today possible: 1) request and response can pub/sub events (on::data, on::end, etc). 2) middleware app has to explicitly call next for the chain to continue. The coolest part about #1 is middleware can inject functionality to happen later, this is how they implement x-runtime: module.exports = function responseTime(){ return function(req, res, next){ var start = new Date; if (res._responseTime) return next(); res._responseTime = true; res.on(‘header’, function(header){ var duration = new Date – start; res.setHeader(‘X-Response-time’, duration + ‘ms’); }); }; }; > You can’t just “enable streaming” and hope it will work for all apps at the same time. Yes, this is correct. However, I disagree with other points in your comment. First, I wouldn’t put streaming and always connected/websockets in the same bucket. The latter certainly can’t be mixed and that’s why I haven’t mentioned it in the blog post. Also, the blog post is focusing at the framework infrastructure layer and there is absolutely no need to have two completely different infrastructures, as they certainly share a lot. After all, an action based app is simply a streaming application that streams just once (the headers + body at once). The problem is that the Rack API limits us on this subset scenario, making it hard for us to properly support streaming for those who need to. Are you advocating that Rack evolve to support streaming with the callback hooks you describe in your final code example, or do you think we need something entirely new? I believe Rack should evolve. We can’t afford to go back to the state we had before Rack. So Python has a similar spec called WSGI that solves/helps this problem in a couple ways. First, for some quick background: When a WSGI app is called, the function takes two arguments – the “environ” for the request and a “start_response” callable. The app then returns an interator object, which the server iterates over until the end of the response. It’s the app author’s job to call the start_response callable no later than the first iteration of the iterator. 1. The generation of the HTTP status and headers is decoupled from the response body. This is accomplished by using the start_response callable an make calling of it decoupled from returning the response body iterator. 2. It’s possible in WSGI to have a middleware that is wrapping an app delegate it’s own .next() call (.each in Ruby) to the underlying app it’s written in. When the server calls the middleware (which wraps the app) it gets back the middleware’s iterator rather than the app’s iterator. The middleware can then be notified with the app is done iterating a response (via a StopIteration exception) and then it can do the right thing – i.e. in your example, check in the DB connection. Hope that helps! Very simple example, stream a large (more than 100mb file), you can’t possibly do this in an action based framework without magic (like x-seendfile) simple because you would never want to hang up a thread doing expensive IO operations both on your disk and network, you would select an efficient async implementation that would only write to the socket if there is a buffer available for it. Another example, reverse proxy, you just can’t write an efficient reverse proxy implementation using the usual request-response cycle inside of a thread, again due to the IO constraints. So, while it’s perfectly possible to build an async backend and make it look synchronous for clients, it’s a serious waste of resources and could lead to subtle, hard-to-pinpoint bugs. That’s why I really don’t believe having a single point of entry for both solutions would work, the abstractions would easily leak to the top level layers (the app layer) and people would be struggling to understand what’s going wrong. The Java servlet model is possibly the best, if you want to go request-response directly, just stay where you are, if you want do to streaming or async solutions, pick the new async API and enjoy it. If there is code that can be shared between both implementations, awesome, share it, but don’t force a single model on top of two very different solutions to the problem. I think we are agreeing then. This is very platform specific so when running on Node.js or Erlang (or thin in Ruby) the backend is inherently async so the cost you mention is quite low (which was what I was arguing). But you are totally correct that depending on your platform, the cost may not be worth it and grab a different stack. I think this is an oversimplification of the solution for which there is no problem. There are a number of use-cases for which streaming is not the answer. I can probably come up with more antipatterns than patterns. Richard, thanks for the comment. Can you please be a bit more specific? > I think this is an oversimplification of the solution for which there is no problem. What is an oversimplication? Which solution (the middleware example given or a Rack API alternative)? > There are a number of use-cases for which streaming is not the answer. Agreed. I didn’t say at any moment that streaming is the solution to all problems. In fact, streaming html responses leads to a number of complications, as linked in the post above. Yeah, I saw your gist earlier. It would be cool, if rack will evolve in it’s API, but providing compatibility layer for “non-streaming” apps (in rails’ terms). > In general, we would like to have a response objects that provides several life-cycle hooks I seem to recall saying the same thing to Aaron at some conference a year or so ago. Rack is a great simplifying abstraction, but there’s a limit to what you can do with just one event hook (“on_request”, effectively).
https://blog.plataformatec.com.br/2012/06/why-your-web-framework-should-not-adopt-rack-api/
CC-MAIN-2021-25
refinedweb
1,338
63.29
Clojure Loops in Ruby I bet you’re intimately familiar with for and while loops. When was the last time you used one in Ruby, though? Ruby introduced me to a whole new world of loops called iterators. It was the first time I’d dabbled with each and map. We’ve been chummy ever since and I haven’t looked back. Recently though, I’ve been spending time learning Clojure. Clojure favors value objects to mutable classes, provides rich immutable data structures, and emphasizes functional programming. As languages go, it’s a far cry from Ruby. During my studies, I was surprised to see yet another style of loop. Feeling inspired I decided to port this new Clojure loop into Ruby. I settled on using continuations, a little known Ruby feature, to make it all work. Let’s walk through how these loops work, what continuations are, and what happens when these worlds collide. Oh, before we begin, there’s just one thing. You’ll need to go through a crash course in Clojure. It’ll only take a minute and the rest of this won’t read like gobbledygook. Clojure Basics Let’s start off with function syntax. It’s important to know how a basic function call works. We’ll start with the familiar and add up an array of integers in Ruby: > [1, 2, 3, 4].reduce(:+) 10 In Ruby, you start with an array and call reduce on it. You tell reduce that it should use + on all of the elements in the array. It does the work and gives you back 10. I mentioned before that Clojure has a functional focus. That means you don’t just new up an object and then call stuff on it. Instead, you pass everything being used right to function itself. Here’s the same code in Clojure: [clojure] > (reduce + [1 2 3 4]) 10 [/clojure] The first thing you’ll notice is that parentheses surround everything. Inside them, start with the function to call, which is reduce in this example. Then, follow it up with the function to reduce with, +, and the array to reduce. With that knowledge you’re ready to learn loop. Clojure’s Loop In Clojure, loop has the same layout as a function. The first argument you provide is an array of initial values followed by the code to call during an iteration: [clojure] (loop [x 0] (println x)) [/clojure] The tricky part in the code above is [x 0]. You see, loop takes an array of pairs. Each pair consists of a variable and an initial value for the variable. That bit of code is setting x to 0 for the first iteration of the loop. If we had more than one variable, we’d add one right after the other. It might look something like this: [clojure] [x 0 y 1 z 2] [/clojure] It could also be written as [x 0, y 1, z 2] (commas are optional in arrays and act as whitespace) but the vertical style is a little easier on the eyes. After setting the variables, give loop at least one piece of code to run on each iteration. We’ll keep it simple and go with (println x), which is the Clojure equivalent of puts x. At this point you might be wondering how we stop this loop. I’ll let you in on a secret. Our code doesn’t actually loop at all. It’ll run one iteration and exit. If we want another iteration we’ll have to call recur and pass in new values for our variables. [clojure] (loop [x 0] (println x) (recur (+ x 1))) [/clojure] The recur function takes arguments and calls the nearest loop or function it finds while passing those arguments along. It’s how Clojure handles recursive functions and creates recursive loops. Our loop needs one value for x, so we send (+ x 1), which it uses that for the next iteration of the loop. You’ve probably figured out that (+ x 1) is just x + 1 so the next iteration runs with x as 2. Now, we’ve created an infinite loop. I blame you. Clojure works this way because you can’t reassign variables. We have to create a new iteration of the loop with its own scope where x is 2 and only 2 and will never be anything other than 2. In the context of Ruby this feels very foreign. It does have an interesting advantage though: You can call recur in as many different places as you want inside the loop. Let’s see why that might be helpful. Prime Factors The prime factors of a number are the primes that can be multiplied together to reach that number. For 15, that means 3 and 5. In the case of 60 the prime factors are 2, 2, 3, and 5. Notice that you’re allowed to repeat a prime. How would we go about computing the prime factors for a number? We’ll start with a divisor of 2, the first prime. We’ll see if our number is evenly divisible by the divisor. If so, we’ll store it on the list of primes and try again on our new smaller number. If not, we’ll increment the divisor and try that. When we hit 1, we’re done. Here’s an implementation of the prime-factors function in Clojure: [clojure] (defn prime-factors [number] (loop [remaining number primes [] divisor 2] (cond (= remaining 1) ;; stop at 1 primes (= (rem remaining divisor) 0) ;; evenly divisible (recur (/ remaining divisor) (conj primes divisor) divisor) :else ;; not evenly divisible (recur remaining primes (+ divisor 1))))) [/clojure] I used cond, which you can think of like case. Each condition is accounted for and paired with the appropriate action. The lines to keep an eye on are 9 and 11. On line 9, I call recur with the new smaller number, a list of primes that has the divisor added to it ( conj means push), and the same divisor. On line 11, when the divisor fails to evenly divide the number, I increment the divisor and try again. The implementation turns out to be pretty easy. The code reads a lot like the text describing how we compute prime factors. Alright, that’s enough Clojure. Let’s Ruby How would we implement loop/ recur in Ruby? I like to start with an example of how I want it to work. Let’s write that same primes factors function in Ruby: def prime_factors(number) Clojure.loop(number, [], 2) do |remaining, primes, divisor| case when remaining == 1 primes when remaining % divisor == 0 recur(remaining / divisor, primes.push(divisor), divisor) else recur(remaining, primes, divisor + 1) end end end Ruby already has its own loop so I put ours inside a Clojure class. I think that looks pretty good. Implementation time! Down to Business To start, let’s just see if we can get the block to run: class Clojure def self.loop(*initial_args, &block) block.call(*initial_args) end end This will get us one loop, just like the Clojure version. > Clojure.loop(1) { |x| puts x } 1 We need a way to call it again with new arguments. Recursion seems like the obvious choice here, but that plan has problems. How are we going to provide the recur method inside the block? Even if we figure out how to do that, Ruby isn’t optimized for lots of recursive calls. We might end up causing a stack overflow. Continuations to the rescue! Ruby comes with continuations as part of the standard library. All you have to do is require 'continuation'. If you read that and thought, “A continuwhat?” don’t worry that’s a normal, healthy response. Now, allow me to warp your brain. The basic principle of a continuation isn’t too complicated. You set a mark in the code, do some stuff, click your heels three times, and end up back on the line of code you originally marked. Let’s look at an example that counts from 1 up to 10: require 'continuation' mark = nil number = callcc { |continuation| mark = continuation; 1 } puts number mark.call(number + 1) unless number == 10 Let’s break it down starting on line 3. Start off by setting the variable mark to nil so when we assign it on the next line, it’s available in the correct scope. Speaking of the next line, a lot happens on line 4. Continuations are created using callcc. You’ll notice it requires a block. It will execute the block immediately and pass it a continuation object. In the block, set mark to the continuation object so that we retain access to it. Then, return 1, which callcc uses as its return value. By the time we hit line 5, mark holds our continuation object and number equals 1. Line 5 speaks for itself. Line 6 is where the second half of the magic occurs. Using call on a continuation object returns you to the line on which the continuation was created. In this case, that’s line 4. The values passed into call act as the new return value of callcc on line 4. When line 6 is done executing we’re returned to line 4, number is set to 2, and execution continues from line 4. It’ll run like this until number equals 10. Back to our loop code. We need a way to kick off another iteration of the loop. Using call on a continuation gets us just that. We’ll create a continuation and then execute the block in the context of the continuation: require 'continuation' class Clojure def self.loop(*initial_args, &block) continuation = nil callcc { |c| continuation = c } continuation.instance_exec(*initial_args, &block) end end If we use call anywhere inside block, it’ll send us back to line 5 in the above code. Now, we have code that can loop infinitely. Again, I blame you. > Clojure.loop(1) { |x| puts x; call } 1 1 ... We’re getting closer but we still can’t pass values to the next iteration. Let’s fix that. require 'continuation' class Clojure def self.loop(*initial_args, &block) continuation = nil args = callcc do |c| continuation = c initial_args end continuation.instance_exec(*args, &block) end end Just like in our counting example, start on line 5 by preparing a variable to hold the continuation. Add an args variable on line 6 and start by setting it to initial_args. Now, when values are passed to call they’ll end up in args. Once again counting from 1 to 10: Clojure.loop(1) do |number| puts number call(number + 1) unless number == 10 end At this point our loop works. The only thing left to do is alias recur to call so we can use the method name we want. require 'continuation' class Clojure def self.loop(*initial_args, &block) continuation = nil args = callcc do |c| continuation = c class << continuation alias :recur :call end initial_args end continuation.instance_exec(*args, &block) end end We’ve done it! The Only Thing We Have to Fear… If at any point during this you thought “GOTO” and ran from your desk screaming to the confusion of your co-workers, that’s not entirely unwarranted. Like GOTO, continuations can be used for evil. If you carelessly litter your code with continuations, you can expect an execution path that is impossible to follow. It’ll also mean that you’re doing it wrong. We’ve seen the capabilities of continuations, jumping through code and carrying data along for the ride. They are an amazing tool, capable of building powerful control flow primitives. Continuations can be used to add exception handling to a language or create generators. They’re not something you’ll reach for regularly, but when you want to do something like, say, build a Clojure style loop, they’ve got your back.
https://www.sitepoint.com/clojure-loops-ruby/
CC-MAIN-2018-51
refinedweb
1,987
74.69
![endif]--> Arduino <![endif]--> Buy Download Products Arduino AtHeart Certified Learning Getting started Examples Reference Playground Forum Support Advanced Search | Arduino Forum :: Members :: Sembazuru Show Posts Pages: [ 1 ] 2 3 ... 35 1 Using Arduino / Project Guidance / Re: 3 XBees Communication on: August 04, 2014, 10:48:45 pm Because there is no synchronization between transmitters with your 2 transmitter and 1 receiver you will need to change communication paradigms to the receiver polling the transmitters for responses. For this, you might want to start referring to the transmitters as "slaves" and the receiver as "master". The slaves won't transmit anything unless asked, so it is up to the master to request data from the slaves. This is commonly known as polling. From wikipedia's entry on Polling : Quote. Basically, your master Arduino would send a message out over the XBees and both slaves will listen. If the polling message is for the temperature sensor Arduino (say the character "*"), it would respond with the temperature. May as well keep the current message format, but change the XBeeTX.print to XBeeTX.println which will add a CR and LF to the end of the message. Ignore the CR (0x0D) and use the LF (0x0A) as an end-of-message character. For the same polling message for the temperature sensor, the RPM sensor would throw out all received characters until it gets the LF, and then it would start listening for another character. It might make sense in your setup for the temperature Arduino to only query the I2C temperature sensor when it gets polled for data. Then have the RPM Arduino to continually count Hall Sensor triggers and calculate new RPM values, and when it gets polled respond with the latest calculated value. As far as the code you posted: Yes, data_format[] as you define it will/should work. (kudos to you) :-) Regarding RPM calculations, what is the range of RPM you are expecting (and thus what is the fastest and slowest that 5 rotations will be counted)? Will the maximum RPM be below the maximum value that an unsigned int can hold (65,535)? More kudos to you, good on you for keeping your ISR (rpm_fun) short and sweet. I might choose a more suitable name like rpm_count, but that is mostly personal preference. Something to be careful with your actual calculation (rpm = 60000/k*revolutions). You are doing integer math with longs and then storing the result into an int. Watch out for rounding errors and unintentional implicit casting truncating your values. I'm no expert on figuring these things out, so you might want to do some trials on your math in a scratch sketch. Give the largest and smallest values to the variables k and revolutions that you would expect to see, then run the formula. Check to see if the result matches what you expect. If not, you may have to do some explicit casting. Someone else may have to assist with this, my skills here aren't up to par yet. For all I know, because I haven't tested it yet outside my own mind, your formula may work properly for your situation. But I would devote some time trying to break this to understand any potential limitations. When rebuilding the received message composed with "%c%04X", the second received byte when converted to a hex value will be shifted left 12 bits, the third byte shifted 8 bits and added to the second, the fourth byte shifted 4 bits and added to the previous two, and then the fifth byte shifted zero bits (i.e. not shifted at all) and added to the previous three. Using an end-of-message (EOM) character will allow you to check each received character for the EOM character to trap some cases of malformed messages. If the EOM character arrives early, or doesn't arrive in the expected number of characters, the received message can be thrown out as bad. Just some thoughts for you to ponder. 2 Using Arduino / Project Guidance / Re: 3 XBees Communication on: July 20, 2014, 11:59:50 pm I just got your PM to join this thread. I'm not that familiar with more than 2 XBEEs communicating. But once you get into a network type mode I doubt you can have T1 and T2 transmitting willy-nilly and expect R3 to understand who is saying what when, not to mention data collisions when T1 and T2 just happen to try to transmit simultaneously. You may need to define a protocol where T1 and T2 don't transmit anything unless asked. Then have R3 poll each individually. Either strictly alternating, or if one bit of data needs to be updated more often than the other by some pattern to match the data requirements. You may need to set up some sort of mesh-like network, but I'm not sure. Anyone else more experienced with multi-XBee networks? BTW, why are you using PuTTy to configure the XBees. Because you are using PuTTy, that tells me that you are on some Windows variant. Digi has a tool called XCTU that runs on Windows that provides a GUI for configuring XBee modules. Much easier to use than remembering what all the AT codes mean. I'm about to head to bed so I don't have the time to look anything up for you right now. I'll try to get to this after work tomorrow. Hopefully I won't forget. ;-) 3 Using Arduino / Programming Questions / Re: Serial.print() issue or maybe something else on: July 08, 2014, 02:35:05 pm I notice in your code Adafruit_VC0706 cam = Adafruit_VC0706(&Serial1); ... Which Arduino are you using? You never mentioned. Also, Quote One weird symptom I noticed is that whenever the program reaches the GSM method client.connect() the arduino IDE loses serial connectivity. The arduino still shows up in device manager but the IDE just says COMPORT not found. Also client.connect() seems to successfully connect to the server but still returns false in the program. It just tries to connect again and again even though it is succeeding. This almost sounds like the GSM module is trying to pull too much current, pulling down your voltage source causing a brownout. How do you have everything powered? 4 Using Arduino / Programming Questions / Re: Must have more files! (Datalogging to SD card) on: June 27, 2014, 08:00:47 am Quote from: Nick_Pyner on June 26, 2014, 11:27:12 pm Quote from: bhay on June 26, 2014, 10:07:08 am , I need it to make a new file for every series of data. So it will turn on, record the data to a .txt file and then be turned of when the test is over. Then when it is booted back up again it needs to make a new file. Surely, you already use an RTC. This will give you a new file by datestamp. It surely can't get any simpler than this. I just use it daily but you can move or expand to H:M:S as you need. Silly me, I didn't take note that bhay (OP) was already using an RTC. Another way of using the RTC to generate file names that are chronologically sequential is using either secondstime() or unixtime(). From RTClib.h: Code: // 32-bit times as seconds since 1/1/2000 long secondstime() const; // 32-bit times as seconds since 1/1/1970 uint32_t unixtime(void) const; (Hmmm... there seems to be a potential glitch with secondstime() being a signed value...) A 32bit value represented in HEX will fit an 8 character filename, but isn't quite human readable so I'd also put a human readable date/time stamp at the beginning of each record/line of the file. But it should alphabetically sort properly. Just change the sprintf(filename, "%02d%02d%02d.csv", now.year(), now.month(), now.day()); line in your sample code to sprintf(filename, "%08x.csv", now.unixtime()); or sprintf(filename, "%08x.csv", now.secondstime()); Though, I suppose (since the SD library supports sub directories) one could use folders with dates as names with files with times as names to have it human readable at the filesystem level. Quote from: michinyon on June 27, 2014, 02:37:58 am The scheme to create numbered files will work, but it might take a long time to check for all the numbered files which already exist. What I did was created a file with the lastest file number in it. The program opens this file, reads the next number, increments it in the file, and then closes the file, and opens the actual data file with the number. Clever solution. Though I'm not really sure how long the code would take to realize that ...99.CSV is the next available file. And I don't have any of my Arduinos handy to empirically find out. Quote from: michinyon on June 27, 2014, 02:37:58 am I've found a big problem with sd files becoming corrupted. I'd endorse the suggestion to have a switch which disables file writing before powering the device off. One trick that I've developed for at least the prototype stage on my UNOs: If available use two adjacent I/O pins. Configure one pin as an output at LOW, and the other pin as INPUT_PULLUP. I have a 2-pin section of pinheader that I've soldered a wire across the short ends that I use as a male shorting block. No, I wouldn't suggest using this technique as an input switch (bounce would be horrible I imagine), but as a rarely-changed jumper setting it is sufficient and doesn't require any wires to a breadboard. 5 Using Arduino / Programming Questions / Re: Must have more files! (Datalogging to SD card) on: June 26, 2014, 09:21:40 pm Quote from: PeterH on June 26, 2014, 04:12:53 pm You're using the number of existing files to choose a name for the new file. It might work, but only when all the previous files are present. If you ever deleted one of the historical files, while leaving the most file in place, this algorithm would leave you trying to use the name of an existing file, so it would fail. The approach Sembazuru demonstrated is closer to what I was trying to describe. Well, with my technique (especially since the Arduino doesn't naturally have an RTC for date stamping files), if one deletes a file from the middle of multiple files, then my technique will fill in the empty spots, essentially destroying any sort of link of chronological order to sequential order. I just needed something quick and dirty when I wrote that. 6 Using Arduino / Programming Questions / Re: Must have more files! (Datalogging to SD card) on: June 26, 2014, 03:57:02 pm Here is how I did something similar a while ago. This technique will give you up to 100 incrementally numbered files (00 through 99). Code: #include <SPI.h> #include <SD.h> char fileName[] = "LOGGER00.CSV"; // Base filename for logging. void setup() { // Construct the filename to be the next incrementally indexed filename in the set [00-99]. for (byte i = 1; i <= 99; i++) { // check before modifying target filename. if (SD.exists(fileName)) { // the filename exists so increment the 2 digit filename index. fileName[6] = i/10 + '0'; fileName[7] = i%10 + '0'; } else { break; // the filename doesn't exist so break out of the for loop. } } } I would suggest that every time you need to write you should open the file and then close it immediately after writing, or open the file once and always send a file.flush() after writing. This should help reduce the chance of loosing data, files, or the SD card when you power off the Arduino. I suppose if you really wanted to be safe, have a switch that you throw that disables writing so you can be sure that the write buffers are flushed and you aren't actively accessing the SD card before you intentionally power off. But, that wouldn't cover accidental power outages. 7 Using Arduino / Programming Questions / Re: Convert amount received by the serial on: June 22, 2014, 08:44:45 am Quote from: zoomkat on June 21, 2014, 09:48:47 pm Quote The issues with your sketch is it makes too many assumptions about the incoming serial stream. It works under the conditions I listed. Quote I'm not going to bother doing the math to see if your 2ms delay is long enough, I'll assume you've already found that to be true. Sounds like you can't do the math, as the code works as described. Why are you whining about simple code that demonstrates the "int n = readString.toInt();" function? Did I somehow make your cornflakes taste like urine? I'm sure it works under the conditions that you listed (i.e. each number sent as a burst of digits as fast as the serial port's baud rate would allow). And it isn't that I can't do the math, I just didn't bother because I trusted you enough not to intentionally post non-functional code. I was just pointing out the fragility of the code. Your code certainly won't work if the transmitting serial device (UECIDE Serial Terminal, any one of a number of standalone serial terminal emulators like Realterm, serially connected keyboard, etc) is sending multi-digit numbers one character at a time at the speed a normal person types. For robustness it is best to watch for an end token (like a newline character) instead of relying on a timeout. No need to imagine urine flavored cornflakes. 8 Using Arduino / Programming Questions / Re: Convert amount received by the serial on: June 21, 2014, 07:36:00 pm Quote from: zoomkat on June 21, 2014, 07:01:18 pm Quote from: Gilliard on June 21, 2014, 06:31:15 pm Do not have a simpler way to convert char into integer? sure... There are a few issues with your suggestion zoomkat. I'm not going to argue pro or con on the String object, I expect that from you so I would be surprised not to see it. The issues with your sketch is it makes too many assumptions about the incoming serial stream. I'm not going to bother doing the math to see if your 2ms delay is long enough, I'll assume you've already found that to be true. The assumption your code is making is that each set of digits for the numbers will be sent in a burst as fast as the serial port baudrate will allow, and there will be at least a 2ms delay between numbers (bursts of digits). Also, the error condition (i.e. a non digit character starting a new burst of serial transmission) will result in getting a value of zero. What if zero is a valid number to accept? An error transmission will be giving false values (instead of either being ignored or producing an error message). It will work in many cases, but it just isn't robust enough for all reasonable cases. 9 Using Arduino / Programming Questions / Re: Convert amount received by the serial on: June 21, 2014, 10:41:23 am Quote from: HazardsMind on June 21, 2014, 09:40:38 am Say you read in a 5, it's currently a char type so you need to make it an integer. To do this, you take whatever char comes in(preferably a digit) and subtract 48 or '0' from it. 5 as a char is 53, then subtract 48 and you get 5. Now this is when you use (temp = temp * 10). Temp is currently 0, so it looks like this: 0 = (0 * 10) + ("incoming char"); You add the 5 now. 5 = (0 * 10) + 5; Ok now read in a '9'. Subtract 48 to give you 9. 59 = (5 * 10) + 9; Add another char 8, subtract and now you have: 598 = (59 * 10) + 8; If you keep reading in chars, it will keep storing them. Note an int is only so big, so if you go over 32767, you will start to get negative numbers. To fix this you can either make the variable temp "unsigned" or type long or even unsigned long. @Gilliard Continuing HazardsMind's logic, you probably also want to have your Arduino know when you have finished sending a number and want to start sending a new number. To do this you need to have some sort of token that you use to tell the Arduino to either start building a number from sequential digits and/or stop building a number from sequential digits. Commonly an end token is used, either in the form of a carriage return and/or a line feed character (depending on what your system uses for an newline character or sequence when you press the return key). Using current technology, it is probably safe to just watch for the line feed character because both MS-Win systems and HTTP use the sequence carriage return then line feed, and Unix-style systems (linux, OS-X, actual unix, android, etc) use just the line feed character. Because of your check to make sure the incoming character is a digit, the carriage return character will be ignored so you can just focus on the character that is common in the two (modernly) common newline character sequences. But if you are going to be connecting to an old-school Mac running OS9 or earlier, it will use just the carriage return character (and as I recall the same thing is true about C64s)... If you can reasonably expect to connect to an old dinosaur Mac (don't laugh, the lab I just started working at has an old PC running Windows 3.1 because that is the latest OS that runs the needed software and has the proper hardware drivers for the attached expensive equipment, classic "if it works don't fix it") you may want to check for a carriage return OR a line feed. Check this wiki page if you are interested in the history of newline character sequences. A start token might be useful to tell the Arduino what variable the following sequence of numbers should be stored in. This is useful if you want to be able to control multiple parameters via the serial port. 10 Using Arduino / Programming Questions / Re: Installing 3rd party libraries without . on: June 21, 2014, 09:52:50 am Quote from: HVXmania on June 21, 2014, 09:18:22 am Hi, I'm a newbie so my apologies if this is a stupid question but how do you install 3rd party arduino libraries when there is no .cpp and .h (or zip) file, just the code? I've done it successfully with those files before (eg for AccelServo) but the third party library links take you straight to the code in your browser. How do you get the .cpp and .h file? For example, I'd like to install Tim Herzel's Wii Classic Controller library: It's just a page of code with no links to any files to download. Do I copy it into a text document and save it as .cpp or .h? If you look carefully on that particular page there are actually two pieces of code, each listing ending with a right-justified link called "[Get Code]". At the beginning of the code listings (in a different font) there is a description of what the following code listing is for. For pages like this (and there are a lot in the playground), it is probably best to click the [Get Code] link at the end of the listing(s) and then save the resulting page as the proper name either alluded to or directly stated in the text before the beginning of the listing(s). It is then up to you to move the resulting files from your downloads area into the proper folder structure. There will always be a .h file, but as you can see in the example you linked to, sometimes the .cpp file isn't needed. That said, some of the pages in the playground are woefully out of date. One example that I know of is the RunningMedian library. I worked with robtillaart adding some functionality to that class, but he never got around to back-porting the changes to the Playground Page. One would just have to know to search for him on github to find the latest version of that library. 11 Using Arduino / Programming Questions / Re: Updating sketch from old IDE serial HELP!! on: June 20, 2014, 01:44:54 pm Quote from: thatguy on June 20, 2014, 12:04:29 pm i don't understand what you mean. do you want me to try my code outside of the interrupt and in the main loop ? Wait... You are trying to write to the hardware serial port from within an ISR? As of Arduino1.0+ the serial access is now asynchronous, meaning it uses interrupt events. Not something you can do from within an ISR. Best to set a flag inside the interrupt and then have your main loop continuously poll the flag. Then print and clear the flag if the main program finds the flag set. 12 Using Arduino / Programming Questions / Re: Read variables from SD Card on: June 12, 2014, 07:48:57 pm Quote from: casemod on June 12, 2014, 04:34:23 pm Quote from: Sembazuru. I see. I am using something similar reading from the serial port. It checks if the data is a numeric constant, if not it discards and waits for the next result. I guess I could use something similar here. I am just wondering what would happen if the actual log file got corrupted, for example a power down in the midle of a reading? Honestly there is only so much you can plan for. There is always the slight possibility that if power was interrupted during a write to the SD card you could actually corrupt the SD card itself. The power would have to be lost at just the right (or wrong?) time. Best case would be a partial record saved. Depending on how critical not loosing data is, you might want to consider a backup battery for the Arduino. Sense the main input power to the Arduino (I haven't thought through far enough to flesh out an actual circuit yet), and when the main power dies, prepare for a clean power off (make sure any records that need to be written are completely written) and then switch off the power from the backup. I personally think that this is overkill, but you may have different priorities. 13 Using Arduino / Programming Questions / Re: Read variables from SD Card. 14 Using Arduino / Programming Questions / Re: Read variables from SD Card on: June 12, 2014, 04:43:10 am This would be a bit easier if we knew exactly how you are saving records to the SD card to know how to parse the file. So, without that knowledge, I'll just have to speak on general terms. Since you are writing to a SD card I'll assume you already know the reference page for the SD Library . Let me highlight some functions that you will want to read up on: file. size() , file. seek(pos) , and file. peek() . When you open a file object one of it's properties is a read/write pointer (or can be called a cursor). Some programming languages use two pointers, a read pointer and a write pointer. With the SD library we only have one pointer that does both jobs (I'm not totally clear if the reason is because of how the SD library was written, or if that is how C/C++ deals with file classes, but that level of detail really doesn't matter.) So one needs to remember to move the pointer to the end of the file before writing to avoid overwriting parts of the file. When the last time your Arduino was on and writing to the data file it may have recorded lots of records. One should expect to find 0 to [very large number] of records. Think of the records as pages in a book. If you just want to know what is on the last page of the book do you start reading from the beginning, or do you turn to the last page? Reading the whole thing might be feasible if the "book" was a short 4-page menu, but what if it is the Oxford English Dictionary? Would you really read through all 12 volumes (for OED1) just to look at the last page? Unfortunately, there the analogy starts to break apart. One can't simply turn to the last saved record in the file. But one can turn to the last byte in the file using file. seek(pos) . One knows where that last byte is by using file. size() , and one can see what that last byte is without moving the read/write pointer by using file. peek() . Then it is a simple exercise of scanning backwards from the end of the file looking for your record separator (if you used file. println() when writing to the file your separator would be <CR><LF>, but since you can only scan one byte at a time, just scan for the <LF> character.) Then read and parse the end of the file much like one would read and parse the same data coming in over the serial port (except instead of using Serial.read() you would use file. read() ). Hmmm... I suppose one could modify the analogy to fit the exercise better... Think about what you would do if you wanted to read the last chapter of Harry Potter and the Order of the Phoenix ? Do you start flipping through pages from the front of the book, or start flipping pages backwards from the last page (page 870 of the US printing)? (I'm ignoring the possibility of a table of contents...) Remember to put error corrections in there. Like consider what to do if the file isn't found, if the SD card isn't found, what to do if the last record isn't complete or is corrupted, etc. My suggestion would be to write it as a function that you only call from setup(). That would keep the clutter down in setup(), and allow using local variables in the function that will cease to exist (and thus stop consuming valuable RAM) once the function completes. The 4 bytes of the unsigned long that you will be using to keep tabs of and manipulate the read/write counter should be a local variable, I'm sure there will be others... Hopefully this is enough for you to cobble together some code that you can then post for us to critique. Not all of us are very tolerant of massive amounts of hand-waving. 15 Using Arduino / Programming Questions / Re: Using Random() on: June 05, 2014, 03:50:48 pm Quote from: robtillaart on June 05, 2014, 02:45:46 pm but Code: randNumber = random(1,5); // generate random number between 1 & 4 (minimum is inclusive, maximum is exclusive) if (randNumber >= prevRand) randNumber++; // corrects for values equal to the previous value, //and shifts values greater than previous values up by one to get a generated range // of 1 to 5 excluding the last generated value prevRand = randNumber; if the prevRand is 4 and randNumber too, randNumber will be 5 and therefor out of range !!! Except that the stated range from the OP is 1 through 5 inclusive. Quote you should correct for that. Furthermore this method will generate "predictable" pairs e.g. after a 1 there are twice as many 2's as statistically expected. There is a reason why it's called pseudorandom... Quote This code will never generate the same random number. If there is a "collision" with the previous value an offset is added. Then the offset is changed so that no predictable pairs come up (unless the range is only 2 as in example below) . As the code has no loops it has a predictable execution time [range]. Code: int prevValue = -1; int x = 0; int y = 0; long count = 0; void setup() { Serial.begin(115200); Serial.println("Start "); } void loop() { x = getRandomNoDuplicate(1, 3); // gives "random" 1,2,1,2,1,2,1,2,1,2,1,2,1,2,1,2 (very predicatable!! if (x == y) Serial.println("..."); y = x; count++; if (count % 1000 == 0) Serial.println("M"); } int getRandomNoDuplicate(int lower, int upper) { static int offset = 0; int randNumber = random(lower, upper); if (prevValue == randNumber) { offset++; if (offset >= upper - lower) offset = 1; randNumber += offset; if (randNumber >= upper) randNumber = lower + randNumber - upper; } prevValue = randNumber; return randNumber; } Looks good to me. I'd have to take some time (which I don't have at the moment) to fully study it, but I feel that there is still a possibility of duplicates for some values of offset on some ranges due to rollover effects. I could be wrong though. Pages: [ 1 ] 2 3 ... 35 | SMF © 2013, Simple Machines Newsletter ©2014 Arduino
http://forum.arduino.cc/index.php?action=profile;u=160112;sa=showPosts
CC-MAIN-2014-42
refinedweb
4,917
69.72
In this blog post, we’ll highlight how all the basic commands you end up using in the first few minutes after installing PostgreSQL are identical in YugabyteDB. We’ll cover connecting to the database, creating users, databases, schemas, and calling external files from the SQL shell. In the next blog post in this series we’ll tackle querying data to demonstrate that if you know how to query data in PostgreSQL, you already know how to do it in YugabyteDB. First things first, for those of you who might be new to either distributed SQL or YugabyteDB…. - Smart distributed query execution so that query processing is pushed closer to the data as opposed to data being pushed over the network and thus slowing down query response times. -… Installing YugabyteDB YugabyteDB is only slightly more involved than getting PostgreSQL up and running. At the end of the day it should only take a few minutes or less depending on your environment. Let’s look at a few scenarios: Single Node Installation on Mac $ wget $ tar xvfz yugabyte-2.3.0.0-darwin.tar.gz && cd yugabyte-2.3.0.0/ $ ./bin/yugabyted start Single Node Installation on Linux $ wget $ tar xvfz yugabyte-2.3.0.0-linux.tar.gz && cd yugabyte-2.3.0.0/ $ ./bin/post_install.sh $ ./bin/yugabyted start Note: If you want to run 3 local nodes instead of a single node for either the Mac or Linux setups, just tweak the last command so it reads: ./bin/yb-ctl --rf 3 create 3 Node Installation on Google Kubernetes Engine $ helm repo add yugabytedb $ helm repo update $ kubectl create namespace yb-demo $ helm install yb-demo yugabytedb/yugabyte --namespace yb-demo --wait For more information on other installation types and prerequisites, check out the Quickstart Docs. Connecting to a YugabyteDB Cluster Connect Locally Assuming you are in the YugabyteDB install directory, simply execute the following to get to a YSQL shell: $ ./bin/ysqlsh ysqlsh (11.2-YB-2.3.0.0-b0) Type "help" for help. yugabyte=# Connecting on GKE Assuming you are connected to the Kubernetes cluster via the Google Cloud Console, execute the following. $ kubectl exec -n yb-demo -it yb-tserver-0 -- ysqlsh -h yb-tserver-0.yb-tservers.yb-demo ysqlsh (11.2-YB-2.3.0.0-b0) Type "help" for help. yugabyte=# Check out the documentation for more information about YugabyteDB’s PostgreSQL-compatible YSQL API. Connecting via JDBC Assuming we are using the PostgreSQL JDBC driver to connect to YugabyteDB, the construction of the connect string will be identical to PostgreSQL. For example here’s a snippet for setting up a connection to a database called "northwind" in YugabyteDB using the PostgreSQL driver in Spring. spring.datasource.url=jdbc:postgresql://11.22.33.44:5433/northwind Note: In the example above we assume YugabyteDB’s YSQL API is being accessed at 11.22.33.44 on the default port 5433, using the default user “yugabyte” with the password “password”. For more information about YugabyteDB connectivity options check out the Drivers section of the documentation. Setting Up Users in YugabyteDB Creating roles/users, and assigning them privileges and passwords is going to be the same in YugabyteDB as it is in PostgreSQL. Create a Role with Privileges CREATE ROLE felix LOGIN; Create a Role with a Password CREATE USER felix2 WITH PASSWORD ‘password’; Create a Role with a Password That Will Expire in the Future CREATE ROLE felix3 WITH LOGIN PASSWORD 'password' VALID UNTIL '2020-09-30'; Change a User’s Password ALTER ROLE felix WITH PASSWORD 'newpassword'; List All the Users \du For more information about how YugabyteDB handles users, permissions, security, and encryption check out the Secure section of the documentation. Creating Databases and Schemas in YugabyteDB Creating databases and schemas in YugabyteDB is identical to how it is done in PostgreSQL. Create a Database CREATE DATABASE northwind; Switch to a Database \c northwind; Describe the Database \dt) Create a Schema CREATE SCHEMA nonpublic; Create a Schema for a Specific User CREATE SCHEMA AUTHORIZATION felix; Create Objects and Load Data from External Files If you have DDL or DML scripts that you want to call from within the YSQL shell, the process is the same in YugabyteDB as it is in PostgreSQL. You can find the scripts used in the examples below in the "~/yugabyte-2.3.x.x/share" directory. For information about the sample data sets that ship by default with YugabyteDB, check out the Sample Datasets documentation. Call an External File to Create Objects \i 'northwind_ddl.sql'; Call an External File to Load Data into the Objects \i 'northwind_data.sql'; What’s Next? Stay tuned for part 2 in this series where we’ll dive into querying data from a YugabyteDB cluster using familiar PostgreSQL syntax. Discussion
https://dev.to/jguerreroyb/a-postgresql-compatible-distributed-sql-cheat-sheet-the-basics-4ep7
CC-MAIN-2020-45
refinedweb
801
51.38
It's actually a boolean checkbox..... so I basically want the label to change based upon whether or not the checkbox is checked. Is there any way to do that? Again, I've tried using the Cheetah syntax to do the #if #end inside the <output> tags.... but that didn't work. Advertising - Nik. On Thu, Aug 25, 2011 at 6:34 AM, SHAUN WEBB <swe...@staffmail.ed.ac.uk>wrote: > > I meant to say >> %else >> <data format="txt" name="blah" label="Label2" /> >> %endif >> </outputs> >> >> Thanks, >> >> K >> >> On Wed, Aug 24, 2011 at 8:24 PM, Nikhil Joshi <najo...@ucdavis.edu> >> wrote: >> >> Hi all, >>> >>> Is there a way to set the label of the output based on the input >>> parameters? Perhaps by using the <action> tag? Basically, I want the >>> output label to be different if the user sets a particular parameter to >>> be >>> true. >>> >>> - Nik. >>> >>> ______________________________**_____________________________ >>> Please keep all replies on the list by using "reply all" >>> in your mail client. To manage your subscriptions to this >>> and other Galaxy lists, please use the interface at: >>> >>> >>> >>> >> > > > -- >:
https://www.mail-archive.com/galaxy-dev@lists.bx.psu.edu/msg02429.html
CC-MAIN-2017-39
refinedweb
178
74.69
by Arnel Enero How to simplify state in your React app — Redux with a twist New, much easier syntax and semantics for good old Redux The words “simple” and “Redux” rarely appear together in the same sentence. And yet, much of the React community has come to embrace Redux as one of the best solutions for implementing application state. Now there is a way to use Redux even if you don’t write a single line of Redux boilerplate code. You don’t even need to know or learn Redux. As long as you are convinced that Redux is the top choice for your app’s state requirements, you will want to read this. In this article we will cover these topics: - Managing simple app state changes - Working with async operations (e.g. data fetches) - Code splitting and lazy-loaded app state The Reactor Library I originally wrote the Reactor Library to minimize the boilerplate needed in my personal projects that use React. One of its features is the super simple app state management that I will share with you here. I have since decided to make the library available to everyone who may be looking to simplify their React/Redux code. Feel free to use it; it’s yours as much as mine. To install: npm install @reactorlib/core The 3 Key Things To write our application state management using Reactor Library, there are 3 key things we need to know about: - Store: This is the single place where the entire state of our application is kept. - Entities: These are pieces of the app state, each representing a specific area of concern or functionality. - Actions: These are functions that our components can invoke to trigger some change in the app state. These also reside in the store. Step 1: Creating Entities When we define an entity, we think about how the entity would react to certain actions. We refer to this as its reactions. Each reaction comprises state changes that occur within the entity (remember, each entity is just a portion of our app state). Reactor Library provides a function called createEntity that we will use to define our entities. It accepts two arguments, the entity’s reactions, as well as its initial state: createEntity(reactions: Object, initialState: any) Let’s get the easier part out of the way first. The initialState should basically define the data structure of our entity by assigning a default value to it. The reactions argument is a mapping of action names against corresponding reactions. Note that the mapping is not meant to define the actual action functions. In its simplest form, a reaction looks like this: action: (state, payload) => newState where action corresponds to the name of an action, while payload(optional) is any single argument that the entity expects you to pass to the action. All this really means is, when action(payload) is invoked, the entity applies certain logic to change its state from state to newState. Here is a simple example of entity definition: const initialState = { value: 0 }; const counter = createEntity( { increment: (state, by) => ( { ...state, value: state.value + by } ), reset: state => ({ ...state, value: 0 }) }, initialState); IMPORTANT: In defining an entity’s reactions, keep in mind that the React golden rule of not mutating the component state also applies to the application state. So if your entity’s state is of object or array type, always make sure to return a fresh object or array. Easy peasy so far, right? Let’s go on… Step 2: Setting Up the Store I said ‘the store’ because there can only be one store throughout our entire application. To make this store available to all our components, we would need to inject this into a top-level component, typically <App>. Reactor Library includes the withStore HOC that creates the store, puts entities into it, and designates its target component as the provider/owner of the store. withStore(entities: Object) (Component) Here the entities argument is a mapping of entity names against the actual entity objects created using createEntity(). This mapping is important because we access entities from the store using the names assigned here. Let’s take the counter entity from our previous example, and create our store then place the entity in it: import counter from './store/counter'; const _App = () => ( <Router> <Shell /> </Router>); const App = withStore({ counter })(_App); As simple as that, really. Our store is now all set. Step 3: Importing Props from Store Now the last remaining step is to make the application state accessible to our components. There are 2 simple rules: - Components are able to read the application state by importing entities from the store. - They can also change the app state, by importing actions from the store. We use Reactor Library’s getPropsFromStore HOC to do either or both, and inject them to our component as props. getPropsFromStore( entities?: Array<string>, actions?: Array<string>) (Component) Here, entities is a list of entity names, and actions is a list of action names. Imported entities are injected as state props. This means that whenever any of these entities change, the component will re-render. Imported actions are injected as function props that we can directly invoke inside our component. You may be wondering, where do we define these action functions? Well, we don’t. The store creates these for us, based on all the action names we mapped to the reactions when creating our entities with createEntity. Continuing our previous examples, we import the counter entity from the store as follows: const _ClickCount = ({ counter, increment, reset }) => ( <> You have clicked {counter.value} times. <button onClick={() => increment(1)}>Click Me</button> <button onClick={reset}>Reset Counter</button> </>); const ClickCount = getPropsFromStore( ['counter'], ['increment', 'reset'])(_ClickCount); That’s it! In 3 easy steps, we have connected our component to the app state. Working with Async Actions An async action is essentially one that requires some sort of non-blocking, asynchronous operation such as fetching data, timer, computation-intensive task, or anything else that is unable to immediately complete its execution. With the simple form of reaction, the calculation of new state is done immediately. But when dealing with async actions, the entity needs to perform an async operation, and wait for it to finish before it can calculate the state change. For this we need a different form of reaction, which is aptly called an async reaction. Defining Async Reactions Reactor Library’s createEntity enables us to easily define async reactions, declaratively, in the following form: action: [ (state, payload) => newState, async (payload, next) => { const result = await doSomethingAsync(); next(result); }, (state, result) => newState] This is an array consisting of the 3 steps of our async reaction: - The startup step where any preparatory state change can be made, e.g. setting a ‘loading’ or ‘wait’ flag. - The async step where the entity performs the async operation. It waits until the async operation completes, before calling the next step. - The completion step where the final state change is made, normally based on the result of the preceding async step. This diagram illustrates how data flows throughout the 3 steps of the async reaction: The first step (startup) is actually optional, as there are times when you don’t really need a preparatory state change. Example Usage Here is an example of a complete entity with both simple and async reactions. You can always go back to the illustration above if the flow of data and state changes still seem somewhat unclear. const initialState = { auth: null, waiting: false }; const session = createEntity( { login: [ state => ({ ...state, waiting: true }), async ({ username, password }, next) => { const response = await login(username, password); next(response); }, (state, { auth }) => ({ ...state, auth, waiting: false }), ], logout: state => ({ ...state, auth: null }), }, initialState); Once you get used to this 3-step format, you will be able to create entities quickly because you would only need to focus on the state-change logic and data flow, and not worry about any complex boilerplate code to write. That’s it! Isn’t that way too easy? Lazy Loading the App State If you do code splitting, you will want to code-split your application state as well. A lazy-loaded module can have its own feature store containing feature-specific entities. As there can only be a single store in the app, Reactor Library provides a simple way to dynamically merge lazy-loaded feature stores into the main store. This is using the withFeatureStore HOC, which has the following signature: withFeatureStore(entities: Object) (Component) As you might notice, this has exactly the same format as the withStore HOC that we discussed earlier. It specifies entities that are lazy-loaded together with your feature modules, to let Reactor Library know that these entities are to be dynamically merged into the store once the feature modules are loaded. Example Usage Let’s take, for example, a lazy-loaded timer feature that has a TimerPage component as its entry point, and a timer entity to manage its state. import timer from './store/timer'; const _TimerPage = () => ( <Countdown />); const TimerPage = withFeatureStore({ timer })(_TimerPage); That’s it! Again, quick and easy. Further Information To learn more about the Reactor Library that we used in this article, you can find its official documentation at. Thanks for reading.
https://www.freecodecamp.org/news/how-to-simplify-state-in-your-react-app-redux-with-a-twist-41b0e5b12dcb/
CC-MAIN-2021-43
refinedweb
1,537
52.7