As of this writing, VS 2011 is in the consumer preview stage (Beta).

The Goal

In order to build a large JavaScript application that can scale easily you need to distribute your code into modules and place each module in a separate file. Unlike JavaScript compilers, we humans need this structure for the sake of sanity. If we break a JavaScript application into multiple files we can easily lose IntelliSense and run into performance issues when loading many files at runtime. We can get around this with 3 strategies:
- On release, compile everything into a single file.
- On debug, keep the file structure and load modules on demand, so you can debug your specific files inside VS like a "normal" C# application.
- At design time, have full IntelliSense inside a module that has dependencies on other modules.

In order to achieve that you can easily add references to each module/file telling VS to load specific files, but that will only work at design time, so we need to leverage this capability into something more suitable.

The Details

I have only a concept and have only tested it in VS 2011, since there are many JavaScript improvements like loading files asynchronously and automatic IntelliSense builds without the need to hit Ctrl+Shift+J. This may work in VS 2010; to be honest I don't see a reason why not. So here is what I got: basically this is a vanilla solution structure with some helpers. My Scripts folder contains jQuery and RequireJS as NuGet packages. My index.html is as follows: I've defined a global namespace and set its configuration value to multiple files like so:

OM.debugConfig = "multiple files";

Then my ajaxController just defines a module under the OM.Communication namespace. Now, I want to have full IntelliSense in other modules, like my commonHelper module, that will be aware of my ajaxController. So what's going on here?
- I've added a VS reference to require.js (/// <reference path="../../Scripts/require.js" />), so now VS knows about the require.js lib.
- I've defined my dependencies in the following manner:
  - if (!OM.debugConfig) – [Visual Studio as interpreter] OM.debugConfig is not defined (falsy). This is the case when VS builds its IntelliSense while you work on that specific file, so the folders to look for my modules in are relative to my current file's position.
  - if (OM.debugConfig === "multiple files") – [Browser as interpreter] OM.debugConfig === "multiple files" is only defined at runtime (remember my index.html where I set OM.debugConfig = "multiple files";), which tells my browser where my files are. This lets me set breakpoints inside VS and debug my application like a "normal" C# app.
  - if (OM.debugConfig === "single file") – [Browser as interpreter] OM.debugConfig === "single file" can be defined, for instance, in indexRelease.html, which references a single compiled file for all of my modules, similar to what I've done here with T4 template compilation of my scripts.

Here is the runtime (FireBug DOM). This is only a POC; I'm sure this concept can help achieve a much more robust architecture.
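The three-way branch described above can be sketched as a single helper. This is a hedged sketch: the OM namespace comes from the post, but the helper name and the concrete paths are hypothetical.

```javascript
// Sketch of the three-way branch (paths are hypothetical): the same
// file is read both by Visual Studio's IntelliSense engine and by the
// browser, and OM.debugConfig decides which set of paths applies.
var OM = (typeof OM !== "undefined") ? OM : {};

function resolveBaseUrl(debugConfig) {
    if (!debugConfig) {
        // [Visual Studio as interpreter] IntelliSense build:
        // module paths are relative to the current file's position.
        return "../../Scripts";
    }
    if (debugConfig === "multiple files") {
        // [Browser as interpreter, debug] each module is its own file,
        // so breakpoints in VS map onto real source files.
        return "/Scripts";
    }
    // [Browser as interpreter, release] "single file": one bundle,
    // e.g. produced by a T4 template compilation step.
    return "/Scripts/compiled";
}
```

The return value would then feed the RequireJS configuration, e.g. require.config({ baseUrl: resolveBaseUrl(OM.debugConfig) });.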
https://www.tikalk.com/posts/2012/03/12/vs-2011-full-intellisense-with-large-javascript-application/
Himai Minh wrote: I noticed the WSDL in this page: The target namespaces defined in the client code and the WSDL match.

Himai Minh wrote: When I read CalculatorClient, the service is created using this target namespace: Then, the service adds a new port with target namespace: So, the dispatch is using this new port to invoke the method.

a sarkar wrote: I noticed that a dispatch client works even if the port QName is wrong. This doesn't make sense. Here's a client that shows the aforementioned behavior:

Himai Minh wrote: So, addPort means to add a port that never existed in the corresponding WSDL? That also means that no matter whether the QName of the port is wrong or right, the dispatch client can still connect to the service as long as the endpoint address of the service is correct?

Frits Walraven wrote: However, when you use the getPort(...) method the portName has to match the portName published in the WSDL.

Himai Minh wrote: The service won't know what request JAXBElement the client sends, as the WSDL is not used.

a sarkar wrote: The getPort methods require an SEI, which suggests to me that it is intended to be used with the dynamic proxy clients.

a sarkar wrote: For dispatch clients, there is no client-side SEI, so addPort may be the only option.

Frits Walraven wrote:
...
Service service = Service.create(url, serviceName);
Dispatch<Source> dispatch = service.createDispatch(portName, Source.class, Service.Mode.PAYLOAD);
Source response = dispatch.invoke(createSOAPcontents());
...

a sarkar wrote: The portName in the code above, is it verified against the WSDL as in getPort, or can it be just about anything as with addPort?

a sarkar wrote: Also, addPort is the only method that accepts a binding type, so without that, does it fall back to the default binding?
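A minimal sketch of the addPort-style dispatch client being discussed (all QNames and the endpoint URL here are hypothetical): the port QName registered via addPort never has to appear in any WSDL, which is why a "wrong" name still works as long as the endpoint address is right.

```java
import javax.xml.namespace.QName;
import javax.xml.transform.Source;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;
import javax.xml.ws.soap.SOAPBinding;

public class DispatchDemo {
    public static void main(String[] args) {
        // Hypothetical names: nothing here is checked against a WSDL.
        QName serviceName = new QName("http://example.com/calc", "CalculatorService");
        QName portName    = new QName("http://example.com/calc", "AnyPortNameWorks");

        // Created without a WSDL URL, so there is nothing to validate against.
        Service service = Service.create(serviceName);

        // addPort supplies the two things that actually matter here:
        // the binding id and the endpoint address.
        service.addPort(portName, SOAPBinding.SOAP11HTTP_BINDING,
                        "http://localhost:8080/calc");

        Dispatch<Source> dispatch = service.createDispatch(
                portName, Source.class, Service.Mode.PAYLOAD);
        // Source response = dispatch.invoke(requestPayload);
    }
}
```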
http://www.coderanch.com/t/624659/java-Web-Services-SCDJWS/certification/Dispatch-client-works-port-QName
How to make Launch Screen only portrait, not support other orientations

I need to make the launch screen portrait-only, but the other view controllers should support all orientations. First, I tried this method: iOS 8 iPad make only Launch Screen portrait only. But it doesn't let the other view controllers support all orientations. How can I fix it?
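One hedged approach (the controller class name below is hypothetical): a LaunchScreen.storyboard cannot run code, so orientation has to be constrained per view controller instead. Lock only the first real controller shown after launch to portrait, and leave the rest with the default behavior so they support all orientations.

```swift
import UIKit

// Hypothetical portrait-only controller for the screen shown right
// after launch; every other controller keeps the default behavior
// and therefore supports all orientations.
class SplashViewController: UIViewController {
    override var supportedInterfaceOrientations: UIInterfaceOrientationMask {
        return .portrait
    }
    override var shouldAutorotate: Bool {
        return false
    }
}
```

The launch image itself can only be influenced through the target's supported-orientation settings and portrait launch assets, since no app code runs while LaunchScreen.storyboard is displayed.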
http://quabr.com/56659750/how-to-make-launch-screen-only-portrait-not-support-other-orientations
Internet Button - RGB LEDs

Introduction: Internet Button - RGB LEDs

After posting a lot of tutorials about the Particle Core, I thought of creating a series using the Internet Button, and I had also posted a few tutorials on that as well. But what I did not cover was how to control the RGB LEDs. So in this Instructable I'm going to show you how to control the RGB LEDs, to make better projects. This is a beginner's tutorial for the Particle Core. It is very simple, but it is recommended to try out my previous Instructables before jumping into this one. So let's get started.

Step 1: Tools and Components

All you need for this tutorial is:
- Particle Core
- Internet Button

Note: The Particle Internet Button now comes with a Photon, so you don't need to buy one separately.

Hardware

If you followed my blink tutorial with the Internet Button you can skip this step. The Internet Button is a shield which holds on to the Core and has 11 RGB LEDs already soldered on, so no soldering skills are required to get started with this tutorial. If your Internet Button looks different, it is probably because you have the newer version which has a protective cover and a buzzer included.

Step 4: Code

The LEDs are triggered by addressing an LED followed by the color in RGB format; this should be familiar if you are used to specifying colors in CSS. The code below displays a colorful pattern and serves as a basis for better projects. It can also be used for decorative purposes.

#include "InternetButton/InternetButton.h"
#include "math.h"

InternetButton b = InternetButton();

void setup() {
    b.begin(1);
}

void loop() {
    // Light each of the 11 LEDs in turn.
    for (int i = 1; i <= 11; i++) {
        b.ledOn(i, 0, 255, 0);  // LED number, R, G, B
        delay(100);
        b.ledOff(i);
    }
}
http://www.instructables.com/id/Internet-Button-RGB-LEDs/
Using Alerts, Images, Timers, and Gauges in MIDlets By Richard G. Baldwin Java Programming Notes # 2580 Preface Viewing tip Figures Listings Supplementary material General background information The Alert class The AlertType class The Image class The Gauge class The Timer class The TimerTask class Preview Discussion and sample code The MIDlet named Alert01 The MIDlet named Alert02 Run the programs Summary What's next? Resources Complete program listings About the author Java Programming Notes # 2580: What you will learn in this lesson In this lesson you will. constructor for the Alert class As of MIDP 2.0, there are two overloaded constructors for the Alert class. The constructor that I will use in this lesson takes four parameters:. In the remainder of this lesson, I will present and explain two MIDlets named Alert01 and Alert02. The primary differences between the two will be in the areas of alert type and Gauge mode. The purpose of this MIDlet is to illustrate: Each time the Alert becomes visible, it obscures a TextBox object that is also being displayed by the MIDlet. When the Alert disappears, the TextBox reappears. Requirements This MIDlet requires:. public class Alert01 extends MIDlet{ Alert01 theMidlet; Image image; int count = 0; long baseTime; public Alert01(){ System.out.println("Construct MIDlet"); theMidlet = this; baseTime = new Date().getTime()/1000; }//end constructor. public void startApp(){ System.out.println("Create and display a TextBox"); TextBox textBox = new TextBox("TextBox Title", "TextBox contents", 50,//width TextField.ANY); //Make the TextBox the current Displayable object. Display.getDisplay(this).setCurrent(textBox);. Timer myTimer = new Timer(); myTimer.schedule(new MyTimerTask(),2000,3000);. //Sleep for 20 seconds. try{Thread.currentThread().sleep(20000); } catch(Exception e){} //Cancel the timer. myTimer.cancel(); //Enter the destroyed state. this.destroyApp(true); }//end startApp. 
public void pauseApp(){ }//end pauseApp public void destroyApp(boolean unconditional){ System.out.println("Destroy MIDlet"); notifyDestroyed(); }//end destroyApp. class MyTimerTask extends TimerTask{ long time; public void run(){ System.out.println("Display an Alert"); try{ //Select among two image files on the basis of // whether the current time in seconds is odd // or even. time = new Date().getTime()/1000 - baseTime; //Note that the following file names are case // sensitive. if((time % 2) == 0){//Even value image = Image.createImage( "/Alert01/redball.PNG"); }else{//Odd value image = Image.createImage( "/Alert01/blueball.PNG"); }//end else. Alert alert = new Alert("Alert Title", "", image, AlertType.ALARM); //Cause the alert to display the time in seconds. alert.setString("Time in seconds:" + time); //Cause the alert to be visible for two seconds. alert.setTimeout(2000);. Gauge gauge = new Gauge(null,false,6,0); //Set the number of Gauge bars to be illuminated. gauge.setValue(++count); //Attach the Gauge to the alert. alert.setIndicator(gauge);. Display.getDisplay(theMidlet).setCurrent(alert); }catch(Exception e){ e.printStackTrace(); }//end catch }//end run }//end class MyTimerTask }//end class Alert01 Listing 9 also signals the end of the run method, the end of the member class named MyTimerTask, and the end of the MIDlet class named Alert01. As mentioned earlier, this MIDlet is very similar to the MIDlet named Alert01. Therefore, I will confine my explanation to the code that is different between the two and the results imparted by those code differences.. //Create an Alert object of type CONFIRMATION. // This results in an audible alert that is three // chimes. Alert alert = new Alert("Alert Title", "", image, AlertType.CONFIRMATION);. Gauge gauge = new Gauge( null, false, Gauge.INDEFINITE, Gauge.INCREMENTAL_UPDATING);. gauge.setValue(++count % 3); The remaining code in the MIDlet named Alert02 is the same as the code in the MIDlet named Alert01. 
I encourage you to copy the code from Listing 13, Listing 14, and Listing 15. Run the two MIDlets in the updated MIDlet development framework named WTKFramework03 that is provided Listing 13.. You will also need two small image files. You can substitute any image files containing small images for the two image files listed above. You will have to make the names of your image files match the references to the image files in the code (see Listing 6). In this lesson you learned In the next. Finally, you will learn how to create a List, how to display it in the Sun cell phone emulator, and how to determine which elements in the List are selected. Complete listings of the programs discussed in this lesson are shown in Listing 13, Listing 14, and Listing 15 below: Listing 13. The updated MIDlet development framework named WTKFramework03. /*WTKFramework03.java Updated: December 17, 2007 Version: WTKFramework03.java Upgraded to prevent the deletion of image files and other resource files when the program cleans up after itself. This results in resource files being included in the JAR file. The resource files should be in the same directory as the source files. Version: WTKFramework02.java Upgraded to capture and display standard output and error output from child processes. Also upgraded to allow user to enter MIDlet name on the command line. This is particularly useful when repeatedly running this program from a batch file during MIDlet development. Version: WTKFramework01.java, which are required for the deployment of the MIDlet program. 
Given a file containing the source code for the MIDlet, a single click of the mouse causes this framework to automatically cycle through the following steps: Compilation (targeted to Java v1.4 virtual machine) Pre-verification Creation of the manifest file Creation of the JAR file Creation of the JAD file Deletion of extraneous files, saving the JAR and JAD files Deployment and execution in Sun's cell phone emulator The MIDlet being processed must be stored in a folder having the same name as the main MIDlet class. The folder containing the MIDlet must be a child of the folder in which the framework is being executed. Note: When you transfer control to a new process window by calling the exec method, the path environment variable doesn't go along for the ride. Therefore, you must provide the full path for programs that you call in that new process. Tested using Java SE 6 and WTK2.5.2 running under Windows XP. *********************************************************/ import java.io.*; import javax.swing.*; import java.awt.*; import java.awt.event.*; public class WTKFramework03{ String toolkit = "M:/WTK2.5.2";//Path to toolkit root String vendor = "Dick Baldwin";//Default vendor name String midletVersion = "1.0.0"; String profile = "MIDP-2.0"; String profileJar = "/lib/midpapi20.jar"; String config = "CLDC-1.1"; String configJar = "/lib/cldcapi11.jar"; //Path to the bin folder of the Java installation String javaPath = "C:/Program Files/Java/jdk1.6.0/bin"; String prog = "WTK001"; int initialCleanupOK = 1;//Success = 0 int compileOK = 1;//Compiler success = 0 int preverifyOK = 1;//Preverify success = 0 int deleteClassFilesOK = 1;//Delete success = 0 int moveFilesOK = 1;//Move success = 0 int manifestFileOK = 1;//Manifest success = 0 int jarFileOK = 1;//Jar file success = 0 int jadFileOK = 1;//Jad file success = 0 int cleanupOK = 1;//Cleanup success = 0 long jarFileSize = 0; JTextField progName; JTextField WTKroot; JTextField vendorText; JTextField midletVersionText; 
JTextField javaPathText; JRadioButton pButton10; JRadioButton pButton20; JRadioButton pButton21; JRadioButton cButton10; JRadioButton cButton11; static WTKFramework03 thisObj; //----------------------------------------------------// public static void main(String[] args){ //Allow user to enter the MIDlet name on the command // line. Useful when running from a batch file. thisObj = new WTKFramework03(); if(args.length != 0)thisObj.prog = args[0]; thisObj.new GUI(); }//end main //----------------------------------------------------// void runTheProgram(){ //This method is called when the user clicks the Run // button on the GUI. System.out.println("PROGRESS REPORT"); System.out.println("Running program named: " + prog); //This code calls several methods in sequence to // accomplish the needed actions. If there is a // failure at any step along the way, the // framework will terminate at that point with a // suitable error message. //Delete leftover files from a previous run, if any // exist deleteOldStuff(); if(initialCleanupOK != 0){//Test for success System.out.println("Initial cleanup error"); System.out.println("Terminating"); System.exit(1); }//end if compile();//compile the MIDlet if(compileOK != 0){//Test for successful compilation. 
System.out.println("Terminating"); System.exit(1); }//end if preverify();//Pre-verify the MIDlet class files if(preverifyOK != 0){ System.out.println("Terminating"); System.exit(1); }//end if //Delete the class files from the original program // folder deleteClassFilesOK = deleteProgClassFiles(); if(deleteClassFilesOK != 0){ System.out.println("Terminating"); System.exit(1); }//end if //Move the preverified files back to the original // program folder movePreverifiedFiles(); if(moveFilesOK != 0){ System.out.println("Terminating"); System.exit(1); }//end if //Make manifest file makeManifestFile(); if(manifestFileOK != 0){ System.out.println("Manifest file error"); System.out.println("Terminating"); System.exit(1); }//end if //Make Jar file makeJarFile(); if(jarFileOK != 0){ System.out.println("JAR file error"); System.out.println("Terminating"); System.exit(1); }//end if //Make Jad file makeJadFile(); if(jadFileOK != 0){ System.out.println("Terminating"); System.exit(1); }//end if //Delete extraneous files cleanup(); if(cleanupOK != 0){ System.out.println("Terminating"); System.exit(1); }//end if //Run emulator runEmulator(); //Reset success flags initialCleanupOK = 1;//Success = 0 compileOK = 1;//Compiler success = 0 preverifyOK = 1;//Preverify success = 0 deleteClassFilesOK = 1;//Delete success = 0 moveFilesOK = 1;//Move success = 0 manifestFileOK = 1;//Manifest success = 0 jarFileOK = 1;//Jar file success = 0 jadFileOK = 1;//Jad file success = 0 cleanupOK = 1;//Cleanup success = 0 //Control returns to here when the user terminates // the cell phone emulator. System.out.println( "\nClick the Run button to run another MIDlet."); System.out.println();//blank line }//end runTheProgram //----------------------------------------------------// //Purpose: Delete leftover files at startup void deleteOldStuff(){ System.out.println( "Deleting leftover files from a previous run"); //Delete subdirectory from output folder if it exists. 
int successFlag = deleteOutputSubDir(); //Delete manifest file if it exists. File manifestFile = new File("output/Manifest.mf"); if(manifestFile.exists()){ boolean success = manifestFile.delete(); if(success){ System.out.println(" Manifest file deleted"); }else{ successFlag = 1; }//end else }//end if //Delete old JAR file if it exists. File jarFile = new File("output/" + prog + ".jar"); if(jarFile.exists()){ boolean success = jarFile.delete(); if(success){ System.out.println(" Old jar file deleted"); }else{ successFlag = 1; }//end else }//end if //Delete old JAD file if it exists. File jadFile = new File("output/" + prog + ".jad"); if(jadFile.exists()){ boolean success = jadFile.delete(); if(success){ Sy
http://www.developer.com/java/j2me/article.php/3736301
Hi. I received errors such as "float BMI::bmi is private". How can I access the private members in main? Thanks in advance.

Code:
#include <iostream>
using namespace std;

class BMI
{
private:
    float height, weight, bmi;
public:
    BMI()
    {
        height = 1.0;
        weight = 1.0;
    }
    void set(float a, float b)
    {
        height = a;
        weight = b;
    }
    void calculate()
    {
        bmi = ((weight / 1000) / (height * height));
    }
    void display()
    {
        string status;
        if (bmi < 18.5) { status = "Underweight"; }
        else if (bmi > 18.5 && bmi < 24.9) { status = "Normal"; }
        else if (bmi > 25 && bmi < 29.9) { status = "Overweight"; }
        else { status = "Obese"; }
        cout << status;
    }
};

int main()
{
    BMI c;
    cout << "This program will calculate your body mass index." << endl;
    cout << "Enter your height in meter (m) unit : ";
    cin >> c.height;   // error: height is private
    cout << "Enter your weight in kilogram (kg) unit : ";
    cin >> c.weight;   // error: weight is private
    cout << "Your bmi is : " << c.bmi << endl;  // error: bmi is private
    c.display();
    return 0;
}

Output that needs to display:
This program will calculate your body mass index.
Enter your height in meter (m) unit : 1.63
Enter your weight in kilogram (kg) unit : 45
Your bmi is : 16.937
You are underweight.
http://cboard.cprogramming.com/cplusplus-programming/163454-how-access-private-members-class.html
table of contents
- buster 4.16-2
- buster-backports 5.04-1~bpo10+1
- testing 5.07-1
- unstable 5.08-1

NAME
aio_suspend - wait for asynchronous I/O operation or timeout

SYNOPSIS
#include <aio.h>

int aio_suspend(const struct aiocb * const aiocb_list[],
                int nitems, const struct timespec *timeout);

Link with -lrt.

DESCRIPTION
The aio_suspend() function suspends the calling thread until at least one of the asynchronous I/O requests in the list aiocb_list has completed, a signal is delivered, or timeout is not NULL and the specified time interval has passed.

RETURN VALUE
If this function returns after completion of one of the I/O requests specified in aiocb_list, 0 is returned. Otherwise, -1 is returned, and errno is set to indicate the error.

ERRORS
EAGAIN The call timed out before any of the indicated operations had completed.
EINTR The call was ended by a signal.
ENOSYS aio_suspend() is not implemented.

VERSIONS
The aio_suspend() function is available since glibc 2.1.

ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO
POSIX.1-2001, POSIX.1-2008.
https://manpages.debian.org/buster-backports/manpages-dev/aio_suspend.3.en.html
The Vaadin Plugin for Eclipse has always shipped with a visual designer for creating custom components for Vaadin applications. This visual designer enables you to construct a layout by dragging and dropping UI components onto a given layout. In the background, the tool generates the corresponding Java code that sets up the design at runtime for you. The Vaadin company has recently released a complete overhaul of this visual designer. The tool is now termed Vaadin Designer and is included in the current version of the Vaadin Eclipse Plugin.

First, the downside: the Vaadin Designer is not free anymore. A license is included in a subscription to the Vaadin Pro Tools ($39 / developer / month) or can be purchased for $389 per developer (as of today). You can obtain a trial license which is valid for one month. The old designer can still be used free of charge, but it is marked as deprecated. As we will see shortly, however, the feature set of the new designer is well worth the license fee. For those developers who like to construct their UIs visually, the Vaadin Designer has quite a lot to offer for speeding up UI design. It boasts an increase in productivity by a factor of two compared to hand-written UIs.

Design by drag’n’drop

UIs are designed using drag and drop as usual. You have a palette of UI components to choose from, an overview of the current UI’s component hierarchy, and a property editor where you can change each component’s properties.

Declarative layout

One very important difference between the new designer and the old version is that it builds on declarative layouts instead of generated Java code. This means that while you are constructing a new UI design with the visual editor, the designer simultaneously builds an HTML file in the background representing your design. This HTML file does not use standard HTML elements, however, but Vaadin-specific elements.
The following example shows this notation for a vertical layout containing two text fields and a button:

<vaadin-vertical-layout>
  <vaadin-text-field></vaadin-text-field>
  <vaadin-password-field></vaadin-password-field>
  <vaadin-button _id="logInButton">
    Log me in
  </vaadin-button>
</vaadin-vertical-layout>

You can switch the editor from edit view to source view anytime. You can even safely edit the generated HTML code directly, without running the risk of confusing the tool (as long as you write valid XHTML). The size and complexity of the actual Java class (named design class) representing such a design is kept at a minimum. The Java class for the example looks like so:

@DesignRoot
@AutoGenerated
@SuppressWarnings("serial")
public class ExampleDesign extends VerticalLayout {
    protected Button logInButton;

    public ExampleDesign() {
        Design.read(this);
    }
}

As you can see, you can assign a name (attribute _id in the HTML file) to individual UI components, which will result in a corresponding protected field variable in the design class. By sub-classing this class, you can further interact with these components, e.g. by registering event listeners on them. The big advantage of this declarative approach is that now you don’t have to mess with a brittle piece of automatically managed Java code any longer. The UI’s construction is cleanly separated from application code. In addition, you can choose any layout as the base layout, not just AbsoluteLayout.

Live preview on any device

Another really nice feature of the new Vaadin Designer is that you can launch a live preview of your design and check it with your browser. For that, the designer starts a Vaadin process in the background that serves your currently edited design. You can then visit a specific URL for this design with your browser and test it. Any changes you make in the designer are reflected in this live preview right away, without the need to reload the page.
You can even switch the currently used theme in the designer and view this change instantly in the browser. This goes even further since you’re not restricted to your local machine. You can preview your design with any device on your local network. So, you can check whether your responsive layouts work as expected with any supported mobile device. Templating The Vaadin Designer supports templating. You can create your own designer templates that can be used as basis for new design classes. This boosts your productivity even more as you can source out commonly used UI elements to such templates, thus fostering reuse and a better structuring of the UI. Live theming Not only a live preview of the design is offered by the designer, you can also preview changes made to your theme’s SASS style sheets instantaneously. By that, you immediately see how your changes affect the look and feel of your theme. Conclusion The new Vaadin Designer is a huge step forward in the area of professional Vaadin tooling. Building on declarative layouts and statically typed Java classes it puts visual UI design with Vaadin on a firm foundation. The tool may even be worth considering for development teams who hitherto frowned upon visual UI designers in general. I was considering Vaadin for my company but since it seems the plugin for Eclipse will stop being supported, I’m disappointed. I don’t think turning the whole thing into a paid tool was a good move.
https://blog.oio.de/2015/11/09/building-vaadin-uis-visually-with-the-new-vaadin-designer/
1..

3. Explain the use of try: except, raise, and finally?
Answer: The finally block is always executed. raise may be used to raise your own exceptions.

4..

5. What Is A String In Python? Answer:.

6. What is the difference between locals() and globals()?
Answer: locals() is accessed within the function and it returns all names that can be accessed locally from that function. globals() returns all names that can be accessed globally from that function.

7. How many keywords are there in Python? And why should we know them?
Answer: There are 33 keywords in Python. We should know them and their uses so that we can utilize them in our work. Another thing is that when naming a variable, the variable name cannot match a keyword. So, we should know all the keywords.

8. What do you mean by a dictionary in Python?
Answer: Dictionary is a built-in data type of Python. For e.g. “Country”.

9. Explain Python Functions? Answer:.

10. Does the same Python code work on multiple platforms without any changes?
Answer: Yes. As long as you have the Python environment on your target platform (Linux, Windows, Mac), you can run the same code.

11. What do you understand by monkey patching in Python?
Answer: The dynamic modifications made to a class or module at runtime are termed monkey patching in Python. Consider the following code snippet.

12. What Is Python, What Are The Benefits Of Using It, And What Do You Understand Of PEP 8? Answer:. (by passing an empty list as the value of the list parameter).

13. What is the use of the break statement?
Answer: It is used to terminate the execution of the current loop. A break always breaks the current execution and transfers control outside the current block. If the block is in a loop, it exits the loop, and if the break is in a nested loop, it exits the innermost loop.

14. What is the difference between ‘match’ and ‘search’ in Python?
Answer: match checks for a match only at the beginning of the string, whereas search checks for a match anywhere in the string.

15. How to create a Unicode string in Python?
Answer: In Python 3, the old Unicode type has been replaced by the “str” type, and strings are treated as Unicode by default. We can encode a string as UTF-8 bytes by using the encode(“utf-8”) method, e.g. art.title.encode(“utf-8”).

16. What are the differences between Python 2.x and Python 3.x? Answer:.

17. What Are Class Or Static Variables In Python Programming? Answer:.

18. When would you use triple quotes as a delimiter?

19. What happens in the background when you run a Python file?
Answer: When we run a .py file, it undergoes two phases. In the first phase, it checks the syntax, and in the second phase, it compiles the code to bytecode (a .pyc file is generated) using the Python virtual machine, loads the bytecode into memory, and runs it.

20. Explain the difference between local and global namespaces?
Answer: Local namespaces are created within a function, when that function is called. Global namespaces are created when the program starts.

21. What is Python?
Answer: Python is a high-level, interpreted, interactive and object-oriented scripting language. Python is designed to be highly readable. It uses English keywords frequently whereas other languages use punctuation, and it has fewer syntactical constructions than other languages.

22. How to save an image when you know the URL?
Answer: To save an image locally, you would use this type of code:
import urllib.request
urllib.request.urlretrieve(“URL”, “image-name.jpg”)

23. What is a Python decorator?
Answer: A Python decorator is a concept which allows you to call or declare a function inside a function, pass a function as an argument, and return a function from a function. A decorator provides extra functionality to a function. It also helps to organize a piece of code within a function.

24. What Is A Function In Python Programming? Answer:.
25. How do you execute a Python script?
Answer: From the command line, type python <filename>.py, or pythonx.y <filename>.py where x.y is the version of the Python interpreter desired.

28. What is the thread life cycle?
Answer: A thread is created by instantiating a class that overrides the run method of the Thread class.

31. Explain List, Tuple, Set, and Dictionary and provide at least one instance where each of these collection types can be used?
Answer: List: an ordered, mutable collection of items. Tuple: an ordered, immutable collection of items. Set: an unordered collection of unique items. Dictionary: a collection of items with key-value pairs. Generally, List and Dictionary are extensively used by programmers as both of them provide flexibility in data collection.

32. Give an example of the shuffle() method?
Answer: This method shuffles a given list in place, randomizing the order of its items. It is present in the random module, so we need to import it before we can call the function. It shuffles the elements each time the function is called and produces a different output.

33. Does Python allow you to program in a structured style?
Answer: Yes. It allows you to code in a structured as well as an object-oriented style. It offers excellent flexibility to design and implement your application code depending on the requirements of your application.

34. What is a decorator?
Answer: A decorator is a function that wraps another function to extend its behavior without modifying it (see question 23).

35. What is abnormal termination?
Answer: Terminating a program in the middle of its execution, without executing the last statement of the main module, is known as abnormal termination. Abnormal termination is an undesirable situation in programming languages.

37. What happens when a function doesn't have a return statement? Is this valid?
Answer: Yes, this is valid.
The function will then return a None object. The end of a function is defined by the block of code being executed (i.e., the indenting), not by any explicit keyword.

40. What is the swapcase() function in Python?
Answer: It is a string method that converts all uppercase characters to lowercase and vice versa, altering the existing case of the string. It returns a copy of the string with the case of every alphabetic character swapped, and it ignores all non-alphabetic characters.

41. What is multithreading?
Answer: A thread is a unit of logic that can execute simultaneously with other parts of the program; a thread is a lightweight process, while any program under execution is known as a process. In Python we define threads by overriding the run method of the Thread class, a predefined class in the threading module. If we call the run method directly, its logic executes as a normal method call; to execute it on a separate thread, we use the start method of the Thread class.

42. What packages in the standard library, useful for data science work, do you know?
Answer: When Guido van Rossum created Python in the 1990s, it wasn't built for data science. Yet today, Python is the leading language for machine learning, predictive analytics, statistics, and simple data analytics. This is because Python is a free and open-source language that data professionals could easily use to develop tools that help them complete data tasks more efficiently. The following packages are very handy for data science projects (strictly speaking, they are third-party packages rather than part of the standard library):

NumPy
NumPy (or Numerical Python) is one of the principal packages for data science applications.
It's often used to process large multidimensional arrays and matrices, and it offers an extensive collection of high-level mathematical functions. Implementation methods also make it easy to conduct multiple operations with these objects. There have been many improvements made over the last year that have resolved several bugs and compatibility issues. NumPy is popular because it can be used as a highly efficient multi-dimensional container of generic data. It's also an excellent library, as it makes data analysis simple by processing data faster while using a lot less code than lists.

Pandas
Pandas is a Python library that provides highly flexible and powerful tools and high-level data structures for analysis. Pandas is an excellent tool for data analytics because it can translate highly complex operations with data into just one or two commands. Pandas comes with a variety of built-in methods for combining, filtering, and grouping data. It also boasts time-series functionality with remarkable speed.

SciPy
SciPy is another outstanding library for scientific computing. It's based on NumPy and was created to extend its capabilities. Like NumPy, SciPy's data structure is a multidimensional array, implemented by NumPy. The SciPy package contains powerful tools that help solve tasks related to integral calculus, linear algebra, probability theory, and much more. Recently, this library went through some major build improvements in the form of continuous integration into multiple operating systems, improved methods, and new functions. Optimizers were also updated, and several new BLAS and LAPACK functions were wrapped.

43. Describe how to send email from a Python script?
Answer: The smtplib module defines an SMTP client session object that can be used to send email from a Python script.

44. What happens with the following function definition?
Answer: 'Hello', name, 'Welcome to', city. The order of passing values to a function is: first one has to pass non-default arguments, then default arguments, then variable arguments, and finally keyword arguments.

46. Explain Python in one line?
Answer: Python is a modern, powerful interpreted language with threads, objects, modules, and exceptions, and it also has automatic memory management.

47. Python has something called the dictionary. Explain using an example.
Answer: A dictionary in Python is an unordered collection of data values, such as a map. A dictionary holds key: value pairs and helps define a one-to-one relationship between keys and values. Indexed by keys, a typical dictionary contains pairs of keys and their corresponding values.
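A minimal sketch of such a dictionary, with keys and values invented for the example:

```python
# A dictionary maps keys to values; lookup is by key, not by position.
capitals = {"India": "New Delhi", "France": "Paris", "Japan": "Tokyo"}

capitals["Italy"] = "Rome"         # insert a new key-value pair
print(capitals["France"])          # Paris
print("India" in capitals)         # True: membership tests check the keys
print(capitals.get("Spain", "?"))  # ? -- .get() avoids KeyError for missing keys

# Iterate over the one-to-one key -> value relationships:
for country, capital in capitals.items():
    print(country, "->", capital)
```

Indexing with a missing key raises KeyError, which is why .get() with a default is often used for lookups that may fail.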
Created on 2013-02-11 19:39 by roysmith, last changed 2013-02-15 21:08 by ezio.melotti. This issue is now closed.

# Python 2.7.3
# Ubuntu 12.04
import re
pattern = r"( ?P<phrase>.*)"
regex = re.compile(pattern, re.VERBOSE)

The above raises an exception in re.compile():

Traceback (most recent call last):
  File "./try.py", line 6, in <module>
    regex = re.compile(pattern, re.VERBOSE)
  File "/home/roy/env/python/lib/python2.7/re.py", line 190, in compile
    return _compile(pattern, flags)
  File "/home/roy/env/python/lib/python2.7/re.py", line 242, in _compile
    raise error, v # invalid expression
sre_constants.error: nothing to repeat

The problem appears to be that re.VERBOSE isn't ignoring the space after the "(".

Maybe this is a duplicate of issue15606?

It does look like a duplicate to me.
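A sketch of the reported behavior on Python 3 (assuming, as in the report, that the spaced form still fails to compile): re.VERBOSE ignores unescaped whitespace inside the pattern, but the "(?P<name>...)" group syntax must be written with no gap after "(", because "(" followed by a detached "?" is parsed as a quantifier with nothing to repeat.

```python
import re

# The spaced form from the bug report raises re.error ("nothing to repeat"):
try:
    re.compile(r"( ?P<phrase>.*)", re.VERBOSE)
    spaced_form_compiles = True
except re.error:
    spaced_form_compiles = False

# Writing the named group without the space compiles fine; the other
# whitespace in the pattern is ignored by re.VERBOSE as expected.
regex = re.compile(r"(?P<phrase> .* )", re.VERBOSE)
m = regex.match("hello world")
print(spaced_form_compiles)   # False
print(m.group("phrase"))      # hello world
```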
Resizing image size animated gif jobs

Logo intros, editing, promotional films, advertisement films, social media videos, GIFs, effect videos, everything related to video :)

Take XML from one website. But those sizes are for retail, and I will sell wholesale, so can you change this? Example: size 38 - 2, size 40 - 3, size 42 - 5, size 44 - 0. It has to write 38-40-42 as a 2 serial.

Hello, I want a developer to increase the space between two elements in a Shopify store. Simple task, but I want it now. Thanks.

Sizes shown at [login to view

I need someone to be able to make 1; I already have them designed at the right size to print, and other files I need mocked up exactly as I have them designed, redone and ready for print.

Formatting a Word document of 400 pages. This includes fixing the headings of the contents, resizing tables, and fixing fonts.

Import, search and filter a large JSON file; search, edit and export data in many ways (XLS, PDF, etc.). I will provide a sample of a small file and example data inside it once we discuss. The winner of this project is whoever places the best bid. Thanks.

Hi there, I need this logo resized to suit a Facebook profile image size.

I want an image or video exactly like this [login to view URL]

We need to add a button in our footer with a small animation. The button is for our TV page (Video); we will post many videos on this page. The button needs to be very nice, something like Fashion TV. I have attached a copy of our footer so you can have an idea. The button is called: CELINE TV. The image has to be very luxury.

Quick job: I need a graphic designer to convert this PDF logo in Illustrator and save it in the suitable formats, ready for laser cutting, printing and resizing.
Introduction

There are plenty of grown adults out there who really seem to believe that America's stock markets are somehow magical, and that under their magic anyone who invests in the market will always come out rich. It's easy to be enticed into investing by the success stories seen all over the newspapers, advertisements, magazines, and television. It is important to realize, however, that there is no gain without potential loss in investing. That means you can lose part, or even all, of your money in the markets. Hopefully this writeup will help you put your goals and expectations into perspective.

In order to be a successful investor, you must be realistic about your expectations. If you're a beginning investor, it's extremely important to realize the risks and rewards involved in investing, and consider taking it slow, especially if you're investing for the long haul. "Investing should be boring, boring, boring," says Jane Bryant Quinn, syndicated personal finance columnist and author. Taking it slow may be boring, but it's safe and you'll always come out a winner. Just ask the tortoise.

Risk versus reward

As I mentioned before, without risk there is no reward. You can't keep inflation at bay and prosper financially if you don't take some risks with your portfolio. Rewards and risks are thus closely related to each other: the greater your potential reward, the greater your potential risk and loss. The stock that went up 100% last quarter is most likely the same stock that will fall like a rock over the next couple of months. The same applies to any other kind of investment: the more profitable the venture, the more costly the potential risk will be.

Your Best-Case Scenario

Depending on the type of investment you choose, your best-case scenario is different. Some investments are secure, fairly predictable, and stable, such as savings accounts and certificates of deposit (CDs).
You can easily figure out your return (your investment's performance over time) on these investments. Other investments, however, are more dependent on market conditions and are more volatile, such as stocks, bonds, and mutual funds. There is no way to accurately predict your returns from such investments. What you can do is a little research: there are plenty of resources available for you to find out, for example, how an investment has performed in recent history. If you look at the history behind your potential investments, you can understand why they go up or down.

Consider the following example: In 1998, the stock of an internet bookseller named Amazon.com gave investors returns of 966%. That means if you had invested $1,000 in the last quarter of 1997, your money would be worth $10,664 by the last quarter of 1998. That's an impressive return for any investor in the stock market, much more than the average investor should expect.

Let's take a look at a few more examples from 1998. The 24 best-performing stocks after Amazon.com gave investors returns between 164% and 896%, and the best-performing mutual funds returned an average of over 70%. By contrast, the best corporate bonds returned little more than 15%. Keep in mind, however, that these are best-case returns, and under normal conditions, stocks, bonds, and mutual funds don't give investors these kinds of returns.

Worst-Case Scenario

In contrast to the best-case scenarios, let's take a look at a few of the worst-case scenarios from recent history. In 1998, the worst-performing stock cost investors 83% of their money. That means an investment of $1,000 would have whittled away to a mere $170 by the end of the year. Keep in mind: you will certainly lose all of your money in an investment if the company declares bankruptcy.

Realistic goals

"Tis the part of a wise man to keep himself today for tomorrow, and not venture all his eggs in one basket."
- Miguel de Cervantes, Don Quixote de la Mancha

It's necessary to choose your investments carefully and to have a diverse portfolio with different kinds of investments: not just high-growth investments, and not just low-risk investments. Take this example:

Which of the following portfolios do you think is historically more volatile?
A.) A portfolio with 100 per cent invested in bonds
B.) A portfolio with 60 per cent invested in bonds, and 40 per cent invested in stocks

The correct answer is A. In the past fifty years, a portfolio devoted totally to bonds has actually been riskier than one with a healthy amount of stocks in it.

Ideally, you should be able to offset your losses in one investment with the successes of another investment. By having a diverse portfolio, you can reduce your risk. As the adage goes, "don't venture all your eggs into one basket." If you put them all into one basket, and the basket falls, they all break, and you'll have a mess everywhere. Basically, by putting all of your money into one investment, you risk losing all of it at once. If you put your eggs into separate baskets and one of them falls, you still have the others intact; likewise, if you diversify your investments and one fails, you still have the other investments to rely upon.

Just putting your money into diverse investments is not going to make your money grow; you also have to choose these investments carefully, and that means having realistic expectations of the risks and rewards of those investments as well. Since the 1930s, stocks have yielded an average yearly return of 10%, so returns of 15% to 20% are probably unrealistic. The same goes for corporate bonds, which returned about 6% over the same time frame, so a 10% to 15% annual return is probably unrealistic.
Compounding Interest

In school, you might remember the kid who always smugly did his or her work early: the essay or paper was done way before it was due, while you might have waited till the last minute and started furiously typing your paper the night before. Maybe you felt vindicated when the papers came back and you both got an A anyway. However, it doesn't work that way in the financial world. In the financial world, the early birds always win. It's those who invest early who will be retiring early and happily while everyone else is wondering how.

So how is it possible? The answer is simple, and you don't have to be a mathematics genius to understand it either. The earlier you start, the more compound interest you can receive. Compound interest is the interest that you earn on your interest. Let's take a simple example: say you've invested $10,000 and earned 10% interest the next year; your interest income would be $1,000. If you earned 10% again the following year, the $100 that you earned on that $1,000 (the interest you gained the previous year) is called compound interest.

Take a look at the chart below to see how compounding can really make a difference over the long run:

$100 saved every month - Growth through compounding

% Return   5 years   10 years   15 years   20 years   30 years
  0%       $6,000    $12,000    $18,000    $24,000    $36,000
  5%       $6,829    $15,592    $26,840    $41,275    $83,573
  8%       $7,397    $18,417    $34,835    $59,295    $150,030
 10%       $7,808    $20,655    $41,792    $76,570    $227,933
 12%       $8,247    $23,334    $50,458    $99,915    $352,992

As you can see, compounding is a very convincing reason to start early in investing.
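The figures in the table above can be reproduced with the standard future-value-of-an-annuity formula. The sketch below assumes (as the table's numbers suggest) that deposits are made at the start of each month and that interest compounds monthly:

```python
def future_value(monthly_payment, annual_rate, years):
    """Future value of a fixed monthly deposit with monthly compounding.

    Assumes deposits at the start of each month (an "annuity due")."""
    n = years * 12
    if annual_rate == 0:
        return monthly_payment * n
    r = annual_rate / 12
    # Annuity-due future value: sum of n deposits, each compounded monthly,
    # with the extra (1 + r) factor because deposits earn a full month.
    return monthly_payment * ((1 + r) ** n - 1) / r * (1 + r)

# Reproduce a few cells of the "$100 saved every month" table:
print(round(future_value(100, 0.05, 5)))    # roughly 6,829
print(round(future_value(100, 0.10, 30)))   # roughly 227,933
print(round(future_value(100, 0.00, 20)))   # 24000
```

Plugging in other rows of the table gives the same values to within a dollar or two of rounding, which is a good sanity check on the chart.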
If you're investing for retirement, perhaps you should take a look at this chart below: Monthly investment for a $500,000 retirement by age 65 Age At 8% return At 10% return 35 $333 $219 40 $522 $374 45 $843 $653 50 $1,435 $1,196 55 $2,715 $2,421 For information on how to calculate the compound interest, see other users' writeups on compound interest and the Rule of 72. Getting Started Before you begin in investing, it's suggested that you do a few things to ensure you are ready to dive in. Every cent of your money counts. Everybody knows that bills can easily reduce even the largest paychecks to spare change. Even for good causes - your education, a babysitter, lawn trimmers, the plumber.... the money is easily taken away by anything. The simple way to make sure that you have money to make sure you start investing is to save money. One way to do that is set aside a certain percentage or amount from your paycheck for your investments. . Imagine how much you can save by bring carrot sticks and tuna salad every day to work instead of spending $6 a day on lunch. By investing those $6 daily for 30 years, (and assuming it earns 9% per year), you can end up with well over $200,000! So, just taking it slow, going for the long run, and being just plain conservative with your money will make sure you are able to reach your financial goals. Further Reading - Other Nodes alex.tan has written some great writeups about investing that you might want to check out - Rules of investing in the 'New Era' and how to get rich trading on the stock market include basic rules you should follow when investing. Log in or register to write something here or to contact authors.
David Ebbo's blog - The Ebb and Flow of ASP.NET

To get the latest build of T4MVC:. This is similar to the RedirectToAction/ActionLink support, but applied to route creation. The original Nerd Dinner routes look like this:

routes.MapRoute(
    "UpcomingDinners",
    "Dinners/Page/{page}",
    new { controller = "Dinners", action = "Index" }
);

routes.MapRoute(
    "Default",                                              // Route name
    "{controller}/{action}/{id}",                           // URL with parameters
    new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
);

With the T4MVC helpers, the default route can instead be written as:

routes.MapRoute(
    "Default",                      // Route name
    "{controller}/{action}/{id}",   // URL with parameters
    MVC.Home.Index(),               // Default action
    new { id = "" }                 // Parameter defaults
);

Short version: the MVC T4 template (now named T4MVC) is now available on CodePlex, as one of the downloads on the ASP.NET MVC v1.0 Source page. Yesterday, I posted asking how people felt about having the template modify their code in small ways. Thanks to all those who commented! The fact that Scott Hanselman blogged it certainly helped get traffic there :) The majority of people thought that it was fine as long as. The template on CodePlex (version 2.0.01 at the top of the file) supports what I described in my previous post, plus some new goodies:

One caveat is that you have to initiate the cycle by opening and saving T4MVC.tt once. After you do that, you don't need to worry about it. Credit for this idea goes to Jaco Pretorius, who blogged something similar.

The template generates static helpers for your content files and script files. So instead of writing:

<img src="/Content/nerd.jpg" />

You can now write:

<img src="<%= Links.Content.nerd_jpg %>" />

Likewise, instead of:

<script src="/Scripts/Map.js" type="text/javascript"></script>

You can write:

.

I also fixed a number of bugs that people reported and that I ran into myself, e.g.
I'm sure there are still quite a few little bugs, and we'll work through them as we encounter them.

Update: Please see this post for what came out of this 'poll', and for a pointer to the newest T4 template on CodePlex.

When working on my MVC T4 template, I was not able to use reflection to discover the Controllers and Actions, because the code that the template generates is itself in the same assembly as the controllers. So that causes a bit of a chicken-and-egg problem. Instead, I had to get out of my element and learn something I was not familiar with: the Visual Studio File Code Model API.

It's very different from using reflection, because instead of working at the assembly level, you work at the source file level. You have to first locate the source file you want to look into. You can then ask for the namespaces it contains, the classes they contain, and finally the various members in those classes. To be honest, I find this API quite ugly. It's a COM interop thing with a horrible object model that looks like it grew organically from version to version rather than having been designed with usability in mind. So all in all, I used it because I had to, but the whole time I was wishing I could use reflection instead.

But then I made an important realization. Ugly as it is, this object model supports something that would never be possible with reflection: it lets me modify the source code! If you look at my previous post, I wrote: "But to make things even more useful in the controller, you can let the T4 template generate new members directly into your controller class. To allow this, you just need to make your controller partial". And I have logic in the template that tests this, and does extra generation if the class is partial, e.g.

if (type.ClassKind == vsCMClassKind.vsCMClassKindPartialClass) { ... }

But instead, I have now realized that I can turn this check into an assignment, and change the class to partial if it isn't already!
type.ClassKind = vsCMClassKind.vsCMClassKindPartialClass;

Likewise, I have scenarios where I can do cool things if the Controller actions are virtual, and I can just change them to be with a simple line:

method.CanOverride = true;

To be clear, those statements actually modify the original source file, the one that you wrote. While this is certainly powerful and opens up some new doors, it also raises a big question, which is the main purpose of this post: we're only talking about pretty harmless things (making classes partial and methods virtual), but I know developers can get nervous if even small changes magically happen in their source files. So please tell me how you feel about this, e.g. is it more: Tell me where you stand, and please don't sue me.

So I have had this blog since October 2005, and the entire time it has been named "David Ebbo's blog". There were a number of solid reasons that had led me to choose this catchy name:.

Update: Please read this post for the newest and greatest.

Before we go and re-invent the wheel, let's discuss what the issues with the runtime T4 approach were, and how they are solved by this new approach.

Complex configuration: to enable the runtime template, you had to add a DLL to your bin, modify two web.config files, and drop two T4 files in different places. Not super hard, but also not completely trivial. By contrast, with this new approach you just drop one .tt file at the root of your app, and that's basically it.

No partial trust support: because it was processing T4 files at runtime, it needed full trust to run. Not to mention the fact that using T4 at runtime is not really supported! But now, by doing it at design time, this becomes a non-issue.

Only works for Views: because only the Views are compiled at runtime, the helpers were only usable there, and the controllers were left out (since they're built at design time).
With this new approach, Controllers get some love too, because the code generated by the template lives in the same assembly as the controllers!

Let's jump right in and see this new template in action! We'll be using the Nerd Dinner app as a test app to try it on. So to get started, go to, download the app and open it in Visual Studio 2008 SP1. Then, simply drag the T4 template (the latest one is on CodePlex) into the root of the NerdDinner project in VS. And that's it, you're ready to go and use the generated helpers! Once you've dragged the template, you should see this in your solution explorer:

Note how a .cs file was instantly generated from it. It contains all the cool helpers we'll be using! Now let's take a look at what those helpers let us do.

Open the file Views\Dinners\Edit.aspx. It contains:

<% Html.RenderPartial("DinnerForm"); %>

This ugly "DinnerForm" literal string needs to go! Instead, you can now write:

<% Html.RenderPartial(MVC.Dinners.Views.DinnerForm); %>

Now open Views\Dinners\EditAndDeleteLinks.ascx, where you'll see:

<%= Html.ActionLink("Delete Dinner", "Delete", new { id = Model.DinnerID }) %>

Here we not only have a hard-coded action name ("Delete"), but we also have the parameter name 'id'. Even though it doesn't look like a literal string, it very much is one in disguise. Don't let those anonymous objects fool you! But with our cool T4 helpers, you can now change it to:

<%= Html.ActionLink("Delete Dinner", MVC.Dinners.Delete(Model.DinnerID)) %>

Basically, we got rid of the two unwanted literal strings ("Delete" and "id") and replaced them with a very natural-looking method call on the controller action. Of course, this is not really calling the controller action, which would be very wrong here. But it's capturing the essence of a method call, and turning it into the right route values.
And again, you get full intellisense:

By the way, feel free to press F12 on this Delete() method call, and you'll see exactly how it is defined in the generated .cs file. The T4 template doesn't keep any secrets from you!

Likewise, the same thing works for Ajax.ActionLink. In Views\Dinners\RSVPStatus.ascx, change:

<%= Ajax.ActionLink(
    "RSVP for this event",
    "Register",
    "RSVP",
    new { id = Model.DinnerID },
    new AjaxOptions { UpdateTargetId = "rsvpmsg", OnSuccess = "AnimateRSVPMessage" }) %>

to just:

<%= Ajax.ActionLink(
    "RSVP for this event",
    MVC.RSVP.Register(Model.DinnerID),
    new AjaxOptions { UpdateTargetId = "rsvpmsg", OnSuccess = "AnimateRSVPMessage" }) %>

You can also do the same thing for Url.Action(). As mentioned earlier, Controllers are no longer left out with this approach. e.g. in Controllers\DinnersController.cs, you can replace:

return View("InvalidOwner");

by:

return View(MVC.Dinners.Views.InvalidOwner);

But to make things even more useful in the controller, you can let the T4 template generate new members directly into your controller class. To allow this, you just need to make your controller partial, e.g.

public partial class DinnersController : Controller {

Note: you now need to tell the T4 template to regenerate its code, by simply opening the .tt file and saving it. I know, it would ideally be automatic, but I haven't found a great way to do this yet. After you do this, you can replace the above statement by the more concise:

You also get to do some cool things like we did in the Views, e.g. you can replace:

return RedirectToAction("Details", new { id = dinner.DinnerID });

The previous runtime-based T4 template was using reflection to learn about your controllers and actions. But now that it runs at design time, it can't rely on the assembly already being built, because the code it generates is part of that very assembly (yes, a chicken-and-egg problem of sorts). So I had to find an alternative.
Unfortunately, I was totally out of my element, because my expertise is in the runtime ASP.NET compilation system, and I couldn't make use of any of it here! Luckily, I connected with a few knowledgeable folks who gave me some good pointers. I ended up using the VS File Code Model API. It's an absolutely horrible API (it's COM interop based), but I had to make the best of it. The hard part is that it doesn't let you do simple things that are easy using reflection, e.g. you can't easily find all the controllers in your project assembly. Instead, you have to ask it to give you the code model for a given source file, and in there you can discover the namespaces, types and methods. So in order to make this work without having to look at all the files in the project (which would be quite slow, since it's a slow API), I made an assumption that the Controller source files would be in the Controllers folder, which is where they normally are. As for the Views, I had to write logic that enumerates the files in the Views folder to discover the available views. All in all, it's fairly complex and messy code, which hopefully others won't have to rewrite from scratch. Just open the .tt file to look at it, it's all in there!

In addition to looking at the .tt file, I encourage you to look at the generated .cs file, which will show you all the helpers for your particular project.

This was briefly mentioned above. The T4 generation is done by VS because there is a custom tool associated with the .tt file (the tool is called TextTemplatingFileGenerator; you can see it in the properties). But VS only runs the file generator when the .tt file changes. So when you make code changes that would affect the generated code (e.g. add a new Controller), you need to explicitly re-save the .tt file to update the generated code. As an alternative, you can right-click on the .tt file and choose "Run Custom Tool", though that's not much easier.
Potentially, we could try doing something that reruns the generation as part of a build action or something like that. I just haven't had time to play around with this. Let me know if you find a good solution.

This was also the case with the previous template, but it is worth pointing out: because all the code is generated by the T4 template, that code is not directly connected to the code it relates to. e.g. the generated MVC.Dinners.Delete() method results from the DinnersController.Delete() method, but they are not connected in a way that the refactoring engine can deal with. So if you rename DinnersController.Delete() to DinnersController.Delete2(), MVC.Dinners.Delete() won't be refactored to MVC.Dinners.Delete2(). Of course, if you re-save the .tt file, it will generate an MVC.Dinners.Delete2() method instead of MVC.Dinners.Delete(), but places in your code that call MVC.Dinners.Delete() won't be renamed to call Delete2.

While certainly a limitation, this is still way superior to what it replaces (literal strings), because it gives you both intellisense and compile-time checking. But it's just not able to take that last step that allows refactoring to work. It is worth noting that using lambda-expression-based helpers instead of T4 generation does solve this refactoring issue, but it comes at a price: less natural syntax, and performance issues.

It has been pretty interesting for me to explore these various alternatives for solving this MVC strongly typed helper issue. Though I started out feeling good about the runtime approach, I'm now pretty sold on this new design-time approach being the way to go. I'd be interested in hearing what others think, and about possible future directions where we can take this.

Update: Please see this newer post for the latest and greatest MVC T4 template.

Earlier this week, I wrote a post on using a BuildProvider to create ActionLink helpers.
That approach was using CodeDom to generate the code, and there was quite some user interest in it (and Phil blogged it, which helped!). Then yesterday, I wrote a post on the pros and cons of using CodeDom vs T4 templates for source code generation. They are drastically different approaches, and while both have their strengths, T4 has definitely been getting more buzz lately.

The logical follow-up to those two posts is a discussion on using T4 templates to generate MVC strongly typed helpers. The general idea here is to use the existing ASP.NET extensibility points (BuildProvider and ControlBuilder), but rely on T4 templates to produce code instead of CodeDom. Hence, I called the helper library AspNetT4Bridge (I'm really good at naming things!). As far as I know, this is the first time that T4 templates are executed dynamically inside an ASP.NET application, so let's view this as an experiment, which has really not been put to the test yet. But it is certainly an exciting approach, so let's see where it takes us!

This is similar to the previous section, except it covers the case where you need to generate raw URLs rather than HTML <a> tags. Instead of writing:

<%= Url.Action("Edit", new { id = item.ID }) %>

you can write:

<%= Url.UrlToTestEdit(item.ID) %>

This post is supposed to be about using T4 templates, and so far we haven't said a whole lot about them. They are certainly the magic piece that makes all this work. We are actually using two different .tt files, which cover two distinct scenarios: Look for this logic in AspNetT4BridgeBuildProvider.cs.

So here we are, dynamically executing T4 templates at runtime in an ASP.NET app. One big caveat that I mentioned in my previous post is that you're not really supposed to do that! Copying from there:

There are many scenarios where the need to generate source code arises. The MVC helpers I introduced in my last post are one such example.
Note that I am focusing on generating source code here, and not on scenarios where you may want to generate IL directly (which certainly do exist as well, but it's a different discussion). To perform the code generation, there are several different approaches that can be used. The most simplistic one is to use a plain StringBuilder and write whatever code you want to it. It's rather primitive, but for simple scenarios, it just might be good enough. For more complex scenarios there are two widely different approaches that I will be discussing here: CodeDom and T4 templates.

Let's start by introducing our two competitors.

CodeDom: this has been around since the Framework 2.0 days, and is used heavily by ASP.NET. Its main focus is on language independence. That is, you can create a single CodeDom tree and have it generate source code in C#, VB, or any other language that has a CodeDom provider. The price to pay for this power is that creating a CodeDom tree is not for the faint of heart!

T4 Templates: it's a feature that's actually part of Visual Studio rather than the framework. The basic idea here is that you directly write the source code you want to generate, using <#= #> blocks to generate dynamic chunks. Writing a T4 template is very similar to writing an aspx file. It's much more approachable than CodeDom, but provides no language abstraction. Funny thing about T4 is that it's been around for a long time, but has only been 'discovered' in the last year or so. Now everyone wants to use it!

Let's say that you're trying to generate a class that has a method that calls Console.WriteLine("Hello 1") 10 times (with the number incrementing). It's a bit artificial, since you could just as well generate a loop which makes the call 10 times, but bear with me for the sake of illustration, and assume that we want to generate 10 distinct statements. First, let's tackle this with CodeDom.
In CodeDom, you don't actually write code, but you instead build a data structure which later gets translated into code. We could say that you write metacode. Here is what it would look like:

using System;
using System.CodeDom;
using Microsoft.CSharp;
using Microsoft.VisualBasic;

class Program {
    static void Main(string[] args) {
        var codeCompileUnit = new CodeCompileUnit();
        var codeNamespace = new CodeNamespace("Acme");
        codeCompileUnit.Namespaces.Add(codeNamespace);
        var someType = new CodeTypeDeclaration("SomeType");
        someType.Attributes = MemberAttributes.Public;
        codeNamespace.Types.Add(someType);

        // Create a public method
        var method = new CodeMemberMethod() { Name = "SayHello", Attributes = MemberAttributes.Public };
        someType.Members.Add(method);

        // Add this statement 10 times to the method
        for (int i = 1; i <= 10; i++) {
            // Create a statement that calls Console.WriteLine("Hello [i]")
            var invokeExpr = new CodeMethodInvokeExpression(
                new CodeTypeReferenceExpression(typeof(Console)),
                "WriteLine",
                new CodePrimitiveExpression("Hello " + i));
            method.Statements.Add(new CodeExpressionStatement(invokeExpr));
        }

        // Spit out the code in both C# and VB
        (new CSharpCodeProvider()).GenerateCodeFromCompileUnit(codeCompileUnit, Console.Out, null);
        (new VBCodeProvider()).GenerateCodeFromCompileUnit(codeCompileUnit, Console.Out, null);
    }
}

You will either find this beautiful or atrocious depending on your mindset :) Basically, writing CodeDom code is analogous to describing the code you want. Here, you are saying: Build me a public class named SomeType in the namespace Acme. In there, create a public method named SayHello. In there, add 10 statements that call Console.Write(…). It certainly takes a fair amount of work to do something so simple.
But note that you're not doing anything that ties you to C# or VB or any other language. To illustrate this language abstraction power, this test app outputs the code in both C# and VB, with no additional effort. That is one of the strongest points of CodeDom, and should not be discounted.

Now, let's look at the T4 way of doing the same thing. You'd write something like this (just create a test.tt file in VS and paste this in to see the magic happen):

<#@ template language="C#v3.5" #>
namespace Acme {
    public class SomeType {
        public virtual void SayHello() {
<# for (int i=1; i<=10; i++) { #>
            System.Console.WriteLine("Hello <#= i #>");
<# } #>
        }
    }
}

As you can see, for the most part you're just writing out the code that you want to generate, in the specific language that you want. So for 'fixed' parts of the code, it's completely trivial. And when you want parts of the generation to be dynamic, you use a mix of <# #> and <#= #> blocks, which work the same way as <% %> and <%= %> blocks in ASP.NET. Even though it's much simpler than CodeDom, it can get confusing at times because you're both generating C# and writing C# to generate it. But once you get used to that, it's not so hard. And of course, if you want to output VB, you'll need to write the VB equivalent. To be clear, only the 'gray' code would become VB. The code in the <# #> blocks can stay in C#. If you want the <# #> blocks to be in VB, you'd change the language directive at the top. The generator code and the generated code are two very distinct things!

Now that we've looked at samples using both techniques, let's take a look at their Pros and Cons, to help you make an informed decision on which one is best for your scenario. Hopefully, this gave you a good overview of the two technologies. Clearly, T4 is the more popular one lately, and I certainly try to use it over CodeDom when I can. With proper framework support, it would become an even easier choice.
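The CodeDom-versus-template contrast above is not specific to .NET. As a hypothetical illustration (not from the original post), here is the same idea sketched in plain JavaScript: one generator builds a data structure describing the code and renders it afterwards (the CodeDom style), while the other writes the target source directly with dynamic holes in it (the T4 style). All names here are made up for the sketch.

```javascript
// Tree-style (CodeDom-like): describe the code as plain data first...
const unit = {
  namespace: "Acme",
  types: [{
    name: "SomeType",
    methods: [{
      name: "SayHello",
      statements: Array.from({ length: 10 }, (_, i) => ({
        call: "Console.WriteLine",
        args: [`Hello ${i + 1}`],
      })),
    }],
  }],
};

// ...then a renderer turns the tree into source for a given target language.
function renderCSharp(u) {
  const lines = [`namespace ${u.namespace} {`];
  for (const t of u.types) {
    lines.push(`    public class ${t.name} {`);
    for (const m of t.methods) {
      lines.push(`        public void ${m.name}() {`);
      for (const s of m.statements) {
        // JSON.stringify quotes the string arguments for us.
        lines.push(`            ${s.call}(${s.args.map(a => JSON.stringify(a)).join(", ")});`);
      }
      lines.push("        }");
    }
    lines.push("    }");
  }
  lines.push("}");
  return lines.join("\n");
}

// Template-style (T4-like): write the target code directly, with dynamic holes.
function templateCSharp() {
  let out = "namespace Acme {\n    public class SomeType {\n        public void SayHello() {\n";
  for (let i = 1; i <= 10; i++) {
    out += `            Console.WriteLine("Hello ${i}");\n`;
  }
  out += "        }\n    }\n}";
  return out;
}

console.log(renderCSharp(unit) === templateCSharp()); // => true
```

The template version is shorter and reads like the output it produces, while the tree version keeps the description language-neutral — swap in a renderVB() and the same tree yields VB. That mirrors the trade-off the post describes.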
One downside of using Html.ActionLink in your views is that it is late bound. e.g. say you write something like this:

<%= Html.ActionLink("Home", "Index", "Home")%>

The second parameter is the Action name, and the third is the Controller name. Note how they are both specified as plain strings. This means that if you rename either your Controller or Action, you will not catch the issue until you actually run your code and try to click on the link. Now let's take the case where your Action takes parameters, e.g.:

public ActionResult Test(int id, string name) { return View(); }

Now your ActionLink call looks something like this:

<%= Html.ActionLink("Test Link", "Test", "Home", new { id = 17, name = "David" }, null) %>

So in addition to the Controller and Action names changing, you are vulnerable to the parameter names changing, which again you won't easily catch until runtime. One approach to solving this is to rely on Lambda expressions to achieve strong typing (and hence compile time check). The MVC Futures project demonstrates this approach. It certainly has merits, but the syntax of Lambda expressions is not super natural to most. Here, I'm exploring an alternative approach that uses an ASP.NET BuildProvider to generate friendlier strongly typed helpers. With those helpers, the two calls above become simply:

<%= Html.ActionLinkToHomeIndex("Home")%>
<%= Html.ActionLinkToHomeTest("Test Link", 17, "David")%>

Not only is this more concise, but it doesn't hard code any of the problematic strings discussed above: the Controller and Action names, and the parameter names. You can easily integrate these helpers in any ASP.NET MVC app by following three steps:

1. First, add a reference to MvcActionLinkHelper.dll in your app (build the project in the zip file attached to this post to get it)

2. Then, register the build provider in web.config.
Add the following lines in the <compilation> section:

<buildProviders>
    <add extension=".actions" type="MvcActionLinkHelper.MvcActionLinkBuildProvider" />
</buildProviders>

3. The third step is a little funky, but still easy. You need to create an App_Code folder in your app, and add a file with the .actions extension in it. It doesn't matter what's in the file, or what its full name is. e.g. add an empty file named App_Code/generate.actions. This file is used to trigger the BuildProvider.

I included all the sources in the zip, so feel free to look and debug through it to see how it works. At this point, this is just a quick proof of concept. There are certainly other areas of MVC where the same idea can be applied. e.g. currently it only covers Html.ActionLink, but could equally cover Url.Action(), or HTML form helpers (standard and AJAX). Please send feedback on whether you find this direction interesting as an alternative to the Lambda expression approach.

Undeniably, this case is broken, and is the primary reason that we can't turn on this optimization by default. Luckily, in practice this situation is not extremely common, which is why the optimization is still very usable for users that are aware of the limitations. ASP.NET uses a per-application hash code which includes the state of a number of things, including the bin and App_Code folder, and global.asax. Whenever an ASP.NET app domain starts, it checks if this hash code has changed from what it previously computed. If it has, then the entire codegen folder .

In addition to this preview, you'll want to also install Microsoft .NET RIA Services in order to get some useful tooling. This too can be confusing, because it makes it sound like it's tied to RIA and Silverlight in some way, when in fact it is not. The deal is that there is this new. For more information about the Silverlight side of things, check out Nikhil's post (and his MIX talk).
Important: after getting the ASP.NET DD preview and the RIA Services mentioned above, you'll need to do a little extra step to avoid a tricky setup issue. Find System.Web.DynamicData.dll under DefaultDomainServiceProject\bin in the ASP.NET Preview zip file, and copy it over the one in \Program Files\Microsoft SDKs\RIA Services\v1.0\Libraries\Server (which is where the RIA install puts them).

Now you're actually ready to start playing with DomainService. First, you'll want to make a copy of the whole DefaultDomainServiceProject folder so you can work with it without touching the 'original'. Then, just open DefaultDomainServiceProject.sln in VS. Normally, this would be a Project Template, but right now we don't have one. Now you can just Ctrl-F5 and you should get a working Dynamic Data app. Note how it only lets you do things for which you have CRUD methods. e.g. you can edit Products but not Categories. To make things more interesting, try various things.

Last Friday, I gave a talk at MIX on various things that we're working on in ASP.NET data land. This includes both some Dynamic Data features and some features usable outside Dynamic Data. The great thing about MIX is that they make all talks freely available online shortly after, and you can watch mine here. Enjoy! I'll try to blog in more detail about some of the features discussed in the talk in the next few days.

There are many ways to customize an ASP.NET Dynamic Data site, which can sometimes be a bit overwhelming to newcomers. Before deciding what customization makes sense for you, it is important to understand the two major buckets that they fall into. They can both be very useful depending on your scenario, but it is important to understand how they are different in order to make the right choices. The rule of thumb is that you want to stay in the world of generic customization whenever possible, and only use the schema specific customization when you have to.
Doing this will increase reusability and decrease redundancy. We'll now look at each in more detail.

Generic customization includes everything that you can do without any knowledge of the database schema that it will be applied to. That is, it's the type of things that you can write once and potentially use without changes for any number of projects. It lets you achieve a consistent and uniform behavior across an arbitrarily large schema, without needing to do any additional work every time you add or modify a table. Here are some key types of generic customization:

Under the ~/DynamicData/PageTemplates folder, you'll find some default Page Templates like List.aspx and Edit.aspx. If you look at them, you won't find any references to a specific table/column. Instead, they define the general layout and look-and-feel of your pages in an agnostic way. e.g. if you look at List.aspx, you'll see a GridView and a LinqDataSource (or EntityDataSource with Entity Framework), but the data source doesn't have any ContextTypeName/TableName attributes, and the GridView doesn't define a column set. Instead, all of that gets set dynamically at runtime based on what table is being accessed. You can easily make changes to these Page Templates, such as modifying the layout, adding some new controls or adding logic to the code behind. It's a 'normal' aspx page, so you can treat it as such, but you should never add anything to it that is specific to your schema. When you feel the need to do this, you need to instead create a Custom Page (see below).

Under the ~/DynamicData/FieldTemplates folder, you'll find a whole bunch of field templates, which are used to handle one piece of data of a given type. e.g. DateTime_Edit.ascx handles DateTime columns when they're being edited. As is the case for Page Templates, Field Templates should always be schema agnostic. That is, you don't write a field template that's only meant to handle a specific column of a specific table.
Instead, you write a field template that can handle all columns of a certain type. For example, in this post I describe a field template that can handle Many To Many relationships. It will work for any such relationship in any schema (though it is Entity Framework specific).

Under Page Templates above, I mentioned that the GridView didn't specify a column set. Instead, the way it works is that there is something called a Field Generator which comes up with the set of columns to use for the current table. The idea is that the Field Generator can look at the full column set for a table, and use an arbitrary set of rules to decide which ones to include, and in what order to include them. e.g. it can choose to look at custom model attributes to decide what to show. Steve Naughton has a great post on writing a custom field generator, so I encourage you to read that for more info on that topic.

Any time you make a customization that is specific to a certain table or column, you are doing schema specific customization. Here are the main types of things you can do:

When you want to have a very specific look for a certain page (e.g. for the Edit page for your Product table), a generic Page Template will no longer do the job. At that point, you want to create a Custom Page. Those live under ~/DynamicData/CustomPages/[TableName]. e.g. to have a custom page to edit products, you would create a ~/DynamicData/CustomPages/Products/Edit.aspx. You can start out with the custom page being an identical copy of the page template, but then you can start making all kinds of schema specific changes. For instance, you could define a custom <Fields> collection in the DetailsView, at which point you no longer rely on the Field Generator (discussed above). Taking this one step further, you can switch from a DetailsView into a FormView (or ListView), giving you full control over the layout of the fields.

Another very important type of schema specific customization is model annotation.
This is what most introductory Dynamic Data demos show, where you add CLR attributes to the partial class of your entity classes. For instance, you can add a [Range(0, 100)] attribute to an integer field to specify that it should only take values between 0 and 100. Dynamic Data comes with a number of built-in annotation attributes, and you can easily build your own to add new ways to annotate your model. The general idea here is that it is cleaner to add knowledge at the model level than to do it in your UI layer. Anything that you can add to describe your data in more detail is a good fit for model annotations.

All of the different types of customization that I describe above have useful scenarios that call for them. My suggestion is to get a good understanding of what makes some of them Generic while others are Schema Specific, in order to make an informed decision on the best one to use for your scenarios.

For the longest time, I had set my blog subtitle to "Dynamic Data and other ASP.NET topics", which some might argue would not have won any originality contests. On the bright side, it was a fine match for my equally original blog title: "David Ebbo's blog". So I went on a quest for a new subtitle as part of my 2009 New Year resolutions (ok, it's the only one so far). As for changing the Title itself, I'll save that for New Year 2010.

My first stop in this memorable journey resulted in the use of the extremely witty word 'Ebblog'. While undeniably cool, I somehow decided to pass on it. The future will probably not tell whether it was a good move. Still looking to capitalize on my uniquely catchy last name, I came up with the soon-to-be memorable phrase: "The Ebb and Flow of ASP.NET". Some online dictionary defines it as "the continually changing character of something". I guess that's not so bad. And then there is Wikipedia, which defines it as "a form of hydroponics that is known for its simplicity, reliability of operation".
Of course, I don’t have a clue what hydroponics means (and neither do you), but the rest sounds pretty good. So that’s that. Thanks for letting me waste your time. By now, the momentum is clearly building for renaming the blog Title itself, but you’ll just have to wait another year for this thriller to unfold.
http://blogs.msdn.com/davidebb/
mona is hosted at Github. mona is a public domain work, dedicated using CC0 1.0. Feel free to do whatever you want with it.

mona is available through both NPM and Bower:

$ npm install mona-parser

or

$ bower install mona

Note that the bower version requires manually building the release. You can also download a prebuilt UMD version of mona from the website.

Writing parsers with mona involves writing a number of individually-testable parser constructors which return parsers that mona.parse() can then execute. These smaller parsers are then combined in various ways, even provided as part of libraries, in order to compose much larger, intricate parsers. mona tries to do a decent job at reporting parsing failures when and where they happen, and provides a number of facilities for reporting errors in a human-readable way. mona is based on smug, and Haskell's Parsec library.

Documentation of the latest released version is available here. Docs are also included with the npm release. You can build the docs yourself by running npm install && make docs in the root of the source directory. The documentation is currently organized as if mona had multiple modules, although all modules' APIs are exported through a single module/namespace, mona. That means that mona/api.parse() is available through mona.parse().

Simply creating a parser is not enough to execute it, though. We need to use the parse function to actually execute the parser on an input string:

mona; // => "foo"
mona; // => throws an exception
mona; // => "a"
mona; // => error, unexpected eof

These three parsers do not seem to get us much of anywhere, so we introduce our first combinator: bind().
bind() accepts a parser as its first argument, and a function as its second argument. The function will be called with the parser's result value only if the parser succeeds. The function must then return another parser, which will be used to determine bind()'s value:

mona; // => "found an 'a'!"

bind(), of course, is just the beginning. Now that we know we can combine parsers, we can play with some of mona's fancier parsers and combinators. For example, the or() combinator resolves to the first parser that succeeds, in the order they were provided, or fails if none of those parsers succeeded:

mona; // => "this one!"
mona; // => "this one!"

and() is another basic combinator. It succeeds only if all its parsers succeed, and resolves to the value of the last parser. Otherwise, it fails with the first failed parser's error.

mona; // => "bar"

Finally, there's the not() combinator. It's important to note that, regardless of its argument's result, not() will not consume input... it must be combined with something that does.

mona; // => "end of input"

The string() parser might come in handy: it results in a string matching a given string:

mona; // => "foo"

And it can of course be combined with some combinator to provide an alternative value:

mona; // => "got a foo!"

The is() parser can also be used to succeed or fail depending on whether the next token matches a particular predicate:

mona; // => "a"

Writing parsers by composing functions is perfectly fine and natural, and you might get quite a feel for it, but sometimes it's nice to have something that feels a bit more procedural. For situations like that, you can use sequence():

{return mona;}
mona; // => "a"

We can generalize this parser into a combinator by accepting an arbitrary parser as an input:

{return mona;}
mona; // => "foo!"

Note that if the given parser consumes closing parentheses, this will fail:

mona

Once you've got the basics down, you can explore mona's API for more interesting parsers.
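The combinator model described above can be made concrete in a few lines of plain JavaScript. This is a simplified, hypothetical sketch — not mona's actual implementation — in which a parser is just a function from (input, position) to either a {value, pos} result or null on failure:

```javascript
// token(): consume and return one character, failing at end of input.
function token() {
  return (input, pos) =>
    pos < input.length ? { value: input[pos], pos: pos + 1 } : null;
}

// value(x): succeed with x without consuming any input.
function value(x) {
  return (input, pos) => ({ value: x, pos });
}

// bind(parser, fn): run parser; on success, fn(result) returns the next parser.
function bind(parser, fn) {
  return (input, pos) => {
    const res = parser(input, pos);
    return res === null ? null : fn(res.value)(input, res.pos);
  };
}

// or(...parsers): the first parser that succeeds wins.
function or(...parsers) {
  return (input, pos) => {
    for (const p of parsers) {
      const res = p(input, pos);
      if (res !== null) return res;
    }
    return null;
  };
}

// is(predicate): like token(), but fails unless the predicate holds.
function is(predicate) {
  return bind(token(), c => (predicate(c) ? value(c) : () => null));
}

// parse(parser, input): run the parser over a string, throwing on failure.
function parse(parser, input) {
  const res = parser(input, 0);
  if (res === null) throw new Error("parse failure");
  return res.value;
}

parse(bind(is(c => c === "a"), () => value("found an 'a'!")), "a");
// => "found an 'a'!"
parse(or(is(c => c === "x"), token()), "b"); // => "b"
```

Real mona layers error positions, human-readable messages, and many more combinators on top of this core idea, but the shape — small parsers glued together with bind() and or() — is the same.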
A variety of useful parsers are available for use, such as collect(), which collects the results of a parser into an array until the parser fails, or float(), which parses a floating-point number and returns the actual number. For more examples on how to use mona to create parsers for actual formats, take a look in the examples/ directory included with the project, which includes examples for json and csv.

The npm version includes a build/ directory with both pre-built and minified UMD versions of mona which are loadable by both AMD and CommonJS module systems. UMD will define window.mona if neither AMD nor CommonJS are used. To generate these files in Bower, or if you fetched mona from source, simply run:

$ npm install
...dev dependencies installed...
$ make

And use build/mona.js or build/mona.min.js in your application.
https://www.npmjs.com/package/mona-parser
Hi Flavio, thanks! Yes, I guess that's the way I need to go. Too bad there is also some 'xml' in the GML that should actually be typed 'xml', and also the other way around. So I must be careful when replacing. I'll also be really happy if someone from Safe could answer regarding a possible bug. Cheers

Hello from the future :-) The comments here actually solved my problem in building a comprehensive XML with XmlTemplater. I also needed the GeometryExtractor to generate GML geometry, to be used in a sub template. And I ran into the same problem with XML namespaces. The trick is to add the necessary namespaces to the outermost tag in the sub template expression, otherwise the XmlTemplater fails as it validates the output. <ler:mytag gml:... Just wanted to add my experience on this problem, hoping it will help someone else 7 years from now :-)
https://knowledge.safe.com/questions/2131/using-gml-in-the-xmltemplater.html
Chatlog 2008-09-10 From OWL See original RRSAgent log and preview nicely formatted version. Please justify/explain all edits to this page, in your "edit summary" text. 16:52:00 <scribenick> PRESENT: Martin Dzbor, Sandro Hawke, Ian Horrocks, Boris Motik, Zhe Wu, Michael Schneider, Achille Fokoue, Uli Sattler, Bernardo Cuenca Grau, Jie Bao, Alan Ruttenberg, Mike Smith, Bijan Parsia, Peter Patel-Schneider 16:52:00 <scribenick> REGRETS: Markus Krötzsch 16:52:00 <scribenick> CHAIR: Ian Horrocks 16:52:00 <scribenick> SCRIBE: Martin Dzbor 16:52:21 <RRSAgent> RRSAgent has joined #owl 16:52:21 <RRSAgent> logging to 16:52:34 <MartinD> RRSAgent, make records public 16:56:42 <Zakim> SW_OWL()1:00PM has now started 16:56:49 <Zakim> + +0190827aaaa 16:57:01 <MartinD> zakim, aaaa is me 16:57:01 <Zakim> +MartinD; got it 16:57:21 <IanH> IanH has joined #owl 16:58:00 <Zakim> +Sandro 16:58:02 <MartinD> MartinD has changed the topic to: 16:58:36 <Zakim> +Ian_Horrocks 16:58:43 <bmotik> bmotik has joined #owl 16:58:51 <IanH> zakim, Ian_Horrocks is IanH 16:58:51 <Zakim> +IanH; got it 16:58:52 <bmotik> Zakim, this will be OWL 16:58:53 <Zakim> ok, bmotik, I see SW_OWL()1:00PM already started 16:59:24 <IanH> RRSAgent, make records public 16:59:28 <Zakim> +??P6 16:59:31 <bmotik> Zakim, ??P6 is me 16:59:31 <Zakim> +bmotik; got it 16:59:34 <bmotik> Zakim, mute me 16:59:34 <Zakim> bmotik should now be muted 16:59:34 <IanH> zakim, who is here? 16:59:35 <Zakim> On the phone I see MartinD, Sandro, IanH, bmotik (muted) 16:59:36 <Zakim> On IRC I see bmotik, IanH, RRSAgent, Zakim, MartinD, baojie, sandro, alanr, trackbot 16:59:59 <IanH> omit: Martin, are you all set for scribing? 17:00:06 <MartinD> omit: hope so... :-) 17:00:14 <m_schnei> m_schnei has joined #owl 17:00:16 <MartinD> zakim, mute me 17:00:16 <Zakim> MartinD should now be muted 17:00:20 <bcuencagrau> bcuencagrau has joined #owl 17:00:29 <Zhe> Zhe has joined #owl 17:00:40 <IanH> zakim, who is here? 
17:00:40 <Zakim> On the phone I see MartinD (muted), Sandro, IanH, bmotik (muted) 17:00:41 <Zakim> On IRC I see Zhe, bcuencagrau, m_schnei, bmotik, IanH, RRSAgent, Zakim, MartinD, baojie, sandro, alanr, trackbot 17:00:42 <uli> uli has joined #owl 17:01:00 <Zakim> + +1.603.897.aabb 17:01:12 <Zhe> zakim, +1.603.897.aabb is me 17:01:15 <Zakim> +??P13 17:01:19 <Zakim> +Zhe; got it 17:01:20 <Achille> Achille has joined #owl 17:01:22 <m_schnei> zakim, ??P13 is me 17:01:23 <Zhe> zakim, mute me 17:01:27 <Zakim> +m_schnei; got it 17:01:29 <Zakim> Zhe should now be muted 17:01:32 <Zakim> +[IBM] 17:01:37 <Achille> Zakim, IBM is me 17:01:37 <Zakim> +Achille; got it 17:01:41 <Zakim> +??P14 17:01:48 <uli> zakim, ??P14 is me 17:01:48 <Zakim> +uli; got it 17:01:52 <uli> zakim, mute me 17:01:52 <Zakim> uli should now be muted 17:01:58 <Zakim> +??P16 17:02:00 <m_schnei> zakim, mute me 17:02:00 <Zakim> m_schnei should now be muted 17:02:05 <bcuencagrau> Zakim, ??P16 is me 17:02:05 <Zakim> +bcuencagrau; got it 17:02:11 <bcuencagrau> Zakim, mute me 17:02:11 <Zakim> bcuencagrau should now be muted 17:02:14 <IanH> zakim, who is here? 17:02:14 <Zakim> On the phone I see MartinD (muted), Sandro, IanH, bmotik (muted), Zhe (muted), m_schnei (muted), Achille, uli (muted), bcuencagrau (muted) 17:02:16 <Zakim> On IRC I see Achille, uli, Zhe, bcuencagrau, m_schnei, bmotik, IanH, RRSAgent, Zakim, MartinD, baojie, sandro, alanr, trackbot 17:02:35 <MartinD> IanH: Let us start with today's agenda 17:02:45 <MartinD> Topic: Administrative points 17:02:58 <Zakim> + +1.518.276.aacc 17:03:00 <MartinD> IanH: Any agenda amendments? 
17:03:15 <baojie> Zakim, aacc is baojie 17:03:15 <Zakim> +baojie; got it 17:03:29 <MartinD> IanH: Previous minutes (available from) 17:04:04 <MartinD> PROPOSED: Accept Previous Minutes (3 September) 17:04:07 <IanH> +1 17:04:12 <MartinD> MartinD: +1 17:04:15 <Zhe> +1 17:04:22 <uli> +1 ;) 17:04:34 <MartinD> RESOLVED: Accepted Previous Minutes from 3 September 2008 (as available from) 17:04:47 <MartinD> Subtopic: Pending actions 17:05:01 <Zakim> +Alan 17:05:09 <MartinD> IanH: Usual procedure, let's see how actions were completed, people may say why not completed, what is the status... 17:05:21 <MartinD> IanH: if no objections, we assume actions are done... 17:05:26 <alanr> Action 189 not done yet 17:05:31 <m_schnei> omit: he did 17:05:42 <MartinD> IanH: Action 179 seems to be complete 17:05:55 <IanH> q? 17:06:15 <MartinD> IanH: Action 172 - Achille suggests next Tuesday as a day to complete the action 17:06:24 <IanH> q? 17:06:33 <MartinD> IanH: Action 189 - Alan says this is not done 17:06:46 <MartinD> Alanr: action 189 should be next week 17:07:01 <MartinD> IanH: Action 185 - should be done, if I remember correctly 17:07:17 <MartinD> IanH: yes, it is done 17:07:28 <msmith> msmith has joined #owl 17:07:28 <IanH> q? 17:07:29 <MartinD> IanH: Action 202 - was on Alan 17:07:53 <MartinD> AlanR: It is still pending, will provide update in the near future 17:07:53 <m_schnei> Zhe also finished his action 17:07:58 <Zhe> yes, it has been done 17:08:07 <MartinD> IanH: Action 181 done by Zhe 17:08:16 <IanH> q? 17:08:51 <Zakim> + +1.202.408.aadd 17:08:56 <MartinD> Sandro: Action 207, publication plan (as created last week) - join publication by RIF and OWL groups? 17:08:56 <IanH> q? 17:09:15 <MartinD> Sandro: This action should be made a bit clearer 17:09:57 <MartinD> IanH: Last week we agreed a rough plan how this publication can happen and there is an action on how this should be implemented 17:10:04 <IanH> q? 
17:10:07 <msmith> Sandro, the context is at 17:10:11 <MartinD> IanH: Probably this week's deadline was a bit optimistic 17:10:46 <MartinD> Sandro: apparently, a joint recommendation is a good thing, if it can be achieved 17:10:49 <IanH> q? 17:11:03 <MartinD> Sandro: There need to be two resolutions to publish (from the two groups) and the join publication can go ahead... 17:11:36 <MartinD> IanH: If Sandro is the contact on both groups, it might be good to watch that the process is moving ahead, a kind of monitoring 17:11:51 <MartinD> IanH: we will fix the action text later 17:12:01 <IanH> q? 17:12:04 <MartinD> IanH: Action 174 is on Bijan 17:12:06 <IanH> zakim, who is here? 17:12:06 <Zakim> On the phone I see MartinD (muted), Sandro, IanH, bmotik (muted), Zhe (muted), m_schnei (muted), Achille, uli (muted), bcuencagrau (muted), baojie, Alan, msmith 17:12:09 <Zakim> On IRC I see msmith, Achille, uli, Zhe, bcuencagrau, m_schnei, bmotik, IanH, RRSAgent, Zakim, MartinD, baojie, sandro, alanr, trackbot 17:12:21 <uli> ...I will go down the corridor and knock... 17:12:23 <ewallace> ewallace has joined #owl 17:12:34 <MartinD> IanH: No Bijan yet, so we need to check later with him what is the status of this action 17:12:51 <MartinD> Subtopic: Reviewing of the current documents 17:13:08 <MartinD> IanH: Thank you to all who reviewed documents and gave feedback, good job! 17:13:24 <IanH> q? 17:13:32 <MartinD> IanH: One exception is the Profile - not a fault of reviewers, but there is still some discussion ongoing 17:13:40 <MartinD> IanH: We hope to conclude this within a few days 17:13:41 <bijan> bijan has joined #owl 17:14:03 <MartinD> IanH: According to the schedule from the last F2F meeting, we should publish the drafts by September 15... 17:14:04 <m_schnei> q+ 17:14:06 <Zakim> +Peter_Patel-Schneider 17:14:09 <IanH> q? 
17:14:13 <m_schnei> zakim, unmute me 17:14:13 <Zakim> m_schnei should no longer be muted 17:14:18 <pfps> pfps has joined #owl 17:14:20 <MartinD> IanH: Perhaps people working on the documents may say if this is still realistic? Shall we go for each document? 17:14:21 <IanH> q? 17:14:55 <bijan> I'm nowhere near done my review, but I'm comfortable publishing without it (Syntax is a big document!) 17:14:55 <IanH> q? 17:15:04 <m_schnei> zakim, mute me 17:15:04 <Zakim> m_schnei should now be muted 17:15:05 <MartinD> m_schnei: Let's wait for the next stage, in my case we will finish the review by Friday... but there will be some potential points that may need further discussion 17:15:08 <IanH> q? 17:15:17 <m_schnei> q- 17:15:21 <MartinD> IanH: We can wait a few days to give people time to review things properly 17:15:36 <MartinD> IanH: Any objections to delaying the publication by a few days? 17:15:44 <IanH> q? 17:15:46 <bmotik> I'll try to handle the reviews of Syntax this weekend 17:15:58 <MartinD> IanH: What about syntax? Do we have a doc that reflects reviews by next week? 17:16:00 <pfps> It's done. 17:16:03 <bmotik> (Syntax is) done 17:16:10 <MartinD> IanH: Model theoretic semantics is done too 17:16:13 <IanH> q? 17:16:17 <MartinD> IanH: What about RDF? 17:16:24 <pfps> (RDF is) essentially done, needs a little bit more work 17:16:38 <MartinD> IanH: is it realistic to publish it next week? 17:16:42 <pfps> Yes, I expect it to be done later today 17:16:42 <IanH> q? 17:17:00 <pfps> q+ 17:17:05 <IanH> q? 17:17:09 <IanH> ack pfps 17:17:09 <MartinD> Alan: (?) Is there some proposal in there on importing? 17:17:34 <sandro> omit: that wasn't me, MartinD 17:17:42 <Zakim> -Alan 17:17:47 <IanH> q? 17:18:02 <MartinD> IanH: We still have some open issues, there will be editorial comments that would clarify parts that can change 17:18:06 <bmotik> I think it's done 17:18:09 <MartinD> IanH: What about XML serialization document? 
17:18:09 <pfps> (XML Serialization is) done 17:18:18 <MartinD> IanH: OK, review of this document is done 17:18:32 <pfps> q+ 17:18:36 <IanH> q? 17:18:38 <MartinD> IanH: We're in a good shape, so we should be in position to vote on the publication of these documents next week 17:18:41 <IanH> ack pfps 17:19:02 <MartinD> pfps: Those people who did reviews should perhaps check that their comments are adequately resolved/addressed 17:19:18 <IanH> q? 17:19:35 <Zakim> +Alan 17:19:39 <MartinD> IanH: Typically, these reactions and checks are happening on the mailing lists, but reviewers should perhaps explicitly check that their comments and suggestion are making it into the revisions 17:19:40 <IanH> q? 17:19:51 <IanH> q? 17:20:37 <IanH> q? 17:20:49 <MartinD> IanH: When editors finish updates according to the reviews, they should send a message to the whole WG mailing list to alert (other) people who want to re-check... 17:20:59 <MartinD> IanH: So that we can hold the vote next week 17:21:07 <IanH> q? 17:21:19 <MartinD> IanH: Let us agree then that the editors should let Ian know about the status 17:21:22 <uli> Yes 17:21:29 <MartinD> IanH: All seem to be in principle happy with doc publication 17:21:41 <MartinD> Subtopic: SKOS last call draft 17:21:43 <pfps> q+ 17:21:48 <IanH> q? 17:21:53 <IanH> ack pfps 17:21:55 <MartinD> IanH: There were no volunteers last week to review this last call draft recommendation, so it is still on agenda 17:22:02 <MartinD> pfps: There is a review by me... 17:22:22 <MartinD> pfps: I am not quite sure what to do with my review, but it might act as a basis for the WG review/position? 17:22:25 <sandro> Want to talk also about the RIF Review on behalf of OWL2 17:22:25 <alanr> Goal would be to see what can/can't be represented in OWL2 17:22:28 <IanH> q? 17:22:32 <MartinD> pfps: There are more than one document in the SKOS draft 17:22:38 <IanH> q? 
17:22:38 <m_schnei> AFAIK, only the SKOS reference is in the Last Call 17:22:38 <MartinD> IanH: Are there any volunteers now to take on this review? 17:22:53 <IanH> q? 17:23:01 <IanH> ack sandro 17:23:01 <Zakim> omit: Sandro, you wanted to ask about RIF Review for OWL 2 17:23:05 <IanH> q? 17:23:11 <m_schnei> I'm working on my own review (work in progress) 17:23:30 <m_schnei> zakim, unmute me 17:23:30 <Zakim> m_schnei should no longer be muted 17:23:30 <IanH> q? 17:23:37 <MartinD> IanH: Can Jie perhaps check if someone from there wouldn't do it? 17:23:38 <IanH> q? 17:23:58 <MartinD> m_schnei: As I said I am also working on a review, but not sure if there should be an "OWL WG" official version 17:24:14 <alanr> q+ 17:24:17 <IanH> q? 17:24:21 <IanH> ack alanr 17:24:23 <MartinD> IanH: If Peter and Michael finish their reviews, we may consider them both and discuss (if needed) what can be reused in the OWL WG position 17:24:48 <MartinD> Alan: What aspects are you focusing on? E.g. to what extent SKOS relates to OWL profile(s)? 17:25:00 <MartinD> pfps: This has been partly done, details to follow later 17:25:18 <MartinD> m_schnei: I'm more interrested in RDF semantics and those factors 17:25:30 <IanH> q? 17:25:47 <m_schnei> zakim, mute me 17:25:47 <Zakim> m_schnei should now be muted 17:25:49 <MartinD> Alan: If you are willing to contribute your reviews, we can see if we agree on a common statement/review 17:26:03 <MartinD> IanH: Let's see what comes from Peter and Michael and act later 17:26:10 <MartinD> Subtopic: Next F2F meeting 17:26:23 <MartinD> IanH: May I ask you to indicate your status on the page of the next F2F meeting on the wiki? 17:26:37 <IanH> q? 17:26:40 <MartinD> MartinD: The URI of the meeting is and the registration to TPAC is also available from there... 17:26:50 <MartinD> Subtopic: Review of RIF by OWL WG (agenda amendment) 17:26:54 <MartinD> Sandro: I have had suggestion for agenda amendment. 
It is about that RIF review from the OWL2 perspective 17:27:16 <pfps> Actually, I helped write it, so I'm not sure that I *reviewed* it 17:27:38 <MartinD> Sandro: RIF document review was done mostly with OWL1 focus, maybe there can be a check on whether OWL WG is still happy with it; in the light of OWL2? 17:27:41 <IanH> q? 17:27:49 <pfps> At first blush, I can't think of any changes required (but don't let me bias the review) :-) 17:27:50 <MartinD> Sandro: Ideally, we should have someone other than Peter who helped writing it 17:28:09 <sandro> See details on 17:28:14 <IanH> q? 17:28:29 <MartinD> IanH: Are there timelines? 17:28:46 <IanH> q? 17:29:02 <MartinD> Sandro: It's about next few days, so it may be a bit tough to do it within deadlines 17:29:24 <MartinD> IanH: Not many people volunteering, perhaps we need an email to reach to other people in the whole WG? 17:29:45 <MartinD> IanH: Administrative points are now concluded 17:29:49 <MartinD> Topic: Discussion on Issues 17:29:58 <MartinD> IanH: There are two resolution proposals 17:30:01 <IanH> q? 17:30:09 <msmith> q+ 17:30:12 <MartinD> IanH: Issue 133 on DL-Lite profile 17:30:15 <MartinD> Subtopic: Issue 133 (DL-Lite Profile modifications to include UNA) 17:30:25 <IanH> zakim, who is on the call? 17:30:25 <Zakim> On the phone I see MartinD (muted), Sandro, IanH, bmotik (muted), Zhe (muted), m_schnei (muted), Achille, uli (muted), bcuencagrau (muted), baojie, msmith, Peter_Patel-Schneider, 17:30:28 <Zakim> ... Alan 17:30:39 <IanH> q? 
17:30:43 <IanH> ack msmith 17:30:46 <MartinD> msmith: The proposal is to move functional property and key axioms from OWL 2 QL profile 17:31:09 <MartinD> msmith: We should also remove the existing global restrictions from the OWL 2 QL profile and there should be a core DL-Lite that does not have all those extensions ; DL-Lite_A seen as an extension which adds functional properties and keys but requires the UNA 17:31:15 <bcuencagrau> omit: +q 17:31:31 <MartinD> IanH: There might be some text in the profile document mentioning about these exceptions? 17:31:34 <IanH> q? 17:31:44 <bcuencagrau> Zakim, unmute me 17:31:44 <Zakim> bcuencagrau should no longer be muted 17:31:47 <MartinD> msmith: Yes, this should happen and Diego was also happy with the proposal (see) 17:31:51 <IanH> ack bcuencagrau 17:32:00 <MartinD> bcuencagrau: I am unclear what was proposed... 17:32:23 <IanH> q? 17:32:27 <MartinD> bcuencagrau: Do we have DL-Lite and then concerning assertions will we still have sameAs and differentFrom? 17:32:51 <MartinD> msmith: differentFrom is acceptable, sameAs probably not 17:33:02 <IanH> q? 17:33:21 <MartinD> bcuencagrau: We have basic features in the profile 17:33:29 <uli> "the intersection" of the choices is how I see it 17:33:34 <IanH> q? 17:33:52 <MartinD> msmith: There are only axioms, no unique axioms... 17:34:14 <bcuencagrau> Zakim, mute me 17:34:14 <Zakim> bcuencagrau should now be muted 17:34:17 <IanH> q? 
17:34:18 <MartinD> msmith: what we have in the document has been proposed a few months ago 17:34:28 <uli> Looks good to me 17:34:42 <bcuencagrau> I am fine with it too 17:34:48 <MartinD> IanH: Given there were no objections in emails, we propose to resolve this issue 17:35:01 <MartinD> PROPOSED: Resolve Issue 133 (DL-Lite Profile modified to include UNA) per Mike's email 17:35:04 <pfps> +1 17:35:07 <bcuencagrau> +1 17:35:07 <msmith> +1 17:35:08 <IanH> +1 17:35:10 <bmotik> +1 17:35:13 <MartinD> MartinD: +1 17:35:14 <Zhe> +1 17:35:17 <m_schnei> +1 17:35:22 <IanH> Mike's email = 17:35:35 <uli> +1 17:35:46 <MartinD> RESOLVED: Issue 133 (DL-Lite Profile modified to include UNA) per Mike's email () 17:36:03 <MartinD> Subtopic: Issue 119 (OWL 2 Full may become inconsistent due to self restrictions) 17:36:04 <bcuencagrau> Zakim, mute me 17:36:04 <Zakim> bcuencagrau was already muted, bcuencagrau 17:36:11 <IanH> q? 17:36:17 <MartinD> IanH: This seems to be resolved by RDF semantics 17:36:34 <MartinD> IanH: Due to self-restrictions this could have been a problem, but it was resolved by Mike 17:36:39 <MartinD> IanH: It does not seem to be really controversial 17:36:42 <IanH> q? 17:36:58 <MartinD> PROPOSED: Resolve Issue 119 (OWL 2 Full may become inconsistent due to self restrictions) per Ian's email 17:37:03 <m_schnei> +1 17:37:06 <IanH> +1 17:37:09 <bcuencagrau> +1 17:37:09 <uli> +1 17:37:10 <msmith> +1 17:37:12 <Achille> +1 17:37:14 <MartinD> MartinD: Ian's email = 17:37:17 <MartinD> MartinD: +1 17:37:18 <pfps> +1 17:37:25 <bmotik> +1 17:37:29 <baojie> +1 17:37:36 <Zhe> +1 17:37:45 <MartinD> RESOLVED: Issue 119 (OWL 2 Full may become inconsistent due to self restrictions) per Ian's email () 17:38:18 <MartinD> Subtopic: Issue 130 (Conformance, warnings, errors) 17:38:31 <MartinD> IanH: This has been discussed last week, there were a few emails in the meantime... 17:38:35 <sandro> omit: q+ 17:38:36 <IanH> q? 
17:38:43 <IanH> ack sandro 17:38:43 <MartinD> IanH: Shall we spend a few minutes to get a resolution? 17:38:54 <MartinD> Sandro: We exchanged some emails and mostly we're happy 17:39:09 <IanH> q? 17:39:10 <MartinD> Sandro: There was a proposal to amend some text, I liked that proposal 17:39:29 <MartinD> IanH: Shall we then make a change agreed in the email; summarized in? 17:39:32 <Zhe> omit: q+ 17:39:33 <pfps> Make change and produce a proposal 17:39:39 <IanH> q? 17:39:41 <Zhe> zakim, unmute me 17:39:41 <Zakim> Zhe should no longer be muted 17:39:43 <MartinD> IanH: OK, let's assume we go for the change 17:39:52 <IanH> ack Zhe 17:39:54 <alanr> pointer 17:40:10 <alanr> omit: q+ 17:40:19 <sandro> Details can be found in 17:40:22 <IanH> q? 17:40:31 <sandro> In particular, the text starting "An OWL 2 RL...." 17:40:33 <MartinD> IanH: I will update the conformance document with the modified text and I will send an email how was this implemented, so that people can comment 17:40:56 <IanH> q? 17:40:59 <IanH> ack alanr 17:41:02 <IanH> q? 17:41:05 <MartinD> IanH: Proposals from the author regarding words like "could", "should",... will be made into the text too 17:41:06 <MartinD> Alan: Yesterday we discussed with Sandro - there are two meanings of "unknown" 17:41:07 <MartinD> Alan: "unable to complete", e.g. due to resource limitations 17:41:08 <MartinD> Alan: Another is due to finished but "not guaranteed entailment" algorithm 17:41:10 <MartinD> Alan: And then, if the answer "doesn't make sense", we may not have a terminating message 17:41:19 <sandro> UNKNOWN, Reason = 17:41:19 <sandro> - Resource Limits Reached 17:41:19 <sandro> - Finished Incomplete Algorithm 17:41:19 <sandro> - Unexpected Error 17:41:50 <IanH> q? 17:42:22 <IanH> q? 17:42:36 <sandro> omit: q+ is this a test case question or an API question? 17:42:40 <IanH> q? 17:42:45 <sandro> Want to ask - is this a test case question or an API question? 
17:42:56 <MartinD> Alan: A proposal for something that would make it clear(er) that an algorithm ran out of resources vs. not knowing the answer 17:43:09 <m_schnei> "Out of Resource" sounds pretty technical for a formal spec... 17:43:16 <IanH> q? 17:43:21 <IanH> ack sandro 17:43:21 <Zakim> omit: sandro, you wanted to ask is this a test case question or an API question? 17:43:23 <MartinD> Alan: Even if these messages ("UNKNOWN") are present in OWL1, there is no reason why to keep previous language 17:43:37 <MartinD> Sandro: I pasted the three meanings of "unknown" above 17:44:05 <MartinD> Sandro: But not sure how useful this is; it probably does not help in test cases, so not sure how valuable this would be in API 17:44:05 <m_schnei> {True, False, Unknown} is better than {True,False} in Prolog 17:44:27 <sandro> omit: I DON'T think it helps in the test cases. 17:44:29 <IanH> q? 17:44:38 <alanr> omit: q+ 17:44:39 <MartinD> IanH: One can perhaps distinguish even more cases to complement values of true and false 17:44:41 <IanH> q? 17:44:46 <IanH> ack alanr 17:44:50 <MartinD> IanH: Any opinions from the implementers? 17:45:30 <IanH> q? 17:45:35 <MartinD> IanH: One case where it makes sense is when the check has been done, so it may be undesirable to return just unknown (?) 17:45:49 <sandro> Something like: "Completed-Unknown"... 17:45:54 <IanH> q? 17:46:21 <IanH> q? 17:46:22 <m_schnei> omit: q+ 17:46:25 <MartinD> IanH: Say {True, False, UnexpectedError, CompletedComputationButNoAnswer } 17:46:26 <m_schnei> zakim, unmute me 17:46:26 <Zakim> m_schnei should no longer be muted 17:46:27 <IanH> q? 17:46:32 <sandro> +1 to four cases for OWL RL 17:46:36 <pfps> +0 17:47:03 <MartinD> m_schnei: One can put comments re conformance, e.g. for OWL Full it cannot be avoided that "unknown" will come out 17:47:11 <IanH> q? 17:47:21 <m_schnei> zakim, mute me 17:47:21 <Zakim> m_schnei should now be muted 17:47:33 <uli> Perhaps we can see the different alternatives in writing? 
17:47:39 <Zhe> +1 to Ian's suggestion of possible values 17:47:44 <IanH> ack m_schnei 17:47:47 <MartinD> IanH: I will have another pass on the document and see if people like it 17:47:49 <IanH> q? 17:48:22 <MartinD> Sandro: We should say that, in general, one "could" be returning "unknown" (there is nothing wrong with returning this value), otherwise there may be a conflict with an OWL test case? 17:48:35 <MartinD> Sandro: What about query answering issues? 17:49:10 <IanH> q? 17:49:25 <MartinD> IanH: We can mention something like XML query answering and show how these entailment checks would impact on QA... rather than having a complete new section on QA 17:49:40 <MartinD> Subtopic: Issue 144 (Missing base triple in serialization of axioms with annotations) 17:49:41 <IanH> q? 17:49:48 <Zhe> q+ 17:49:53 <sandro> omit: SCRIBE-CORRECTION: No, what I said was that there is nothing wrong with returning "unknown" in OWL RL. 17:49:55 <MartinD> IanH: This is an issue raised by Zhe, so perhaps he could summarize the point... 17:49:58 <IanH> ack Zhe 17:50:05 <alanr> Also note the message here: 17:50:12 <MartinD> Zhe: We discussed this in the WG before... 17:50:32 <MartinD> Zhe: If we don't include the base triple to the annotated axioms we may put unnecessary burden on implementations 17:50:33 <IanH> q? 17:50:33 <bmotik> omit: q+ 17:50:35 <m_schnei> q+ 17:50:39 <pfps> q+ 17:50:42 <bmotik> Zakim, unmute me 17:50:42 <Zakim> bmotik should no longer be muted 17:50:48 <MartinD> Zhe: We are suggesting to simply include it, which makes life easier 17:50:50 <IanH> q? 17:50:54 <IanH> ack bmotik 17:50:55 <alanr> q+ 17:51:20 <MartinD> Boris: It seems like reasonable thing to do but the problem is that an axiom is not represented as one thing vs. two things 17:51:39 <MartinD> Boris: What if you find both - base axiom and the reified one... then what? 17:52:00 <MartinD> Boris: We may decide, e.g. on forgeting the base one if a reified axiom is found... 17:52:06 <IanH> q? 
17:52:08 <MartinD> Boris: However, this may cause some mapping issues! 17:52:35 <MartinD> Boris: Then there is another issue = including the triple does not tell you what to do with it or if it is not found, what to do with it 17:53:00 <MartinD> Boris: ideally we would need something along lines "from reified triple define the original" 17:53:05 <IanH> q? 17:53:11 <m_schnei> zakim, unmute me 17:53:11 <Zakim> m_schnei was not muted, m_schnei 17:53:23 <MartinD> Boris: Should we start adding original triples if we find a reified one? 17:53:39 <bcuencagrau> Zakim, mute me 17:53:39 <Zakim> bcuencagrau was already muted, bcuencagrau 17:53:48 <MartinD> Boris: Finally, I don't think this will occur often enough, so that it can cause problems with efficiency and performance... 17:53:54 <IanH> q? 17:54:09 <IanH> ack m_schnei 17:54:25 <MartinD> m_schnei: Without the added triples it seems more stable... 17:54:39 <pfps> Boris has made my points 17:54:41 <pfps> q- 17:54:52 <MartinD> m_schnei: Would current RDF serialization help with this? 17:55:31 <MartinD> m_schnei: If it is not always avoidable to have triple in (if you want to annotate the triple without having access to the orig. ontology), would you define new ontology? 17:55:33 <IanH> q? 17:55:47 <MartinD> m_schnei: There might arise problems with axiom closure in such a scenario 17:55:58 <MartinD> m_schnei: I would not be in favour, not necessary IMHO 17:56:07 <IanH> q? 17:56:17 <m_schnei> zakim, unmute me 17:56:17 <Zakim> m_schnei was not muted, m_schnei 17:56:22 <bmotik> omit: q+ 17:56:39 <MartinD> Alan: What about missing base triple -- there is a syntax for it, so no major issue... 17:57:08 <IanH> q? 17:57:12 <pfps> omit: q+ 17:57:13 <MartinD> Alan: Regarding Michael's comment, not sure this would be a really problem, perhaps only in some profiles? 
17:57:18 <IanH> ack alanr 17:57:18 <m_schnei> Of course, you can have two ontology files, one having the spo, the other having the reification, and then having the second import the first 17:57:20 <Zhe> omit: q+ 17:57:33 <MartinD> Alan: Issues are not really with performance, more about monotonicity... 17:57:35 <pfps> Want to ask why Alan's example is non-monotonic 17:57:41 <IanH> ack bmotik 17:58:00 <msmith> omit: q+ 17:58:27 <alanr> Last statement (SCRIBE NOTE: from Michael re inferring SPO-s?) re OWL RL seems wrong. OWL RL has specific syntax. 17:58:31 <MartinD> Boris: If triple is not there, one can reverse-parse it... but what would OWL-RL parser do with this? If you have RDF graph without this triple, you are missing on some inferences 17:58:43 <alanr> Conformance allows OWL RL entailment checker to take and RDF 17:58:49 <MartinD> Boris: There is no guarantee the triple will be included (as it should)... 17:58:53 <IanH> q? 17:59:18 <m_schnei> Yes, OWL Full infers the spo 17:59:28 <IanH> q? 17:59:33 <MartinD> Boris: Then about monotonicity, we already have in OWL Full semantics, there is a possibility to get to non-reified version by means of reasoning... 17:59:34 <alanr> Where is there that reification implies base triple? 17:59:36 <alanr> It wasn't in RDF 17:59:57 <MartinD> pfps: I don't think Alan's example is non-mononotonic 18:00:00 <IanH> q? 18:00:03 <IanH> ack pfps 18:00:03 <Zakim> omit: pfps, you wanted to ask why Alan's example is monotonic 18:00:06 <bmotik> omit: q+ 18:00:08 <IanH> ack Zhe 18:00:09 <MartinD> Zhe: I still want to stress the performance issue 18:00:11 <IanH> q? 
18:00:23 <MartinD> Zhe: If an application wants to use this type of annotation 18:00:49 <MartinD> Zhe: ...you can imagine this is an additional burden to keep checking on information on every single triple 18:00:51 <pfps> Want to say something about doing a back-of-the-envelope calculation of the relative costs 18:01:19 <MartinD> Zhe: If base triple is out, it's possible, but it's not efficient... if there is a mix of annotated and non-annotated axioms, what should we do? 18:01:20 <uli> Zhe, perhaps this can be overcome by some clever data structures? 18:01:26 <IanH> q? 18:01:37 <MartinD> Zhe: Should we accept axiom with annotation and forget the ones without annotation? 18:01:50 <IanH> q? 18:01:57 <IanH> ack msmith 18:02:05 <MartinD> msmith: Axiom with and without annotation are structurally different 18:02:12 <bmotik> +1 to msmith 18:02:18 <IanH> q? 18:02:22 <MartinD> msmith: This is already in the specification 18:02:26 <IanH> ack bmotik 18:02:51 <MartinD> Boris: We can address the concerns with performance without altering the core spec 18:03:27 <MartinD> Boris: People may produce RDF graphs... it is safer to assume that one gets RDF graph that needs checking if things are in it 18:03:40 <MartinD> Boris: We can think about ways to handle certain common cases 18:03:42 <IanH> q? 18:04:05 <alanr> Question: How does RDF semantics 4.18 avoid asserting positive triple for negative property assertion? 18:04:07 <MartinD> Boris: The biggest problem with reifications is their occurrence in different part of file = problem for parsers that need to trace this 18:04:10 <IanH> q? 18:04:31 <MartinD> Boris: My potential suggestion - implementation could/should put reified triples together, one after another... 18:04:40 <alanr> We don't have control of this in the RDF world 18:04:45 <IanH> q? 18:04:46 <MartinD> Boris: This would allow more efficient handling... 18:05:27 <alanr> What about RDF pipes, etc? 
18:05:31 <MartinD> Boris: Of course, we don't have any control over this... but OWL things are written in files, so we may recommend it? 18:05:41 <IanH> q? 18:05:46 <IanH> ack pfps 18:05:46 <Zakim> omit: pfps, you wanted to do a back-of-the-envelope calculation of the relative costs 18:05:50 <IanH> q? 18:05:57 <MartinD> Peter: There was a point about performance issue, 18:06:09 <MartinD> pfps: Reading a triple is expensive, even compared to running rules 18:06:19 <IanH> q? 18:06:21 <alanr> A whole lot? 1/3 of # axioms that are annotated 18:06:23 <alanr> 18:06:29 <bmotik> omit: q+ 18:06:40 <MartinD> pfps: If we had more triples, we are likely to increase the amount of I/O required, right? 18:06:41 <IanH> q? 18:06:49 <MartinD> Zhe: Maybe by 20-30% 18:07:15 <IanH> q? 18:07:21 <MartinD> Peter: Yes, but that's quite substantial... unless we do an actual analysis, I am not prepared to support that we would save actual resources 18:08:05 <MartinD> Zhe: If annotation axioms do not include the base triple, we need to do additional joins in the tables... 18:08:08 <alanr> Table joins are more expensive than I/O 18:08:38 <pfps> I'm not prepared to admit that in a decent implementation rule processing is more expensive than adding triples 18:08:43 <IanH> q? 18:08:47 <MartinD> IanH: It seems to be hard to establish what takes more time - loading triples into table or doing joins.... 18:09:18 <MartinD> Boris: I want briefly about RDF pipes... unlikely that you cannot ship related triples 18:09:24 <IanH> q? 
18:09:29 <IanH> ack bmotik 18:09:37 <alanr> Re pipes: not if they go through some hash table as part of their processing 18:09:46 <alanr> ...which is likely 18:10:17 <alanr> Anyways, implementation has to handle worse case 18:10:22 <MartinD> Boris: If we are processing arbitrary RDF graph, if we have guarantees that in reasonable cases the triples would be close, one can implement a thing that would basically read X triples and replace them with the base triple (if that's needed) 18:10:38 <IanH> q? 18:10:39 <Zakim> -Alan 18:10:44 <IanH> q? 18:10:56 <MartinD> Boris: If we make sure the triples are close to each, we can leave the spec as it is, and you have control over your implementations 18:11:22 <IanH> q? 18:11:29 <MartinD> IanH: What about doing the thing in tables, in a similar way as you said, filling tables once? 18:11:35 <Zakim> +Alan 18:11:55 <MartinD> Boris: True but one may actually save on filling and re-filling the table because the axiom comes later... 18:11:59 <IanH> q? 18:12:02 <m_schnei> zakim, unmute me 18:12:02 <Zakim> m_schnei was not muted, m_schnei 18:12:26 <MartinD> IanH: Sounds interesting... appropriate to take discussion offline for the interested parties, so that they come up with a proposal to resolve this... 18:12:35 <IanH> q? 18:12:37 <MartinD> IanH: Ideally by not having to have base triples? 18:12:42 <IanH> ack m_schnei 18:12:50 <MartinD> m_schnei: I/O is perhaps not interesting 18:13:11 <IanH> q? 18:13:20 <MartinD> m_schnei: If we find the version of the triple but not the original triple... what is *wrong* with this (disregarding I/O performance) 18:13:37 <MartinD> IanH: There is no reverse mapping for OWL Full though 18:13:44 <IanH> q? 18:13:48 <MartinD> m_schnei: I mean OWL DL 18:13:53 <IanH> q? 18:14:00 <m_schnei> q- 18:14:04 <m_schnei> zakim, mute me 18:14:04 <Zakim> m_schnei should now be muted 18:14:14 <MartinD> IanH: But the discussion is now about OWL RL, so ... 
let's take this offline and see if things are resolved this way 18:14:21 <MartinD> Subtopic: Issue 109 (Namespace for elements and attributes in the XML serialization) 18:14:34 <IanH> q? 18:14:37 <MartinD> IanH: Last time we were close to resolving namespaces in this issue, right? 18:14:49 <IanH> q? 18:14:49 <MartinD> IanH: No conclusions have been reached yet 18:15:31 <MartinD> Sandro: We are waiting for getting some objective opinion on the conflicting points... we need to find technical differences to rule one way or another 18:16:11 <MartinD> IanH: So at the end of discussion we will somehow need to flip the coin, unless there is an agreement between protagonists 18:16:19 <MartinD> Sandro: Do we have pros and cons of the two proposals? 18:16:28 <Zakim> +??P5 18:16:31 <IanH> q? 18:16:38 <MartinD> IanH: We looked at it from different angles and the point is purely in different opinions 18:17:01 <bijan> bijan has joined #owl 18:17:04 <MartinD> Alan: Is this an architectural issue? 18:17:16 <bijan> I won't accept TAG arbitration 18:17:22 <MartinD> Alan: If this is on stake, why not bringing someone else in? 18:17:25 <bijan> zakim, who is here 18:17:25 <Zakim> bijan, you need to end that query with '?' 18:17:35 <bijan> zakim, who is here? 18:17:35 <Zakim> On the phone I see MartinD (muted), Sandro, IanH, bmotik, Zhe, m_schnei (muted), Achille, uli (muted), bcuencagrau (muted), baojie, msmith, Peter_Patel-Schneider, Alan, ??P5 18:17:38 <Zakim> On IRC I see bijan, pfps, ewallace, msmith, Achille, uli, Zhe, bcuencagrau, m_schnei, bmotik, IanH, RRSAgent, Zakim, MartinD, baojie, sandro, alanr, trackbot 18:17:44 <bijan> zakim, ??p5 is me 18:17:44 <Zakim> +bijan; got it 18:17:46 <bijan> omit: q+ 18:17:51 <IanH> q? 18:18:12 <MartinD> Alan: Is there a suggestion where we can ask for ideas? e.g. XML WG 18:18:35 <IanH> I would listen to TAG opinion 18:18:38 <MartinD> Alan: do we need more time to this? Perhaps next week? 18:18:54 <alanr> omit: yes 18:18:58 <IanH> q? 
18:19:02 <IanH> ack bijan 18:19:37 <MartinD> Bijan: I am curious about these situations, there should be some evidence which we don't have at the moment... mere judgments are not really making much difference here. One more person will have an opinion, but we should go for some evidence... 18:19:53 <MartinD> IanH: In the end, there will have to be a vote on this in WG 18:20:52 <MartinD> IanH: ...so it's really about other members of WG to make up their minds and in voting go one way or another... so far it's mainly W3C and Manchester objecting (with most being indifferent) 18:21:02 <IanH> q? 18:21:07 <MartinD> IanH: So what about that coin idea = if no decision reached 18:21:14 <alanr> I object to that 18:21:51 <MartinD> IanH: When do we expect to make this decision? 18:22:04 <MartinD> Alan: Why don't we see what happens next week? 18:22:29 <MartinD> Bijan: The issue is that one can hardly expect to get any new information to change mind 18:22:41 <IanH> q? 18:22:58 <MartinD> Alan: It's not about changing minds but about other people getting information to understand what's going on 18:23:26 <MartinD> IanH: Let's wait until the next week if additional information appears, if not, just call for a vote 18:23:38 <IanH> q? 18:23:43 <alanr> +1 18:23:44 <pfps> omit: q+ 18:23:46 <MartinD> Subtopic: Issue 138 (Name of dateTime datatype) 18:23:47 <MartinD> IanH: The next issue is about a new datatype proposed for dateTime... 18:23:48 <bijan> +1 to owl:datetime 18:23:51 <bmotik> Zakim, mute me 18:23:51 <Zakim> bmotik should now be muted 18:24:15 <pfps> q? 18:24:19 <IanH> q? 18:24:20 <MartinD> IanH: We are waiting for the response to Peter's email 18:24:22 <IanH> ack pfps 18:24:33 <MartinD> Peter: Perhaps we should put this in some documents... 18:24:52 <bmotik> Yes 18:24:58 <MartinD> Peter: Not as a resolved decision but just to make sure it's not forgotten and IMHO, owl:dateTime would be the safe choice 18:24:58 <IanH> q? 
18:25:01 <bmotik> There is aleady an editor's note 18:25:19 <MartinD> Peter: This would be in syntax, Boris says it would there 18:25:55 <MartinD> Alan: (?) What is the definition of punning at the moment? 18:25:58 <IanH> q? 18:26:10 <bijan> I think it's what peter says it was 18:26:11 <m_schnei> Shouldn't there be an email discussion in the past about the "which punning" question? 18:26:28 <MartinD> Alan: There are a few definitions going, so which is the one we subscribe to? To explain it to people 18:26:40 <MartinD> IanH: Alright, these other issues are probably longer to discuss 18:26:42 <MartinD> Topic: AOB 18:26:53 <MartinD> IanH: There are no proposal for additional items on agenda, so let's conclude 18:26:54 <Zakim> -msmith 18:26:55 <m_schnei> omit: bye 18:26:56 <Zakim> -bmotik 18:26:56 <uli> omit: bye bye 18:27:01 <Zakim> -uli 18:27:01 <IanH> omit: bye 18:27:02 <Zakim> -baojie 18:27:02 <Zakim> -Peter_Patel-Schneider 18:27:04 <Zakim> -bijan 18:27:04 <Zakim> -Sandro 18:27:05 <msmith> msmith has left #owl 18:27:05 <Zakim> -Achille 18:27:07 <sandro> Thanks, Ian :-) 18:27:08 <Zakim> -IanH 18:27:09 <Zakim> -Alan 18:27:10 <Zakim> -m_schnei 18:27:13 <Zakim> -bcuencagrau 18:27:25 <MartinD> IanH: And thanks to you all for participation too
http://www.w3.org/2007/OWL/wiki/Chatlog_2008-09-10
Music is an important part of our lives, including the music in the computer games we play. Perhaps you have wanted to create a Java-based music-editor program to help you compose pieces of music for your own computer games. This program would present a GUI with some means to choose an instrument (a piano, for example), a piano-like keyboard for playing notes (via that instrument), some way to record and play back compositions, and so on.

This Java Fun and Games installment presents an applet that gets you started on that program. The applet, which I call Javano (for Java-based piano), reveals a piano-like keyboard and a means to choose an instrument, but nothing else. I'll leave it up to your creativity to add extra capabilities to this applet. The figure below reveals Javano's GUI.

Press the keys on Javano's keyboard to play the bagpipes or another musical instrument.

The GUI is created from the contents of Javano.java. That source file is described in Listing 1.

Listing 1.
Javano.java

// Javano.java

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class Javano extends JApplet
{
   Keyboard keyboard;
   JComboBox instruments;

   public void init ()
   {
      JPanel panel = new JPanel ();
      panel.add (new JLabel ("Instruments:"));

      instruments = new JComboBox ();
      panel.add (instruments);

      getContentPane ().add (panel, BorderLayout.NORTH);

      keyboard = new Keyboard ();
      getContentPane ().add (keyboard, BorderLayout.SOUTH);
   }

   public void start ()
   {
      keyboard.turnOn ();

      DefaultComboBoxModel dcbm;
      dcbm = new DefaultComboBoxModel (keyboard.getInstruments ());
      instruments.setModel (dcbm);

      ActionListener al;
      al = new ActionListener ()
           {
              public void actionPerformed (ActionEvent e)
              {
                 JComboBox cb = (JComboBox) e.getSource ();
                 keyboard.chooseInstrument (cb.getSelectedIndex ());
              }
           };
      instruments.addActionListener (al);
   }

   public void stop ()
   {
      keyboard.turnOff ();
   }
}

Although I haven't commented Javano.java, the source code should not be too hard to follow. For starters, Javano is organized around a custom Swing-based keyboard component represented by the Keyboard class. The Keyboard presents a no-argument constructor for creating this component. That constructor initializes the keyboard's keys, including their locations. Furthermore, it attaches a mouse listener to this component, to respond to mouse-press, mouse-release, and mouse-exit events.

The Keyboard also presents four methods:

- public void turnOn() turns on the keyboard. Turning on the keyboard causes the default MIDI (Musical Instrument Digital Interface) synthesizer to be acquired, instruments to be loaded into that synthesizer, and a channel to be established for receiving note messages that result in music. Until you call this method, you will hear no music when you press keys, although you will get visual feedback of those key presses. The best place to call turnOn() is the applet's start() method.
- public String [] getInstruments() returns an array of strings that describe various instruments; you will probably use that information to populate a combo box or a list. Until you invoke turnOn(), this method returns null.

- public boolean chooseInstrument (int instrumentID) switches from the current instrument to the instrument associated with the integer represented by instrumentID. The array index used to select a string from the array returned by getInstruments() can be passed to chooseInstrument() to select the instrument described by that String object. Until you invoke turnOn(), this method returns false.

- public void turnOff() turns off the keyboard. You must turn off the keyboard before leaving the applet's Web page. Failure to do so could result in various problems due to resources being tied up as a result of the turnOn() method call. The best place to call that method is the applet's stop() method.

The init() method constructs the GUI. Because we cannot invoke getInstruments() until we have called turnOn(), and because we call turnOn() in the start() method, we populate the combo box in start().

Listing 2 describes the implementation of the Keyboard class's constructor and its four methods.

Listing 2. Keyboard.java

// Keyboard.java

/*
 * Portions of Keyboard's source code were excerpted from Sun's MidiSynth.java
 * source file. I've included Sun's original copyright and license, to be fair
 * to Sun.
 *
 * Copyright (c) 1999.
 */

import java.awt.*;
import java.awt.event.*;
import java.util.Vector;

import javax.sound.midi.*;
import javax.swing.*;

/*
   This class creates a keyboard component that knows how to play a specific
   instrument.
*/

public class Keyboard extends JPanel
{
   public final static Color KEY_BLUE = new Color (204, 204, 255);
   public final static int KEY_HEIGHT = 80, KEY_WIDTH = 16;

   private Key theKey;
   private MidiChannel channel;
   private Synthesizer synthesizer;

   private Vector blackKeys = new Vector ();
   private Vector keys = new Vector ();
   private Vector whiteKeys = new Vector ();

   public Keyboard ()
   {
      setLayout (new BorderLayout ());
      setPreferredSize (new Dimension (42*KEY_WIDTH+1, KEY_HEIGHT+1));

      int transpose = 24;
      int [] whiteIDs = { 0, 2, 4, 5, 7, 9, 11 };

      for (int i = 0, x = 0; i < 6; i++)
      {
         for (int j = 0; j < 7; j++, x += KEY_WIDTH)
         {
            int keyNum = i * 12 + whiteIDs [j] + transpose;
            whiteKeys.add (new Key (x, 0, KEY_WIDTH, KEY_HEIGHT, keyNum));
         }
      }

      for (int i = 0, x = 0; i < 6; i++, x += KEY_WIDTH)
      {
         int keyNum = i * 12 + transpose;

         blackKeys.add (new Key ((x += KEY_WIDTH)-4, 0, KEY_WIDTH/2, KEY_HEIGHT/2, keyNum+1));
         blackKeys.add (new Key ((x += KEY_WIDTH)-4, 0, KEY_WIDTH/2, KEY_HEIGHT/2, keyNum+3));
         x += KEY_WIDTH;
         blackKeys.add (new Key ((x += KEY_WIDTH)-4, 0, KEY_WIDTH/2, KEY_HEIGHT/2, keyNum+6));
         blackKeys.add (new Key ((x += KEY_WIDTH)-4, 0, KEY_WIDTH/2, KEY_HEIGHT/2, keyNum+8));
         blackKeys.add (new Key ((x += KEY_WIDTH)-4, 0, KEY_WIDTH/2, KEY_HEIGHT/2, keyNum+10));
      }

      keys.addAll (blackKeys);
      keys.addAll (whiteKeys);

      addMouseListener (new MouseAdapter ()
      {
         public void mousePressed (MouseEvent e)
         {
            // Identify the key that was pressed. A null
            // value indicates something other than a key
            // was pressed.

            theKey = getKey (e.getPoint ());

            // If a key was pressed ...

            if (theKey != null)
            {
               // Tell key to start playing note.

               theKey.on ();

               // Update key's visual appearance.

               repaint ();
            }
         }

         public void mouseReleased (MouseEvent e)
         {
            if (theKey != null)
            {
               // Tell key to stop playing note.

               theKey.off ();

               // Update key's visual appearance.

               repaint ();
            }
         }

         public void mouseExited (MouseEvent e)
         {
            // This method is called if the mouse is moved
            // off the keyboard component. If a key was
            // pressed, we release that key.

            if (theKey != null)
            {
               // Tell key to stop playing note.

               theKey.off ();

               // Update key's visual appearance.

               repaint ();

               // The following assignment is needed so
               // that we don't execute the code within
               // mouseReleased()'s if statement should we
               // release a key after exiting the keyboard
               // component. There is no need to tell the
               // key to stop playing a note after we have
               // already told it to do so. Furthermore,
               // we prevent an unnecessary repaint.

               theKey = null;
            }
         }

         public Key getKey (Point point)
         {
            // Identify the key that was clicked.

            for (int i = 0; i < keys.size (); i++)
            {
               if (((Key) keys.get (i)).contains (point))
                  return (Key) keys.get (i);
            }

            return null;
         }
      });
   }

   public boolean chooseInstrument (int instrumentID)
   {
      if (channel == null)
         return false;

      // Select new instrument based on ID.

      channel.programChange (instrumentID);
      return true;
   }

   public String [] getInstruments ()
   {
      if (synthesizer == null)
         return null;

      Instrument [] instruments = synthesizer.getLoadedInstruments ();

      String [] ins = new String [instruments.length];
      for (int i = 0; i < instruments.length; i++)
         ins [i] = instruments [i].toString ();

      return ins;
   }

   public void paint (Graphics g)
   {
      Graphics2D g2 = (Graphics2D) g;

      Dimension d = getSize ();
      g2.setBackground (getBackground ());
      g2.clearRect (0, 0, d.width, d.height);

      g2.setColor (Color.white);
      g2.fillRect (0, 0, 42*KEY_WIDTH, KEY_HEIGHT);

      for (int i = 0; i < whiteKeys.size (); i++)
      {
         Key key = (Key) whiteKeys.get (i);

         if (key.isNoteOn ())
         {
            g2.setColor (KEY_BLUE);
            g2.fill (key);
         }

         g2.setColor (Color.black);
         g2.draw (key);
      }

      for (int i = 0; i < blackKeys.size (); i++)
      {
         Key key = (Key) blackKeys.get (i);

         if (key.isNoteOn ())
         {
            g2.setColor (KEY_BLUE);
            g2.fill (key);

            g2.setColor (Color.black);
            g2.draw (key);
         }
         else
         {
            g2.setColor (Color.black);
            g2.fill (key);
         }
      }
   }

   public void turnOff ()
   {
      if (synthesizer == null)
         return;

      // Attempt to unload all instruments.

      synthesizer.unloadAllInstruments (synthesizer.getDefaultSoundbank ());

      // Close the synthesizer so that it can release any system resources
      // previously acquired during the open() call.

      synthesizer.close ();
      synthesizer = null;
   }

   public boolean turnOn ()
   {
      try
      {
         if (synthesizer == null)
         {
            // Obtain the default synthesizer.

            if ((synthesizer = MidiSystem.getSynthesizer ()) == null)
               return false;

            // Open the synthesizer so that it can acquire any system
            // resources and become operational.

            synthesizer.open ();
         }
      }
      catch (Exception e)
      {
         e.printStackTrace ();
         return false;
      }

      // Attempt to load all instruments.

      synthesizer.loadAllInstruments (synthesizer.getDefaultSoundbank ());

      // Obtain the set of MIDI channels controlled by the synthesizer.

      MidiChannel [] midiChannels = synthesizer.getChannels ();

      // There must be at least one channel. Furthermore, we assume that the
      // first channel is used by the synthesizer. If you run into a problem,
      // use the index (other than 0) of the first non-null midiChannels
      // entry.

      if (midiChannels.length == 0 || midiChannels [0] == null)
      {
         synthesizer.close ();
         synthesizer = null;
         return false;
      }

      // Identify the channel to which note messages are sent.

      channel = midiChannels [0];
      return true;
   }

   /*
      This inner class describes an instrument key that knows how to start
      sounding and stop sounding its note. Furthermore, each key knows its
      size and location.
   */

   class Key extends Rectangle
   {
      final static int ON = 0, OFF = 1, VELOCITY = 64;

      int kNum, noteState = OFF;

      public Key (int x, int y, int width, int height, int num)
      {
         super (x, y, width, height);
         kNum = num;
      }

      public boolean isNoteOn ()
      {
         return noteState == ON;
      }

      public void off ()
      {
         setNoteState (OFF);
         //Keyboard.this.repaint ();

         // Send the key number to the channel so the note stops sounding.
         // Also send a keyup VELOCITY, which might affect how quickly the
         // note decays.
         if (channel != null)
            channel.noteOff (kNum, VELOCITY);
      }

      public void on ()
      {
         setNoteState (ON);
         //Keyboard.this.repaint ();

         // Send the key number (0 - 127, where 60 indicates Middle C) to the
         // channel so the note starts to sound. Also send a keydown VELOCITY,
         // which indicates the speed at which a key was pressed (the loudness
         // or volume of the note).

         if (channel != null)
            channel.noteOn (kNum, VELOCITY);
      }

      public void setNoteState (int state)
      {
         noteState = state;
      }
   }
}

Listing 2 excerpts a source file called MidiSynth.java, part of a demo that Sun created several years ago. To save time developing the keyboard component's internal organization and painting logic, I excerpted some code from MidiSynth.java and made modifications.

Listing 2 depends on MIDI to play music. If you have never experienced working with MIDI from a Java perspective, I encourage you to read the MIDI chapters in the Java Sound documentation that accompanies the Java SDK.

After compiling Listings 1 and 2, you will want to run this applet. Before you can do that, however, you must describe the applet to appletviewer via HTML. Listing 3 provides the needed HTML.

Listing 3. Javano.html

<applet code=Javano.class width=675 height=125>
</applet>

After you've played with Javano, you will probably become aware of its various limitations. There are many things you can do to improve this applet. Here are three suggestions:

- Replace the VELOCITY constant with a GUI feature for choosing the volume

A Java-based music editor can help you compose music for your computer games. Javano gets you started by offering a piano-like keyboard and the means for choosing an instrument. After adding features to save and play back your compositions, press multiple keys simultaneously, and choose an appropriate volume (from the GUI), you will be well on your way to developing a music editor for your computer gaming needs.
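The key-number arithmetic that Listing 2's constructor uses for the white keys (i * 12 + whiteIDs[j] + transpose) can be checked in isolation. A small standalone sketch; the class and method names are mine, while the offsets and the transpose value are taken from the listing:

```java
// Standalone check of the key-number arithmetic used by Keyboard's
// constructor in Listing 2. Names below are mine, not part of the applet.
public class KeyMath {
    // Semitone offsets of the seven white keys within one octave.
    static final int[] WHITE_IDS = { 0, 2, 4, 5, 7, 9, 11 };
    static final int TRANSPOSE = 24; // same shift Listing 2 applies

    // MIDI note numbers for the white keys of octave i (0-based).
    static int[] whiteKeyNotes(int octave) {
        int[] notes = new int[WHITE_IDS.length];
        for (int j = 0; j < WHITE_IDS.length; j++) {
            notes[j] = octave * 12 + WHITE_IDS[j] + TRANSPOSE;
        }
        return notes;
    }

    public static void main(String[] args) {
        // Octave 3 contains MIDI note 60 (Middle C): 3*12 + 0 + 24 = 60.
        for (int n : whiteKeyNotes(3))
            System.out.print(n + " ");
        System.out.println();
    }
}
```

This makes it easy to see why Listing 2's keyboard starts two octaves below Middle C: octave 0 begins at note 24.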
http://www.javaworld.com/javaworld/jw-10-2005/jw-1024-funandgames.html
My bad! Thanks! :)

OK, and my next question... I understand why this would be bad. But this is the situation when I define Test t = new Extending(); so I can call t.getX(); but what if I define Extending e = new Extending();... Now I can see that both constructors are "executed" when I run the main method. But why? Isn't it enough to just execute the Extending constructor?

public class Extending extends Test {
    public Extending() {
        System.out.println("printing EXTENDING constructor");
    }

    @Override
    public void testMethod() { ...

Hi, I would like to make a .jar application which will be able to save data without using any database (Postgres...), but I don't know what type of data storage I should use to make the application...

I think Scanner(System.in) doesn't solve my problem. I am connected via the command line in Windows and I am sending commands. I just start the server and connect to it using the Windows command line: telnet localhost <portnumber>. I have no GUI. I just need to know how to get the username and password from the command line and save them...

Hi, I would like to add 2 methods: username and password. The client enters the username into the command line and the server saves it. Then the client enters the password. The password should be the sum of the ASCII...

It works now! :) The parameter Socket s shouldn't be there :)

Hi, I wrote this code, but when I connect via telnet, the server gives me no response :/ Please help?

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
...

Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: -1
    at java.util.ArrayList.elementData(ArrayList.java:371)
    at java.util.ArrayList.get(ArrayList.java:384)
    at ...

Hi, I have passed my exam in Java programming, but I have to solve one more homework. It's about a B-tree implementation :/ I don't really know how to do it. I have a few lines of code but it's no...

My problem is that it doesn't work... I just want to add two arrays and save the result to a new array.
And I don't know how I can use this function with the specific numbers?

public static int[] arrayAdd(int[] p, int[] q) {
    for (int i = 0; i < p.length; i++) {
        for (int j = 0; j < q.length; j++) {
            int add [] = p[i] + q[j];...

package du1;

import java.util.Scanner;

public class Du1 {

    /**
     * Here is the method solving the factorial problem.
     * Fill in the method body correctly.
     */
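For reference, a corrected version of the arrayAdd method quoted in the posts above: element-wise addition needs a single loop and a single result array allocated once, not nested loops assigning an int to a freshly declared array. This sketch assumes both arrays have the same length; the class name is mine.

```java
import java.util.Arrays;

// Corrected element-wise version of the arrayAdd method from the post.
public class ArrayAdd {
    public static int[] arrayAdd(int[] p, int[] q) {
        if (p.length != q.length)
            throw new IllegalArgumentException("arrays must be the same length");
        int[] sum = new int[p.length];      // allocate the result once
        for (int i = 0; i < p.length; i++) {
            sum[i] = p[i] + q[i];           // add matching elements
        }
        return sum;
    }

    public static void main(String[] args) {
        // "Use this function with the specific numbers":
        int[] result = arrayAdd(new int[] { 1, 2, 3 }, new int[] { 10, 20, 30 });
        System.out.println(Arrays.toString(result)); // [11, 22, 33]
    }
}
```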
http://www.javaprogrammingforums.com/search.php?s=97bcbe69367f9ba1a2d239fff1b97685&searchid=1365030
I came across this today. You can write Rake tasks that accept arguments, called like this:

rake tweets:send[cpytel]

You define the rake task like this:

namespace :tweets do
  desc 'Send some tweets to a user'
  task :send, [:username] => [:environment] do |t, args|
    Tweet.send(args[:username])
  end
end

Unfortunately, by default zsh can't parse the call to the rake task correctly, so you'll see the error:

zsh: no matches found: tweets:send[cpytel]

So you'll need to run it like this:

rake tweets:send\[cpytel\]

Or this:

rake 'tweets:send[cpytel]'

However, this is controlled by the NOMATCH zsh option:

    If a pattern for filename generation has no matches, print an error, instead of leaving it unchanged in the argument list. This also applies to file expansion of an initial ~ or =.

Used like so:

unsetopt nomatch
rake tweets:send[cpytel]

You can unset this in your .zshrc (we've set it in our dotfiles, so if you are using ours then you're safe) without much loss in functionality for the typical case.
http://robots.thoughtbot.com/how-to-use-arguments-in-a-rake-task
Creating your C++ file

You can begin coding your HelloWorld program. The .cpp file that you create will be saved in the project folder you just created in Creating a Makefile project.

Files are edited in the C/C++ editor located to the right of the C/C++ Projects view. The left margin of the C/C++ editor, called the marker bar, displays icons for items such as bookmarks, breakpoints, and compiler errors and warnings.

For more information about:

- The editor area and marker bar, see Workbench User Guide > Reference > User interface information > Views and editors > Editor area
- The marker bar icons, see Workbench User Guide > Reference > User interface information > Icons and buttons > Editor area marker bar

To create a C++ file:

- In the Project Explorer view, right-click the HelloWorld project folder, and select New > Source File.
- In the Source file: field, type main.cpp. By default the source folder should be your project folder. The template selected is probably Default C/C++ Source Template.
- Click Finish.
- A comment template probably appears at the top of an otherwise empty file. Type the code, exactly as it appears below, in the editor. Or you can paste it in from this help file.

#include <iostream>
using namespace std;

int main () {
    // Say HelloWorld five times
    for (int index = 0; index < 5; ++index)
        cout << "HelloWorld!" << endl;

    char input = 'i';
    cout << "To exit, press 'm' then the 'Enter' key." << endl;
    cin >> input;
    while (input != 'm') {
        cout << "You just entered '" << input << "'. "
             << "You need to enter 'm' to exit." << endl;
        cin >> input;
    }
    cout << "Thank you. Exiting." << endl;
    return 0;
}

- Click File > Save. Your new .cpp file is displayed in the Project Explorer view. Your project now contains main.cpp.

Before you can build your HelloWorld project, you must create a makefile.

Next: Creating your makefile
Back: Creating your project
http://help.eclipse.org/mars/topic/org.eclipse.cdt.doc.user/getting_started/cdt_w_newcpp.htm
This blog post has been updated to reflect changes in the VS 2010 RC and later builds. If you are on VS 2010 Beta 2, please upgrade before trying this post.

One question that keeps coming up is: "How do I configure the search properties used by the recorder\code generation for identifying a control?" For example, how to inform the recorder that the Name of a certain control is dynamic and should not be used to identify the control. This feature per se is missing in the VS 2010 release of Coded UI Test. However, there is a fairly simple workaround to this issue using the extensibility support of Coded UI Test.

Configuring search properties may mean adding, removing, or changing search properties. Using the extensibility support, removing and changing search properties is possible, but not adding (unless you know the property value too). The extensibility sample for this is attached to this post as RemoveUnwantedProperties.zip. Before starting with this sample, ensure you have read at least the first two posts in the Coded UI Test Extensibility series.

Explaining the sample:

- The sample hooks into the UITest.Saving event. This event is raised before the UITest file is saved and code is generated for it.
- It then iterates through all the UI objects. The UI objects are stored in a hierarchical tree. The sample uses recursion to do a pre-order traversal of the tree. (Check for the calls to the RemoveUnwantedSearchCondition() method.)
- During traversal, for each UI object, it calls the GetRedundantSearchProperties() method in the sample. Based on the property names returned by this method, it removes those properties, if present, from the UI object's search criteria.
- The current implementation of the GetRedundantSearchProperties() method simply removes the "Name" property for all but a few Win32\WinForms controls from the search criteria. Depending on your need, you can customize this function accordingly.
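The real plugin is C# code against the Coded UI Test API. Purely as an illustration of the traversal shape described in the steps above, here is a Java sketch with hypothetical stand-in types; each node stands in for a recorded UI object with a map of search properties, and the redundant names are stripped from a node before recursing into its children, mirroring RemoveUnwantedSearchCondition():

```java
import java.util.*;

// Pre-order traversal that strips redundant search properties.
// All names here are mine; the real sample targets the Coded UI Test API.
public class PropertyPruner {
    static class UiNode {
        final Map<String, String> searchProperties = new LinkedHashMap<>();
        final List<UiNode> children = new ArrayList<>();
    }

    // Stand-in for GetRedundantSearchProperties().
    static final Set<String> REDUNDANT = Set.of("Name");

    static void removeUnwanted(UiNode node) {
        node.searchProperties.keySet().removeAll(REDUNDANT); // this node first...
        for (UiNode child : node.children) {
            removeUnwanted(child);                           // ...then each subtree
        }
    }

    public static void main(String[] args) {
        UiNode root = new UiNode();
        root.searchProperties.put("Name", "btnOk_1234");   // dynamic, unwanted
        root.searchProperties.put("ControlType", "Button");
        UiNode child = new UiNode();
        child.searchProperties.put("Name", "txtUser_5678");
        root.children.add(child);

        removeUnwanted(root);
        System.out.println(root.searchProperties); // {ControlType=Button}
    }
}
```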
Please note that since the sample is removing a search property, it might make the search criteria weak and result in playback failure. This is the reason for “claiming” this only a “workaround”. RemoveUnwantedProperties.zip Hi, what VS2010 version I need to use your example? RC? It seems namespace ‘CodeGeneration’ doesn’t exist in Beta 2. 🙁 Thanks. Hi Buck, The sample is for Beta 2 only. The possibility could be that some reference is not resolved properly. Check for any warning in the reference particularly Microsoft.VisualStudio.TeamTest.UITest.CodeGeneration.dll. You might have to fix the path to this dll. Thanks. Hi, you’re right. Thanks. 🙂 Great content! The extensibility series totally rocks! I wish the comments on some of the older posts were still enabled for folks to ask qs on relevant posts. Is there a reason you want to close commenting on older posts? Anu – All the posts are open to commenting. I have not done anything deliberate to block comments. Hi Gautam, I need help on using separate UIMAP.uitest files… Here is my scenario. 1) I have created a new folder "Screen1" under my project "TSApp". 2) I added new CodedUI test by right-cliking "Screen1" folder –> "Screen1TEST.cs" 3) then added new CodedUIMap item by right-cliking "Screen1" folder –> "Screen1UIMAP.uitest" and recorded some actions –> saved as "Method_Add". Here it created "Screen1UIMAP.cs" and "Screen1UIMAP.Designer.cs" files 4) then i went to "Screen1TEST.cs" file and trying to call recorded "Method_Add" as TSApp.Screen1.Screen1UIMAP.Method_Add() but here it not listing out this "Screen1UIMAP" file under TSApp.Screen1.. Anything i went wrong? Can u give me an idea on this…OR Tell me the approach to use separate UIMAP file and creating TEST based on that… Thanks in advance, Shanmugavel. Please use Coded UI Test forum – social.msdn.microsoft.com/…/threads for such questions. Thanks. 
thanks for the great Post; i am trying this for my project to customize href;Pageurl ; but not sure if i got this even though i have copied this demo to the required folder still not getting desired , any help how can i debug this ? @subbu – What is the error that you are seeing? Note that the path to copy the files are different on 32bit machine vs 64 bit machine. Check for details in blogs.msdn.com/…/2-hello-world-extension-for-coded-ui-test.aspx. Try the above Hello, World plugin to see if you are able to get that working. Hi Guatam, Can you elaborate some more on "Using the extensibility support, remove and change of search properties is possible but not add (unless you know the property value too)."? I was having trouble with CUIT recognizing some MFC controls after receiving a new test build. I want to help CUIT find the controls with perhaps adding search properties. Most likely this will need to be done by hand coding and not recording. Can you point me in the right direction? Thanks in advance. The above extensibility hook is during saving of the generated file. Hence it can only remove (or make certain changes) to the properties captured by the recorder. To add properties, we need a hook into the recorder to ask it to capture the properties in first place which is not there today. Thanks.
https://blogs.msdn.microsoft.com/gautamg/2010/02/02/configure-search-properties-used-by-recordercode-generation/
paypal2ofx1.pl is a program to download PayPal transactions and convert them to OFX v1.0.3 files to import into accounting software like GnuCash.

paypal2qif.pl is a program to download PayPal transactions and convert them to QIF files. These can be imported into accounting software like GnuCash or Quickbooks.

These are command-line Perl programs designed for advanced users. They are provided free of charge for anyone to use. If you like these programs you can donate to the author to support future development.

You will need a PayPal business account to download transactions from PayPal automatically. You'll need to configure your merchant username, password and signature for the PayPal Signature API (see the section Creating an API signature).

Study the README.txt and INSTALL.txt files for a description of how to install, set up and use these programs. These files are also included in the downloads below.

The file archives available for download below contain all programs.

You can contact the author by writing an email to: Dirk Jagdmann <doj@cubic.org>

If you find this program useful you can support future development by donating any amount you find appropriate to the author's PayPal account using the following button.

Future features may include:
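As a rough illustration of what such a conversion involves (this is not the scripts' actual logic, and the field choices here are assumptions for the sketch), here is a minimal Java program emitting one QIF record. QIF is a line-oriented format: D is the date, T the amount, P the payee, and ^ ends a record.

```java
// Rough illustration of a transaction-to-QIF conversion, reduced to a
// single hard-coded record. The class name and field layout are mine;
// the real scripts parse PayPal's own download format.
public class ToQif {
    // Turn one date/amount/payee triple into a QIF record.
    static String toQifRecord(String date, String amount, String payee) {
        return "D" + date + "\n"    // transaction date
             + "T" + amount + "\n"  // amount (negative = money out)
             + "P" + payee + "\n"   // payee
             + "^";                 // end-of-record marker
    }

    public static void main(String[] args) {
        StringBuilder qif = new StringBuilder("!Type:Bank\n"); // account-type header
        qif.append(toQifRecord("01/15/2021", "-42.00", "Example Vendor"));
        System.out.println(qif);
    }
}
```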
https://llg.cubic.org/csv2iif/
Description of problem:

When attempting to re-compile any existing product that compiled under Fedora Core 6 (test 4/pre-final), things would compile to completion without error. Now, when attempting to compile things like running "make menuconfig" on the kernel, mplayer, or NVidia's kernel module (I know you guys don't support it, I'm just adding it FYI), there is an internal compiler error which looks like a segmentation fault. I've run strace on it, but I'm not familiar enough to know exactly what is going on. It seems that there is something that occurs with a memory copy or allocation.

Version-Release number of selected component (if applicable):

Kernel 2.6.18-2798-1.fc6
Gcc version 4.1.1
Glibc version 2.5-3

How reproducible:

Very reproducible. Got a confirmation post from Steve Grubb (sgrubb@redhat.com) that he is seeing something similar.

Steps to Reproduce:
1. Make sure the versions of code above are installed on an x86_64 machine.
2. cd to /usr/src/kernels/2.6.18-1.2798.fc6-x86_64 (have kernel-devel installed)
3. make menuconfig

Actual results:

[root@jaguar 2.6.18-1.2798.fc6-x86_64]#/cc603YAn.out file, please attach this to your bugreport.
make[1]: *** [scripts/kconfig/conf.o] Error 1
make: *** [menuconfig] Error 2

Expected results:

Entered the menuconfig curses screen.

Additional info:

So I used the kernel as an example, but it occurs with other programs that would compile before but not now: the NVidia driver and mplayer are the two example /tmp/cc<hash>.out files that I've included.

Created attachment 138973 [details]
Bzip2 tar file of the /tmp/cc<hash>.out files from compiling things

Can't reproduce this with gcc-4.1.1-30 nor any other gcc I have around. On your box, is there anything gcc doesn't segfault on? Some of the dumps are really trivial one-liners; if the bug was in gcc, it definitely wouldn't pass its tens of thousands of tests run during the build. So, I'd say either you have hardware problems, or some kernel bug.
I created a simple make file and C program to show that the basics work:

<hello.c>
#include <stdio.h>

int main()
{
    printf("Hello World\n");
    return(0);
}
</hello.c>

<Makefile>
# Make file for Hello.c
hello.o:
	$(CC) $(CFLAGS) -o hello.o hello.c

clean:
	rm hello.o
</Makefile>

I'll try a diagnostic CD to see if something is going bad with my processor, memory, or drives. If you have any other tests that I could try that may help pinpoint this issue, let me know.

There is another report of similar problems, and that is even on another architecture. See:

Unfortunately this note is also remarkably short on essential details; in particular it also does not mention what _really_ bombs out. Some guesses about possible reasons based on past experiences:

- out of space on /tmp
- something messed up in a file system used by /tmp
- gcc optimizer spooked by something (not necessarily easy to reproduce)

Still, I did not run into gcc exceptions for reasons like the above for quite a while now.

Looks like I'm having hardware problems. I used two different CPU stress tests from ubcd 3.4 and both came up with errors. Popped my case open and found that a fan had stopped working, so I'll be buying a new MB and CPU soon. Thanks for your time.

Jonathan
https://bugzilla.redhat.com/show_bug.cgi?id=211614
X10 is a programming language with extended static checking. What got me looking at it was the following paper:

Constrained Types for Object-Oriented Languages, N. Nystrom, V. Saraswat, J. Parlsberg and C. Grothoff, OOPSLA, 2008. [DOI] [PDF]

Constrained types (which are a form of dependent type) are quite interesting to me, since Whiley supports something similar. A simple example from the paper is this:

class List(length:int){length >= 0} { ... }

This is a constrained list type whose constraint states that the length cannot be negative. I find the notation here a bit curious. X10 divides fields up into two kinds: properties and normal fields. The distinction is that properties are immutable values, whilst fields make up the mutable state of an object. Thus, constraints can only be imposed over the properties of a class. This implies our constrained list cannot have anything added to it, or removed from it. But, I suppose we can still change the contents of a given cell.

Constraints can also be given for methods, like so:

def search(value: T, lo: int, hi: int) {0 <= lo, lo <= hi, hi < length}: ...

The first question that springs to mind here is: what can we do inside a constraint? Obviously, we've already seen properties, parameters and ints being used ... but what else? In particular, can we call impure methods from constraints? Unfortunately, I don't have a definite answer here. As far as I can tell, X10 has no strong notion of a pure function. The spec specifically states that X10 functions are "not mathematical functions". On the other hand, I haven't seen a single constraint which involves a method invocation, so perhaps you simply can't call methods/functions from constraints. Sadly, the spec is rather brief on this point.

An interesting design choice they've made with X10 is to rely on "pluggable constraint systems", which presumably stems from work on "pluggable type systems" (see e.g.
this): The X10 compiler allows programs to extend the semantics of the language with compiler plugins. Plugins may be used to support different constraint systems. Now, let’s be clear: i’m not a fan of this. The problem is really that the meaning of programs is no longer clearly defined, and relies on third-party plugins which may be poorly maintained, or subsequently become unavailable, etc. I think the problem is compounded by the following: If constraints cannot be solved, an error is reported To me, this all translates into the following scenario: “I download and compile an X10 program, but it fails telling me I need such and such plugin; but, it turns out, such and such author is not maintaining it any more and I can’t find it anywhere.” I’m assuming here that it will be obvious which plugins you need to compile a given program. If not, then you’re faced with a real challenge deciding which plugin(s) you need. Anyway, that’s my 2c on X10 … let’s see how it pans out!! UPDATE: I have been reliably informed that constraints may call “property methods” which are effectively macros that expand inline. Thus, they are not true functions and cannot, for example, recurse. Now, let’s be clear: i’m not a fan of this. The problem is really that the meaning of programs is no longer clearly defined, and relies on third-parties plugins which may be poorly maintained, or subsequently become unavailable, etc. So… how is this any different from the current situation with software libraries? Hey Andrew, So, I do agree with you here, up to a point. In fact, messing around trying to find libraries when compiling some program is what got me thinking about this. Having these plugins as part for the compiler just seems more fundamental to me. But, perhaps people will end up distributing the necessary plugins with their code. And, of course, i’m sure developers would naturally gravitate towards plugins that are well-known, widely used and generally available. 
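As an aside on the constrained List example above: languages without constrained types fall back on run-time checks, enforcing at construction what X10 would check statically. A minimal Java sketch of that fallback (class and field names are mine):

```java
// Run-time analogue of X10's  class List(length: int){length >= 0}.
// The invariant is rejected when violated, but only when the program runs,
// not at compile time as a constrained type system would allow.
public final class BoundedList {
    private final int length; // immutable, like an X10 property

    public BoundedList(int length) {
        if (length < 0)
            throw new IllegalArgumentException("length must be >= 0, got " + length);
        this.length = length;
    }

    public int length() { return length; }

    public static void main(String[] args) {
        System.out.println(new BoundedList(3).length()); // 3
        try {
            new BoundedList(-1);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The difference is when the error shows up: here a violating call compiles fine and fails at run time, whereas a constrained type rejects it during type checking.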
I am in the process of reading the X10 spec, and I have a question about atomic blocks; maybe somebody can explain. It says that the atomic block "is executed by an activity as if in a single step during which all other concurrent activities in the same place are blocked". Does it mean that I can never have parallel processing of unrelated pieces of data just because they happen to reside in one place? In a similar scenario in Java it will be quite different: synchronized blocks can be processed simultaneously if they are using different monitors.

Hi Leonid,

I'm not an expert on X10. However, atomic blocks have been proposed for lots of languages, including Java. The problem with synchronisation is that you have to synchronise on something. Sounds weird, but imagine you have two array lists and you want to move an item from one to the other, whilst ensuring that no other thread can ever see the intermediate state where the item is in neither. To do this, you have to lock both array lists, and the problem is that the order in which you do it can lead to deadlock if other threads are doing something similar. Atomic blocks help this situation by ensuring that intermediate state arising during the block is never seen by others. I believe they are a better primitive to use than traditional synchronisation.

I would expect that the X10 compiler would allow atomic blocks that cannot interfere with each other to run in parallel. This is probably why the spec says: "For the sake of efficient implementation X10 v2.0 requires that the atomic block be analyzable, that is, the set of locations that are read and written by the BlockStatement are bounded and determined statically." This enables the compiler to figure it all out.

The following is a good paper talking about this kind of thing for Java:

Dave Cunningham, Khilan Gudka, Susan Eisenbach: Keep Off the Grass: Locking the Right Path for Atomicity. Proceedings of Compiler Construction, 2008.
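The two-list move discussed in that reply can be sketched with plain Java monitors; an atomic block would make this locking discipline implicit. In the sketch below (all names are mine), both lists are locked in a single global order so that two concurrent opposite-direction moves cannot deadlock:

```java
import java.util.*;

// Moving an item between two lists without exposing the intermediate
// state. Locking both lists in a global order (identity hash, with a
// tie-breaker lock for the rare collision) is the manual discipline an
// atomic block would provide for free.
public class AtomicMove {
    private static final Object TIE = new Object();

    public static <T> void move(List<T> from, List<T> to, T item) {
        int hf = System.identityHashCode(from);
        int ht = System.identityHashCode(to);
        if (hf == ht) {
            // Rare hash tie: serialize through a global lock, then take both.
            synchronized (TIE) {
                synchronized (from) {
                    synchronized (to) { doMove(from, to, item); }
                }
            }
            return;
        }
        Object first  = (hf < ht) ? from : to;  // always lock the smaller
        Object second = (hf < ht) ? to   : from; // hash first
        synchronized (first) {
            synchronized (second) {
                doMove(from, to, item);
            }
        }
    }

    private static <T> void doMove(List<T> from, List<T> to, T item) {
        if (from.remove(item)) to.add(item); // intermediate state never visible
    }

    public static void main(String[] args) {
        List<String> a = new ArrayList<>(List.of("x", "y"));
        List<String> b = new ArrayList<>();
        move(a, b, "x");
        System.out.println(a + " " + b); // [y] [x]
    }
}
```

Every other access to the lists would have to follow the same convention for this to be safe, which is exactly the fragility atomic blocks aim to remove.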
http://whiley.org/2010/08/05/the-x10-programming-language/
I had blogged on property boilerplate and the work of my graduate student Alexandre Alves in the summer, but I didn't get much reaction then. Recently, though, there has been a flurry of blogs on native property syntax. Let's try this again.

Many programmers are sick and tired of boring, repetitive boilerplate code for JavaBeans properties. Here is a simple code example from the JBoss EJB3 tutorial: 64 lines.

Where do the arrows come in, you ask. There are several proposals for a property access operator. The arrow got people excited. Everyone loves to hate the arrow (see here, here, and here). One commenter writes: "It has to be '.' (dot). Anything else would look silly."

Having no operator at all might work. Look at the EJB3 example: most of the getter and setter invocations are done through reflection anyway.

There is also worry about how properties will interact with future features such as closures. Or reified generic types. Or XML syntax. I am glad that people worry about solving (or creating) tomorrow's problems, but property boilerplate is a problem that we have today.

Before vilifying a proposal because of unsightly syntax, let's summarize what one wants in native properties. Here are some issues that have been raised. The issue that has gotten the most press, namely what operator, if any, to use for property access, seems the least important one.

Unfortunately the arrow syntax debate has become confused with the syntactic-sugar property generation debate. These are two completely separate language changes. (The former is a highly debatable language change, the latter is a no-brainer.) The property generation language change has the potential to completely change developers' interaction with frameworks, and greatly increase the robustness and clarity of our code. I discuss these property objects on my blog. So, to your points: I agree with your first 4 points, but believe strongly that reflective access to properties misses a huge opportunity.
I prefer a keyword for property generation; Remi has already shown that a context-sensitive keyword can work here. An annotation would be a misuse. And yes, generated get/set methods should act just like any other method.

Posted by: scolebourne on January 07, 2007 at 04:59 PM

It just raises so many questions. With a @Property, someone might ask what's the point? If your getters and setters are that trivial, why not just make the fields public? And we'd say, it allows you to change the behaviour in the future, and allows you to override them in a subclass... So, how would I validate these properties? (e.g., null checks, range checks etc.) What about notifying PropertyChangeListeners? And how do I override them in a subclass? What about declaring the exceptions thrown when setting, when getting doesn't throw any? How do you do read-only or write-only? I like getters and setters because they're clear and concise. It's obvious exactly what they do and how they behave. It's just natural. I just don't think that any of these new proposals are going to improve the quality of our software or get our software written any faster. It's just complicating something that's already trivial.

Posted by: benloud on January 07, 2007 at 05:18 PM

I agree, there's nothing gained with an @Property annotation: we need something actually in the semantics themselves which allows the inclusion of proper get/set semantics. Using a reserved word like 'property' will have no more of an effect than enum, and people will welcome it just as they did with enums. Secondly, a property is not the same as a member variable, so I think many people's complaints on OO/encapsulation are an illegitimate argument against properties. Finally, I'm in the C# syntax camp: if they are adding closures, I don't see why adding property syntax would be any different, putting it as a close relation to the method type.
Posted by: jhook on January 07, 2007 at 09:52 PM

The whole point of getters and setters over direct access to fields is to allow validation and computed properties. C#-style syntax is clear. Getter/setter syntax also lets you encapsulate property change events, binding, and vetoes. I see lots of clever workarounds to put this into annotations and the like, but hey, method bodies are also a nice place to put code, and it's very flexible... IDEs generate boilerplate for you. IntelliJ also lets you have a bean view of JavaBean-style properties, as I imagine most other IDEs do. - Chris

Posted by: chris_e_brown on January 08, 2007 at 12:26 AM

There's an interesting parallel discussion here: Property Support in Java, the Java Way (on JavaLobby).

Posted by: chris_e_brown on January 08, 2007 at 12:32 AM

Cay, a property can be declared using the keyword 'property' without breaking any existing code. I've blogged on such keywords. Furthermore, I've recently patched the Java compiler to provide properties that: use a keyword 'property', auto-generate the getter and setter, and allow the dot to be used to access a property. All of this is provided without breaking compatibility. Rémi

Posted by: forax on January 08, 2007 at 12:43 AM

An idea: every private member of a class receives getters and setters by default, unless it is annotated to not have them. Like:

private int i; // this will have get/set
@hidden // any annotation name..
private int j; // this will not...

Why not? We got familiarized with programming without getters/setters by default; why not with getters/setters by default? :)) All these simplification ideas run into the complex task of modifying the beliefs of a huge community..

Posted by: felipegaucho on January 08, 2007 at 01:21 AM

Actually, options are endless if the syntax allows "contextual" keywords, i.e. in one place this is a keyword, in another it's a variable name. The same way get/set is treated in JavaScript.
Say, "property" may go between the visibility modifier (if any) and the type of the field... oops, property:

public property String name; // public property
property String otherName; // package-visible property
String yetAnotherName; // package-visible field
public String nameAgain; // public field

This does not break any existing code at all and reads naturally in English. Though it's a bit verbose when it comes to overriding the default getter/setter. Probably some variation of C# or AS3 syntax is better:

public String name
  get { return default; }
  set(v /* type expression here is redundant */) { validate(v); default = v; };
public int trivialRWProperty get set; // read-write property
public Date justROProperty get { return new Date(); }; // read-only property
public int x get; // compile-time error (?)

Here we may exploit the fact that "default" is already reserved, and leave the task of generating the field name to the compiler. Depending on the usage of "default" in the code, the compiler decides whether or not a corresponding auto-generated field is necessary at all. Obviously, exceptions may be added to the get (rarely) / set (typically) accessors/mutators.

As for closures in Java, I firmly believe it's too early for this functionality. First, Java needs type inference. Without it, any attempt to create closures will result in ugly, verbose syntax. Even now I hate this:

// in some method
HashMap<String, WeakReference<MyBusinessObject>> refMap = new HashMap<String, WeakReference<MyBusinessObject>>();

Why not allow this (Groovy-like):

def refMap = new HashMap<String, WeakReference<MyBusinessObject>>();

The second move should be introducing function types and closures over class/instance methods. Only afterwards is it possible to think about "real" closures. Valery

Posted by: vsilaev on January 08, 2007 at 03:28 AM

Valery, your syntax "def var =" is ambiguous: def can be a type.
But three other syntaxes were proposed for that purpose; see my blog and the blog of Neal Gafter. Rémi

Posted by: forax on January 08, 2007 at 05:19 AM

If there is a special syntax to access properties, a client needs to know if a property (general sense) is implemented as a property (proposed Java facility). Isn't that an implementation detail a client doesn't want to know?

Posted by: hoogenm on January 08, 2007 at 06:10 AM

All this talk about the existing bean pattern being an "ugly mess" is something I don't understand. I haven't ever encountered any issues with it. Another thing that all of these proposals seem to miss is that proper support for properties should handle PropertyChangeSupport. Without that, "native" properties will be of very little value anyway. Native properties aren't a bad idea because the "syntax is ugly"; they are a bad idea because there is very little gain over the existing solutions. Yes, my IDE writes getters and setters for me. No, it doesn't read them for me, but that doesn't matter, because when I see getX() or setX(y) in the code it doesn't take extra effort to understand that code. I don't need to look at the actual getter and setter methods; everyone knows what they are supposed to do. And of course the advantage is that those methods ARE there for me to look at if I want to see them. They are there to modify and tweak as needed as well. This smells so much of a solution looking for a problem.

Posted by: swpalmer on January 08, 2007 at 06:35 AM

"Don't Break My Code with a New Keyword!" I'm really tired of hearing about how new keywords will break old code. Maybe it will be a problem for your code base, maybe not. For us, when we moved to JDK 5 we had maybe 5 or 6 instances of variables named enum. As I investigate using the nifty new JDK 6, I have literally hundreds of conflicts because Sun decided to include SwingWorker.
We unfortunately depend on an older version of SwingWorker, and so now I have to change the name of the class, and change all the import statements, blah, blah, blah. My point is, lots of things break code. Let's concentrate on building the most straightforward and easy-to-code semantics when extending the language. If it breaks the language just a little, that's OK; a little is better in the long term than confusing hacks to accommodate older code.

Posted by: aberrant on January 08, 2007 at 06:58 AM

Bound property support is critical. If a solution cannot be designed with that in mind, then I say hold off. I use properties in .NET, but I don't think too much of them. There's little or no savings in terms of lines of code, but APIs are more complicated, because sometimes you end up using a Get or Set method anyway, in addition to properties, which means in the docs you now have to look at two different reference sections. Even Microsoft has done this. But good discussion, keep it up. I wish there were a way to inject generated source code into the original .java file based on an annotation processor, and have it marked as generated code. Then we could all "have it our own way", and IDEs could understand it without needing to be enhanced for a new notation.

Posted by: jsando on January 08, 2007 at 08:25 AM

[I don't know if this story is actually true--I'd love to hear from the Java veterans.] I don't believe that's true. I am sure that Borland showed Delphi to someone at Sun whose eyes bugged out (probably from Marketing, considering how poorly we do it :-). However, the JavaBeans lead, Graham Hamilton, knew all about VB and visual editors long before then.

Posted by: tball on January 08, 2007 at 09:03 AM

For me the most common case for a property is in interfaces. As many programs work with data, passing data items back and forth is not so uncommon; a definition in an interface that says there is a property "time" should say that I can set and get the "time" to/from it.
This is also why I don't like the notion of overriding get/set; these methods should simply emit an event and never alter the value (this is also the recommendation in C#); at most a safe copy should be allowed. Virtual properties (or "abstract" in Rémi's patch) pose the problem that the implementation must write a proper get/set pair: (p.set(x); p.get().equals(x)). If these conditions are not satisfied, the usability of properties goes down, as again the behavior on get/set has to be specified separately.

Posted by: csar on January 08, 2007 at 11:59 AM

I think the discussion here is too superficial. The point is not only to avoid the setter and getter methods with some kind of new syntax. You also have to provide a solution to change the behaviour of the property. I think nothing is gained if you allow short, trivial code and make real-world code more complex. Somebody used the EJB example; would you really implement a setter without a range check? Yes, I know there are some annotations for range checks, but what kind of error do they produce and how do you handle it? And how do I check that my property takes lower-case characters only? Or you might want to limit a date to be later than some other date... The new solution has to provide short trivial properties and a simple, intuitive way to change the behaviour of the property. The bean specification also provides property change support; the new solution should provide a clean solution for that as well. If you want to change something, read the Beans specification first to really know what you want to change. I hope the discussion starts to address the whole complexity of the issue rather than "my syntax looks better than yours".

Posted by: pinus on January 08, 2007 at 12:19 PM

@pinus, there is no clean solution. There are a lot of good solutions. See: why not provide property change support?
Rémi

Posted by: forax on January 08, 2007 at 02:26 PM

Someone posted a comment including the take that "Property access just is really kinda ho hum" and this doesn't warrant a keyword. I found this very rational. I posted a reply to this comment in the long discussion posted by Mikael Grev on JavaLobby. My comment is here:

Posted by: steevcoco on January 08, 2007 at 03:51 PM

One thing I haven't really seen anyone debate anywhere, and which has saved my butt repeatedly, is break points. Maybe I'm the only one who ever does this, but when I'm trying to figure out how a variable is getting set in an object, I'll very often put a break point in the setXYZ(...) method. It gets called a couple of times, I check some stuff on the stack... AHA, that's why it's being called. This has saved me repeatedly. Maybe I'm missing something, but if we move code to this property notation, where will I put my break point? At the declaration? How will I be able to indicate I want to break just at the assignments and not the references?

Posted by: alekd on January 09, 2007 at 09:41 AM

alekd, if we have code generation with annotations there's no problem setting the breakpoint on the setter or getter, since the code is in the .class files. The IDEs need to be updated a bit, but they need that in any case. Cheers, Mikael Grev

Posted by: mgrev on January 09, 2007 at 01:59 PM

@alekd, I don't know if other IDEs have this feature, but if you use Eclipse, you can set a "write-only watchpoint" on the variable.

Posted by: dserodio on January 10, 2007 at 05:10 AM
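To make the boilerplate under discussion concrete, here is a minimal sketch of one hand-written bound property, with the getter, setter, and PropertyChangeSupport wiring that a native property syntax would generate. The Person class and its name property are illustrative, not taken from the EJB3 tutorial.

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

// One bound property, written out by hand: the boilerplate being debated.
class Person {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        String old = this.name;
        this.name = name;
        pcs.firePropertyChange("name", old, name); // notify bound listeners
    }

    public void addPropertyChangeListener(PropertyChangeListener l) {
        pcs.addPropertyChangeListener(l);
    }

    public static void main(String[] args) {
        Person p = new Person();
        final String[] seen = new String[1];
        p.addPropertyChangeListener(e -> seen[0] = (String) e.getNewValue());
        p.setName("Duke");
        System.out.println(p.getName() + " " + seen[0]); // prints "Duke Duke"
    }
}
```

Multiply this by every property on a bean and the 64-line entity from the tutorial is easy to picture; a keyword or annotation proposal stands or falls on how much of this it can generate while still leaving room for validation and change events.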
http://weblogs.java.net/blog/cayhorstmann/archive/2007/01/arrows_in_the_b.html
Regular expressions in C# help to describe complex patterns in text. These C# regular expressions are commonly known as RegEx for short. The first thing to understand is that in a RegEx expression, everything is essentially a character. The described C# Regex patterns can be used for searching, extracting, replacing, and modifying text data. C# supports regular expressions through the classes in the System.Text.RegularExpressions namespace.

Regular expressions in C# are nothing but patterns. After defining a pattern, we can validate data by searching, extracting, replacing, or modifying it. In general, all structured data has some specific pattern. For example, any URL has a specific pattern: it starts with www and ends with com or org, with the parts connected by dots. Likewise, an email id also has a specific pattern: it starts with some alphanumeric characters, then there is an '@' symbol, then more alphanumeric characters followed by a dot, which is again followed by 'com', 'org', etc. A date such as 7/12/2018 follows a certain pattern too, like the mm/dd/yy or mm/dd/yyyy format. C# regular expressions help us define such patterns.

Regular expressions look something like the ones below. The C# Regex for simple email validation is:

^[a-zA-Z0-9]{1,10}@[a-zA-Z0-9]{1,10}.(com|org|in)$

Let us see the basic information needed to write regular expressions: B for Brackets, C for Caret, D for Dollar.

C# Regex Examples

The following examples show some simple C# regular expression patterns.

Match a character that exists between a and g:
Regex: [a-g]

Match characters between a and g, with a length of exactly 3:
Regex: ^[a-g]{3}$

Match characters between a and g, with at most three characters and at least one:
Regex: ^[a-g]{1,3}$

Validate data with a fixed 7-digit numerical format, like 8743524, 6864351, etc.:
Regex: ^[0-9]{7}$

How to validate numeric data with three as the minimum length and seven as the maximum length?
^[0-9]{3,7}$

How to validate invoice numbers with formats like KTG5240, where the first three are upper-case alphabetical characters and the remaining four are digits?

^[A-Z]{3}[0-9]{4}$

Let us see a simple Regex for validating a URL:

Regex: ^www.[A-Za-z0-9]{1,10}.(com|org|in)$

It is always recommended to use verbatim literals instead of regular strings when writing regular expressions in C#. Verbatim literals start with the special prefix (@), which stops backslashes and metacharacters in the string from being interpreted as escapes. For instance, instead of writing "\\n\\m", we can write @"\n\m", which makes the string more readable.

C# Regex String Matching

The C# Regex class in the System.Text.RegularExpressions namespace wraps the interface to the regular expression engine, allowing us to perform matches and extract data from text using regular expressions. In C#, the static method Regex.Match tests whether a Regex matches a string or not. The enum RegexOptions is an optional setting for the method Regex.Match. Regex.Match returns a Match object which, if there is a match, holds the information about where the match was found. The C# Regex string match syntax is:

Match match = Regex.Match(InputStr, Pattern, RegexOptions)

Let us see an example demonstrating C# regular expressions. In this C# Regex example, we demonstrate a regex pattern matching "august 25":

using System;
using System.Text.RegularExpressions;

class program
{
    public static void Main(string[] args)
    {
        string pattern = @"([a-zA-Z]+) (\d+)";
        string input = "special occasion on august 25";
        Match match = Regex.Match(input, pattern);
        if (match.Success)
        {
            Console.WriteLine(match.Value);
        }
        Console.ReadKey();
    }
}

OUTPUT ANALYSIS

We have taken a string variable pattern to store the regex pattern. Next, input is another string variable storing the text in which we want to search for a pattern match.
And match is a variable of type Match storing the result that the static method Regex.Match returns. If the match is a success, the match value is printed to the console.

C# RegEx patterns

The following table shows the list of common C# RegEx patterns and their descriptions.

Let us see another piece of code demonstrating a C# Regex pattern for validating a simple email id, [email protected]:

using System;
using System.Text.RegularExpressions;

class program
{
    public static void Main(string[] args)
    {
        string pattern = "[A-Za-z0-9]{1,20}@[a-zA-Z]{1,20}.(com|org)$";
        string input = "Website is, contact us at [email protected]";
        Match match = Regex.Match(input, pattern);
        if (match.Success)
        {
            Console.WriteLine(match.Value);
        }
        Console.ReadKey();
    }
}

OUTPUT ANALYSIS

As you have seen in the above example code, the string variable pattern stores the regex pattern, and the variable input stores the text we give. Both pattern and input are passed as arguments to the Regex.Match method, which returns the match value if the match is successful.
https://www.tutorialgateway.org/c-regular-expressions/
assertion "(address & 0x80) == 0" failed: file "../source/twi.c", line 279, function: TWI_StartWrite
Exiting with status 1.

#include <Wire.h>

int reading = 0;

void setup()
{
  Wire.begin();        // join i2c bus (address optional for master)
  Serial.begin(9600);  // start serial communication at 9600bps
}

void loop()
{
  delay(70);                        // datasheet suggests at least 65 milliseconds

  // step 3: instruct sensor to return a particular echo reading
  Wire.beginTransmission(0x5a<<1);  // transmit to device
  Wire.write(byte(0x08));           // sets register pointer to echo #1 register
  Wire.endTransmission();           // stop transmitting

  // step 4: request reading from sensor
  Wire.requestFrom(0x5a<<1, 2);     // request 2 bytes from the sensor
  if (2 <= Wire.available())        // if two bytes were received
  {
    reading = Wire.read();          // receive high byte
    reading = reading << 8;         // shift high byte to be high 8 bits
    reading |= Wire.read();         // receive low byte as lower 8 bits
    Serial.println(reading);        // print the reading
  }

  delay(250);                       // wait a bit since people have to read the output :)
}

The Wire library expects a 7-bit slave address, so valid addresses run from 0 to 127. The addresses 0 to 7 are reserved, so the first address that can actually be used is 8. Shifting 0x5a left by one gives 0xB4, which has bit 7 set and trips the assertion above. So:

Wire.beginTransmission(0x5a<<1);  // this shifts the address out of the 7-bit range
Wire.beginTransmission(0x5a);     // use this if your address is 7-bit
Wire.beginTransmission(0x5a>>1);  // use this if your address is 8-bit
https://forum.arduino.cc/index.php?topic=388270.0;prev_next=prev
Details

Description

If a schema imports multiple schemas from the same namespace:

<xs:import ... />
<xs:import ... />

then the results are merged, per this code in the ImportUnmarshaller:

//-- check schema location, if different, allow merge
if (hasLocation)

However, in this case PersonsB imports PersonsA. What happens is that you get an error when processing the type info from PersonsA a second time, saying the info is already there. The fix is to do the following:

//-- check schema location, if different, allow merge
if (hasLocation)

Note this is related to bug CASTOR-711, which actually appears fixed, but not for this case.

Activity

It looks like your patch fixes one problem, but as far as I can tell, there are now two descriptor classes that do not compile any more. In addition, can you please attach a proper unified diff, as the above code fragment (due to Jira formatting) is quite ambiguous...

Hmm, just looking at the PersonSearch.xsd XML schema. Somehow this looks a bit odd to me. Eclipse's XML editor seems to be reporting that it cannot find the element reference to <person:Persons>. Which actually might be correct, as you are importing two XML schemas using the same namespace declaration. Wouldn't it be sufficient to import just PersonB, as PersonB.xsd includes PersonA.xsd?

Adding the following property definition to a custom builder property file resolved the compilation problems:

# Specifies an XML namespace to Java package mapping.
# There is no default mapping.
org.exolab.castor.builder.nspackages=\
  urn:active-endpoints.com:test:schemas=xml.c1935.generated

Any chance of supplying me with a patch in the form of a unified diff?
http://jira.codehaus.org/browse/CASTOR-1935
But, at the same time, I'm working on projects in Perl and C++ (which I'm also learning) and I've installed Qt on my system, which is an iMac running OS X 10.8.4. I downloaded and built Qt 5.1.1, then installed SIP and PyQt. Then I wanted to test and be sure PyQt was working, and the new Qt install overall, too. I figured it'd be quicker to test in Python than with a C++ example (and I'm having difficulties dealing with Perl, so Qt on Perl is on hold).

I found this page with some basic information on using PyQt. Other pages talk about using pyuic, but this page says it's included and that .ui files from Qt Designer will be parsed automatically by the Qt modules. So I used Qt Designer and tried to create a simple dialog, with just Cancel and Okay buttons, and I saved the file using the names specified on the linked page. Then I renamed ui_imagedialog.ui to ui_imagedialog.py (as specified in the tutorial) and created the program file by cutting and pasting this code in my text editor:

#!/usr/local/bin/python3
import sys
from PyQt5.QtWidgets import QApplication, QDialog
from ui_imagedialog import Ui_ImageDialog

app = QApplication(sys.argv)
window = QDialog()
ui = Ui_ImageDialog()
ui.setupUi(window)
window.show()
sys.exit(app.exec_())

(I added the first 2 lines.)

When I first set this up and tried it, I had not yet renamed ui_imagedialog.ui to ui_imagedialog.py, and I got an error on the 5th line ("from ui_imagedialog import..."), which tells me that in Python, if a resource is not there, I get an error at the line where I'm trying to import it. Since I didn't get any errors from the 4th line, that makes me think PyQt5 is visible and available (meaning I installed it correctly).
But once I did rename ui_imagedialog.ui to ui_imagedialog.py, I find I get this error:

Traceback (most recent call last):
  File "./QtTest.py", line 5, in <module>
    from ui_imagedialog import ImageDialog
  File "/Users/hal/Documents/Dev/Puttering/ui_imagedialog.py", line 1
    <?xml version="1.0" encoding="UTF-8"?>
    ^
SyntaxError: invalid syntax

It's having trouble with the very first line in ui_imagedialog.py. According to the information on the page with this example, the .ui file is supposed to be renamed to a .py file and then parsed by the uic module that PyQt5 uses. I figure it is being read and treated as a Python file, and not being parsed as the page the code is from states. What am I doing wrong, and what do I need to change so the .ui file is parsed as it should be?
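For anyone hitting the same traceback: renaming a .ui file to .py does not convert it. The file is still Qt Designer's XML, and Python's parser rejects it at the very first character. The usual fixes are to generate real Python with the pyuic5 tool (`pyuic5 ui_imagedialog.ui -o ui_imagedialog.py`) or to load the .ui file at runtime with PyQt5's `uic.loadUi`. The snippet below reproduces the error without needing PyQt at all, by compiling the first line of a .ui file the way the import machinery would:

```python
# The first line of a Qt Designer .ui file is XML, not Python.
ui_first_line = '<?xml version="1.0" encoding="UTF-8"?>'

try:
    # This is effectively what importing a renamed .ui file does.
    compile(ui_first_line, "ui_imagedialog.py", "exec")
except SyntaxError as err:
    print("Caught SyntaxError at line", err.lineno)
```

The SyntaxError is reported at line 1, exactly matching the traceback above, which confirms the file is simply not Python source.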
http://www.python-forum.org/viewtopic.php?p=10180
As part of working through adding OrangePi support to Home Assistant, Alastair and I decided to change to a different GPIO library for OrangePi to avoid the requirement for Home Assistant to have access to /dev/mem. I just realised that I hadn't posted updated examples of how to do GPIO output with the new library. So here's a quick post about that.

Assuming that we have an LED on GPIO PA7, which is pin 29, the code to blink the LED would look like this with the new library:

import OPi.GPIO as GPIO
import time

# Note that we use SUNXI mappings here because it's way less confusing than
# board mappings. For example, these are all the same pin:
#   sunxi: PA7 (the label on the board)
#   board: 29
#   gpio:  7
GPIO.setmode(GPIO.SUNXI)
GPIO.setwarnings(False)
GPIO.setup('PA7', GPIO.OUT)

while True:
    GPIO.output('PA7', GPIO.HIGH)
    time.sleep(1)
    GPIO.output('PA7', GPIO.LOW)
    time.sleep(1)

The most important thing there is the note about SUNXI pin mappings. I find the whole mapping scheme hugely confusing, unless you use SUNXI, and then it's all fine. So learn from my fail, people!

What about input? Well, that's not too bad either. Let's assume that you have a button in a circuit like this:

Then to read the button the polling way, you'd just do this:

import OPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.SUNXI)
GPIO.setwarnings(False)
GPIO.setup('PA7', GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

while True:
    print('Reading...')
    if GPIO.input('PA7') == GPIO.LOW:
        print('Pressed')
    else:
        print('Released')
    time.sleep(1)

Let's pretend it didn't take me ages to get that to work right because I had the circuit wrong, ok? Now, we have self respect, so you wouldn't actually poll like that.
Instead you’d use edge detection, and end up with code like this: import OPi.GPIO as GPIO import time GPIO.setmode(GPIO.SUNXI) GPIO.setwarnings(False) GPIO.setup('PA7', GPIO.IN, pull_up_down=GPIO.PUD_DOWN) def event_callback(channel): print('Event detected: %s' % GPIO.input('PA7')) GPIO.add_event_detect('PA7', GPIO.BOTH, callback=event_callback, bouncetime=50) while True: time.sleep(1) So there ya go. 2 thoughts on “Updated examples for OrangePi GPIOs”
https://www.madebymikal.com/updated-examples-for-orangepi-gpios/
jQuery is a JavaScript library which simplifies DOM operations, event handling, AJAX, and animations. It also takes care of many browser compatibility issues in the underlying DOM and JavaScript engines. Each version of jQuery can be downloaded in both compressed (minified) and uncompressed formats.

Libraries other than jQuery may also use $ as an alias. This can cause interference between those libraries and jQuery. To release $ for use with other libraries:

jQuery.noConflict();

After calling this function, $ is no longer an alias for jQuery. However, you can still use the variable jQuery itself to access jQuery functions:

jQuery('#hello').text('Hello, World!');

Optionally, you can assign a different variable as an alias for jQuery:

var jqy = jQuery.noConflict();
jqy('#hello').text('Hello, World!');

Conversely, to prevent other libraries from interfering with jQuery, you can wrap your jQuery code in an immediately invoked function expression (IIFE) and pass in jQuery as the argument:

(function($) {
  $(document).ready(function() {
    $('#hello').text('Hello, World!');
  });
})(jQuery);

Inside this IIFE, $ is an alias for jQuery only. Another simple way to secure jQuery's $ alias and make sure the DOM is ready:

jQuery(function( $ ) {
  // DOM is ready
  // You're now free to use the $ alias
  $('#hello').text('Hello, World!');
});

To summarize jQuery.noConflict():

- jQuery.noConflict() - $ no longer refers to jQuery, while the variable jQuery does.
- var jQuery2 = jQuery.noConflict() - $ no longer refers to jQuery, while the variable jQuery does, and so does the variable jQuery2.

Now, there exists a third scenario: what if we want jQuery to be available only as jQuery2? Use:

var jQuery2 = jQuery.noConflict(true)

This results in neither $ nor jQuery referring to jQuery. This is useful when multiple versions of jQuery are to be loaded onto the same page.
<script src=''></script>
<script>
  var jQuery1 = jQuery.noConflict(true);
</script>
<script src=''></script>
<script>
  // Here, jQuery1 refers to jQuery 1.12.4 while $ and jQuery refer to jQuery 3.1.0.
</script>

To load jQuery from the official CDN, go to the jQuery website. You'll see a list of different versions and formats available. Now, copy the source of the version of jQuery you want to load. Suppose you want to load jQuery 2.x: click the uncompressed or minified tag, which will show you something like this:

Copy the full code (or click on the copy icon) and paste it in the <head> or <body> of your HTML. The best practice is to load any external JavaScript libraries in the head tag with the async attribute. Here is a demonstration:

<!DOCTYPE html>
<html>
  <head>
    <title>Loading jquery-2.2.4</title>
    <script src="" async></script>
  </head>
  <body>
    <p>This page is loaded with jquery.</p>
  </body>
</html>

When using the async attribute, be aware that JavaScript libraries are then loaded and executed asynchronously, as soon as they become available. If two libraries are included where the second depends on the first, and the second library happens to load and execute before the first, it may throw an error and the application may break.

jQuery is the starting point for writing any jQuery code. It can be used as a function jQuery(...) or a variable jQuery.foo. $ is an alias for jQuery, and the two can usually be interchanged (except where jQuery.noConflict(); has been used - see Avoiding namespace collisions). Assuming we have this snippet of HTML:

<div id="demo_div" class="demo"></div>

We might want to use jQuery to add some text content to this div. To do this we could use the jQuery text() function. This could be written using either jQuery or $, i.e.
jQuery("#demo_div").text("Demo Text!");

Or:

$("#demo_div").text("Demo Text!");

Both will result in the same final HTML:

<div id="demo_div" class="demo">Demo Text!</div>

As $ is more concise than jQuery, it is generally the preferred way of writing jQuery code. jQuery uses CSS selectors, and in the example above an ID selector was used. For more information on selectors in jQuery, see types of selectors.

Sometimes one has to work with pages that are not using jQuery, while most developers are used to having jQuery handy. In such situations one can use the Chrome Developer Tools console (F12) to manually add jQuery to a loaded page by running the following:

var j = document.createElement('script');
j.onload = function(){ jQuery.noConflict(); };
j.src = "";
document.getElementsByTagName('head')[0].appendChild(j);

The version you want might differ from the above (1.12.4); you can get the link for the one you need here.

Typically when loading plugins, make sure to always include the plugin after jQuery:

<script src=""></script>
<script src="some-plugin.min.js"></script>

If you must use more than one version of jQuery, then make sure to load the plugin(s) after the required version of jQuery, followed by code to set jQuery.noConflict(true); then load the next version of jQuery and its associated plugin(s):

<script src=""></script>
<script src="plugin-needs-1.7.min.js"></script>
<script>
  // save reference to jQuery v1.7.0
  var $oldjq = jQuery.noConflict(true);
</script>
<script src=""></script>
<script src="newer-plugin.min.js"></script>

Now when initializing the plugins, you'll need to use the associated jQuery version:

<script>
  // newer jQuery document ready
  jQuery(function($){
    // "$" refers to the newer version of jQuery
    // inside of this function

    // initialize newer plugin
    $('#new').newerPlugin();
  });

  // older jQuery document ready
  $oldjq(function($){
    // "$" refers to the older version of jQuery
    // inside of this function

    // initialize plugin needing older jQuery
    $('#old').olderPlugin();
  });
</script>

It is possible to use only one document ready function to initialize both plugins, but to avoid confusion, and problems with any extra jQuery code inside the document ready function, it is better to keep the references separate.

Every time jQuery is called, by using $() or jQuery(), it internally creates a new instance of jQuery. This is the source code which shows the new instance:

// Define a local copy of jQuery
jQuery = function( selector, context ) {
  // The jQuery object is actually just the init constructor 'enhanced'
  // Need init if jQuery is called (just allow error to be thrown if not included)
  return new jQuery.fn.init( selector, context );
}

Internally jQuery refers to its prototype as .fn, and the style used here of internally instantiating a jQuery object allows that prototype to be exposed without the explicit use of new by the caller.

In addition to setting up an instance (which is how the jQuery API, such as .each, .children, .filter, etc., is exposed), jQuery will internally also create an array-like structure to match the result of the selector (provided that something other than nothing, undefined, null, or similar was passed as the argument). In the case of a single item, this array-like structure holds only that item.

A simple demonstration would be to find an element with an id, and then access the jQuery object to return the underlying DOM element (this will also work when multiple elements are matched or present):

var $div = $("#myDiv"); // populate the jQuery object with the result of the id selector
var div = $div[0];      // access the array-like structure of the jQuery object to get the DOM element
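The noConflict mechanics above can be imitated in plain JavaScript, with no jQuery involved, to see why the IIFE pattern works: the outer $ belongs to another library, while the IIFE parameter shadows it locally. Everything here is a stand-in, not jQuery's actual implementation.

```javascript
// Stand-ins: another library has claimed the global `$`.
var $ = { name: "other-library" };
var fakeJQuery = function (selector) { return { selector: selector }; };

// Inside the IIFE, the parameter `$` shadows the global one.
(function ($) {
  console.log($("#hello").selector);  // "#hello" -- this `$` is fakeJQuery
})(fakeJQuery);

console.log($.name);                  // "other-library" -- the global is untouched
```

Because the shadowing is purely lexical, any number of such wrappers can coexist on one page, which is exactly what makes loading multiple jQuery versions alongside other $-using libraries workable.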
http://riptutorial.com/jquery/topic/211/getting-started-with-jquery
[TOC]

I. Overview of Scala

1.1 Introduction

Scala is a multi-paradigm programming language. The original intention of its design was to integrate various features of object-oriented programming and functional programming. Scala runs on the Java platform (the Java virtual machine) and is compatible with existing Java programs. It can also run on Java ME with the CLDC configuration. There is also a .NET platform version, but its updates lag a little behind. Scala's compilation model (independent compilation, dynamic class loading) is the same as that of Java and C#, so Scala code can call the Java class library (and the .NET implementation can call the .NET class library).

1.2 Scala installation and configuration

Scala runs on the JVM, so you need to install a JDK first. See the previous article for the installation process. First come to < > to download the Scala installation package. The version I use here is Scala 2.11.8. Windows installation is straightforward, so I won't show it here. After installation, the PATH environment variable is configured by default.

If you install on Linux, download the .tgz package, extract it to a directory of your choice, and configure the PATH environment variable accordingly. Finally, enter scala at the command line. Successfully entering the Scala REPL means the installation succeeded (just like verifying a JDK install on Linux).

II. Scala Foundations

It should be noted that in Scala, all data is an object, and methods can be called on the data itself. In Java, a method can only be called through a reference variable after the data has been assigned to it.
For example, "1".toInt can run in Scala, while nothing like it can run in Java.

2.1 Common data types

2.1.1 Value types: Byte, Short, Int, Long, Float, Double
Byte: 8-bit signed number, from -128 to 127
Short: 16-bit signed number, from -32768 to 32767
Int: 32-bit signed number
Long: 64-bit signed number
Example: define a variable in the Scala command line
val a:Byte = 10
a+10
Get: res9: Int = 20
res9 here is the name of the newly generated variable

2.1.2 Character types
Char and String; the former is a single character, the latter is a string.
For strings, you can perform interpolation in Scala, such as:
scala> val a="king"
a: String = king
scala> s"my name is ${a}"
res1: String = my name is king
Note the s in front; this is equivalent to executing "my name is " + a

2.1.3 Unit type
Equivalent to the void type in Java, commonly used as the return type of functions and methods

2.1.4 Nothing type
Generally, the type of an expression that throws an exception during execution is Nothing

2.2 Declaration and use of variables
var variableName: type = value
val variableName: type = value
//The type can be omitted, and Scala will automatically infer the type of the variable from the type of the value.
//Example: var a:Int=8
The difference between var and val: a val reference always points to the same memory address (it cannot be reassigned), although the contents at that address may change; a var reference can be re-pointed to a different memory address.
A val must also be initialized when it is defined.
//Such as:
scala> val a=2
a: Int = 2
scala> a=3
<console>:12: error: reassignment to val
a=3
 ^
scala> var b=2
b: Int = 2
scala> b=3
b: Int = 3
//As you can see, there is no way to reassign a

2.3 Preliminary use of functions

2.3.1 Scala built-in functions
Scala has many built-in functions which can be used directly, such as the various mathematical functions under the scala.math package
import scala.math._ imports all functions under math
scala> import scala.math._
import scala.math._
scala> max(2,3)
res0: Int = 3

2.3.2 User-defined functions
def functionName([parameterName: parameterType]*) : returnType = {}
//Example:
1, Summation
scala> def sum(x:Int,y:Int) : Int = x+y
sum: (x: Int, y: Int)Int
scala> sum(10,20)
res4: Int = 30
scala> var a = sum(10,20)
a: Int = 30
2, Factorial, with recursion
scala> def myFactor(x:Int) : Int = {
| //Implementation
| if(x<=1)
| 1
| else
| x*myFactor(x-1)
| }
myFactor: (x: Int)Int
scala> myFactor(3)
res5: Int = 6
//Note: there is no return statement; the last expression evaluated in the function is its return value.
//The function above has branches, so either 1 or x*myFactor(x-1) may be the last expression evaluated.

2.4 Scala conditional statements and loop statements

2.4.1 Conditional statements
if/else:
if (condition) {}
else if (condition) {}
else {}

2.4.2 Loop statements:
for loop
//Define a list to make it easy to exercise the for loop
var list = List("Mary","Tom","Mike")
println("-----for Cycle the first way-------")
for( s <- list) println(s)
// <- is the generator (extractor) in Scala; it extracts every element of the list and assigns it to s
println("-----for The second writing method of circulation-------")
//Print only the names whose length is greater than 3, by adding a guard condition.
You can add judgment for{ s <- list if(s.length > 3) } println(s) println("-----for The third writing method of circulation-------") //Print name length less than or equal to 3 plus judgment for(s <- list if s.length <= 3 ) println(s) println("-----for The fourth writing method of circulation-------") //Use the yield keyword to generate a new set //Capitalize each element in the list and return a new collection. var newList = for{ s <- list s1 = s.toUpperCase //Capitalize your name } yield (s1) //Use yield to change the processed elements into a new collection for( s <- newList) println(s) while loop: println("-----while Cyclic writing-------") //Define a loop variable var i = 0 while(i < list.length){ println(list(i)) //Self increment //i + + is not feasible in scala i += 1 } println("-----do while Cyclic writing-------") //Define loop variables var j = 0 do{ println(list(j)) j+=1 } while (j < list.length) Iteration of foreach function: println("-----foreach usage -----") list.foreach(println) / / equivalent to for (s < - list) println (s) /** *foreach description * *foreach is equivalent to a loop *list.foreach(println) uses higher-order functions (functional programming) * *There is also a loop map *The difference between foreach and map: *foreach has no return value, map has a return value *Similar in spark. 
*/
The second foreach method:
list.foreach{
case xxxxx
}
Directly use case inside it for pattern matching

2.4.3 Nested loops and break statements
/**
* Problem: count how many prime numbers there are between 101 and 200
*
* Program analysis:
* How to judge whether a number is prime:
* If the number is divisible by some integer between 2 and its square root, it is not a prime; otherwise it is a prime
* Example: for 101, test the integers from 2 to sqrt(101)
*
* Implementation:
* Define a two-level loop
* Outer level: 101 - 200
* Inner level: 2 - sqrt(outer value)
* If the judgment finds the number divisible by some integer, it is not a prime
*
*/
println("-----loop nesting-------")
var count : Int = 0 //Save the number of prime numbers
var index_outer : Int = 0 //Outer loop variable
var index_inner : Int = 0 //Inner loop variable
//until statement: x until y means from x to y, excluding y
for (index_outer <- 101 until 200 ){
var b = false //Mark whether it is divisible
breakable{
index_inner = 2
while (index_inner <= sqrt(index_outer)){
if(index_outer % index_inner == 0){
//Divisible
b = true
break
}
index_inner += 1
}
}
if (!b){
count +=1
}
}
println("The number is: " + count)
break usage: wrap the statement block to be broken out of with breakable {}, and then use the break statement to jump out of it

2.4.4 Nested loops for bubble sort
/**
* Bubble sort
*
* Algorithm analysis:
* 1, Compare adjacent elements and swap them if the first is larger than the second
* 2, Do the same for each pair of adjacent elements; after this pass, the last element is the maximum
* 3, Repeat for all elements.
*
* Program analysis:
* 1, Two-level loop
* 2, The outer loop controls the number of passes
* 3, The inner loop controls how far each pass goes, i.e. where the comparison ends
*/
println("------------Bubble sort--------------")
var listSort = LinkedList(3,9,1,6,5,7,10,2,4)
var startIndex:Int = 0
var secondIndex:Int = 0
var tmp:Int = 0
for (startIndex <- 0 until(listSort.length - 1)) {
for (secondIndex <- startIndex + 1 until(listSort.length)) {
if (listSort(startIndex) > listSort(secondIndex)) {
tmp = listSort(startIndex)
listSort(startIndex) = listSort(secondIndex)
listSort(secondIndex) = tmp
}
}
}

2.5 Parameters of Scala functions

2.5.1 Evaluation strategies for function parameters
call by value: the function argument is evaluated once, before the call
Example: def test(x:Int)
call by name: the function argument is evaluated only when it is used inside the function body
Example: def test(x: => Int)
Pay attention to the symbols between the parameter name and its type; there is an extra => symbol. Don't miss it.
Maybe this is not yet clear, so let's look at an example:
scala> def test1(x:Int,y:Int) : Int = x+x
test1: (x: Int, y: Int)Int
scala> test1(3+4,8)
res0: Int = 14
scala> def test2(x: => Int,y: => Int) : Int = x+x
test2: (x: => Int, y: => Int)Int
scala> test2(3+4,8)
res1: Int = 14
//Comparison of the execution processes:
test1 --> test1(3+4,8) --> test1(7,8) --> 7+7 --> 14
test2 --> test2(3+4,8) --> (3+4)+(3+4) --> 7+7 --> 14
//Notice that for test2 the argument stays as 3+4, not 7, until the argument is used
Here is a more obvious example:
//x is call by value, y is call by name
def bar(x:Int,y: => Int) : Int = 1
Define a looping function
def loop() : Int = loop
Call the bar function:
1, bar(1,loop)
2, bar(loop,1)
Which call will produce an infinite loop? The second one.
Analysis:
1. y is evaluated each time it is used inside the function. The bar function does not use y, so bar(1,loop) never calls loop.
2.
x is call by value: the function argument is evaluated (whether it is used or not), and only once, so bar(loop,1) evaluates loop and produces an infinite loop.

2.5.2 Function parameter types
1. Default parameters
When you do not pass a value for a parameter, the default value is used.
scala> def fun1(name:String="Tom") : String = "Hello " + name
fun1: (name: String)String
scala> fun1("Andy")
res0: String = Hello Andy
scala> fun1()
res1: String = Hello Tom
2. Named parameters
Naming the parameter determines which parameter the value is assigned to.
scala> def fun2(str:String="Good Morning ", name:String="Tom ", age:Int=20)=str + name + " and the age is " + age
fun2: (str: String, name: String, age: Int)String
scala> fun2()
res2: String = Good Morning Tom and the age is 20
//Here the name decides which default parameter is overridden
scala> fun2(name="Mary ")
res3: String = Good Morning Mary and the age is 20
3. Variable parameters
Similar to varargs in Java: the number of arguments is not fixed. Just add a * after the type of the last ordinary parameter
//Example:
//Sum an arbitrary number of values:
def sum(args:Int*) = {
var result = 0
for(s <- args) result += s
result
}
//args is the variable parameter
scala> def sum(args:Int*) = {
| var result = 0
| for(s <- args) result += s
| result}
sum: (args: Int*)Int
scala> sum(1,2,4)
res4: Int = 7
scala> sum(1,2,4,3,2,5,3)
res5: Int = 20

2.6 Lazy values
Definition: if a constant (defined with val) is lazy, its initialization is delayed until it is used for the first time.
For example:
scala> val x : Int = 10
x: Int = 10
scala> val y:Int = x+1
y: Int = 11
y is not lazy, so the computation is triggered immediately at definition
scala> lazy val z : Int = x+1
z: Int = <lazy>
z is lazy, so initialization is delayed and no computation happens at definition time.
scala> z
res6: Int = 11
//When we use z for the first time, the computation is triggered.
Extension:
The core of Spark is the RDD (data set). Spark provides many methods, called operators, to operate on RDDs.
There are two types of operators: 1. Transformation: delay loading, no calculation will be triggered 2. Action: calculation will be triggered Let's take another example: (1)Read an existing file scala> lazy val words = scala.io.Source.fromFile("H:\\tmp_files\\student.txt").mkString words: String = <lazy> scala> words res7: String = 1 Tom 12 2 Mary 13 3 Lily 15 (2)Read a file that does not exist scala> val words = scala.io.Source.fromFile("H:\\tmp_files\\studen1231312312t.txt").mkString java.io.FileNotFoundException: H:\tmp_files\studen1231312312t.txt (The system cannot find the specified file.) at java.io.FileInputStream.open0(Native Method) at java.io.FileInputStream.open(FileInputStream.java:195) at java.io.FileInputStream.<init>(FileInputStream.java:138) at scala.io.Source$.fromFile(Source.scala:91) at scala.io.Source$.fromFile(Source.scala:76) at scala.io.Source$.fromFile(Source.scala:54) ... 32 elided //Exceptions will occur scala> lazy val words = scala.io.Source.fromFile("H:\\tmp_files\\studen1231312312312t.txt").mkString words: String = <lazy> //If it is a lazy value, no exception will be generated, because it is not executed at the time of definition, so no exception will be generated 2.7 exception Similar to java, try catch finally is used to catch and handle exceptions try {} catch { case ex: exception_type1 => { Exception handling code } case ex: exception_type2 => { Exception handling code } case _:Exception => { This is to include all exception s. 
If none of the above match, it ends up here
}
} finally {
xxxx
}

2.8 Arrays
Array types:
Array[T](N) fixed-length array; the array length must be specified
ArrayBuffer variable-length array; requires importing the package: import scala.collection.mutable._
Array operations:
scala> val a = new Array[Int](10)
a: Array[Int] = Array(0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
//Initialized by default to 0
scala> val b = new Array[String](15)
b: Array[String] = Array(null, null, null, null, null, null, null, null, null, null, null, null, null, null, null)
scala> val c = Array("Tom","Mary","Andy")
c: Array[String] = Array(Tom, Mary, Andy)
scala> val c = Array("Tom","Mary",1)
c: Array[Any] = Array(Tom, Mary, 1)
//No element type is declared here; it is assigned directly, so the Array is of type Any, i.e. it can hold any type
scala> val c:Array[String]=Array("Tom","Andy",1)
<console>:11: error: type mismatch;
found : Int(1)
required: String
val c:Array[String]=Array("Tom","Andy",1)
//When Array[String] is declared, the array elements must all be strings
^
scala> val c:Array[String]=Array("Tom","Andy")
c: Array[String] = Array(Tom, Andy)
ArrayBuffer operations:
scala> import scala.collection.mutable._
import scala.collection.mutable._
mutable means changeable
scala> val d = ArrayBuffer[Int]()
d: scala.collection.mutable.ArrayBuffer[Int] = ArrayBuffer()
//Add an element to the array
scala> d += 1
res8: d.type = ArrayBuffer(1)
//Access the element at the specified index of the array: d(index)
//Delete the specified element (not the element at the specified index):
scala> d -= 3
res6: scala.collection.mutable.ArrayBuffer[Int] = ArrayBuffer(1)
Common array operations:
1, Traverse an array:
scala> var a = Array("Tom","Andy","Mary")
a: Array[String] = Array(Tom, Andy, Mary)
scala> for(s<-a) println(s)
Tom
Andy
Mary
scala> a.foreach(println)
Tom
Andy
Mary
2, Find the maximum and minimum
scala> val myarray = Array(1,2,7,8,10,3,6)
myarray: Array[Int] = Array(1, 2, 7, 8, 10, 3, 6)
scala> myarray.max
res16: Int = 10
scala> myarray.min
res17: Int = 1
3, Sort
//Sort with a comparison function
scala> myarray.sortWith(_>_)
res18: Array[Int] = Array(10, 8, 7, 6, 3, 2, 1)
scala> myarray.sortWith(_<_)
res19: Array[Int] = Array(1, 2, 3, 6, 7, 8, 10)
//Explanation: myarray.sortWith(_>_)
//Written out in full: myarray.sortWith((a,b) => {if (a>b) true else false})
(a,b)=>{if(a>b) true else false} is an anonymous function (a function without a name); it takes two parameters a and b and returns a Boolean
sortWith(_>_) is a higher-order function, that is, a function whose parameter is itself a function
Multidimensional arrays: as in Java, realized as arrays of arrays
//Define a fixed-length two-dimensional array
scala> val matrix = Array.ofDim[Int](3,4)
matrix: Array[Array[Int]] = Array( Array(0, 0, 0, 0), Array(0, 0, 0, 0), Array(0, 0, 0, 0) )
scala> matrix(1)(2)=10
scala> matrix
res21: Array[Array[Int]] = Array( Array(0, 0, 0, 0), Array(0, 0, 10, 0), Array(0, 0, 0, 0))
//Define a two-dimensional array in which each element is a one-dimensional array of non-fixed length
scala> var triangle = new Array[Array[Int]](10)
triangle: Array[Array[Int]] = Array(null, null, null, null, null, null, null, null, null, null)
scala> for(i <- 0 until triangle.length)
| triangle(i)=new Array[Int](i+1)
scala> triangle
res23: Array[Array[Int]] = Array( Array(0), Array(0, 0), Array(0, 0, 0), Array(0, 0, 0, 0), Array(0, 0, 0, 0, 0), Array(0, 0, 0, 0, 0, 0), Array(0, 0, 0, 0, 0, 0, 0), Array(0, 0, 0, 0, 0, 0, 0, 0), Array(0, 0, 0, 0, 0, 0, 0, 0, 0), Array(0, 0, 0, 0, 0, 0, 0, 0, 0, 0))

2.9 Map
Map[K,V]() — the generic types can be inferred automatically from the values passed in.
If the creation is empty, KV is of nothing type scala.collection.mutable.Map is a variable Map, and the added KV can be changed scala.collection.immutable.Map is immutable Map Example: //Initialize assignment mode 1 scala> val scores = Map("Tom" -> 80, "Mary"->77,"Mike"->82) scores: scala.collection.mutable.Map[String,Int] = Map(Mike -> 82, Tom -> 80, Mary -> 77) //Initialize assignment mode 2 scala> val chineses = Map(("Tom",80),("Mary",60),("Lily",50)) chineses: scala.collection.mutable.Map[String,Int] = Map(Tom -> 80, Lily -> 50, Mary -> 60) Mapped actions: 1,Get value in map scala> chineses("Tom") res25: Int = 80 scala> chineses("To123123m") java.util.NoSuchElementException: key not found: To123123m at scala.collection.MapLike$class.default(MapLike.scala:228) at scala.collection.AbstractMap.default(Map.scala:59) at scala.collection.mutable.HashMap.apply(HashMap.scala:65) ... 32 elided //Solution: first determine whether the key exists if(chineses.contains("To123123m")){ chineses("To123123m") } else { 1 } scala> if(chineses.contains("To123123m")){ | chineses("To123123m") | } else { | 1} res27: Int = 1 //Gets the value of the specified key, and returns - 1 if the key does not exist, similar to returning the default value scala> chineses.getOrElse("To123123m",-1) res28: Int = -1 //get method if the corresponding key does not exist, value returns none scala> chineses.get("dlfsjldkfjlsk") res29: Option[Int] = None //No error will be reported when using get method scala> chineses.get("Tom") res30: Option[Int] = Some(80) Option None Some The three types will be explained at the same time later. 
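Since get returns Option values (Some/None, which are explained in detail later), a small illustrative sketch of how such a result is typically handled may help here; the map contents and the describe function below are invented for this example, not taken from the tutorial:

```scala
// Sketch: handling the Option returned by Map.get (illustrative example).
val scores = Map("Tom" -> 80, "Mary" -> 77) // immutable Map

// get returns Some(value) when the key exists, None otherwise,
// so no exception is thrown for a missing key.
val tom: Option[Int] = scores.get("Tom") // Some(80)
val bob: Option[Int] = scores.get("Bob") // None

// Pattern matching extracts the value safely.
def describe(result: Option[Int]): String = result match {
  case Some(score) => s"score is $score"
  case None        => "no such student"
}

println(describe(tom)) // score is 80
println(describe(bob)) // no such student

// getOrElse expresses the same idea in one call.
println(scores.getOrElse("Bob", -1)) // -1
```

As shown above, both get and getOrElse avoid the NoSuchElementException that direct indexing with a missing key would throw.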
==========================================================
2, Update values in a map
scala> chineses
res31: scala.collection.mutable.Map[String,Int] = Map(Tom -> 80, Lily -> 50, Mary -> 60)
scala> chineses("Tom")
res32: Int = 80
scala> chineses("Tom")=100
scala> chineses
res34: scala.collection.mutable.Map[String,Int] = Map(Tom -> 100, Lily -> 50, Mary -> 60)
==========================================================
3, Iterating over a map
//You can use for or foreach
scala> chineses
res35: scala.collection.mutable.Map[String,Int] = Map(Tom -> 100, Lily -> 50, Mary -> 60)
scala> for(s <- chineses) println(s)
(Tom,100)
(Lily,50)
(Mary,60)
scala> chineses.foreach(println)
(Tom,100)
(Lily,50)
(Mary,60)
foreach is, in essence, a higher-order function.
==========================================================
4, Add a value to a map
scala> a+="tom"->"king"
The left side is the key, the right side is the value

2.10 Tuple
Tuples are immutable; elements cannot be added or modified.
A tuple in Scala is a collection of values of different types.
Tuple declaration:
scala> val t1 = Tuple(1,0.3,"Hello")
<console>:14: error: not found: value Tuple
val t1 = Tuple(1,0.3,"Hello")
^
scala> val t1 = Tuple3(1,0.3,"Hello")
t1: (Int, Double, String) = (1,0.3,Hello)
Tuple3 means the tuple has three elements
//You can also use the following form to define a tuple with any number of elements
scala> val t1 = (1,0.3,"Hello")
t1: (Int, Double, String) = (1,0.3,Hello)
scala> val t1 = (1,0.3,"Hello",1,12,5,"all")
t1: (Int, Double, String, Int, Int, Int, String) = (1,0.3,Hello,1,12,5,all)
//Access an element: note the special syntax
scala> t1._1 Access the first element of the tuple
res38: Int = 1
scala> t1._3 Access the third element of the tuple
res39: String = Hello
Traversing a tuple:
Note: Tuple does not provide a foreach function; we need to use productIterator
The traversal takes two steps:
1.
Use productIterator to generate an iterator
2. Traverse it
scala> t1.productIterator.foreach(println)
1
0.3
Hello
1
12
5
all

2.11 File operations
//Read a file; source itself is an iterable object over characters, single characters by default
val source = scala.io.Source.fromFile(fileName, encoding)
//Read the entire contents of the file into a string
source.mkString
//Read by line, one line at a time
source.getLines()
//Read single characters
for(c<-source) {
println(c)
}
//Read data from a URL
val source2 = scala.io.Source.fromURL(url, "UTF-8")
//For reading binary files, Scala does not provide its own classes, so the Java classes are called directly
val file=new File(filename)
val in=new FileInputStream(file)
val buffer=new Array[Byte](file.length().toInt)
in.read(buffer)
//Write a file
val out=new PrintWriter(filename)
out.println(xxxxx) to write contents
Use the Properties class to parse a properties configuration file under the resources directory:
You can define an xx.properties configuration file and use the load method to read it. The basic format of the configuration file is key=value, and the class parses it into KV format.
Example:
import java.util.Properties
object Dataloader {
def main(args: Array[String]): Unit = {
val properties = new Properties()
//Note that the compiled file path is used here, so this obtains an input stream for the file in the resources directory from the compiled classpath
val propertiesStream = Dataloader.getClass.getClassLoader.getResourceAsStream("dataloader.properties")
properties.load(propertiesStream)
println(properties.getProperty("spark.local.cores"))
}
}

3. The object-oriented features of Scala

3.1 Defining a class
The definition of a class in Scala is similar to that in Java: the class keyword is used to define a class, but there are no public or other modifiers in front of the class keyword.
Example: class Student1 { //Defining student attributes private var stuId : Int = 0 private var stuName : String = "Tom" private var age : Int = 20 //Define member method (function) get set def getStuName() : String = stuName def setStuName(newName :String) = this.stuName = newName def getStuAge() : Int = age def setStuAge(newAge : Int) = this.age = newAge } In fact, scala will automatically generate corresponding get and set methods for attributes, but the following principles should be noted: 1. By default, if the attribute is not decorated with any modifiers, it is private by default, but the automatically generated get and set methods are public 2. If the explicit declaration attribute is private (using the private keyword explicitly), such as private var a = 10, then the generated get and set methods are private. Can only be used in companion objects, or in this class 3. If private [this] var a = 10 is used to define the attribute, it means that the attribute can not be accessed externally (including associated classes), and the set and get methods will not be generated 4. If you want scala to generate a get method instead of a set method, you can define it as a constant, because the value of a constant cannot be changed ==================Essence=================== The automatically generated set and get method names are consistent with the property names, such as: var student = new Student1() student.stuId actually calls the get method of this property directly here, but the name of the method is the same student.age=10 in this case, the set method of the property is actually called 3.2 internal scala's inner class is not as complex as java's, that is, it is simple to define a class directly in the outer class. 
Example: class Student2 { //Defining student attributes private var stuName : String = "Tom" private var stuAge : Int = 20 //Define an array to save students' course scores private var courseList = new ArrayBuffer[Course]() //Define a function to add a student's course grade def addNewCourse(cname:String,grade:Int) = { //Create course grade information var c = new Course(cname,grade) //Add to student's object courseList += c } //Defining the main constructor of the course class is written after the class class Course(var courseName:String,var grade:Int){ //Defining properties //Defining functions } } You can get the objects of the inner class through the methods of the outer class. Or create an inner class object by val test = new InnerClassTest() only when the external class object is defined with val var testIn = new test.myClass("king") Note that the following way definitions are error reporting var test = new InnerClassTest(), here is var var testIn = new test.myClass("king") Personal understanding: In scala, the inner class belongs to the object of the outer class, not the outer class (this is official) So the inner class object also follows the outer class object. If the external class object is defined by var, it means that it is mutable If it changes, how can internal class objects get themselves through the reference of external class objects? Class 3.3 constructors 3.3.1 main constructor The main constructor is defined at the same time after the definition class, such as: class Student3(var stuName : String , var age:Int) //Inside the brackets are the main constructors, only some attributes. //It should be noted that the preceding var or val of the attribute variable defined in brackets must not be lost. If it is lost, the parameter can only be used as an immutable parameter within the class, not as a field of the class, neither can it be accessed externally. 
It will result in the following consequences, for example: //First of all, there are two classes, the main constructor and the difference between VaR and no var class a(var name:String) class b(name:String) object IODemo { def main(args: Array[String]): Unit = { //Here you can create objects normally val king = new a("king") val wang = new b("wang") //There's a problem here king.name This is OK wang.name This will not be accessible name This property, because it doesn't exist at all, just as class A common variable in } } //Another point is that if there is any code in the class that is not included in any methods in the class, in fact, when creating class objects, these codes will be executed, so they are actually part of the main constructor. Such as: class a(var name:String){ println("hahha") //This statement is executed when an object is created def test()={ println("test") } } 3.3.2 auxiliary constructor A class can have multiple auxiliary constructors through keywords this To achieve,Such as: class Student3(var stuName : String , var age:Int) { //attribute private var gender:Int = 1 //Define an auxiliary constructor. There can be multiple auxiliary constructors //Auxiliary constructor is a function, but its name is this def this (age : Int){ this("Mike",age) // new Student3("Mike",age) println("This is an auxiliary constructor") } def this (){ this(10)// new Student3("Mike",10) println("This is auxiliary constructor 2") } } 3.4 object object (companion object) Object object is a special object in scala. It has several characteristics 1. The content in the Object is static. So the properties and methods defined in it are static and can be called directly through the class name without creating objects 2. In scala, there is no static keyword, and all static are defined in the object. For example, the main function 3. If the name of the class is the same as the name of the object, the object is called the companion object of the class. Companion objects. 4. 
The main function needs to be written in an object, though not necessarily in the companion object
5. In essence, an object is similar to a singleton: it is static in itself, and no object needs to be instantiated to call the methods and properties defined in it. From this point of view, it can be seen as a remedy for the lack of static members in Scala
Examples of using the singleton nature of an object:
1, Generating credit card numbers
object CreditCard {
//Define a variable to save the credit card number
private [this] var creditCardNumber : Long = 0
//Define a function to generate card numbers
def generateCCNumber():Long = {
creditCardNumber += 1
creditCardNumber
}
//Test program
def main(args: Array[String]): Unit = {
//Create new card numbers
println(CreditCard.generateCCNumber())
println(CreditCard.generateCCNumber())
println(CreditCard.generateCCNumber())
println(CreditCard.generateCCNumber())
println(CreditCard.generateCCNumber())
println(CreditCard.generateCCNumber())
println(CreditCard.generateCCNumber())
}
}
2, Extended use: the App class
//Define an object that inherits the App class and omit the main function; all top-level code in the object then runs as if it were in the main function. Such as:
object AppTest extends App {
println("test") //This runs directly
}

3.5 The apply method
By now you may have a question: why is the new keyword sometimes used when creating objects, and sometimes not? The answer is the apply method. Such as:
var t1 = Tuple3(1,0.1,"Hello")
As you can see, a tuple object is created, but the new keyword is not used. In fact, when the new keyword is omitted, the apply method in the companion object of the class is called. Generally, the apply method returns an object of the created class.
Pay attention to the following points:
1. Using the apply method makes the program more concise.
2. The apply method must be written in the companion object.
Because the direct call to the apply method needs to be static, it can only be written in the object accompanying object 3. This is similar to creating objects by class name. method, In the way of java, that is to say, object is a static class, and the class name can be used directly Call the static method inside 4. Inside the apply method, new is also used to create objects of corresponding classes Example: //Define a primitive class class Student4 (var stuName : String) //Define the companion object of the above class object Student4{ //Define the apply method def apply(name : String) = { println("call apply Method") //Returns the object of the original class new Student4(name) } //Test program def main(args: Array[String]): Unit = { //Creating student objects through the main constructor var s1 = new Student4("Tom") println(s1.stuName) //Create the student object through the apply method without writing the new keyword var s2 = Student4("Mary") println(s2.stuName) } } 3.6 inheritance 3.6.1 general inheritance In scala, the extensions keyword is also used to inherit the parent class, and only single inheritance is allowed, such as: //Define parent class class Person(val name: String,val age:Int){ //Defining functions def sayHello() : String = "Hello " + name + " and the age is " + age } //Defining subclasses /** *class Emplyee(val name:String,val age:Int,val salary:Int) extends Person(name,age) *Error in the above writing *If you want to use the properties of a subclass, override the properties of the parent. 
No need to override otherwise
*
*Override means using the value in the subclass to override the value in the parent class
*
*/
class Emplyee(override val name:String,override val age:Int,val salary:Int) extends Person(name,age){
//Override a method of the parent class
override def sayHello(): String = "sayHello in subclass"
}
When overriding methods or properties of the parent class, you need to add the override keyword in front

3.6.2 Anonymous subclass inheritance
object Demo1 {
def main(args: Array[String]): Unit = {
//Create a Person object
var p1 = new Person("Tom",20)
println(p1.name+"\t"+p1.age)
println(p1.sayHello())
//Create an Employee object
var p2 : Person = new Emplyee("Mike",25,1000)
println(p2.sayHello())
//Inheritance via an anonymous subclass: a subclass without a name is called an anonymous subclass
var p3 : Person = new Person("Mary",25){
override def sayHello(): String = "In the anonymous subclass sayHello"
}
println(p3.sayHello())
}
}

3.6.3 Abstract classes and fields
An abstract class is a class defined with the abstract keyword, such as:
abstract class Teacher {
var i:Int
var name:String="king"
def test = {
}
def sayHello
def talk:Int
}
Several points to note:
1. In an abstract class, fields that are not initialized are called abstract fields or properties; if they are initialized, they are not abstract fields. Similarly, a method without an implementation is called an abstract method.
2. Abstract classes can only be used for inheritance and cannot be instantiated
3. A non-abstract subclass that inherits an abstract parent class must initialize the abstract fields and implement the abstract methods. Abstract fields can be initialized either in the main constructor of the subclass or in the class body. When they are initialized through the main constructor, initialization need not happen immediately; parameters can be passed in for initialization when the object is instantiated.
When initialized in the class body, it must be initialized immediately
4. In an abstract class, only the corresponding get method is generated automatically for an abstract property, not the set method
5. The extends keyword is also used to inherit an abstract parent class.

3.7 Traits
Scala generally supports only single inheritance, but sometimes multiple inheritance is needed, so traits exist: they allow a subclass to inherit from more than one parent. The definition of a trait is basically similar to that of an abstract class. Such as:
//Define a parent trait
trait Human{
//Define abstract properties
val id : Int
val name : String
}
//Define a trait that represents an action
trait Action{
//Define an abstract function
def getActionName():String
}
//Define a subclass that inherits from both parents
class Student5(val id:Int,val name:String) extends Human with Action{
override def getActionName(): String = "Action is running"
}
As seen above, inheriting traits uses the two keywords extends + with. When there is more than one parent, the with keyword must be used, and the classes inherited via with must be traits; the class after extends can be a trait or an ordinary class. Multiple inheritance looks like:
class Father extends class1 with trait1 with trait2 with . . . . . . {
}

3.8 Packages and package objects
The use of Scala packages is similar to Java, and in Scala the import statement can appear anywhere. In general, packages in both Java and Scala can only contain classes, objects, and traits; functions and variables cannot be defined directly in them. Scala's package object solves this problem. A package object in Scala can contain: constants, variables, methods, classes, objects, and traits. Such as:
package object MyPackage {
def test = {
}
var x:Int = 0
class a {}
}
Then you can directly call the methods and variables in it through the package object name

IV.
Collections in Scala

In Scala, collections are divided into mutable and immutable collections, found under the scala.collection.mutable and scala.collection.immutable packages respectively. Adding elements, changing the value of an element and deleting elements are not allowed on an immutable collection.

4.1 List

List[T] — immutable list
LinkedList[T] — mutable list

Operations on List[T]:

scala> val nameList = List("Tom","Andy")
nameList: List[String] = List(Tom, Andy)
scala> val intList = List(1,2,3)
intList: List[Int] = List(1, 2, 3)

Empty list:
scala> val nullList : List[Nothing] = List()
nullList: List[Nothing] = List()

Two-dimensional list:
scala> val dim:List[List[Int]] = List(List(1,2,3),List(10,20))
dim: List[List[Int]] = List(List(1, 2, 3), List(10, 20))

Return the first element: nameList.head
Return the list with the first element removed: nameList.tail
Access the element at a given index: nameList(index)

Operations on LinkedList[T]:

Define a mutable list:
scala> val myList = scala.collection.mutable.LinkedList(1,2,3,4,5)
warning: there was one deprecation warning; re-run with -deprecation for details
myList: scala.collection.mutable.LinkedList[Int] = LinkedList(1, 2, 3, 4, 5)

// Traverse and modify the list
var cur = myList          // cur points to the same nodes as myList (a reference)
while (cur != Nil) {
  cur.elem = cur.elem * 2 // visit each element and multiply it by 2
  cur = cur.next
}

Common element/collection operators:

col :+ ele      // append an element to the tail of the collection (Seq)
ele +: col      // prepend an element to the head of the collection (Seq)
col + (ele1, ele2)   // add elements to the tail of a Set/Map
col - (ele1, ele2)   // remove a subset from a collection (Set/Map/ArrayBuffer)
col1 ++ col2    // append another collection to the tail of the collection (Iterator)
col2 ++: col1   // prepend another collection to the head of the collection (Iterator)
ele :: list     // prepend an element to the head of a List
list2 :: list1  // add list2 itself as one element at the head of list1 (List)
list1 ::: list2 // append the elements of list2 to the tail of list1 (List)

4.2
sequence The sequence is divided into Vector and Range, both of which are immutable Vector operation: Vector Is a sequence with subscripts, which can be accessed by subscripts (index marks) Vector element scala> var v = Vector(1,2,3,4,5,6) v: scala.collection.immutable.Vector[Int] = Vector(1, 2, 3, 4, 5, 6) Range operation: Range: Is a sequence of integers //The first way: scala> Range(0,5) res48: scala.collection.immutable.Range = Range(0, 1, 2, 3, 4) //Explanation: starting from 0, excluding 5 //The second way of writing: scala> print(0 until 5) Range(0, 1, 2, 3, 4) //The third way: front and back closed interval scala> print(0 to 5) Range(0, 1, 2, 3, 4, 5) //Two ranges can be added scala> ('0' to '9') ++ ('A' to 'Z') res51: scala.collection.immutable.IndexedSeq[Char] = Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z) Range convert to list scala> 1 to 5 toList warning: there was one feature warning; re-run with -feature for details res52: List[Int] = List(1, 2, 3, 4, 5) 4.3 set The default is HashSet, which is similar to java, immutable, and related operations Create a set scala> var s1 = Set(1,2,10,8) s1: scala.collection.immutable.Set[Int] = Set(1, 2, 10, 8) //Note: belonging to immutable scala> s1 + 10 res53: scala.collection.immutable.Set[Int] = Set(1, 2, 10, 8) scala> s1 + 7 res54: scala.collection.immutable.Set[Int] = Set(10, 1, 2, 7, 8) //Return to a new Set scala> s1 res55: scala.collection.immutable.Set[Int] = Set(1, 2, 10, 8) s1 Not changed by itself //Create a sortable Set scala> var s2 = scala.collection.mutable.SortedSet(1,2,3,10,8) s2: scala.collection.mutable.SortedSet[Int] = TreeSet(1, 2, 3, 8, 10) //Determine whether the element exists: scala> s1.contains(1) res56: Boolean = true //Determine whether a set is a subset of another set scala> var s2 = Set(1,2,10,8,7,0) s2: scala.collection.immutable.Set[Int] = Set(0, 10, 1, 2, 7, 8) scala> s1 res57: scala.collection.immutable.Set[Int] 
= Set(1, 2, 10, 8)

scala> s1 subsetOf(s2)
res58: Boolean = true

// Set operations: union, intersect (intersection), diff (difference)
scala> var set1 = Set(1,2,3,4,5,6)
set1: scala.collection.immutable.Set[Int] = Set(5, 1, 6, 2, 3, 4)
scala> var set2 = Set(5,6,7,8,9,10)
set2: scala.collection.immutable.Set[Int] = Set(5, 10, 6, 9, 7, 8)
scala> set1 union set2
res59: scala.collection.immutable.Set[Int] = Set(5, 10, 1, 6, 9, 2, 7, 3, 8, 4)
scala> set1 intersect set2
res60: scala.collection.immutable.Set[Int] = Set(5, 6)
scala> set1 diff set2
res61: scala.collection.immutable.Set[Int] = Set(1, 2, 3, 4)
scala> set2 diff set1
res62: scala.collection.immutable.Set[Int] = Set(10, 9, 7, 8)

V. Functions in Scala

Ordinary functions are defined as mentioned above:

def functionName(parameters): returnType = functionBody

Besides ordinary functions, there are several other kinds.

5.1 Anonymous functions

A function without a name, defined as:

(parameter list) => { function body }

Note that the arrow is =>, not =. Generally an anonymous function is defined for temporary, one-off use, often in combination with higher-order functions.

Example: define an array and multiply each element by three. Each element of Array(1, 2, 3) is passed to the anonymous function (x: Int) => x * 3:

scala> Array(1,2,3).map((x:Int) => x*3)
res1: Array[Int] = Array(3, 6, 9)

Here (x: Int) => x * 3 is passed as the function argument to the higher-order function map.
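As a small runnable sketch of the anonymous-function forms discussed here (the object and val names below are made up for the example, not part of the tutorial's code):

```scala
// A quick, self-contained check of anonymous-function syntax.
object AnonFunDemo {
  def main(args: Array[String]): Unit = {
    // Full form: (parameter list) => body, passed to a higher-order function
    val tripled = Array(1, 2, 3).map((x: Int) => x * 3)
    println(tripled.mkString(","))   // 3,6,9

    // An anonymous function can also be stored in a val and reused
    val addOne = (x: Int) => x + 1
    println(addOne(41))              // 42
  }
}
```

Storing the function in a val shows that an anonymous function is an ordinary value, which is why it can be handed to map like any other argument.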
When defining an anonymous function, the parameter list can even be omitted and the function body used directly. For example, (i: Int) => i * 2 can be abbreviated to _ * 2, where _ stands for the parameter and the types are inferred automatically.

One thing to note: in the parameter list of an anonymous function it is not mandatory to specify the parameter types; they can be inferred, as in:

xxx.map(pair => pair._2.toString)

The anonymous function inside the map is legal without specifying the parameter type, but an ordinary function must declare its parameter types.

5.2 Higher-order functions

A higher-order function is one that takes another function as a parameter. Usage:

Define a higher-order function: def name(f: (input type) => (output type), param2, ...) = processing logic, in which the passed-in function is called.

Example:

object IODemo {
  def main(args: Array[String]): Unit = {
    // Call the higher-order function, passing a function in as an argument
    highFun(innerFun, 2)
  }

  // Higher-order function with two parameters: a function f whose input and output types are Int, and x: Int
  def highFun(f: (Int) => (Int), x: Int) = {
    f(x)
  }

  // This is the function passed into the higher-order function
  def innerFun(x: Int) = {
    x + 2
  }
}

Besides an existing function, an anonymous function can also be used as the argument of a higher-order function, e.g. highFun((x: Int) => x * 3, 3), where (x: Int) => x * 3 is an anonymous function.

5.3 Common higher-order functions in Scala

5.3.1 map

Equivalent to a loop that applies an operation (the logic in the received function) to each element of a collection and returns a new collection of the processed data.
Example:

scala> var numbers = List(1,2,3,4,5,6,7,8,9,10)
numbers: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
scala> numbers.map((i:Int)=>i*2)
res8: List[Int] = List(2, 4, 6, 8, 10, 12, 14, 16, 18, 20)

In (i: Int) => i * 2, i is the loop variable. The map function does not change numbers itself:

scala> numbers
res10: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

The anonymous function can be abbreviated with _, which stands for the loop variable i: _ * 2 does the same as (i: Int) => i * 2, and _ + _ the same as (i: Int, j: Int) => i + j.

Note: if a Map is passed through map and the function only processes the values (not the keys), then the new collection returned only contains the values.

5.3.2 foreach

Similar to map; the only difference is that it has no return value.

scala> numbers
res11: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
scala> numbers.foreach(_*2)
scala> numbers
res13: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
scala> numbers.foreach(println(_))
1 2 3 4 5 6 7 8 9 10
scala> numbers.map(_*2).foreach(println)
2 4 6 8 10 12 14 16 18 20

5.3.3 filter

Filters: selects the data that meets a condition. If the function returns true the element is kept; if it returns false the element is discarded. The new collection returned contains only the elements that meet the condition.
Example: query the numbers divisible by 2

scala> numbers
res15: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
scala> numbers.filter((i:Int)=>i%2==0)
res16: List[Int] = List(2, 4, 6, 8, 10)

Note: elements for which (i: Int) => i % 2 == 0 returns true are kept.

5.3.4 zip

Merges two collections, for example:

scala> List(1,2,3).zip(List(4,5,6))
res18: List[(Int, Int)] = List((1,4), (2,5), (3,6))
scala> List(1,2,3).zip(List(4,5))
res19: List[(Int, Int)] = List((1,4), (2,5))

// As you can see, corresponding elements of the two collections are combined into tuples, and the result is as long as the shorter list
scala> List(3).zip(List(4,5))
res20: List[(Int, Int)] = List((3,4))

5.3.5 partition

Partitions according to the result of a predicate (a condition, usually given as an anonymous function). The function must return Boolean. Elements for which it returns true form one partition; those for which it returns false form the other.

Example: put the numbers divisible by 2 in one partition and the rest in the other

scala> numbers
res21: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
scala> numbers.partition((i:Int)=>i%2==0)
res22: (List[Int], List[Int]) = (List(2, 4, 6, 8, 10),List(1, 3, 5, 7, 9))

5.3.6 find

Finds the first element that satisfies the condition (predicate).

Example: find the first number divisible by 3

scala> numbers.find(_%3==0)
res23: Option[Int] = Some(3)

5.3.7 flatten

Flattens nested collections into one:

scala> List(List(2,4,6,8,10),List(1,3,5,7,9)).flatten
res24: List[Int] = List(2, 4, 6, 8, 10, 1, 3, 5, 7, 9)

5.3.8 flatMap

Equivalent to map + flatten:

scala> var myList = List(List(2,4,6,8,10),List(1,3,5,7,9))
myList: List[List[Int]] = List(List(2, 4, 6, 8, 10), List(1, 3, 5, 7, 9))
scala> myList.flatMap(x=>x.map(_*2))
res25: List[Int] = List(4, 8, 12, 16, 20, 2, 6, 10, 14, 18)

Execution process:
1. Loop over List(2,4,6,8,10) and List(1,3,5,7,9) and call x => x.
map(_*2), where x stands for a list and _ for the elements inside it, producing List(4, 8, 12, 16, 20) and List(2, 6, 10, 14, 18).
2. Merge them into one List: List(4, 8, 12, 16, 20, 2, 6, 10, 14, 18).

5.4 Closures

A closure is the nesting of functions: inside one function, another function is defined, and the inner function can access the variables of the outer function. For example:

def mulBy(factor: Double) = (x: Double) => x * factor

Here the outer function mulBy returns the inner anonymous function, which captures factor.

5.5 Currying

A curried function transforms a function with multiple parameters into a chain of functions, each node taking a single parameter:

def add(x:Int,y:Int)=x+y
def add(x:Int)(y:Int)=x+y

The two definitions above are equivalent in effect.

Ordinary function: def add(x:Int,y:Int)=x+y
Curried function, written with the closure style above: def add(x:Int)=(y:Int)=>x+y
Abbreviated: def add(x:Int)(y:Int)=x+y

scala> def add(x:Int)(y:Int)=x+y
add: (x: Int)(y: Int)Int
scala> add(1)(2)
res28: Int = 3

5.6 A common question

Why can a method sometimes be called without parentheses after its name? It depends on whether the function was defined with parentheses; the question only arises for parameterless functions, since a function with parameters always needs them.

When a parameterless function is defined without parentheses, it must be called without parentheses.
When a parameterless function is defined with parentheses, the parentheses are optional at the call site.
The same holds when defining a class.

VI.
Higher-order features

6.1 Special types in Scala

* Any: the supertype of all types, equivalent to Object in Java
* Unit: no value, equivalent to void
* Nothing: the lowest type in Scala's class hierarchy; it is a subtype of every other type
* Null: a subtype of all reference types, whose only value is null

Special types:
* Option: Scala's Option type expresses that a value is optional (it may or may not be present)
* Some: if the value exists, the Option is a Some
* None: if the value does not exist, the Option is None

scala> var myMap = Map("Andy"->90)
myMap: scala.collection.immutable.Map[String,Int] = Map(Andy -> 90)
scala> myMap.get("Andy")
res0: Option[Int] = Some(90)
scala> myMap.get("ykjdfhsdajfkajshd")
res1: Option[Int] = None

* Nil: the empty List

Conclusion on the four "N"s — None, Nothing, Null, Nil:
* None: if the value in a map does not exist, the Option is None
* Nothing: if a method throws an exception, its return type is Nothing; Nothing is a subtype of every other type
* Null: can be assigned to all reference types, but not to value types
* Nil: the empty List

6.2 Pattern matching

Similar to switch/case in Java. Usage:

xx match {
  case xx1 => ...... // match 1
  case xx2 => ...... // match 2
  case _ => ......   // default value, equivalent to default in Java
}

Example:

object Demo1 {
  def main(args: Array[String]): Unit = {
    // 1. Equivalent to switch/case
    var chi = '-'
    var sign = 0
    // If chi is '-', sign is assigned -1
    chi match {
      case '+' => sign = 1
      case '-' => sign = -1
      case _ => sign = 0 // _ matches any other value
    }
    println(sign)

    /**
     * 2. Scala guards (case _ if ...): match all values of a certain type.
     *
     * For example, match all digits.
If ch2 is a number, then digit is assigned ch2 */ var ch2 = '6' var digit : Int = -1 ch2 match { case '+' => println("This is a plus sign") case '-' => println("This is a minus sign") case _ if Character.isDigit(ch2) => digit = Character.digit(ch2,10)// 10 for decimal case _ => println("Other") } println(digit) /** * 3,Using variables in pattern matching */ var mystr = "Hello World" //Take a character and assign it to the variable of pattern matching mystr(7) match { case '+' => println("This is a plus sign") case '-' => println("This is a minus sign") case ch => println(ch)//The variable ch is used in the case statement to represent the characters passed in } /** * 4,Matching type: equivalent to instanceof in java */ var v4 : Any = 100 // The final v4 is an integer v4 match { case x : Int => println("This is an integer") case s : String => println("This is a string") case _ => println("Other types") } /** * 5,Match arrays and lists */ var myArray = Array(1,2,3) myArray match { case Array(0) => println("There is only one 0 in the array") case Array(x,y) => println("Array contains two elements") case Array(x,y,z) => println("Array contains 3 elements") case Array(x,_*) => println("This is an array with multiple elements") } var myList = List(1,2,3,4,5,6) myList match { case List(0) => println("There is only one 0 in the list") case List(x,y) => println("The list contains two elements,And: " + (x+y)) case List(x,y,z) => println("List contains 3 elements,And: "+ (x+y+z)) case List(x,_*) => println("This is a list with multiple elements,And: " + myList.sum) } } } 6.3 sample class case The sample class has one more feature than the ordinary class. You can use the case statement above as the matching type. Ordinary classes are not allowed. Other uses are the same as ordinary classes. Definition method: case class a(x:int.....) 
{} Note that in the sample class, all fields will be declared as val automatically, so we can omit the val keyword when declaring fields In fact, there is nothing to say. The usage is very simple. Commonly used as a storage class for some data 6.4 generic The definition of generics in scala is similar to that in java. It is not repeated here, and it will be used directly. 6.4.1 generic classes When defining a class, it has a generic type, such as: class Father[T]{ xxxxx } The way of definition is the same as in java. Many of scala have generics, such as: Array[T] List[T] Map[K,V] 6.4.2 generic functions When defining a function, define a generic type, such as: def mkArray[T:ClassTag] ClassTag Explanation: indicated in scala The status information at run time, which represents the data type at call time Example: scala> import scala.reflect.ClassTag import scala.reflect.ClassTag //A generic array is defined, where elem: * represents all elements scala> def mkArray[T:ClassTag](elem:T*) = Array[T](elem:_*) mkArray: [T](elem: T*)(implicit evidence$1: scala.reflect.ClassTag[T])Array[T] 6.5 implicit conversion 6.5.1 implicit conversion function In general, scala automatically converts some types to specified types. As long as the user defines the relevant implicit conversion function, scala will automatically call according to the input and output parameter types of the implicit function. 
Example: class Fruit(name:String){ def getFruitName() : String = name } class Monkey(f:Fruit){ //output def say() = println("Monkey like " + f.getFruitName()) } object ImplicitDemo { def main(args: Array[String]): Unit = { //Define a fruit object var f : Fruit = new Fruit("Banana") f.say() /** * Question: can f.say * Direct write will report an error, because there is no say function in Fruit * But Monkey has a say function * * If you can convert Fruit to Monkey, you can call say * * So we define implicit conversion functions * */ } //Defining implicit conversion functions implicit def fruit2Monkey(f:Fruit):Monkey = { new Monkey(f) } /** * Note: use implicit transformation with caution, which will lead to further poor readability of scala * * Implicit conversion function name: XXXXXXX */ } 6.5.2 implicit parameters Implicit parameter is to add an implicit keyword before each parameter when defining a function. This parameter is an implicit parameter. Example: Define a function with an implicit parameter scala> def testParam(implicit name:String) = println("The value is " + name) testParam: (implicit name: String)Unit Then define an implicit variable scala> implicit val name:String = "AAAAAA" name: String = AAAAAA Pass in parameter call method, which can run scala> testParam("dfsfdsdf") The value is dfsfdsdf When no parameter is passed, the defined implicit variable is automatically found, that is, the previous name. Currently, the implicit variable is not called by name scala> testParam The value is AAAAAA When there are multiple implicit variables, if there are multiple of the same type, an error will be reported. 
If it's a different type, you can scala> implicit val name2:String = "AAAAAACCCCCCCCCCCCCC" name2: String = AAAAAACCCCCCCCCCCCCC scala> testParam <console>:18: error: ambiguous implicit values: both value name of type => String and value name2 of type => String match expected type String testParam Note: when defining implicit parameters, there can only be one implicit parameter. Sometimes, for the convenience of viewing, implicit parameters and common parameters are defined separately, such as: def test1(c:Int)(implicit a:Int): Unit = {} Note that normal parameters must be in parentheses before, not after Another special example is the combination of implicit parameters and implicit conversion functions: Define an implicit parameter to implement the following requirements: find the smaller value of the two values 100 23 ----> 23 "Hello" "ABC"--->ABC def smaller[T](a:T,b:T)(implicit order : T => Ordered[T]) = if(a<b) a else b scala> def smaller[T](a:T,b:T)(implicit order : T => Ordered[T]) = if(a<b) a else b smaller: [T](a: T, b: T)(implicit order: T => Ordered[T])T scala> smaller(100,23) res1: Int = 23 scala> smaller("Hello","ABC") res2: String = ABC //First of all, we are not sure whether the generic T can be compared, that is, whether there is a < method (don'T be surprised, < is a method name, not a symbol) in it. Because we need to determine whether there is, we define an implicit conversion function and parameter: implicit order : T => Ordered[T] //First, this is an anonymous implicit conversion function, which converts T to Order[T], and takes it as the value of the parameter order 6.5.3 implicit classes Add implicit keyword before class name Function: to enhance the function of a class by implicit class, the object of a class will be converted into an implicit class object, and then the method defined in the implicit class can be called. 
Example:

object ImplicitClassDemo {
  def main(args: Array[String]): Unit = {
    // Sum two numbers
    println("The sum of the two numbers is: " + 1.add(2)) // replace 1 + 2 with 1.add(2)

    /**
     * On its own, 1.add(2) reports an error, because Int has no add method.
     *
     * We define an implicit class to enhance 1.
     *
     * Execution process:
     * First 1 is converted to Calc(1),
     * then Calc(1)'s add method is called. The conversion is applied automatically,
     * based on the type of the parameter received by the implicit class.
     */
    implicit class Calc(x: Int) {
      def add(y: Int): Int = x + y
    }
  }
}

6.6 Upper and lower bounds of generics

Sometimes you want to restrict a generic type to a certain range of classes; this is where upper and lower bounds come in.

(*) Specifying the range of a generic type

For example, define a generic type T, with a class inheritance chain A ---> B ---> C ---> D (the arrow points to the subclass).

The range of T can be specified as D <: T <: B, meaning T can only be B, C or D. <: is the notation for bounds.

(*) Definitions:
Upper bound: S <: T requires that S must be a subclass of T or T itself.
Lower bound: U >: T requires that U must be a parent class of T or T itself.

A small question about the lower bound:

class Father
class Son extends Father
class Son1 extends Son
class Son2 extends Son1

object Demo2 {
  def main(args: Array[String]): Unit = {
    var father: Father = new Son2
    fun2(father)
    // fun2(new Son2) also works normally. Why?
    // Mainly because a subclass object can be assigned to a parent-class reference, similar to automatic up-casting. So new Son2 can actually be treated as an object of the parent class, which is why it works. It can be seen that in an inheritance hierarchy the lower bound does not really restrict this call.
    // In fact, this is just polymorphism at work.
  }

  def fun[T <: Son](x: T) = {
    println("123")
  }

  def fun2[T >: Son](x: T): Unit = {
    println("456")
  }
}

6.7 View bounds

An extension of the upper and lower bounds: besides the types allowed by the bound itself, a view bound can also accept types that can be implicitly converted to it.

Usage:

def addTwoString[T <% String](x: T, y: T) = x + " " + y

Meaning:
1. It can accept String and its subclasses.
2. It can also accept any other type that can be converted to String — this is the point: the implicit conversion.

Example:

scala> def addTwoString[T<%String](x:T,y:T) = x + " **** " + y
addTwoString: [T](x: T, y: T)(implicit evidence$1: T => String)String
scala> addTwoString(1,2)
<console>:14: error: No implicit view available from Int => String.
       addTwoString(1,2)

// Solution to the error: define an implicit conversion function from Int to String
scala> implicit def int2String(n:Int):String=n.toString
warning: there was one feature warning; re-run with -feature for details
int2String: (n: Int)String
scala> addTwoString(1,2)
res13: String = 1 **** 2

// Execution process:
// 1. The int2String method is called to convert Int to String (Scala calls it in the background; no explicit call is needed).
// 2. The addTwoString method is called to concatenate the strings.
// An implicit conversion function can be invoked without being called explicitly.

6.8 Covariance and contravariance

These graft the variance of a type parameter onto a generic class.

Concept:
Covariant: the generic parameter may be instantiated with the declared type or any of its subtypes.
Contravariant: the generic parameter may be instantiated with the declared type or any of its supertypes.

Notation:
Covariance uses +, as in class A[+T].
Contravariance uses -, as in class A[-T].

6.9 Ordered vs. Ordering

These two traits are used for comparison:
Ordered is similar to Java's Comparable interface.
Ordering is similar to Java's Comparator interface.
trait Ordered[A] extends scala.Any with java.lang.Comparable[A] { def compare(that : A) : scala.Int def <(that : A) : scala.Boolean = { /* compiled code */ } def >(that : A) : scala.Boolean = { /* compiled code */ } def <=(that : A) : scala.Boolean = { /* compiled code */ } def >=(that : A) : scala.Boolean = { /* compiled code */ } def compareTo(that : A) : scala.Int = { /* compiled code */ } } //This is actually a trait, which defines methods such as < and so on. No surprise, this is the method name. //The subclass can inherit the trait, override the methods in it, and the class can be used for comparison. trait Ordering[T] extends java.lang.Object with java.util.Comparator[T] with scala.math.PartialOrdering[T] with scala.Serializable { this : scala.math.Ordering[T] => def tryCompare(x : T, y : T) : scala.Some[scala.Int] = { /* compiled code */ } def compare(x : T, y : T) : scala.Int override def lteq(x : T, y : T) : scala.Boolean = { /* compiled code */ } override def gteq(x : T, y : T) : scala.Boolean = { /* compiled code */ } override def lt(x : T, y : T) : scala.Boolean = { /* compiled code */ } override def gt(x : T, y : T) : scala.Boolean = { /* compiled code */ } override def equiv(x : T, y : T) : scala.Boolean = { /* compiled code */ } def max(x : T, y : T) : T = { /* compiled code */ } def min(x : T, y : T) : T = { /* compiled code */ } override def reverse : scala.math.Ordering[T] = { /* compiled code */ } def on[U](f : scala.Function1[U, T]) : scala.math.Ordering[U] = { /* compiled code */ } class Ops(lhs : T) extends scala.AnyRef { def <(rhs : T) : scala.Boolean = { /* compiled code */ } def <=(rhs : T) : scala.Boolean = { /* compiled code */ } def >(rhs : T) : scala.Boolean = { /* compiled code */ } def >=(rhs : T) : scala.Boolean = { /* compiled code */ } def equiv(rhs : T) : scala.Boolean = { /* compiled code */ } def max(rhs : T) : T = { /* compiled code */ } def min(rhs : T) : T = { /* compiled code */ } } ordering There are many ways to use it
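A minimal runnable sketch of both traits in action (the Score class and the sample data below are made up for the example):

```scala
// Ordered: the class itself defines its natural order via compare,
// and <, >, <=, >= come for free from the trait.
class Score(val value: Int) extends Ordered[Score] {
  def compare(that: Score): Int = this.value - that.value
}

object OrderingDemo {
  def main(args: Array[String]): Unit = {
    val a = new Score(90)
    val b = new Score(80)
    println(a > b) // true

    // Ordering: the comparison rule lives outside the class,
    // so the same data can be ordered in different ways.
    val names = List("Tom", "Andy", "Mike")
    val byLength = Ordering.by((s: String) => s.length)
    println(names.sorted)        // alphabetical: List(Andy, Mike, Tom)
    println(names.min(byLength)) // shortest name: Tom
  }
}
```

The rule of thumb mirrors Java: use Ordered (Comparable) when a class has one natural order, and Ordering (Comparator) when you want to supply the order from outside.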
Today's Little Program creates a shortcut on the Start menu but marks it as "Do not put me on the front page upon installation." This is something you should do to any secondary shortcuts your installer creates. And while you're at it, you may as well set the "Don't highlight me as a newly-installed program" attribute used by Windows 7. (Remember, Little Programs do little to no error checking.)

#define UNICODE
#define _UNICODE
#define STRICT
#include <windows.h>
#include <shlobj.h>
#include <atlbase.h>
#include <propkey.h>
#include <shlwapi.h>

int __cdecl wmain(int, wchar_t **)
{
 CCoInitialize init;
 CComPtr<IShellLink> spsl;
 spsl.CoCreateInstance(CLSID_ShellLink);

 wchar_t szSelf[MAX_PATH];
 GetModuleFileName(GetModuleHandle(nullptr), szSelf, ARRAYSIZE(szSelf));
 spsl->SetPath(szSelf);

 PROPVARIANT pvar;
 CComQIPtr<IPropertyStore> spps(spsl);

 pvar.vt = VT_UI4;
 pvar.ulVal = APPUSERMODEL_STARTPINOPTION_NOPINONINSTALL;
 spps->SetValue(PKEY_AppUserModel_StartPinOption, pvar);

 pvar.vt = VT_BOOL;
 pvar.boolVal = VARIANT_TRUE;
 spps->SetValue(PKEY_AppUserModel_ExcludeFromShowInNewInstall, pvar);

 spps->Commit();

 wchar_t szPath[MAX_PATH];
 SHGetSpecialFolderPath(nullptr, szPath, CSIDL_PROGRAMS, FALSE);
 PathAppend(szPath, L"Awesome.lnk");
 CComQIPtr<IPersistFile>(spsl)->Save(szPath, FALSE);
 return 0;
}

First, we create a shell link object. Next, we tell the shell link that its target is the currently-running program. Now the fun begins. We get the property store of the shortcut and set two new properties.

- Set System.AppUserModel.StartPinOption to APPUSERMODEL_STARTPINOPTION_NOPINONINSTALL. This prevents the shortcut from defaulting to the Windows 8 Start page.
- Set System.AppUserModel.ExcludeFromShowInNewInstall to VARIANT_TRUE. This prevents the shortcut from being highlighted as a new application on the Windows 7 Start menu.

We then commit those properties back into the shortcut. Finally, we save the shortcut.
But for a real installer you should use an installer builder, eg. WiX APPUSERMODEL_STARTPINOPTION_NOPINONINSTALL is not used in VS2012, is it? @EduardoS My psychic debugging tells me it's probably defined in propkey.h, part of the Windows API, which you can install separately from Visual Studio if you don't have it. *s/API/SDK Though now that I think about it, not having the SDK doesn't make much sense, especially if the rest of the code seems OK to you. @The MAZZTer: I think EduardoS's point was that VS 2012 pins a LOT of things to the Start screen. Which it does. For anyone wanting to use this in an MSI installer, using the property key name will cause an error; you need to use the GUID instead: support.microsoft.com/…/2745126 Can you invent a time machine and tell the Visual Studio 2010 and Visual Studio 2008 installer writers about this? Thanks! I had the hardest time parsing NOPINONINSTALL as "no pin on install". No opinion? No pinonin stall? What? I understand, though — underscores are a precious resource to be used for namespace/type disambiguation only. Separating common words is something you're expected to do yourself, much like the ancient Romans did. Note that the APPUSERSUPERMODEL_STARTPINOPTION_PINUPONWALL option is common in the U.S. military. @Chris Smith: According to my reading of the linked support article, you're not completely correct. Using the property key name will cause an error *in Windows 7*. Nice of you to tell us, but this ought to have been a feature of the now non-existent deployment project. So, we can either write more code, or take deal with the XML madness of WiX. Wix is great IF you figure out how it works – but unfortunately the developers are so busy writing code that they are unable to write decent documentation. Exactly. And Rob has left to set up his own consultancy on it! :) Why didn't MS invest some time and effort on making it useful to mere mortals without dedicated installation teams? 
And for "legacy" apps which haven't been updated (and will possibly never be updated) to set "System.AppUserModel.StartPinOption", use AutoPin Controller: winaero.com/comment.php ah, Windows 8, you know, back in windows 7, in case of file name collisions, I could tell Windows to rename files right in the copy dialog. now with windows 8 I'm 3 clicks away from that. Progress!
sem_post()

Increment a named or unnamed semaphore

Synopsis:

#include <semaphore.h>

int sem_post( sem_t * sem );

Since: BlackBerry 10.0.0

Arguments:
- sem - A pointer to the sem_t object for the semaphore whose value you want to increment.

Library: libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:.

Returns:
- 0 - Success.
- -1 - An error occurred (errno is set).

Errors:
- EINVAL - Invalid semaphore descriptor sem.
- ENOSYS - The sem_post() function isn't supported.

Classification:

Last modified: 2014-06-24
Sorry if this is off topic but hope someone can help... Using Groovy 1.5.6 / Grails 1.0.2, JetGroovy plugin 1.5. New(ish) to Groovy/Grails and working through 'Getting Started With Grails'. Having problems with testing. Using the following for testing, lifted straight from the book:

private Race getValidRace() {
    def race = new Race()
    race.name = 'Fast 5K'
    race.startDateTime = new Date().plus(1) // 1 day in the future
    race.city = 'Somewhere'
    race.state = 'NC'
    race.distance = 3.1
    race.cost = 20.00
    race.maxRunners = 1000
    // Make sure that we have indeed constructed a valid Race object
    assertTrue race.validate()
    return race
}

When I run all tests via the GUI I get:

groovy.lang.MissingMethodException: No signature of method: Race.validate() is applicable for argument types: () values: {}

If I run the same tests via Grails using 'test-app' directly, everything works as expected and all tests pass. The actual application runs fine; it's just the testing I have problems with. Anyone have any idea? Thanks.

Sorry, configuration error. Corrected, and it works.

Rob, glad you found the configuration issue. Now, would you tell me what it was, since I'm having the same problem? :) (I should never code at 2:30am anyway!) Edited by: jj_jackson on Jul 26, 2008 3:00 AM

You cannot test Grails classes from within IntelliJ, since they need to be injected by Grails (e.g. the dynamic GORM methods/validate). When you want to do a normal unit test without any Grails-specific features, it shouldn't be a problem, but when you start using Grails features like constraints, manyToMany attributes, etc., you're out of luck and you'll have to use the Grails test-app target.
https://intellij-support.jetbrains.com/hc/en-us/community/posts/205997509-Problem-with-testing-JetGroovy-1-5-Grails1-0-2-Groovy-1-5-6-
[SOLVED] Binding from javascript

I create an item dynamically in JS and my question is how can I set a binding for that item? I tried the following way, but to no avail:

@var newLine = Qt.createQmlObject('import QtQuick 1.0; import Lines 1.0; Line {/.../}', rootScene)
var newBinding = Qt.createQmlObject('import QtQuick 1.0; Binding {}', newLine)
newBinding.target = newLine
newBinding.property = "startingPoint"
newBinding.value = Qt.point(node.x, node.y)@

It SETS the value properly, but when the node's position changes, my line doesn't update. What's the correct way, if this is not?

You best establish the property bindings from QML. You know that you can instantiate your QML components simply by <code>componentId.createObject(<parent>)</code>? - minimoog77

Property binding in JS will be available in Qt 4.7.4. See

I wasn't aware that I could do property bindings in JS. I'm impressed how QML/JS is evolving!

[quote author="minimoog77" date="1302971106"]Property binding in JS will be available in Qt 4.7.4. See[/quote]

That's it, is there any chance for me to get the current version of Qt 4.7.4 (especially QtQuick 1.1) now? - DenisKormalev

Fenix, I think you can use Qt from "gitorious": in branch 4.7 (or maybe use master if you want even more experimental features). QtQuick 1.1 was just recently merged into the Qt 4.7 master. You always get the bleeding edge from the "staging area": (e.g. try branch "qtquick11-stable").
Thank you, I compiled the qtquick11-stable clone, but it didn't help; I found another workaround, though - in case someone has a similar problem (with dynamically created bindings of dynamically created items):

*First I created a stateful JS lib with two declared variables and assigned some objects to them (no matter what they were, they just couldn't be null):

@var firstHook = someExistingObject1
var secondHook = someExistingObject2@

*Next I created a wrapper component with my custom Line element including the mentioned lib, with properties 'startingPoint' and 'endingPoint' bound to these variables:

@import QtQuick 1.0
import Lines 1.0
import "../../js/Curve.js" as CurveFunctions

Line {
    id: line
    penColor: "red"
    penWidth: 2
    startingPoint: Qt.point(CurveFunctions.firstHook.x, CurveFunctions.firstHook.y)
    endingPoint: Qt.point(CurveFunctions.secondHook.x, CurveFunctions.secondHook.y)
}@

*Then I added two functions to my component which change the values of those variables and simulated a 1px move back and forth of the previously bound objects, so the properties could update properly:

@function setFirst(first) {
    CurveFunctions.firstHook = first
    someExistingObject1.x += 1
    someExistingObject1.x -= 1
}

function setSecond(second) {
    CurveFunctions.secondHook = second
    someExistingObject2.x += 1
    someExistingObject2.x -= 1
}@

and it works. 'someExistingObjects' could have opacity 0.0, or be any accessible objects. I'm not proud of it, but I wasn't able to figure out another solution.
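For completeness: the JavaScript property-binding support mentioned earlier in the thread (Qt 4.7.4 / QtQuick 1.1) makes this workaround unnecessary. A hedged sketch, reusing the names from the original post (`Line`, `rootScene`, and `node` are assumed from the question):

```
var newLine = Qt.createQmlObject(
    'import QtQuick 1.1; import Lines 1.0; Line {}', rootScene)

// Qt.binding() turns the function into a live binding: the expression
// is re-evaluated whenever node.x or node.y changes, instead of being
// assigned once as a static value.
newLine.startingPoint = Qt.binding(function() {
    return Qt.point(node.x, node.y)
})
```

This replaces both the dynamically created Binding element and the 1px move trick.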
https://forum.qt.io/topic/5141/solved-binding-from-javascript
One of the great new features that the Kindle Fire HD offers is a set of dual-driver stereo speakers on both sides of the display. This sound setup opens new possibilities to game and app developers by allowing for a more comprehensive and immersive sound experience. With minor adjustments, any app can leverage the stereo speakers and enhance the user experience.

By default, Kindle Fire HD uses both speakers to output balanced sound (left speaker = right speaker). By changing the volume on either side, we can achieve an effect of localized sound. As a simple example, we can consider a conga drum app. The app will have two conga drums displayed, one on the left and one on the right. Tapping on the left conga produces sound only in the left speaker and, similarly, tapping on the right drum produces sound only in the right speaker.

The way to control the volume on the speakers depends on the method being used to actually play sound. For the purpose of this post, we will assume that the MediaPlayer class is used, but most other methods should be similar if not identical in nature. The following code will create a MediaPlayer instance that will play audio only through the left speaker:

import android.media.MediaPlayer;
…
float leftVolume = 1.0f;
float rightVolume = 0.0f;
MediaPlayer mPlayer = MediaPlayer.create(…);
if (mPlayer != null) {
    mPlayer.setVolume(leftVolume, rightVolume);
}

The leftVolume and rightVolume parameters can be set to any value between 0.0 ("off") and 1.0 ("full volume"). The volume level is relative to the master volume of the device, so changing these values is basically just changing the balance between the stereo channels or, in this case, the two speakers.

Using the SoundPool class is not all that different:

import android.media.SoundPool;
…
float leftVolume = 1.0f;
float rightVolume = 0.0f;
SoundPool sPool = new SoundPool(…);
// Load an asset into the pool
int soundId = sPool.load(…);
sPool.play(soundId, leftVolume, rightVolume, 0, 0, 1.0f);
https://developer.amazon.com/appsandservices/community/post/Tx2RF9SW317ZTUV/Creating-Immersive-Experiences-with-Stereo-Speakers-on-Kindle-Fire-HD.html
libsox - SoX, an audio file-format and effect library

#include <sox.h>

int sox_format_init(void);
void sox_format_quit(void);
sox_format_t sox_open_read(const char *path, const sox_signalinfo_t *info, const char *filetype);
sox_format_t sox_open_write(sox_bool (*overwrite_permitted)(const char *filename), const char *path, const sox_signalinfo_t *info, const char *filetype, const char *comment, sox_size_t length, const sox_instrinfo_t *instr, const sox_loopinfo_t *loops);
sox_size_t sox_read(sox_format_t ft, sox_ssample_t *buf, sox_size_t len);
sox_size_t sox_write(sox_format_t ft, sox_ssample_t *buf, sox_size_t len);
int sox_close(sox_format_t ft);
int sox_seek(sox_format_t ft, sox_size_t offset, int whence);
sox_effect_handler_t const *sox_find_effect(char const *name);
sox_effect_t *sox_create_effect(sox_effect_handler_t const *eh);
int sox_effect_options(sox_effect_t *effp, int argc, char * const argv[]);
sox_effects_chain_t *sox_create_effects_chain(sox_encodinginfo_t const *in_enc, sox_encodinginfo_t const *out_enc);
void sox_delete_effects_chain(sox_effects_chain_t *ecp);
int sox_add_effect(sox_effects_chain_t *chain, sox_effect_t *effp, sox_signalinfo_t *in, sox_signalinfo_t const *out);

cc file.c -o file -lsox

libsox is a library of sound sample file format readers/writers and sound effects processors. It is mainly developed for use by SoX but is useful for any sound application.

The sox_format_init function performs some required initialization related to all file format handlers. If compiled with dynamic library support then this will detect and initialize all external libraries. This should be called before any other file operations are performed.

The sox_format_quit function performs some required cleanup related to all file format handlers.

The sox_open_read function opens the file for reading whose name is the string pointed to by path and associates a sox_format_t with it. If info is non-NULL then it will be used to specify the data format of the input file.
This is normally only needed for headerless audio files, since the information is not stored in the file. If filetype is non-NULL then it will be used to specify the file type. If this is not specified then the file type is attempted to be derived by looking at the file header and/or the filename extension. A special name of "-" can be used to read data from stdin.

The sox_open_write function opens the file for writing whose name is the string pointed to by path and associates a sox_format_t with it. If info is non-NULL then it will be used to specify the data format of the output file. Since most file formats can write data in different data formats, this generally has to be specified. The info structure from the input format handler can be specified to copy data over in the same format. If comment is non-NULL, it will be written in the file header for formats that support comments. If filetype is non-NULL then it will be used to specify the file type. If this is not specified then the file type is attempted to be derived by looking at the filename extension. A special name of "-" can be used to write data to stdout.

The function sox_read reads len samples into buf using the format handler specified by ft. All data read is converted to 32-bit signed samples before being placed into buf. The value of len is specified in total samples. If its value is not evenly divisible by the number of channels, undefined behavior will occur.

The function sox_write writes len samples from buf using the format handler specified by ft. Data in buf must be 32-bit signed samples and will be converted during the write process. The value of len is specified in total samples. If its value is not evenly divisible by the number of channels, undefined behavior will occur.

The sox_close function dissociates the named sox_format_t from its underlying file or set of functions. If the format handler was being used for output, any buffered data is written first.
The function sox_find_effect finds effect name, returning a pointer to its sox_effect_handler_t if it exists, and NULL otherwise.

The function sox_create_effect instantiates an effect into a sox_effect_t given a sox_effect_handler_t *. Any missing methods are automatically set to the corresponding nothing method.

The function sox_effect_options allows passing options into the effect to control its behavior. It will return SOX_EOF if there were any invalid options passed in. On success, effp->in_signal will optionally contain the rate and channel count it requires input data in, and effp->out_signal will optionally contain the rate and channel count it outputs in. When present, this information should be used to make sure appropriate effects are placed in the effects chain to handle any needed conversions. Passing in options is currently only supported when they are passed in before the effect is ever started. The behavior is undefined if it's called once the effect is started.

sox_create_effects_chain will instantiate an effects chain that effects can be added to. in_enc and out_enc are the signal encodings of the input and output of the chain respectively. The pointers to in_enc and out_enc are stored internally and so their memory should not be freed. Also, it is OK if their values change over time to reflect new input or output encodings, as they are referenced only as effects start up or are restarted.

sox_delete_effects_chain will release any resources reserved during the creation of the chain. This will also call sox_delete_effects if any effects are still in the chain.

sox_add_effect adds an effect to the chain. in specifies the input signal info for this effect. out is a suggestion as to what the output signal should be, but depending on the effect's given options and on in, the effect can choose to do differently. Whatever output rate and channels the effect does produce are written back to in.
It is meant that in be stored and passed to each new call to sox_add_effect so that changes will be propagated to each new effect.

SoX includes skeleton C files to assist you in writing new formats (skelform.c) and effects (skeleff.c). Note that new formats can often just deal with the header and then use raw.c's routines for reading and writing. example0.c and example1.c are a good starting point to see how to write applications using libsox. sox.c itself is also a good reference.

Upon successful completion sox_open_read and sox_open_write return a sox_format_t (which is a pointer). Otherwise, NULL is returned. TODO: Need a way to return the reason for failures. Currently, this relies on sox_warn to print information.

sox_read and sox_write return the number of samples successfully read or written. If an error occurs, or the end-of-file is reached, the return value is a short item count or SOX_EOF. TODO: sox_read does not distinguish between end-of-file and error. Need a feof() and ferror() concept to determine which occurred.

Upon successful completion sox_close returns 0. Otherwise, SOX_EOF is returned. In either case, any further access (including another call to sox_close()) to the handler results in undefined behavior. TODO: Need a way to return the reason for failures. Currently, this relies on sox_warn to print information.

Upon successful completion sox_seek returns 0. Otherwise, SOX_EOF is returned. TODO: Need to set a global error and implement sox_tell.

TODO.

Representing samples as integers can cause problems when processing the audio. For example, if an effect to mix down left and right channels into one monophonic channel were to use the line

*obuf++ = (*ibuf++ + *ibuf++)/2;

distortion might occur since the intermediate addition can overflow 32 bits. The line

*obuf++ = *ibuf++/2 + *ibuf++/2;

would get round the overflow problem (at the expense of the least significant bit).

Stereo data is stored with the left and right speaker data in successive samples.
Quadraphonic data is stored in this order: left front, right front, left rear, right rear.

A format is responsible for translating between sound sample files and an internal buffer. The internal buffer is stored in signed longs with a fixed sampling rate. The format operates from two data structures: a format structure, and a private structure.

The format structure contains a list of control parameters for the sample: sampling rate, data size (8, 16, or 32 bits), encoding (unsigned, signed, floating point, etc.), and number of sound channels. It also contains other state information: whether the sample file needs to be byte-swapped, whether sox_seek() will work, its suffix, its file stream pointer, its format pointer, and the private structure for the format.

The private area is just a preallocated data array for the format to use however it wishes. It should have a defined data structure and cast the array to that structure. See voc.c for the use of a private data area. Voc.c has to track the number of samples it writes and, when finishing, seek back to the beginning of the file and write it out. The private area is not very large. The ``echo'' effect has to malloc() a much larger area for its delay line buffers.

A format has 6 routines:

For some effects, some of the functions may not be needed and can be NULL. An effect that is marked `MCHAN' does not use the LOOP (channels) lines and must therefore perform multiple channel processing inside the affected functions. Multiple effect instances may be processed (according to the above flow diagram) in parallel.

This manual page is both incomplete and out of date.

sox(1), soxformat(7), example*.c in the SoX source distribution.

Copyright 1998-2009 by Chris Bagwell and SoX Contributors. Copyright 1991 Lance Norskog and Sundry Contributors.
This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1, or (at your option) any later version. This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. Chris Bagwell (cbagwell@users.sourceforge.net). Other authors and contributors are listed in the ChangeLog file that is distributed with the source code.
http://huge-man-linux.net/man3/libsox.html
Created on 2019-09-20 17:21 by rhettinger, last changed 2020-03-27 16:45 by miss-islington. This issue is now closed.

Current signature: pow(x, y, z=None, /)
Proposed signature: pow(base, exp, mod=None)

Benefits:
* Meaningful and self-explanatory parameters in tooltips
* Optionally clearer calls for the three-argument form: pow(2, 5, mod=4)
* More usable with partial(): squared = partial(pow, exp=2)

Looks like a solid proposal, I especially like the clarity for the 3-argument call. Often beginners ask about why there's a third argument in pow, especially when teaching RSA and number-theoretic stuff. Do you mind if I take this on, Raymond?

Actually, quick question: should a similar change be made for `math.pow` for consistency's sake?

I've made a PR, feel free to close it if you'd rather implement this yourself or this proposal won't be accepted :)

You can use a lambda instead of partial: squared = lambda x: pow(x, 2)

Proposed names look meaningful. But after adding support for keyword arguments please compare performance of the old and the new functions. I expect that the difference will be small, but we need to check.

Here's a little microbenchmark, let me know if there's anything specific you'd like to see:

Before
======
> python -m pyperf timeit "from test.test_builtin import BuiltinTest; tst = BuiltinTest()" -- "tst.test_pow()"
Mean +- std dev: 3.80 us +- 0.23 us
> python -m pyperf timeit "pow(23, 19, 3)"
Mean +- std dev: 519 ns +- 12 ns

After
=====
> python -m pyperf timeit "from test.test_builtin import BuiltinTest; tst = BuiltinTest()" -- "tst.test_pow()"
Mean +- std dev: 3.80 us +- 0.26 us
> python -m pyperf timeit "pow(23, 19, 3)"
Mean +- std dev: 526 ns +- 18 ns

The proposal sounds reasonable to me.

> should a similar change be made for `math.pow` for consistency's sake?

I'd leave math.pow alone here.

Thank you. Could you please test simpler examples like pow(2, 3)? Please use the --duplicate option. And pow(2.0, 3.0) please.
Before
======
> python -m pyperf timeit "pow(2, 3)" --duplicate 100000
Mean +- std dev: 242 ns +- 19 ns
> python -m pyperf timeit "pow(2.0, 3.0)" --duplicate 100000
Mean +- std dev: 197 ns +- 16 ns

After
=====
> python -m pyperf timeit "pow(2, 3)" --duplicate 100000
Mean +- std dev: 243 ns +- 11 ns
> python -m pyperf timeit "pow(2.0, 3.0)" --duplicate 100000
Mean +- std dev: 200 ns +- 14 ns

math.pow changes removed from PR

New changeset 87d6cd3604e5c83c06339276228139f5e040b0e7 by Miss Islington (bot) (Ammar Askar) in branch 'master': bpo-38237: Make pow's arguments have more descriptive names and be keyword passable (GH-16302)

Thanks Ammar

Thank you for your contribution Ammar! Nice work!

New changeset 37bc93552375cb1bc616927b5c1905bae3c0e99d by Raymond Hettinger (Miss Islington (bot)) in branch '3.8': bpo-38237: Let pow() support keyword arguments (GH-16302) (GH-16320)

Isn't it a new feature? Isn't it too late to add it to 3.8?

As noted in the checkin, this was backported with the release manager's assent. FWIW, pow() itself is an old feature, recently enhanced to support negative powers in a given modulus. When the enhancement went in, we should have done this as well.

New changeset 24231ca75c721c8167a7394deb300727ccdcba51 by Raymond Hettinger (Miss Islington (bot)) in branch '3.8': bpo-38237: Shorter docstring (GH-16322) (GH-16323)

Thank you for the explanation Raymond and sorry for the disturbance. My mistake, I had not noticed the release manager's assent.

There seems to be a slight mixup with the built-in pow() function in Python 3.8.2. Currently, under it says:

Changed in version 3.9: Allow keyword arguments. Formerly, only positional arguments were supported.

I think this should be "Changed in version 3.8 ...", as pow(3, 4, mod=5) actually works in Python 3.8.2. The "What's New In Python 3.8" also needs to be changed accordingly.
In it says:

One use case for this notation is that it allows pure Python functions to fully emulate behaviors of existing C coded functions. For example, the built-in pow() function does not accept keyword arguments:

def pow(x, y, z=None, /):
    "Emulate the built-in pow() function"
    r = x ** y
    return r if z is None else r % z

This example can simply be dropped now.

> This example can simply be dropped now.

It could be, but it would be better to replace it with a different example that works. Any suggestions?

In my original PR I changed it to divmod:

It is hard to find a builtin which could be easily and clearly implemented in Python (meaning no use of dunder methods). Maybe sorted()?

def sorted(iterable, /, *, key=None, reverse=False):
    """Emulate the built-in sorted() function"""
    result = list(iterable)
    result.sort(key=key, reverse=reverse)
    return result

Although I think that this use case is less important. The primary goal of the feature is mentioned at the end of the section -- an easy way to implement functions which accept arbitrary keyword arguments.

divmod() should be implemented via __divmod__ and __rdivmod__. And they should be looked up on the type, not the instance. There is no easy way to express it in Python accurately. This would make the example too complex.

I don't think that matters. The example is supposed to just serve as an illustration; it doesn't need to encode the dunder dispatch semantics. The already existing example doesn't check for a __pow__. I'd picture it just as:

return x // y, x % y

Maybe a use case in this direction: int(x, base=10). Because if you type int(x='3', base=12) you get TypeError: 'x' is an invalid keyword argument for int(), and x needs to be positional-only to program this yourself.
New changeset 5a58c5280b8df4ca5d6a19892b24fff96e9ea868 by Ammar Askar in branch 'master': bpo-38237: Use divmod for positional arguments whatsnew example (GH-19171) New changeset 9c5c497ac167b843089553f6f62437d263382e97 by Miss Islington (bot) in branch '3.8': bpo-38237: Use divmod for positional arguments whatsnew example (GH-19171)
https://bugs.python.org/issue38237
Here is a picture of my EP200Mmd. I bought it on eBay for a pretty decent price. This monochromator is slightly different than the one Shahriar had. Instead of using fiber optics to guide the light into the input slit, this one uses an input collimator/coupler to guide the light onto the input optical slit. Because of the reduced field of view, this setup is slightly less prone to stray light interference from the environment.

The desired wavelength can be selected using a screw micrometer. According to the specifications, the wavelength can be adjusted from 185nm to 925nm, which covers most of the ultraviolet region, the visible light region (380 nm–700 nm) and all the way to the near infrared spectrum. You can see the micrometer mechanism above. In my particular unit, the upper range can only be adjusted to 903nm, but this should be sufficient for what I am doing. I might be able to re-calibrate it sometime in the future.

One thing I did notice during my initial testing of the monochromator was that the spectrum resolution was very poor, and the reason for this is that this monochromator was missing a crucial piece — the input slit. In place of the slit, there was a hole about a quarter inch in diameter, which is too big for any meaningful measurements. I am still pretty puzzled as to what happened to the slit as it did not appear that anyone had tampered with it before. Anyway, finding a proper slit for this monochromator could be quite challenging, so I decided to make one myself.

As it turned out, it was pretty easy to make a slit using a razor blade. The blades inside a razor are thin enough to be cut by scissors and can be positioned precisely. After I manipulated the pieces to get the desired opening width, I dabbed the area with a tiny bit of glue so that the slit would remain in place securely. The picture below shows the makeshift slit I made. Features in this picture are magnified by roughly seven times (click on the image to see the full sized picture).
The actual distance between the two horizontal mounting holes is 32 mm. So the opening of the slit is roughly 0.2 mm by estimation. According to the datasheet, this slit size should give us a resolution of 1 nm. Of course, if I do get hold of a proper slit assembly in the future I will swap this temporary one out.

Since I did not want to disassemble the monochromator like Shahriar did in his modification, I decided to use a different method to record the measurement results. The heart of the monochromator is a 1200 grooves/mm diffraction grating; it cannot be cleaned without suffering performance degradation, and I didn't want to risk damaging the delicate optics. So instead of fitting a custom shaft hub over the micrometer, which requires cutting an opening on one side of the case, I decided to use a belt to drive an optical encoder to record the micrometer readings.

The main benefit of this alternative method is that no modification to the monochromator is required. This is crucial since I do not have the equipment needed to design and build the custom mounting mechanism. Also, there is always the risk of damaging the sensitive grating and other optics when opening up a precision instrument like this.

Of course, there are challenges to this approach as well. Because the shaft moves laterally, the belt assembly needs to be moved along when scanning through the wavelength range. Also, the belt cannot exert too much force on the micrometer, otherwise we run the risk of damaging it. Given all this, I decided to drive the belt manually. In this way I can apply just enough force so that the belt does not slip, and I can also adjust the horizontal alignment manually on the go. While this might sound a bit difficult to do, it is actually quite easy to accomplish in practice. You can see how this is done in my video later.

Below is a picture of the optical encoder assembly I built. The optical encoder used here was taken from old HP equipment.
The belt pulley wheel used was taken from a disassembled laser printer. The actual size of the wheel does not matter much. I glued a gear on the other side of the pulley wheel so that the belt would stay in place when rotating. Given this configuration, I get roughly 10 pulses per nanometer, which is more than adequate for the 1nm resolution we get from the slit size chosen. If you use a smaller wheel, you can achieve even higher resolutions.

The encoder readings are sent to a PC via an Arduino. Below is a picture of the simple shield I made. The 4-pin header connects to the optical encoder. Two of the encoder pins are routed to Arduino digital pin 5 and pin 6 (PWM pins) and the other two leads are for ground and Vcc respectively. I used an RCA jack to take the analog signal from the monochromator detector output. Since the specified output signal from the monochromator goes from 0V to 10V but can go as high as 15V in overload situations, I included a voltage divider to divide the input voltage by three at the input in order for the analog input to stay within range. Power comes in from a modified Global Specialties 1301A power supply (±15V supply voltages and the 2-10V photomultiplier control voltage) via a DB-9 connector.

The picture below shows the typical experiment setup. To make manual adjustment slightly easier, each run starts at 200nm and ends when the micrometer hits the upper limit (in this case 903nm). This way, we do not need to worry about the turn ratio between the optical encoder wheel and the micrometer, as the pulses recorded always correspond to the 703nm travel range. Each time the optical encoder value is read, we also read back the analog input voltage, and this voltage along with the encoder value is sent over the serial port in a space-separated format. A button on Arduino digital pin 4 is used to send a special stop signal (9999 9999) manually, so we can let the program that is listening on the serial port know that the run has stopped.
You can examine the Arduino code towards the end and see how this is done. Since the encoder value is only reset to 0 inside setup(), you will need to press the reset button prior to taking a measurement each time (that's what the hole on the shield is for). I used MATLAB to plot the received data from the serial port. You can use pretty much any programming language to receive and plot your data, however. The MATLAB code is also included towards the end.

Now, let us take a look at the spectrum of different light sources (click to see larger pictures). The first measurement shows the spectrum of a red helium-neon laser. The characteristic spectrum is centered at 632.8nm. For the most accurate measurements, the monochromator must start at 200nm precisely each time and there should be enough tension on the belt to ensure that it does not slip during the measurement.

The next couple of spectra are taken from a couple of laser diodes I have on hand. The spectrum on the left is that of a red laser diode. This is a common GaInP laser with a wavelength of 635 nm. The picture to the right shows the spectrum from an old laser pointer. I got this laser pointer more than 20 years ago. It uses an AlGaInP laser diode. The AlGaInP laser has a peak at 670 nm. Due to its longer wavelength the light appears in deeper red. Red laser pointers nowadays mostly use GaInP laser diodes as they are cheaper and much brighter than the AlGaInP ones.

These two pictures are the spectra of a violet LED (left) and a blue LED (right):

And here we have the spectra of a green LED (left) and a yellow LED (right):

The pictures below are the spectrum of a red LED (left) and the spectrum of an infrared LED (right). Because the sensitivity at the infrared range is significantly lower for this monochromator (the ADC reading is only a tenth of the full scale), the spectrum peaks picked up from the lower wavelengths are likely from the environment and not from the infrared LED itself.
Here is the spectrum of a white LED. Since white light is essentially light from a blue LED down-converted by the phosphor coating, you can see the characteristic blue LED spectrum (centered at roughly 450nm) and a much broader spectrum covering the entire visible light range from the light converted via the phosphor coating.

I then tested the light spectrum of two flashlights with incandescent light bulbs. The result is somewhat interesting, as you can see below. Intuitively I was expecting a Gaussian-shaped spectrum, but as you can see from the spectra below, each has a peak in the 780 nm to 800 nm near-infrared range.

Next let's take a look at the spectrum of a neon bulb. These gas discharge indicators were quite popular in test gear back in the 70's (this one was from the power indicator on my HP 6113A power supply). Different spectral lines can be seen clearly.

Finally, here is what the light spectrum looks like in my lab. Almost all of my lights are CFLs and the measured spectrum is in line with the characteristics of fluorescent bulbs.
Arduino Code

#include <Encoder.h>
#include <SPI.h>

#define STOP_BTN_PIN 4
#define ENC_PIN_1 5
#define ENC_PIN_2 6

long data[2];
int adcOut;
int buttonState;
int lastButtonState = LOW;

Encoder enc(ENC_PIN_1, ENC_PIN_2);
long oldPos = -9999;
long newPos;

void setup() {
    pinMode(STOP_BTN_PIN, INPUT);
    digitalWrite(STOP_BTN_PIN, HIGH);
    Serial.begin(115200);
}

void loop() {
    newPos = enc.read();
    adcOut = analogRead(A0);
    data[0] = newPos;
    data[1] = adcOut;

    if (newPos != oldPos) {
        oldPos = newPos;
        Serial.print(newPos);
        Serial.print(" ");
        Serial.println(adcOut);
    }

    if (digitalRead(STOP_BTN_PIN) == LOW) {
        Serial.println("9999 9999");
    }
}

MATLAB Code

startWL = 200;
stopWL = 903;

delete(instrfindall)
serialPort = serial('/dev/ttyUSB0', 'BaudRate', 115200);
serialPort.TimeOut = 60;
fopen(serialPort);

while 1
    out = fscanf(serialPort)
    if strcmp(strtrim(out), '9999 9999')
        break
    end
    [x y] = strread(out)
    if x >= 0
        w(x + 1) = x;
        v(x + 1) = y;
    end
end

w = w ./ length(w) * (stopWL - startWL) + startWL;
plot(w, v);
axis([min(w) max(w) 0 max(v) + 1]);
grid on;
shg;
set(gca, 'xtick', [200:50:903]);

fclose(serialPort);
delete(serialPort);
clear serialPort;

Comments:

Thanks for posting this! Really interesting. I'm trying to do something similar, but am having trouble powering my monochromator correctly. Would you be able to share a schematic of how you powered the various pins? I have an adjustable 0-30V power supply, but am a bit confused about the +/-15V inputs and how to adjust gain. Any help much appreciated! Kind regards, Nick

Take a look at page 4 of this document:, pin 7/9 are common ground, and you will need a dual power supply to power pin 2 (-15V) and pin 3 (+15V) with respect to ground. A programming voltage on pin 1 between 2 to 10V can be used to adjust the gain.
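The MATLAB snippet above rescales the encoder index linearly into wavelengths between 200 nm and 903 nm. As the post notes, any language can do this step; below is a hedged Python sketch of the same mapping (the function name and defaults are mine, not from the original post):

```python
def counts_to_wavelength(count, total_counts, start_wl=200.0, stop_wl=903.0):
    """Map an encoder count onto a wavelength in nm.

    Mirrors the MATLAB rescaling:
        w = w ./ length(w) * (stopWL - startWL) + startWL
    """
    return count / total_counts * (stop_wl - start_wl) + start_wl

# First sample maps to the scan start, last sample to the scan end.
print(counts_to_wavelength(0, 1000))     # 200.0
print(counts_to_wavelength(1000, 1000))  # 903.0
```

The same function can then be paired with any serial-reading and plotting library of your choice.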
Introduction

This article describes a very specific use case of MongoDB, where we want to query for documents belonging to certain categories and get the most recent n documents first. The two key conditions are therefore categories and recency.

Categories

This means that your documents belong to one of a set of distinct values. Imagine news articles that can belong to a category like "Politics", "Technology", "Sports", etc. In this example we won't consider documents with multiple categories (tags), but thanks to MongoDB's multi-key indexes, the same principles can be extended to those cases as well.

var result = db.documents
    .find({category: {"$in": ["Business", "Politics", "Sports"]}})

Recency

We are interested in "the most recent" n documents. This implies some sorting order, ensured by an index on a timestamp-like field (ts) and traditionally a sort and limit on the result set:

var n = 10000;
var result = db.documents
    .find({category: {"$in": ["Business", "Politics", "Sports"]}})
    .sort({ts: -1})
    .limit(n)

I say "traditionally" because sorting is often used for this kind of problem, and while it certainly guarantees the most recent n documents, it may not be the optimal solution. It really depends on what happens to these documents next. If they need to be processed or presented as an ordered list, then yes, this is really the only solution. It is, however, a stronger requirement than the one we stated in the beginning. There is a significant difference between "the most recent n" and "the most recent n in sorted order", one that we're going to exploit in this article.

Perhaps my task is to batch-process a queue of jobs. I can process 1,000 jobs simultaneously, and the relative order of the jobs doesn't matter, as long as I get the most recent 1,000 jobs in the queue.

What are the options? Let's have a look at how this particular problem can be addressed in MongoDB. For any kind of efficient document retrieval, we need to define an index on the fields in question.
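To make the semantics of the find-sort-limit pipeline above concrete, here is a small pure-Python simulation (illustrative only; the field names mirror the article, this is not a real driver call):

```python
# Documents with a strictly increasing timestamp and one category each
docs = [{"ts": i, "category": c} for i, c in enumerate("ABCABCABC")]
wanted = {"A", "B"}

# Equivalent of find({category: {$in: [...]}}).sort({ts: -1}).limit(3)
result = sorted((d for d in docs if d["category"] in wanted),
                key=lambda d: d["ts"], reverse=True)[:3]

print([d["ts"] for d in result])  # [7, 6, 4]
```

Note that the filter runs over every document here; the rest of the article is about how an index lets MongoDB avoid exactly that.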
But in what order should the fields be arranged?

Index on {category: 1, ts: -1}

This index first branches into the categories, then sorts the documents by timestamp in descending order. This seems like a good match, as we can quickly determine the documents matching the categories (our first condition). However, the documents are sorted by timestamp per individual $in branch, not globally. Therefore, if we request documents from several categories, the partially sorted lists have to be merged. In theory, this can be done in linear time by always picking the most recent document from each of the queues (often this is referred to as a merge sort, but it is really just the merge step of the merge sort algorithm). MongoDB does not currently do this (as of version 2.4.3), but the improvement is planned for the upcoming 2.6 release (SERVER-3310). Other optimizations are already in place, like limiting each of the branches to the total requested limit (SERVER-5063). For small limit values, the current implementation is still quite fast. But what if we want the top 100,000 documents from 10 different categories? MongoDB would have to sort up to 1 million documents in memory and then return the top 100,000. With the current in-memory sort restriction of 32 MB, this is often simply impossible, and MongoDB will complain and tell you to use a different index or ask for fewer documents.

Index on {ts: -1, category: 1}

The alternative is the reversed index: first on timestamp, descending, then on categories. The index tree diagram would look something like this:

Note that we use simple integers as timestamp values here, but this could be any sortable value, like dates or epoch numbers. Now the documents are sorted globally, so any query can be returned in sorted order without an in-memory sort. But finding the matching categories requires scanning each of the index entries.
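The merge step described above, repeatedly picking the most recent head from each per-category queue, is exactly a k-way merge. Here is a Python sketch using the standard library (heapq.merge accepts reverse=True on Python 3.5+); the sample data is mine:

```python
import heapq

# Three $in branches, each already sorted by timestamp descending
branches = [[9, 6, 3], [8, 5, 2], [7, 4, 1]]

# Linear-time k-way merge preserves the global descending order
merged = list(heapq.merge(*branches, reverse=True))
top3 = merged[:3]
print(top3)  # [9, 8, 7]
```

This is the kind of server-side optimization the article says was planned for MongoDB 2.6 (SERVER-3310): merging the branch results instead of fully re-sorting them.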
Because the timestamp fields are all distinct, each entry links to a single document with a single category entry. To retrieve the most recent n documents, MongoDB scans the index entries from left to right, filtering out any document that doesn't match the requested categories, until it has reached the limit. This scan runs in linear time, but how many documents need to be scanned? In the previous approach, we needed to look at C · L documents at most (where C is the number of categories and L is the limit), but they needed to be sorted in memory. Here, it depends on the distribution of the data. In the worst case, there aren't enough matching documents, and every single document in the collection needs to be inspected. Not only do these collection scans take a long time, they also mess up the working set in memory if you have more data than available RAM.

Can we do better? Remember that our initial requirement was less strict than what we attempted with the last two solutions. We didn't ask for sorted results, just for the most recent ones. One way to achieve this is by sorting and limiting. But there's another possibility that avoids the expensive sort:

var total = db.documents.count();
var k = 10000;
var results = db.documents
    .find({
        category: {"$in": ["Business", "Politics", "Sports"]},
        ts: {"$gt": total - k}})
    .hint({category: 1, ts: -1})

Instead of sorting and limiting the results, we use a second query condition on the ts field: the timestamp has to be greater than total - k. Basically, we limit the number of documents before we match the categories, not after, as we did in the previous solutions. We also force the query to use the index on category first.

How many documents will this query return? 10,000? Most likely not, unless the last 10,000 documents by chance all fall into the requested categories. That's unlikely, and for another find on different categories certainly not the case.
Let's call the number of returned documents r, which is most likely not equal to n, the number of documents we wanted. How different r and n are depends on the distribution of documents over the categories, and on our choice of k, the document limit applied before we filter on the categories. The upside of this query is that it is very fast: matching the categories is simply a matter of branching into each of the category b-tree children, and limiting the results means setting a lower bound on the range. So while this query didn't really fit the brief of returning the most recent n matching documents, at least it ran very fast :-)

Let's recap: we have a fast way of checking the last k documents and filtering out the ones that match the categories. The query returns the most recent r matching documents, which may be different from the desired number n:

- k: the number of documents to check for a category match
- r: the number of matching documents returned after inspecting k documents (r ≤ k)
- n: the desired number of matching documents

We'd like to change the query so that r is closer to, ideally identical to, n. Given a fixed distribution of data over the categories, there is only one variable we can adjust, and that is k.

Let's say we want to return the most recent n=5 documents. We queried with k=5 documents (the lower bound) and got r=3 documents back. That's not enough, so we repeat the query with a higher k=6. This time we get 4 documents back, which is closer to n but doesn't quite reach it. A third query with k=7 does the trick, and we now have the most recent 5 documents without relying on a .sort() in the query. The graphic below shows this example for k=5, 6, 7. Each time we push the lower-bound bracket (calculated as total - k) a little further to the right on each of the category branches, until the final result returns the desired number of documents.
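The widening of k can be simulated in a few lines of Python. This is a toy model, not MongoDB code: the condition ts > total - k stands in for the index-bounded range query, and all names are mine:

```python
def most_recent(docs, categories, n):
    """Grow k until at least n of the most recent k docs match a category."""
    total = len(docs)  # assumes ts values are 0..total-1, one per doc
    k = n
    matches = []
    while k <= total:
        matches = [(ts, c) for ts, c in docs
                   if c in categories and ts > total - k]
        if len(matches) >= n:
            return matches
        k += 1
    return matches

# Nine docs cycling through categories A, B, C
docs = [(ts, "ABC"[ts % 3]) for ts in range(9)]
r = most_recent(docs, {"A", "B"}, 5)
print(sorted(ts for ts, _ in r))  # [1, 3, 4, 6, 7]
```

A real implementation would of course grow k more aggressively, which is exactly the next optimization the article discusses.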
With 3 fast queries (4 if we count the initial count), we have now found the most recent 5 documents that match the given categories, without using a sort or having to scan through a lot of documents.

Optimizing the Iterative Algorithm

There are a few more optimizations we can make use of. In the example above we increased k linearly from 5 to 6 to 7, but we don't have to try every single k. Instead, we can use a binary search on k until we reach the correct number of matching documents, which only requires log(n) steps in the worst case.

The second optimization is more of a suggestion: relax the conditions a little further. Perhaps you don't really need exactly the n most recent documents. Then you can exploit the fact that with binary search you can trade some precision for a bit more performance. If your use case allows a small error margin on n, you can iterate until the number of returned documents lies within the error bounds. This can have a dramatic effect on performance, as many iteration steps would otherwise be spent getting to exactly n matching documents. The algorithm below lets you specify a min and max value, and it stops iterating when the number of matching documents lies within that range. Even an error margin of 5% can reduce the number of necessary iterations significantly. You can still specify a .limit(n) on the cursor to get at most n documents. This allows for flexible use cases, for example: return exactly 100,000 documents, where these documents are at least among the most recent 105,000 documents (a 5% error margin). For parallel queue processing this may be an acceptable requirement, which can be answered much more quickly than the sorted queries.

Results

In this test I used 5 million documents separated into 100 categories, which isn't that much when you think of them as actors for a movie database, or product categories for an online shop, for example.
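The binary search on k with an error margin can be sketched independently of MongoDB. Here match_count stands in for the fast range-limited count query; the function and parameter names are my own:

```python
def find_k(match_count, n_min, n_max, total):
    """Binary-search a window size k whose match count lands in [n_min, n_max].

    match_count(k) must be non-decreasing in k (larger windows can only
    match at least as many documents).
    """
    lo, hi = 0, total
    while lo < hi:
        mid = (lo + hi) // 2
        c = match_count(mid)
        if c < n_min:
            lo = mid + 1
        elif c > n_max:
            hi = mid
        else:
            return mid
    return lo

# Toy distribution: every second document matches the categories.
# We accept anything between 5 and 6 matches (a small error margin).
k = find_k(lambda k: k // 2, 5, 6, total=100)
print(k, k // 2)  # 12 6
```

With an exact target (n_min == n_max) the search takes more steps; allowing the margin lets it stop as soon as any acceptable window is found, which is the trade-off the article describes.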
I ran both the sorted and the iterative version for different values of n: 100, 1000, 10000, 25000, 50000, 75000, 100000. The error margin for r was 10% (so anything between n and 1.1 · n was acceptable). Here are the results:

The x-axis is the number of most recent documents n to retrieve; the y-axis shows how long each query took in milliseconds. The numbers above each measurement point show how many documents (r) were actually retrieved. For 100,000 documents, the sorted query was roughly 8x slower than the iterative one. These results depend on your data distribution and the number of categories, though. With fewer categories, the difference may not be as big. As always, test your queries in a staging/QA environment.

Implementation

Here is a JavaScript function that finds the n most recent documents matching the categories given in query, where n will be between min and max. query needs to be of the form: {cat: {$in: [1, 55, 88]}}

function findRecent(collection, query, min, max) {
    var total = db[collection].count();
    var step, marker, cursor, c, last_marker;

    // Phase 1: grow the window (doubling the step) until at least
    // min documents match
    step = min;
    marker = total;
    c = 0;
    while (c < min) {
        last_marker = marker;
        marker = marker - step;
        query['ts'] = {'$gt': marker};
        cursor = db[collection].find(query).hint({cat: 1, ts: -1});
        c = cursor.count();
        step = step * 2;
    }

    if (c < max) {
        return cursor;
    }

    // Phase 2: binary search between the last two markers until the
    // match count lies within [min, max]
    step = (last_marker - marker) / 2;
    marker = marker + step;
    while (true) {
        query['ts'] = {'$gt': marker};
        cursor = db[collection].find(query).hint({cat: 1, ts: -1});
        c = cursor.count();
        step = step / 2;
        if (c > max) {
            marker = marker + step;
        } else if (c < min) {
            marker = marker - step;
        } else {
            return cursor;
        }
    }
}

And if you want to reproduce these results, I used this little Python script to fill the database (using the numbers 0-99 as categories):

from pymongo import MongoClient, ASCENDING, DESCENDING
from random import choice

categories = range(100)

mc = MongoClient("localhost:27017")
db = mc.test
db.docs.drop()

interval = 1000
docs = []
for i in xrange(5000000):
    doc = {"cat": choice(categories), "ts": i}
    docs.append(doc)
    if i % interval == interval - 1:
        print i
        db.docs.insert(docs, w=0)
        docs = []
gRPC: an introduction

gRPC is an open-source, modern RPC framework initially developed at Google. It uses protocol buffers as its interface description language; protobuf is a mechanism for serializing structured data. You just define your services and their data structures in the proto file, and gRPC automatically generates client and server stubs for your service in a variety of languages and platforms. Using protobuf allows us to communicate using binary instead of JSON, which makes gRPC much faster and more reliable. Some of the other key features of gRPC are bidirectional streaming and flow control, blocking or non-blocking bindings, and pluggable authentication. gRPC uses HTTP/2, which supports multiplexing: clients and servers can both initiate multiple streams on a single underlying TCP connection. You can read more about gRPC here.

gRPC-web

gRPC-Web is a JavaScript library with which we can talk to a gRPC service directly from the web browser. gRPC-Web clients connect to gRPC services via a special gateway proxy (Envoy proxy), which in our case is going to be a Docker service running on the same server machine, bridging gRPC (HTTP/2) with browser communication (HTTP/1.1). This was the game changer, because initially we could use gRPC only for communication between services or microservices, while the client could only use REST API calls to access the data. Now, by using gRPC-Web, we can make use of the power of gRPC throughout our app and eliminate REST.

Why gRPC is better than REST

The major differences between REST and gRPC are:

- Payload type: REST uses JSON, gRPC uses Protobuf
- Transfer protocol: REST uses HTTP/1.1, gRPC uses HTTP/2

Since we are using Protobuf in gRPC, we don't have to care about verbs (GET, PUT), headers, etc. It also reduces the serialization code we would otherwise have to write for all the data models; the stubs generated by the gRPC framework take care of this.
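Protobuf's actual wire format is more involved (varints, tag bytes, and so on), but the size advantage of a binary encoding over JSON is easy to see with a stdlib-only sketch. Note this fixed struct layout is only an illustration, not the protobuf format itself:

```python
import json
import struct

todo = {"id": 7, "done": True}

# JSON carries field names and punctuation in every message
json_bytes = json.dumps(todo).encode("utf-8")

# A fixed binary layout: 4-byte unsigned int + 1-byte bool
packed = struct.pack("<I?", todo["id"], todo["done"])

print(len(json_bytes), len(packed))  # 23 5
```

Because protobuf messages carry numeric tags instead of field names, they stay compact in a similar way, and the generated stubs handle all of the encoding and decoding for you.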
Since we are using HTTP/2 in gRPC, we can stream both requests and responses, and we get rid of latency issues, head-of-line blocking, and the complexity of establishing TCP connections.

Required tools and software

- Protoc v3.6.1 - Protobuf compiler to generate client and server stubs.
- Go v1.11 - Our server is going to be built using Go.
- Node.js - To build the Vue.js frontend app.
- Docker - To run the Envoy proxy.

Folder structure

An overview of the topics to be covered:

1. Creating a proto file
2. Creating server stubs and writing gRPC service handlers
3. Creating the gRPC service
4. Creating the Envoy proxy service
5. Creating client stubs and the client application

1. Proto file

Okay, now let's jump into the code. The proto file is the heart of our gRPC app: using this file, the gRPC framework generates the client and server stubs. We define our data models and the services which are going to consume those data models. This file will be placed inside the todo folder at the root of our project.

The first line of the file specifies the version of the protocol buffers language we are going to use; the package name specified in the second line will also be used in the generated Go file. In our todoService we have three RPC methods, addTodo, deleteTodo, and getTodos, with their request types as arguments and response types as the return type of each RPC method. On each message type field we specify tags like =1, =2, which are unique tags used at the time of encoding and decoding. The repeated keyword means that the field can be repeated any number of times.

2. Generate the server stub file

The next step after creating our proto file is to generate the server stubs, using which we will create our gRPC server.
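Before generating the stubs, it is worth seeing what those numeric tags become on the wire. In the protobuf encoding, each field is prefixed by a key computed as (field_number << 3) | wire_type; a quick Python illustration:

```python
def field_key(field_number, wire_type):
    # protobuf wire format: the low 3 bits carry the wire type,
    # the remaining bits carry the field number
    return (field_number << 3) | wire_type

# Field 1 with wire type 2 (length-delimited, e.g. a string)
print(hex(field_key(1, 2)))  # 0xa
```

This is why the tags must be unique per message: the receiver recovers both the field number and how to parse the value from this single key.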
We are going to use protoc to generate the stub files. Use the command below from the root of the project:

protoc -I todo/ todo/todo.proto --go_out=plugins=grpc:todo

In the above command, we specify the output folder to be todo/ and the input file to be todo/todo.proto, along with the plugin name and the package name for the generated stub file. After executing the command, you can find a new file named todo.pb.go inside the todo folder.

Now we have to write handler methods for all the RPC methods specified in the proto file; we will create a new file, handler.go, inside the same todo folder. For the sake of simplicity, I am not going to use any database for storing and retrieving the todos. Since we are in the same generated todo package, I can use the request and response data types from the generated stub files. All our handler methods are tied to the server struct.

In the addTodo handler function, I use a UUID package to generate a unique ID for every todo, create a todo object, and append it to the Todos list in the server struct. In the getTodos handler function, I just return the Todos list from the server struct. In the deleteTodo handler function, I do a find-and-delete operation using the todo id and update the Todos list in the server struct.

3. Hook up the gRPC server

Now we have to hook up all the handlers and start the gRPC server; we are going to create a new file, server.go, in the root of our project. In this file, we create a listener at port 14586, an empty todo server instance, and a new gRPC server. We use RegisterTodoService to register our todo service with the newly created gRPC server, and then serve it.

To run the above file, use go run server.go from the root of the project, which will start the gRPC server.

4. Envoy proxy setup

The Envoy proxy is going to be a Docker service sitting in between our server and client apps; below are the Envoy proxy Dockerfile and config files. Our todo gRPC service will be running at port 14586, and Envoy will intercept HTTP/1.1 traffic at 8080 and redirect it to 14586 as HTTP/2 (gRPC).

To build the Docker container:

sudo -E docker build -t envoy:v1 .

To start the Envoy proxy, run the Docker container using:

sudo docker run -p 8080:8080 --net=host envoy:v1

5. Vue.js frontend app

Now the only missing part is the client. We are going to use the Vue.js framework to create our client web application; for the sake of simplicity, we will only look at the methods responsible for adding and deleting the todos.

Create a Vue.js project using vue-cli:

vue create todo-client

This creates a new folder named todo-client in the root of our project. Next we have to create the client stubs, using the command below:

protoc --proto_path=todo --js_out=import_style=commonjs,binary:todo-client/src/ --grpc-web_out=import_style=commonjs,mode=grpcwebtext:todo-client/src/ todo/todo.proto

This command will create two files, todo_pb.js and todo_grpc_web_pb.js, in the src folder. For the sake of simplicity, I am only going to cover the parts where the gRPC service client is used:

import { addTodoParams, getTodoParams, deleteTodoParams } from "./todo_pb";
import { todoServiceClient } from "./todo_grpc_web_pb";

In the todo component of our client app, import all the required data types from todo_pb.js and the client from todo_grpc_web_pb.js. We then create a new client instance using todoServiceClient, using the localhost URL with the port on which we configured our Envoy proxy to listen as the server URL, and save the client instance. Above are the methods hooked up to the component's add-todo button click and delete-todo icon click.
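The three handlers described earlier amount to simple list operations on the in-memory Todos slice. Here is a hedged Python analogue of the same logic (the names are mine; the Go code in handler.go is the authoritative version):

```python
import uuid

class TodoStore:
    def __init__(self):
        self.todos = []

    def add_todo(self, text):
        # Mirror of addTodo: generate a unique id and append
        todo = {"id": str(uuid.uuid4()), "text": text}
        self.todos.append(todo)
        return todo

    def get_todos(self):
        # Mirror of getTodos: return the current list
        return list(self.todos)

    def delete_todo(self, todo_id):
        # Mirror of deleteTodo: find-and-delete by id
        self.todos = [t for t in self.todos if t["id"] != todo_id]

store = TodoStore()
t = store.add_todo("write blog post")
store.add_todo("review PR")
store.delete_todo(t["id"])
print([x["text"] for x in store.get_todos()])  # ['review PR']
```

Keeping the state in a plain list is fine for a demo like this one; a real service would swap the list for a database without changing the handler interface.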
We just use our client stubs to execute our gRPC services, and use the stub data types with their setters and getters to handle the data sent to and received from the server.

Conclusion

Thank you for taking the time to read this till the end 😁. If you have any queries regarding this, or something I should add, correct, or remove, feel free to comment below. If you really enjoyed reading it, don't forget to press the clap icon. You can find the full source code at this repo, and follow me on GitHub and LinkedIn.

Originally published at Medium

Discussion (1)

Looks like the proto file is missing for context; here it is from the Medium link.
Created on 2007-12-16 01:19 by jgatkinsn, last changed 2010-04-27 22:56 by loewis. This issue is now closed.

I found this problem while using PySNMP and trying to load a custom MIB converted to Python code. Internally, PySNMP uses execfile() to load the Python MIBs, I assume so they won't be put into a separate namespace. On one particular MIB, which I cannot modify because it's a company product MIB, this error about not being able to take more than 255 arguments occurs. The problem line is quite long, but surely in this day and age of 32-bit to 64-bit machines this 255 constraint could be done away with in the execfile function. I'm assuming the 255 constraint has something to do with an unsigned 8-bit number.

Can you please be more specific? What code exactly are you executing, and what is the exact error message that you get? In the entire Python source code, the error message "Execfile unable to take arguments beyond 255!" is never produced. Very few error messages in Python include an exclamation mark at all.

Error message from the IPython console:

In [28]: mibBuilder2 = builder.MibBuilder().loadModules('ADTRAN-TC')
---------------------------------------------------------------------------
<class 'pysnmp.smi.error.SmiError'>       Traceback (most recent call last)
C:\Documents and Settings\Jack Atkinson\<ipython console> in <module>()
c:\python25\lib\site-packages\pysnmp\v4\smi\builder.py in loadModules(self, *modNames)
     80                 del self.__modPathsSeen[modPath]
     81                 raise error.SmiError(
---> 82                     'MIB module \"%s\" load error: %s' % (modPath, why)
     83                 )
     84
<class 'pysnmp.smi.error.SmiError'>: MIB module "c:\python25\lib\site-packages\pysnmp\v4\smi\mibs\ADTRAN-TC.py" load error: more than 255 arguments (ADTRAN-TC.py, line 33)

Here's the code that loads it:

try:
    execfile(modPath, g)
except StandardError, why:
    del self.__modPathsSeen[modPath]
    raise error.SmiError(
        'MIB module \"%s\" load error: %s' % (modPath, why)
    )

This comes from line 1845 in ast.c.
It has nothing to do with execfile(); it's just a constraint on the generated bytecode. Fixing this would be quite complex, and referring to "this day and age" isn't going to raise its priority. However, I'd be happy to accept a patch that does away with this limit or at least raises it considerably. While you can't modify the MIB, I'm sure it would be possible to change the generator code that compiles the MIB into Python source code, to work around the restrictions in the bytecode.

I don't have much to contribute other than a simple test to reproduce the issue:

>>> code = """
... def f(%s):
...     pass
... """ % ','.join('a%d' % i for i in range(300))
>>> exec(code)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<string>", line 2
SyntaxError: more than 255 arguments

Closing it as "won't fix": nobody is really interested in working on the problem.
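As the maintainer suggests, the MIB compiler itself can work around the bytecode limit. One common approach for generated code is to emit a *args signature and unpack a sequence at the call site, since a single starred argument does not count against the 255-argument cap (CPython removed the limit altogether in 3.7). A minimal sketch:

```python
# Generated code can accept any number of values via *args
code = "def f(*args):\n    return len(args)\n"
namespace = {}
exec(code, namespace)

# Unpacking one sequence avoids a 300-argument call site
print(namespace["f"](*range(300)))  # 300
```

A generator could equally pass a single tuple or dict argument; either way the fixed-arity signature that triggers the SyntaxError never appears in the generated source.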
Hedge funds and mutual funds are both "pooled" vehicles, but there are more differences than similarities. For instance, a mutual fund is registered with the SEC and can be sold to an unlimited number of investors. Most hedge funds are not registered and can only be sold to carefully defined sophisticated investors; usually a hedge fund will have a maximum of either 100 or 500 investors. If the manager changes his strategy, he will be accused of "style drift".

Paperwork - a mutual fund is offered via a prospectus; a hedge fund is offered via a private placement memorandum.

Liquidity - a mutual fund often offers daily liquidity (you can withdraw at any time); a hedge fund usually has some sort of "lockup" provision, so you can only get your money out periodically.

Absolute vs. Relative - a hedge fund aims for absolute return (it wants to produce positive returns regardless of what the market is doing); a mutual fund is usually managed relative to an index benchmark and is judged on its variance from that benchmark.

Self-Investment - a hedge fund manager is expected to put some of his own capital at risk in the strategy; if he does not, it can be interpreted as a bad sign. A mutual fund manager does not face this same expectation.

42. Is investment banking a good career for someone who is afraid of taking risks? Why or why not?

43. Should financial institutions be regulated in order to reduce their risk? Offer at least one argument for regulation and one argument against regulation.

Regulation may be able to reduce failures of financial institutions, which may stabilize the financial system. The flow of funds into financial institutions will be larger if the people who provide the funds can trust that the financial institutions will not fail. However, regulation can also restrict competition. In some cases, it results in subsidies to financial institutions that are performing poorly.
Thus, regulation can prevent firms from operating efficiently.

44. When a securities firm serves as an underwriter for an IPO, is the firm working for the issuer or the institutional investors that may purchase shares? Explain the dilemma.

A securities firm attempts to satisfy the issuer of stock by ensuring that the price is sufficiently high, but it must also ensure that it can place all the shares. It also wants to satisfy the investors who invest in the IPO. If the investors incur losses because they paid too much for the shares, they may not want to purchase any more stock from that underwriter in the future.

TUTORIAL 5

45. Discuss alternative measures of financial leverage. Should the market value of equity be used or the book value? Is it better to use the market value of debt or the book value? How should you treat off-balance-sheet obligations such as pension liabilities? How would you treat preferred stock?

ANSWER NOT YET FOUND.

46. In general, the principles of cash cycle time management call for a firm to shorten (minimise) the time it takes to collect receivables and lengthen (maximise) the time it takes to pay amounts it owes to suppliers. Explain what tradeoffs need to be managed if the firm offers discounts to customers who pay early, and the firm also foregoes discounts offered by its suppliers by extending the time until it pays invoices.
Or is there some cases which at a persent time calls for the retention of cash for as long as possible.
13 October 2004 18:02 [Source: ICIS news]

LONDON (CNI)--EU member states voted on Wednesday to limit the permitted levels of polycyclic aromatic hydrocarbons (PAHs) in foods, following a proposal originally submitted by the European Commission (EC).

From next April, levels of benzo-a-pyrene in food should be less than 5 parts per billion (ppb) for shellfish and smoked foods, less than 2 ppb for fish and oil products, and less than 1 ppb in baby foods. Benzo-a-pyrene is used as a measure of total PAH content.

PAHs are combustion products that can contaminate foods through smoking, heating or drying processes. They can also be present in vegetable oils and animal fats, for example in fish that have been contaminated by oil spills.

Control of PAHs in food has been in the Commission's sights ever since an incident in July 2001, when food products made from Spanish olive oil had to be withdrawn from markets across the EU after being found to be contaminated with high levels of PAHs. The Commission's scientific committee on food decided in a report of December 2002 that PAHs are genotoxic carcinogens.

However, the decision to introduce EU-wide maximum levels for PAHs is more commercial than health related. Several member states have already adopted them while others have not, and the Commission is worried that market unity across the EU will be endangered.

Today's decision on PAH limits was taken by the Standing Committee on the Food Chain and Animal Health, a body chaired by the Commission but including representatives of the member states. The new regulation will now be referred back to the Commission for formal adoption, and should come into force next
Eclipse Community Forums

JPA 2 inserting Parent table when trying to update child table field (posted by H R, 2012-11-11)

Hi, I am facing a problem when I am trying to insert/update fields in the child table.

In my parent table I have the columns:

PCOL1
PCOL2
PCOL3
...
LAST_UPDATED_BY
LAST_UPDATED_ON

and in my child table I have the columns:

CCOL1
CCOL2
CCOL3
...
LAST_UPDATED_BY
LAST_UPDATED_ON

From my application, when I try to persist/merge the child entities

public class Child extends Parent

the values are inserted/updated into the parent table's LAST_UPDATED_BY / LAST_UPDATED_ON instead of the child table's LAST_UPDATED_BY / LAST_UPDATED_ON. Please help to resolve this issue. Note: I cannot change the names of the fields, as the same entities are used by other applications. Thanks.

Re: JPA 2 inserting Parent table when trying to update child table field (posted by Chris Delahunt, 2012-11-12)

You have not shown any of the mappings or the inheritance setup from JPA. The table structure you've shown looks like the child table duplicates the parent table, so my guess is you have the mappings in a parent class and no additional fields in the child, but marked the inheritance to use a joined-table strategy. Joined-table inheritance means a row for the child entity must exist in both the parent and child tables, so that probably isn't what you want. You might want a table-per-class strategy, but you will need to override the field names used in the child, since they do not match what is mapped in the parent. You might be better off avoiding inheritance and having the classes treated as separate, independent entities with their own mappings. See for info on using @AttributeOverride to change the field names used in the child entity.
http://www.eclipse.org/forums/feed.php?mode=m&th=435520&basic=1
CC-MAIN-2014-52
refinedweb
309
60.45
Source JythonBook / OpsExpressPF.rst

Chapter 3: Operators, Expressions, and Program Flow

The focus of this chapter is an in-depth look at each of the ways that we can evaluate code, and write meaningful blocks of conditional logic. We'll cover the details of many operators that can be used in Python expressions. This chapter will also cover some topics that have already been discussed in meaningful detail, such as the looping constructs and some basic program flow.

We'll begin by discussing details of expressions. If you'll remember from Chapter 1, an expression is a piece of code that evaluates to produce a value. We have already seen some expressions in use while reading through the previous chapters. In this chapter, we'll focus more on the internals of operators used to create expressions, and also on the different types of expressions that we can use. This chapter will go into further detail on how we can define blocks of code for looping and conditionals, and on how you write and evaluate mathematical and Boolean expressions. And last but not least, we'll discuss how you can use augmented assignment operations to combine two or more operations into one.

Types of Expressions

An expression in Python is a piece of code that produces a result or value. Most often, we think of expressions as being used to perform mathematical operations within our code. However, there are a multitude of expressions used for other purposes as well. In Chapter 2, we covered the details of string manipulation, sequence and dictionary operations, and touched upon working with sets. All of the operations performed on these objects are forms of expressions in Python. Other examples of expressions could be pieces of code that call methods or functions, and also working with lists using slicing and indexing.

Mathematical Operations

The Python language contains all of your basic mathematical operations.
This section will briefly touch upon each operator and how it functions. You will also learn about a few built-in functions which can be used to assist in your mathematical expressions. Assuming that this is not the first programming language you are learning, there is no doubt that you are at least somewhat familiar with performing mathematical operations within your programs. Python is no different than the rest when it comes to mathematics; as with most programming languages, performing mathematical computations and working with numeric expressions is straightforward. Table 3-1 lists the numeric operators.

Table 3-1. Numeric Operators

Most of the operators in Table 3-1 work exactly as you would expect, for example:

Listing 3-1. Mathematical Operators

# Performing basic mathematical computations
>>> 10 - 6
4
>>> 9 * 7
63

However, division, truncating division, modulo, power, and the unary operators could use some explanation. Truncating division will automatically truncate a division result into an integer by rounding down, and modulo will return the remainder of a truncated division operation. The power operator returns the result of raising the number on its left to the power of the number on its right.

Listing 3-2. Truncating Division and Powers

>>> 36 // 5
7
# Modulo returns the remainder
>>> 36 % 5
1
# Using powers, in this case 5 to the power of 2
>>> 5**2
25
# 100 to the power of 2
>>> 100**2
10000

Division itself is an interesting subject, as its current implementation is somewhat controversial in some situations. The problem 10/5 = 2 definitely holds true. However, in its current implementation, division rounds numbers in such a way that sometimes yields unexpected results. There is a new means of division available in Jython 2.5 by importing from __future__.
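The relationship between truncating division and modulo described above can be checked directly: the quotient and remainder always recombine to give back the original number. A small sketch (written in modern Python 3 syntax, where // and % behave the same way as described here):

```python
# For integers a and nonzero b, floor division and modulo satisfy:
#     b * (a // b) + (a % b) == a
for a, b in [(36, 5), (14, 6), (7, 5), (100, 9)]:
    quotient, remainder = a // b, a % b
    assert b * quotient + remainder == a
    print("%d = %d*%d + %d" % (a, b, quotient, remainder))
```

The first pair reproduces the listing above: 36 // 5 is 7 and 36 % 5 is 1.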
In a standard division for 2.5 and previous releases, the quotient returned is the floor (the nearest integer after rounding down) of the quotient when the arguments are ints or longs. However, a reasonable approximation of the division is returned if the arguments are floats or complex. Oftentimes this solution is not what was expected, as the quotient should be the reasonable approximation or "true division" in any case. When we import division from the __future__ module, we alter the return value of division by causing true division when using the / operator, and floor division only when using the // operator. In an effort to not break backward compatibility, the developers have placed the repaired division implementation in a module known as __future__. The __future__ module actually contains code that is meant to be included as a part of the standard language in some future revision. In order to use the new repaired version of division, it is important that you always import from __future__ prior to working with division. Take a look at the following piece of code.

Listing 3-3. Division Rounding Issues

# Works as expected
>>> 14/2
7
>>> 10/5
2
>>> 27/3
9
# Now divide some numbers that should result in decimals
# Here we would expect 1.5
>>> 3/2
1
# The following should give us 1.4
>>> 7/5
1
# In the following case, we'd expect 2.3333
>>> 14/6
2

As you can see, when we'd expect to see a decimal value we are actually receiving an integer value. The developers of this original division implementation have acknowledged this issue and repaired it using the new __future__ implementation.

Listing 3-4.
Working With __future__ Division

# We first import division from __future__
from __future__ import division
# We then work with division as usual and see the expected results
>>> 14/2
7.0
>>> 10/5
2.0
>>> 27/3
9.0
>>> 3/2
1.5
>>> 7/5
1.4
>>> 14/6
2.3333333333333335

It is important to note that the Jython implementation differs somewhat from CPython in that Java provides extra rounding in some cases. The differences are in the display of the rounding only, as both Jython and CPython use the same IEEE float for storage. Let's take a look at one such case.

Listing 3-5. Subtle Differences Between Jython and CPython Division

# CPython 2.5 rounding
>>> 5.1/1
5.0999999999999996
# Jython 2.5
>>> 5.1/1
5.1

Unary operators can be used to evaluate positive or negative numbers. The unary plus operator multiplies a number by positive 1 (which generally doesn't change it at all), and the unary minus operator multiplies a number by negative 1.

Listing 3-6. Unary Operators

# Unary minus
>>> -10 + 5
-5
>>> +5 - 5
0
>>> -(1 + 2)
-3

As stated at the beginning of the section, there are a number of built-in mathematical functions that are at your disposal. Table 3-2 lists the built-in mathematical functions.

Table 3-2. Mathematical Built-in Functions

Listing 3-7. Mathematical Built-ins

# The following code provides some examples for using mathematical built-ins
# Absolute value of 9
>>> abs(9)
9
# Absolute value of -9
>>> abs(-9)
9
# Divide 8 by 4 and return quotient, remainder tuple
>>> divmod(8,4)
(2, 0)
# Do the same, but this time returning a remainder (modulo)
>>> divmod(8,3)
(2, 2)
# Obtain 8 to the power of 2
>>> pow(8,2)
64
# Obtain 8 to the power of 2 modulo 3 ((8**2) % 3)
>>> pow(8,2,3)
1
# Perform rounding
>>> round(5.67,1)
5.7
>>> round(5.67)
6.0

Comparison Operators

Comparison operators can be used for comparison of two or more expressions or variables. As with the mathematical operators described above, these operators have no significant differences from their Java counterparts.
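Before moving on to comparisons, two of the built-ins just shown are worth relating back to the operators from earlier in the section: divmod() packages floor division and modulo into a single call, and three-argument pow() computes a modular power. A quick check (Python 3 syntax):

```python
# divmod(a, b) returns the (quotient, remainder) pair in one call
assert divmod(36, 5) == (36 // 5, 36 % 5) == (7, 1)
assert divmod(8, 3) == (2, 2)

# pow(a, b, m) is equivalent to (a ** b) % m, computed more efficiently
assert pow(8, 2, 3) == (8 ** 2) % 3 == 1
print(divmod(36, 5), pow(8, 2, 3))
```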
See Table 3-3.

Table 3-3. Comparison Operators

Listing 3-8. Examples of Comparison Operators

# Simple comparisons
>>> 8 > 10
False
>>> 256 < 725
True
>>> 10 == 10
True
# Use comparisons in an expression
>>> x = 2*8
>>> y = 2
>>> while x != y:
...     print 'Doing some work...'
...     y = y + 2
...
Doing some work...
Doing some work...
Doing some work...
Doing some work...
Doing some work...
Doing some work...
Doing some work...
# Combining comparisons
>>> 3<2<3
False
>>> 3<4<8
True

Bitwise Operators

Bitwise operators in Python are a set of operators that are used to work on numbers in a two's complement binary fashion. That is, when working with bitwise operators, numbers are treated as strings of bits consisting of 0s and 1s. If you are unfamiliar with the concept of two's complement, a good place to start would be the Wikipedia page discussing the topic. It is important to know that bitwise operators can only be applied to integers and long integers. Let's take a look at the different bitwise operators that are available to us (Table 3-4), and then we'll go through a few examples.

Table 3-4. Bitwise Operators

Suppose we have a couple of numbers in binary format and we would like to work with them using the bitwise operators. Let's work with the numbers 14 and 27. The binary (two's complement) representation of the number 14 is 00001110, and for 27 it is 00011011. The bitwise operators look at each 1 and 0 in the binary format of the number and perform their respective operations, and then return a result. Python does not return the bits, but rather the integer value of the resulting bits. In the following examples, we take the numbers 14 and 27 and work with them using the bitwise operators.

Listing 3-9. Bitwise Operator Examples

>>> 14 & 27
10
>>> 14 | 27
31
>>> 14 ^ 27
21
>>> ~14
-15
>>> ~27
-28

To summarize the examples above, let's work through the operations using the binary representations for each of the numbers.
14 & 27 = 00001110 and 00011011 = 00001010 (the integer 10)
14 | 27 = 00001110 or 00011011 = 00011111 (the integer 31)
14 ^ 27 = 00001110 xor 00011011 = 00010101 (the integer 21)
~14 = not 00001110 = 11110001 (the integer -15)

The shift operators (see Table 3-5) are similar in that they work with the binary bit representation of a number. The left shift operator moves the left operand's value to the left by the number of bits specified by the right operand. The right shift operator does the exact opposite, as it shifts the left operand's value to the right by the number of bits specified by the right operand. Essentially, the left shift operator multiplies the operand on the left by two as many times as specified by the right operand, and the right shift operator divides the operand on the left by two as many times as specified by the right operand.

Table 3-5. Shift Operators

More specifically, the left shift operator (<<) will multiply a number by two n times, n being the number to the right of the shift operator. The right shift operator (>>) will divide a number by two n times, n being the number to the right of the shift operator. The __future__ division import does not make a difference in the outcome of such operations.

Listing 3-10. Shift Operator Examples

# Shift left, in this case 3*2
>>> 3<<1
6
# Equivalent of 3*2*2
>>> 3<<2
12
# Equivalent of 3*2*2*2*2*2
>>> 3<<5
96
# Shift right
# Equivalent of 3/2
>>> 3>>1
1
# Equivalent of 9/2
>>> 9>>1
4
# Equivalent of 10/2
>>> 10>>1
5
# Equivalent of 10/2/2
>>> 10>>2
2

While bitwise operators are not the most commonly used operators, they are good to have on hand. They are especially important if you are working in mathematical situations.

Augmented Assignment

Augmented assignment operators (see Table 3-6) combine an operation with an assignment.
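The shift and complement behavior just described can be stated as identities and verified for a handful of values (a quick sketch in Python):

```python
for x in (3, 9, 10, 14, 27):
    # Left shift by n multiplies by 2**n
    assert x << 2 == x * 2 * 2
    # Right shift by n floor-divides by 2**n
    assert x >> 1 == x // 2
    # Bitwise NOT flips all bits; in two's complement, ~x == -x - 1
    assert ~x == -x - 1
print('all identities hold')
```

The last identity explains the listing above: ~14 is -15 and ~27 is -28.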
They can be used to do things like assign a variable the value it previously held, modified in some way. While augmented assignment can assist in coding concisely, some say that too many such operators can make code more difficult to read.

Listing 3-11. Augmented Assignment Code Examples

>>> x = 5
>>> x
5
# Add one to the value of x and then assign that value to x
>>> x += 1
>>> x
6
# Multiply the value of x by 5 and then assign that value to x
>>> x *= 5
>>> x
30

Table 3-6. Augmented Assignment Operators

Boolean Expressions

Evaluating two or more values or expressions uses a similar syntax to that of other languages, and the logic is quite the same. Note that in Python, True and False are very similar to constants in the Java language. True actually represents the number 1, and False represents the number 0. One could just as easily code using 0 and 1 to represent the Boolean values, but for readability and maintenance the True and False "constants" are preferred. Java developers, make sure that you capitalize the first letter of these two words, as you will receive an ugly NameError if you do not.

Boolean properties are not limited to working with int and bool values; they also work with other values and objects. For instance, simply passing any non-empty object into a Boolean expression will evaluate to True in a Boolean context. This is a good way to determine whether a string contains anything. See Table 3-7.

Listing 3-12. Testing a String

>>> mystr = ''
>>> if mystr:
...     'Now I contain the following: %s' % (mystr)
... else:
...     'I do not contain anything'
...
'I do not contain anything'
>>> mystr = 'Now I have a value'
>>> if mystr:
...     'Now I contain the following: %s' % (mystr)
... else:
...     'I do not contain anything'
...
'Now I contain the following: Now I have a value'

Table 3-7. Boolean Conditionals

As with all programming languages, there is an order of operations for deciding which operators are evaluated first.
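The truthiness rules described above (non-empty objects count as True; empty objects and zero count as False) can be summarized in a few assertions. This sketch uses Python 3 syntax, with bool() making the implicit conversion explicit:

```python
# Empty containers, empty strings, zero, and None are all falsy
for falsy in ('', [], {}, 0, None):
    assert not falsy

# Non-empty containers and nonzero numbers are truthy
for truthy in ('text', [0], {'k': 'v'}, -1, 3.14):
    assert truthy

print(bool(''), bool('text'))
```

Note that [0] is truthy even though it contains a falsy element; only the container's own emptiness matters.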
For instance, if we have an expression a + b * c, which operation takes place first? The order of operations for Python is shown in Table 3-8, with the operators that receive the highest precedence shown first and those with the lowest shown last. Repeats of the same operator are grouped from left to right, with the exception of the power (**) operator.

Table 3-8. Python Order of Operations

An important note is that when working with Boolean conditionals, 'and' and 'or' group from the left to the right. Let's take a look at a few examples.

Listing 3-13. Order of Operations Examples

# Define a few variables
>>> x = 10
>>> y = 12
>>> z = 14
# (y*z) is evaluated first, then x is added
>>> x + y * z
178
# (x * y) is evaluated first, then z is subtracted from the result
>>> x * y - z
106
# When chaining comparisons, a logical 'and' is implied. In this
# case, x < y and y <= z and z > x
>>> x < y <= z > x
True
# (2 * 0) is evaluated first, and since it is False or zero, it is returned
>>> 2 * 0 and 5 + 1
0
# (2 * 1) is evaluated first, and since it is True or not zero, the (5 + 1)
# is evaluated and returned
>>> 2 * 1 and 5 + 1
6
# x is truthy, so it is returned without (y and z) being evaluated
>>> x or (y and z)
10
# (2 * 0) is falsy, so the right side of the 'or' is evaluated: (6 + 8) is
# truthy, so the 'and' returns its last operand, (7 - 2)
>>> 2 * 0 or ((6 + 8) and (7 - 2))
5
# In this case, the power operation is evaluated first, and then the addition
>>> 2 ** 2 + 8
12

Conversions

There are a number of conversion functions built into the language to help convert one data type to another (see Table 3-9). While every data type in Jython is actually a class object, these conversion functions will really convert one class type into another. For the most part, the built-in conversion functions are easy to remember because they are primarily named after the type to which you are trying to convert.

Table 3-9.
Conversion Functions

Listing 3-14. Conversion Function Examples

# Return the character representation of the integers
>>> chr(4)
'\x04'
>>> chr(10)
'\n'
# Convert integer to float
>>> float(8)
8.0
# Convert character to its integer value
>>> ord('A')
65
>>> ord('C')
67
>>> ord('z')
122
# Use repr() with any object
>>> repr(3.14)
'3.14'
>>> x = 40 * 5
>>> y = 2**8
>>> repr((x, y, ('one','two','three')))
"(200, 256, ('one', 'two', 'three'))"

The following is an example of using the eval() functionality, as it is perhaps the one conversion function for which an example helps understanding. Again, please note that using the eval() function can be dangerous and can pose a security threat if used incorrectly. If using the eval() function to accept text from a user, standard security precautions should be in place to ensure that the string being evaluated is not going to compromise security.

Listing 3-15. Example of eval()

# Suppose keyboard input contains an expression in string format (x * y)
>>> x = 5
>>> y = 12
>>> keyboardInput = 'x * y'
# We should provide some security checks on the keyboard input here to
# ensure that the string is safe for evaluation. Such a task is out of scope
# for this chapter, but it is good to note that comparisons on the keyboard
# input to check for possibly dangerous code should be performed prior to
# evaluation.
>>> eval(keyboardInput)
60

Using Expressions to Control Program Flow

As you've learned in previous references in this book, the statements that make up programs in Python are structured with attention to spacing, order, and technique. Each section of code must be consistently spaced so as to set each control structure apart from the others. One of the great advantages of Python's syntax is that the consistent spacing allows delimiters such as curly braces {} to go away. For instance, in Java one must use curly braces around a for loop to signify its start and end points.
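Returning to the eval() caveat above: when the input is expected to be a plain literal (a number, string, list, tuple, or dictionary) rather than an arbitrary expression, a safer option is ast.literal_eval(), which rejects anything that is not literal syntax. It is part of the standard library from Python 2.6 onward, so it is not available in Jython 2.5 itself; this is only a sketch of the general approach:

```python
import ast

# literal_eval accepts only literal syntax...
assert ast.literal_eval('[1, 2, 3]') == [1, 2, 3]
assert ast.literal_eval("(200, 256, ('one', 'two'))") == (200, 256, ('one', 'two'))

# ...and raises ValueError for anything else, so code cannot be executed
try:
    ast.literal_eval("__import__('os').system('ls')")
except ValueError:
    print('rejected non-literal input')
```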
Simply spacing a for loop in Python correctly takes the place of the braces. Convention and good practice adhere to using four spaces of indentation per statement throughout the entire program. For more information on convention, please see PEP 8, the Style Guide for Python Code. Follow this convention along with some control flow and you're sure to develop some easily maintainable software.

if-elif-else Statement

The standard Python if-elif-else conditional statement is used to evaluate expressions and branch program logic based upon the outcome. An if-elif-else statement can consist of any expressions we've discussed previously. The objective is to write and compare expressions in order to evaluate to a True or False outcome. As shown in Chapter 1, the logic for an if-elif-else statement follows one path if an expression evaluates to True, or a different path if it evaluates to False. You can chain as many if-else expressions together as needed.

The combining if-else keyword is elif, which is used for every expression between the first and the last expressions within a conditional statement. The elif portion of the statement helps to ensure better readability of program logic; too many if statements nested within each other can lead to programs that are difficult to maintain. The initial if expression is evaluated, and if it evaluates to False, the next elif expression is evaluated; if that evaluates to False, the process continues. If any of the if or elif expressions evaluate to True, then the statements within that portion of the if statement are processed. Eventually, if all of the expressions evaluate to False, the final else expression is evaluated.

These next examples show a few ways of making use of a standard if-elif-else statement. Note that any expression can be evaluated in an if-elif-else construct. These are only some simplistic examples, but the logic inside the expressions could become as complex as needed.

Listing 3-16.
Standard if-elif-else

# Terminal symbols are left out of this example so that you can see the precise indentation
pi = 3.14
x = 2.7 * 1.45
if x == pi:
    print 'The number is pi'
elif x > pi:
    print 'The number is greater than pi'
else:
    print 'The number is less than pi'

Empty lists or strings will evaluate to False as well, making it easy to use them for comparison purposes in an if-elif-else statement.

Listing 3-17. Evaluate Empty List

# Use an if-statement to determine whether a list is empty
# Suppose mylist is going to be a list of names
>>> mylist = []
>>> if mylist:
...     for person in mylist:
...         print person
... else:
...     print 'The list is empty'
...
The list is empty

while Loop

Another construct that we touched upon in Chapter 1 was the loop. Every programming language provides looping implementations, and Python is no different. To recap, the Python language provides two main types of loops, known as the while loop and the for loop. The while loop logic follows the same semantics as the while loop in Java: it evaluates a given expression and continues to loop through its statements until the result of the expression no longer holds true and evaluates to False. Most while loops contain a comparison expression such as x <= y; in this case, the expression would evaluate to False when x becomes greater than y. The loop will continue processing until the expression evaluates to False. At that point the looping ends, and that would be it for the Java implementation. Python, on the other hand, allows an else clause which is executed when the loop is completed.

Listing 3-18. Python while Statement

>>> x = 0
>>> y = 10
>>> while x <= y:
...     print 'The current value of x is: %d' % (x)
...     x += 1
... else:
...     print 'Processing Complete...'
...
The current value of x is: 0
The current value of x is: 1
The current value of x is: 2
The current value of x is: 3
The current value of x is: 4
The current value of x is: 5
The current value of x is: 6
The current value of x is: 7
The current value of x is: 8
The current value of x is: 9
The current value of x is: 10
Processing Complete...

This else clause can come in handy while performing intensive processing, so that we can inform the user of the completion of such tasks. It can also be handy when debugging code, or when some sort of cleanup is required after the loop completes.

Listing 3-19. Resetting Counter Using while-else

>>> total = 0
>>> x = 0
>>> y = 20
>>> while x <= y:
...     total += x
...     x += 1
... else:
...     print total
...     total = 0
...
210

continue Statement

The continue statement skips the rest of the current iteration and moves on to the next iteration of a for or while loop.

Listing 3-20. Continue Statement

# Iterate and print out only the even numbers
>>> x = 0
>>> while x < 10:
...     x += 1
...     if x % 2 != 0:
...         continue
...     print x
...
2
4
6
8
10

In this example, whenever x is odd, the continue statement causes execution to move on to the next iteration of the loop. When x is even, it is printed out.

break Statement

Much like the continue statement, the break statement can be used inside of a loop. We use the break statement in order to stop the loop completely so that a program can move on to its next task. This differs from continue, because the continue statement only stops the current iteration of the loop and moves on to the next iteration. Let's check it out:

Listing 3-21. Break Statement

>>> x = 10
>>> while True:
...     if x == 0:
...         print 'x is now equal to zero!'
...         break
...     if x % 2 == 0:
...         print x
...     x -= 1
...
10
8
6
4
2
x is now equal to zero!

In the previous example, the loop termination condition is always True, so execution only leaves the loop when a break is encountered.
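One subtlety worth spelling out: the loop's else clause runs only when the loop finishes normally, so a break skips it. A small sketch combining while-else with break (the search function and its arguments are illustrative, not from the book):

```python
def search(target, limit):
    # Count upward; the else clause runs only when no break occurred
    n = 0
    while n < limit:
        if n == target:
            found = True
            break
        n += 1
    else:
        found = False   # reached only when the loop ended without a break
    return found

print(search(3, 10), search(20, 10))   # the first hits break, the second does not
```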
If we are working with a break statement that resides within a loop that is contained in another loop (a nested loop construct), then only the inner loop will be terminated.

for Loop

The for loop can be used on any iterable object. It will simply iterate through the object and perform some processing during each pass. Both the break and continue statements can also be used within the for loop. The for statement in Python also differs from the same statement in Java, because in Python we again have the else clause with this construct. Once again, the else clause is executed when the for loop processes to completion without any break intervention or raised exceptions. Also, if you are familiar with pre-Java 5 for loops then you will love the Python syntax. In Java 5, the syntax of the for statement was adjusted a bit to make it more in line with syntactically easy languages such as Python.

Listing 3-22. Comparing Java and Python for-loops

Example of Java for-loop (pre Java 5):

for(x = 0; x <= myList.size(); x++){
    // processing statements iterating through myList
    System.out.println("The current index is: " + x);
}

Listing 3-23. Example of Python for-loop

>>> my_list = [1,2,3,4,5]
>>> for value in my_list:
...     # processing statements using value as the current item in my_list
...     print 'The current value is %s' % (value)
...
The current value is 1
The current value is 2
The current value is 3
The current value is 4
The current value is 5

As you can see, the Python syntax is a little easier to understand, but it doesn't really save too many keystrokes at this point. In the Java loop we still have to manage the index (x in this case) ourselves by incrementing it with each iteration. However, Python provides a built-in function that saves us some keystrokes and provides a similar functionality to Java's automatically incrementing index on the for loop. The enumerate(sequence) function does just that.
It will provide an index for our use and automatically manage it for us.

Listing 3-24. enumerate() Functionality

>>> myList = ['jython','java','python','jruby','groovy']
>>> for index, value in enumerate(myList):
...     print index, value
...
0 jython
1 java
2 python
3 jruby
4 groovy

If we do not require the use of an index, it can be removed and the syntax can be cleaned up a bit.

>>> myList = ['jython', 'java', 'python', 'jruby', 'groovy']
>>> for item in myList:
...     print item
...
jython
java
python
jruby
groovy

Now we have covered the program flow for conditionals and looping constructs in the Python language. However, good programming practice will tell you to keep it as simple as possible, or the logic will become too hard to follow. In practicing proper coding techniques, it is also good to know that lists, dictionaries, and other containers can be iterated over just like other objects. Iteration over containers using the for loop is a very useful strategy. Here is an example of iterating over a dictionary object.

Listing 3-25. Iteration Over Containers

# Define a dictionary and then iterate over it to print each key
>>> my_dict = {'Jython':'Java', 'CPython':'C', 'IronPython':'.NET', 'PyPy':'Python'}
>>> for key in my_dict:
...     print key
...
Jython
IronPython
CPython
PyPy

It is useful to know that we can also obtain the values of a dictionary object via each iteration by calling my_dict.values().

Example Code

Let's take a look at an example program that uses some of the program flow which was discussed in this chapter. The example program simply makes use of an external text file to manage a list of players on a sports team. You will see how to follow proper program structure and use spacing effectively in this example. You will also see file utilization in action, along with utilization of the raw_input() function.

Listing 3-26.
# import os module
import os

# Create empty dictionary
player_dict = {}
# Create an empty string
enter_player = ''

# Enter a loop to enter information from keyboard
while enter_player.upper() != 'X':
    print 'Sports Team Administration App'
    # If the file exists, then allow us to manage it, otherwise force creation.
    if os.path.isfile('players.txt'):
        enter_player = raw_input("Would you like to create a team or manage an existing team?\n (Enter 'C' for create, 'M' for manage, 'X' to exit) ")
    else:
        # Force creation of file if it does not yet exist.
        enter_player = 'C'
    # Check to determine which action to take. C = create, M = manage, X = Exit and Save
    if enter_player.upper() == 'C':
        # Enter a player for the team
        print 'Enter a list of players on our team along with their position'
        enter_cont = 'Y'
        # While continuing to enter new players, perform the following
        while enter_cont.upper() == 'Y':
            # Capture keyboard entry into name variable
            name = raw_input('Enter players first name: ')
            # Capture keyboard entry into position variable
            position = raw_input('Enter players position: ')
            # Assign position to a dictionary key of the player name
            player_dict[name] = position
            enter_cont = raw_input("Enter another player? (Press 'N' to exit or 'Y' to continue)")
        else:
            enter_player = 'X'
    # Manage player.txt entries
    elif enter_player.upper() == 'M':
        # Read values from the external file into a dictionary object
        print
        print 'Manage the Team'
        # Open file and assign to playerfile
        playerfile = open('players.txt','r')
        # Use the for-loop to iterate over the entries in the file
        for player in playerfile:
            # Split entries into key/value pairs and add to list
            playerList = player.split(':')
            # Build dictionary using list values from file
            player_dict[playerList[0]] = playerList[1]
        # Close the file
        playerfile.close()
        print 'Team Listing'
        print '++++++++++++'
        # Iterate over dictionary values and print key/value pairs
        for i, player in enumerate(player_dict):
            print 'Player %s Name: %s -- Position: %s' % (i, player, player_dict[player])
    else:
        # Save the external file and close resources
        if player_dict:
            print 'Saving Team Data...'
            # Open the file
            playerfile = open('players.txt','w')
            # Write each dictionary element to the file
            for player in player_dict:
                playerfile.write('%s:%s\n' % (player.strip(), player_dict[player].strip()))
            # Close file
            playerfile.close()

This example is packed full of concepts that have been discussed throughout the first three chapters of the book. As stated previously, the idea is to create and manage a list of sports players and their relative positions. The example starts by entering a while loop that runs the program until the user enters the exit command. Next, the program checks to see if the 'players.txt' file exists. If it does, then the program prompts the user to enter a code to determine the next action to be taken. However, if the file does not exist, then the user is forced to create at least one player/position pair in the file. Continuing on, the program allows the user to enter as many player/position pairs as needed, or exit the program at any time.
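The on-disk format the program relies on (one name:position pair per line, joined and then split on ':') round-trips cleanly, which can be sketched without any file I/O. The helper functions below are illustrative, not part of the book's code; only the 'name:position' format and the ':' separator come from the listing:

```python
def to_lines(player_dict):
    # Serialize each player as 'name:position', one entry per line
    return ['%s:%s' % (name, pos) for name, pos in sorted(player_dict.items())]

def from_lines(lines):
    # Rebuild the dictionary by splitting each line on the ':' separator
    result = {}
    for line in lines:
        name, pos = line.strip().split(':')
        result[name] = pos
    return result

team = {'Josh': 'catcher', 'Jim': 'pitcher'}
assert from_lines(to_lines(team)) == team
print(to_lines(team))
```

One consequence of this format: player names must not contain ':' themselves, or the split in the manage branch will misparse the line.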
If the user chooses to manage the player/position list, the program simply opens the 'players.txt' file and uses a for loop to iterate over each entry within the file. A dictionary is populated with the current player in each iteration of the loop. Once the loop has completed, the file is closed and the dictionary is iterated and printed. Exiting the program forces the else clause to be invoked, which iterates over each player in the dictionary and writes them to the file. This program is quite simplistic, and some features could not be implemented without knowledge of functions (Chapter 4) or classes (Chapter 6). A good practice would be to revisit this program once those topics have been covered, and to simplify it as well as add additional functionality.

Summary

All programs are constructed out of statements and expressions. In this chapter we covered the details of creating expressions and using them. Expressions can be composed of any number of mathematical operators and comparisons. We discussed the basics of using mathematical operators in our programs, and the __future__ division topic introduced us to using features from the __future__ module. We then delved into comparisons and comparison operators. We ended this short chapter by discussing proper program flow and learning about the if statement, as well as how to construct different types of loops in Python. In the next chapter you will learn how to write functions, and the use of many built-in functions will be discussed.
https://bitbucket.org/idalton/jythonbook/src/28b0486ae6c1/OpsExpressPF.rst
CC-MAIN-2015-18
refinedweb
5,364
61.06
HI..Can u please tell me the link from where i can see a solution on oracle forums Thnks public class GG{ public static void main(String[] args) { Integer i = new Integer(10); System.out.println("Before Call:"+i); change(i); System.out.println("After Call:"+i); } Where is your Die class?? Hi..Where is your main() method. How can your run a program without a main() function?? I can run your program and CoffeeCup class is being called from CoffeeCup_Driver. So you have to check where you are wrong. And if my help is needed then tell me the output you needed. Your program is correct. Its running and stopping after 20 times. The output is always 0 as you are matching a double value with integer value which will never be true. So you change your code as... Hi james I will suggest you to use Eclipse IDE so that you can have a look on different functions. Their parameters. how to use them. You can program much better then Resolved Program without errors.After running your program You can check studentGPavg.out import java.util.*; import java.io.*; public class university { public static void... May i know from where you are getting your programs? I think you are not writing these programs yourself. Can you please elaborate what is your requirement?? What must be the output you want from your program? Actually Json is correct as the value you are reading from the file is character and... Hi have a look on your errors. There is a space in every class name like as: C:\Users\User1\Desktop\Jabble.src\JabbleFinal\AboutDialog.java:48: not a statement AboutDialog.1 local1; It should... I think its not. You can use Scanner class to get data from Keyboard. I have seen many apps where in Scanner is used. I debugged your program and found following errors.
1.You have used semicolon in if second if statement if(ans =='B' || ans =='b'); Remove this semicolon and write statement as if(ans =='B' ||... Hi, Please tell what is Keyboard here. I am unable to resolve it. Is it some other class or an object Hi I want to run your program. Please give the necessary files like LEFT.gif and Right.gif. I want to try your program. Hi james, This is your complete program. Use it and enjoy...and click on Thanks link plz import java.util.*; class football { static Scanner console = new Scanner(System.in); public... Hi, Give me that studentGPA.txt file..then only I can check what happend Hi I used your player class but now it is giving an error on Bullets. What is Bullets now? Is it some another class? Hi, You are using a Player class which is not defined anywhere. Please define Player class first and then try to execute your Shooter class Hi Now I am getting one more error on this particular line UIManager.setLookAndFeel("com.seaglasslookandfeel.SeaGlassLookAndFeel"); The error is: java.lang.ClassNotFoundException:... Here is your resolved Program. James May i Know you are using which editor??? import javax.swing.*; import java.awt.*; import java.awt.event.*; public class trafficlights extends JFrame {... Hi, There is one more error appearing saying that TaskPanel can not be resolved. What is hsa.console dear? Hi Durga Prasad. Your problem is solved. The bug i found in your code is that please make "in=3 " in your for loop again. Actually what was happenning is that the data is being written in... yeah i got
http://www.javaprogrammingforums.com/search.php?s=3ad219230644de495f99a3db1f5f648a&searchid=1418995
I'm closing in on a functional trip timer. I've got the hardware and software more or less figured out. All that was left to do was write the code to implement a stopwatch-like display. And without further ado, here it is:

/*
 * See:
 *
 */
#include <Adafruit_STMPE610.h>
#include <SPI.h>
#include <Wire.h>             // this is needed even tho we aren't using it
#include <Adafruit_GFX.h>     // Core graphics library
#include <Adafruit_ILI9341.h> // Hardware-specific library
#include <Adafruit_STMPE610.h>

#define STMPE_CS 32
#define TFT_CS 15
#define TFT_DC 33
#define SD_CS 14

Adafruit_ILI9341 tft = Adafruit_ILI9341(TFT_CS, TFT_DC);
Adafruit_STMPE610 ts = Adafruit_STMPE610(STMPE_CS);

long started = millis() / 1000;

void setup(void) {
  Serial.begin(115200);
  delay(10);
  Serial.println("Timer Time!");

  if (!ts.begin()) {
    Serial.println("Couldn't start touchscreen controller");
    while (1);
  }
}

void loop(void) {
  long remaining = (millis() / 1000) - started;
  int hours = remaining / (60 * 60);
  remaining = hours == 0 ? remaining : remaining % (hours * 60 * 60);
  int minutes = remaining / 60;
  remaining = minutes == 0 ? remaining : remaining % (60 * minutes);
  int seconds = remaining;

  int fg = textFg(hours, minutes, seconds);
  int bg = textBg(hours, minutes, seconds);

  tft.begin();
  tft.setRotation(3);
  tft.fillScreen(fg);
  tft.setCursor(10, (tft.height() / 2) - 30);
  tft.setTextColor(bg);
  tft.setTextSize(7);
  tft.print(hours);
  tft.print(":");
  if (minutes < 10) { tft.print("0"); }
  tft.print(minutes);
  tft.print(":");
  if (seconds < 10) { tft.print("0"); }
  tft.print(seconds);

  delay(750);
}

int textFg(int hours, int minutes, int seconds) {
  int red = minutes / 2;
  int green = 61 - minutes;
  int blue = 31;
  return ((red) << 11) | ((green) << 6) | blue;
}

int textBg(int hours, int minutes, int seconds) {
  return ~textFg(hours, minutes, seconds);
}

And here's the timer doing its thing:

A few notes on the code:

1. Having the code from my micro:bit prototype was a big help. I'd already figured out a number of math and formatting challenges, and was able to reuse this code.

2. I used this sample as the basis for my code. It told me that I needed the GFX, ILI9341 and STMPE610 libraries, and how to initialize them to properly drive my 2.4" TFT FeatherWing screen. I was eager to avoid the type of confusion I ran into when I mixed up the ESP8266 with the ESP32.

3. You can see I'm deriving the background and text color from the current value of minutes. This is my attempt to make the timer gracefully fade from one color to another throughout an hour. It works, though the color choices need to be smarter.

4. The timer seemed to work great, but when I'd come back after a few hours it had reset itself. The problem was that I was relying on a call to micros(), which rolls over every 70 minutes. I switched to calling millis(), which rolls over every 50 days.

Up next, I need to take this guy for a road test and see how it does in the car. Then I need to build an enclosure to keep my new baby safe. And then I believe I'll be able to call this complete!
https://www.blogbyben.com/2018/07/it-counts-trip-timer-lives.html
I currently have this code:

import java.util.Random;
import java.io.*;

public class randomdwarf {
    public static void main(String[] args) throws IOException {
        String dwarves[] = {"Dopey", "Sneezy", "Happy", "Sleepy", "Grumpy", "Bashful", "Doc"};
        Random generator = new Random();
        int randomIndex = generator.nextInt(7);
        System.out.println(dwarves[randomIndex]);
        Runtime.getRuntime().exec("H:\\Profile\\Desktop\\dwarves\\Doc.jpeg");
    }
}

It is a very simple code that is designed to get a random dwarf name then display the image. Note that currently I am just testing the program out and have it only open the picture of "Doc". I will implement the random image opening once I figure out how to open the image. After searching for a while I have not been able to find a solution. I am currently using Windows and would appreciate any help you can give me. Thanks, Jaro
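For what it's worth, one common approach (a sketch, not the asker's code) is java.awt.Desktop rather than Runtime.exec: exec can't launch a .jpeg directly because an image file isn't an executable, while Desktop.open() hands the file to the platform's default viewer. The folder path below is the one from the question; adjust it for your machine:

```java
import java.awt.Desktop;
import java.io.File;
import java.util.Random;

class RandomDwarf {
    static final String[] DWARVES = {"Dopey", "Sneezy", "Happy", "Sleepy", "Grumpy", "Bashful", "Doc"};

    // Factored into a method so the random choice can be exercised on its own.
    static String pick(Random generator) {
        return DWARVES[generator.nextInt(DWARVES.length)];
    }

    public static void main(String[] args) throws Exception {
        String name = pick(new Random());
        System.out.println(name);
        // Hypothetical image location, taken from the question above.
        File image = new File("H:\\Profile\\Desktop\\dwarves\\" + name + ".jpeg");
        // Guarded so the program degrades gracefully on headless systems.
        if (Desktop.isDesktopSupported() && image.exists()) {
            Desktop.getDesktop().open(image);
        }
    }
}
```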
https://www.daniweb.com/programming/software-development/threads/413990/beginner-image-opening-program-help
Lab 5 Goals: To learn to design constructors that provide better user interface and at the same time assure the integrity of the data that the instances of the class represent. We will also learn how the programmer can signal an error in the program by throwing an Exception. 1 Understanding Constructors Start a new project named Lab5-Date and import into it the file Date.java. Standard Java asks that programmers define every public class in a file with the same name as the name of the class, followed by .java. We will now follow this convention. Additionally, every public class must define at least one public constructor. Define a new file in your project (in menu New -> File) and name it ExamplesDate.java. As you can guess, this will be the place for your examples and tests. Add at least three examples of valid dates to this class, and set up a Run configuration to run it. 1.1 Overloading Constructors: Assuring Data Integrity. The data definitions at times do not capture the meaning of data and the restrictions on what values can be used to initialize different fields. For example, if we have a class that represents a date in the calendar using three integers for the day, month, and year, we know that our program is interested only in some years (maybe between the years 1500 and 2500), the month must be between 1 and 12, and the day must be between 1 and 31 (though there are additional restrictions on the day, depending on the month and whether we are in a leap year). Suppose we make the following Date examples: Of course, the third example is just nonsense. While complete validation of dates (months, leap-years, etc...) is a study topic in itself, for the purposes of practicing constructors, we will simply make sure that the month is between 1 and 12, the day is between 1 and 31, and the year is between 1500 and 50000 — Did you notice the repetition in the description of validity? 
It suggests we start with a few helper methods (an early abstraction): Design the method validNumber that consumes a number and the low and high bound and returns true if the number is within the bounds (inclusive on the low end, exclusive on the high end). Design the methods validDay, validMonth, and validYear in a similar manner. Do this quickly - do not spend much time on it - maybe do just the method validDay and leave the rest for later - for now just returning true regardless of the input. (Such temporary method bodies are called stubs; their goal is to make the rest of the program design possible.)

Now change the Date constructor to the following:

This is similar to the constructors for the Time class we saw in lectures. To signal an error or some other exceptional condition, we throw an instance of a RuntimeException. We elected to use an instance of IllegalArgumentException, which is a subclass of the class RuntimeException. If the program ever executes a statement like:

throw new ...Exception("... message ...");

Java stops the program and signals the error through the constructed instance of the Exception. For our purposes now, this is as good as terminating the program and printing the message String.

The tester library provides methods to test constructors that should throw exceptions. For example, the following test case verifies that our constructor throws the correct exception with the expected message if the supplied year is 53000:

Run your program with this test. Now change the test by providing an incorrect message, an incorrect exception (e.g. NoSuchElementException), or by supplying arguments that do not cause an error, and see that the test(s) fail.

Java provides the class RuntimeException with a number of subclasses that can be used to signal different types of dynamic errors. Later we will learn how to handle errors and design new subclasses of the class RuntimeException to signal errors specific to our programs.
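The lab's own code snippets are not reproduced above, so the following is a hypothetical sketch of what the validity helpers and the throwing constructor might look like. The names follow the lab text; the bodies and the 1500-50000 year bounds are as described above:

```java
class LabDate {
    int month, day, year;

    // Primary constructor: rejects invalid dates by throwing an exception.
    LabDate(int month, int day, int year) {
        if (!(validMonth(month) && validDay(day) && validYear(year))) {
            throw new IllegalArgumentException(
                "Invalid date: " + month + "/" + day + "/" + year);
        }
        this.month = month;
        this.day = day;
        this.year = year;
    }

    // Inclusive on the low end, exclusive on the high end, per the lab text.
    static boolean validNumber(int n, int low, int high) {
        return low <= n && n < high;
    }

    static boolean validDay(int d)   { return validNumber(d, 1, 32); }
    static boolean validMonth(int m) { return validNumber(m, 1, 13); }
    static boolean validYear(int y)  { return validNumber(y, 1500, 50000); }

    public static void main(String[] args) {
        System.out.println(new LabDate(1, 20, 2013).year);
    }
}
```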
1.2 Overloading Constructors: Providing Defaults.

When entering dates for the current year it is tedious to continually enter 2013. We can provide an additional constructor that only requires the month and day, assuming the year should be 2013. Remembering the single point of control rule, we make sure that the new overloaded constructor defers all of the work to the primary full constructor:

Add examples that use only the month and day to see that the constructor works properly. Include tests with an invalid month or year as well.

1.3 Overloading Constructors: Expanding Options.

The user may want to enter the date in the form: Jan 20 2013. To make this possible, we can add another constructor:

Our first task is to convert a String that represents a month into a number. We can do it in a helper method getMonthNo: (There may be more efficient ways to provide the list of valid names for the months - for now we are just focusing on the fact that this is possible.)

Our constructor can then invoke this method as follows:

Note: When the constructor invokes another constructor for the same class, the constructor invocation this(...); must be the first statement. That is why we first created an instance of the date with January as its month, then adjusted the month to be the month specified by the given String. Complete the implementation, and check that it works correctly.
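Again, the lab's snippets are omitted above, so here is a hypothetical sketch of the two overloaded constructors and the getMonthNo helper it describes. The month-name list and validation details are our assumptions:

```java
class CalDate {
    int month, day, year;

    // Primary full constructor: the single point of control.
    CalDate(int month, int day, int year) {
        if (month < 1 || month > 12 || day < 1 || day > 31
                || year < 1500 || year >= 50000) {
            throw new IllegalArgumentException("Invalid date");
        }
        this.month = month;
        this.day = day;
        this.year = year;
    }

    // Defaults the year to 2013; defers all work to the full constructor.
    CalDate(int month, int day) {
        this(month, day, 2013);
    }

    // Accepts the month as a String, e.g. new CalDate("Jan", 20, 2013).
    // this(...) must be the first statement, so we build the date with
    // January as its month first and then adjust it.
    CalDate(String month, int day, int year) {
        this(1, day, year);
        this.month = getMonthNo(month);
    }

    static int getMonthNo(String name) {
        String[] names = {"Jan", "Feb", "Mar", "Apr", "May", "Jun",
                          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"};
        for (int i = 0; i < names.length; i++) {
            if (names[i].equals(name)) return i + 1;
        }
        throw new IllegalArgumentException("Unknown month: " + name);
    }

    public static void main(String[] args) {
        System.out.println(new CalDate("Jan", 20, 2013).month);
    }
}
```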
http://www.ccs.neu.edu/home/vkp/2510-sp13/lab5.html
Issues encoding image captured using cv2.VideoCapture(). Goal is to display the image in an ipywidget

Hello; In my use case, I'm using a Jetson Nano running Jupyter Lab. I ssh into the Nano, and run Jupyter Lab in a web browser on the host machine, a laptop. As such, I can't use cv2.imshow() to display the image; this only works when run locally. Therefore, I've learned to use cv2.VideoCapture() to capture an image, but I'm having trouble encoding this image into the format required by ipywidgets. Here is my code for capturing one camera frame:

import cv2
import ipywidgets
from IPython.display import display
from IPython.display import Image

gst_pipeline = "nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw ! jpegenc ! image/jpeg ! appsink"
camera = cv2.VideoCapture(gst_pipeline, cv2.CAP_GSTREAMER)
capture_state, captured_image = camera.read()
print(capture_state)
print(captured_image)

The output of this code produces these results:

capture_state = True
captured_image = [[255 216 255 ... 179 255 217]]

However, when I run the next code sequence, I get the following error:

error: OpenCV(4.1.1) /home/nvidia/host/build_opencv/nv_opencv/modules/imgcodecs/src/grfmt_base.cpp:145: error: (-10:Unknown error code -10) Raw image encoder error: Maximum supported image dimension is 65500 pixels in function 'throwOnEror'

This is the code I run that produces that error:

encode_parameters = [int(cv2.IMWRITE_JPEG_QUALITY), 20]
encode_state, encoded_image = bytes(cv2.imencode('.jpg', captured_image, encode_parameters))

In looking around the internet, it seems there is a need to do some sort of transform.
In this code example, they are using a numpy array as shown here:

def __init__(self, *args, **kwargs):
    super(Camera, self).__init__(*args, **kwargs)
    if self.format == 'bgr8':
        self.value = np.empty((self.height, self.width, 3), dtype=np.uint8)
    self._running = False

I would really appreciate it if someone could help me figure out what I need to do to get my "captured_image" into the right format so that I can do the encoding. Once this step is good, I'll be able to view this image in an ipywidget, accomplishing my overall goal. Can someone please help me? Cheers!

U can't declared variable encode_parameters =. See the answer cv2.imencode

Thanks for your comment, I checked out what you sent me but it didn't work in my case. However, I did find a solution. I'll post it on the main question though.
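For what it's worth, one plausible fix (a sketch, assuming the GStreamer pipeline shown in the question): because the pipeline ends with jpegenc ! image/jpeg, camera.read() already returns JPEG-compressed bytes wrapped in a numpy array. Note the leading 255 216 values in the printout, which are 0xFF 0xD8, the JPEG magic number. Re-encoding with cv2.imencode is then unnecessary; the buffer just needs flattening to raw bytes:

```python
import numpy as np

def to_jpeg_bytes(captured_image):
    """Flatten the capture buffer into the raw JPEG bytes ipywidgets expects.

    Assumes the frame was already JPEG-encoded by the GStreamer pipeline
    (jpegenc), so no call to cv2.imencode is needed.
    """
    return np.asarray(captured_image, dtype=np.uint8).tobytes()

# Hypothetical usage inside the notebook:
#   image_widget = ipywidgets.Image(format='jpeg')
#   image_widget.value = to_jpeg_bytes(captured_image)
#   display(image_widget)
```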
https://answers.opencv.org/question/237210/issues-encoding-image-captured-using-cv2videocapture-goal-is-to-display-image-in-an-ipywidget/
Background: I am playing around with bit-level coding (this is not homework - just curious). I found a lot of good material online and in a book called Hacker's Delight, but I am having trouble with one of the online problems. It asks to convert an integer to a float. I used the following links as reference to work through the problem: How to manually (bitwise) perform (float)x? How to convert an unsigned int to a float?

Problem and Question: I thought I understood the process well enough (I tried to document the process in the comments), but when I test it, I don't understand the output.

Test Cases:

float_i2f(2) returns 1073741824
float_i2f(3) returns 1077936128

I expected to see something like 2.0000 and 3.0000. Did I mess up the conversion somewhere? I thought maybe this was a memory address, so I was thinking maybe I missed something in the conversion step needed to access the actual number? Or maybe I am printing it incorrectly? I am printing my output like this:

printf("Float_i2f ( %d ): ", 3);
printf("%u", float_i2f(3));
printf("\n");

/*
 * float_i2f - Return bit-level equivalent of expression (float) x
 *   Result is returned as unsigned int, but
 *   it is to be interpreted as the bit-level representation of a
 *   single-precision floating point value.
 *   Legal ops: Any integer/unsigned operations incl. ||, &&. also if, while
 *   Max ops: 30
 *   Rating: 4
 */
unsigned float_i2f(int x) {
    if (x == 0) {
        return 0;
    }

    // save the sign bit for later and get the absolute value of x
    // the absolute value is needed to shift bits to put them
    // into the appropriate position for the float
    unsigned int signBit = 0;
    unsigned int absVal = (unsigned int)x;
    if (x < 0) {
        signBit = 0x80000000;
        absVal = (unsigned int)-x;
    }

    // Calculate the exponent
    // Shift the input left until the high order bit is set to form the mantissa.
    // Form the floating exponent by subtracting the number of shifts from 158.
    unsigned int exponent = 158;  // 158 possibly because of place in byte range
    while ((absVal & 0x80000000) == 0) {  // this checks for 0 or 1. when it reaches 1, the loop breaks
        exponent--;
        absVal <<= 1;
    }

    // find the mantissa (bit shift to the right)
    unsigned int mantissa = absVal >> 8;

    // place the exponent bits in the right place
    exponent = exponent << 23;

    // get the mantissa
    mantissa = mantissa & 0x7fffff;

    // return the reconstructed float
    return signBit | exponent | mantissa;
}

Continuing from the comment. Your code is correct, and you are simply looking at the equivalent unsigned integer made up by the bits in your IEEE-754 single-precision floating point number. The IEEE-754 single-precision number format (made up of the sign, extended exponent, and mantissa) can be interpreted as a float, or those same bits can be interpreted as an unsigned integer (just the number that is made up by the 32 bits). You are outputting the unsigned equivalent for the floating point number. You can confirm with a simple union. For example:

#include <stdio.h>
#include <stdint.h>

typedef union {
    uint32_t u;
    float f;
} u2f;

int main (void) {
    u2f tmp = { .f = 2.0 };
    printf ("\n u : %u\n f : %f\n", tmp.u, tmp.f);
    return 0;
}

Example Usage/Output

$ ./bin/unionuf

 u : 1073741824
 f : 2.000000

Let me know if you have any further questions. It's good to see that your study resulted in the correct floating point conversion. (Also note the second comment regarding truncation/rounding.)
https://codedump.io/share/5L3wLJ25FfOe/1/c-bit-level-int-to-float-conversion-unexpected-output
Condition Variables

Sometimes it's not enough to lock a shared resource and use it. Sometimes the shared resource needs to be in some specific state before it can be used. For example, a thread may try to pull data off of a stack, waiting for data to arrive if none is present. A mutex is not enough to allow for this type of synchronization. Another synchronization type, known as a condition variable, can be used in this case.

A condition variable is always used in conjunction with a mutex and the shared resource(s). A thread first locks the mutex and then verifies that the shared resource is in a state that can be safely used in the manner needed. If it's not in the state needed, the thread waits on the condition variable. This operation causes the mutex to be unlocked during the wait so that another thread can actually change the state of the shared resource. It also ensures that the mutex is locked when the thread returns from the wait operation. When another thread changes the state of the shared resource, it needs to notify the threads that may be waiting on the condition variable, enabling them to return from the wait operation.

Listing Four illustrates a very simple use of the boost::condition class. A class is defined implementing a bounded buffer, a container with a fixed size allowing FIFO input and output. This buffer is made thread-safe internally through the use of a boost::mutex. The put and get operations use a condition variable to ensure that a thread waits for the buffer to be in the state needed to complete the operation. Two threads are created, one that puts 100 integers into this buffer and the other pulling the integers back out. The bounded buffer can only hold 10 integers at one time, so the two threads wait for each other periodically. To verify that this is happening, the put and get operations output diagnostic strings to std::cout. Finally, the main thread waits for both threads to complete.
Listing Four: The boost::condition class

#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>
#include <iostream>

const int BUF_SIZE = 10;
const int ITERS = 100;

boost::mutex io_mutex;

class buffer
{
public:
    typedef boost::mutex::scoped_lock scoped_lock;

    buffer() : p(0), c(0), full(0) { }

    void put(int m)
    {
        scoped_lock lock(mutex);
        if (full == BUF_SIZE)
        {
            {
                boost::mutex::scoped_lock lock(io_mutex);
                std::cout << "Buffer is full. Waiting..." << std::endl;
            }
            while (full == BUF_SIZE)
                cond.wait(lock);
        }
        buf[p] = m;
        p = (p+1) % BUF_SIZE;
        ++full;
        cond.notify_one();
    }

    int get()
    {
        scoped_lock lk(mutex);
        if (full == 0)
        {
            {
                boost::mutex::scoped_lock lock(io_mutex);
                std::cout << "Buffer is empty. Waiting..." << std::endl;
            }
            while (full == 0)
                cond.wait(lk);
        }
        int i = buf[c];
        c = (c+1) % BUF_SIZE;
        --full;
        cond.notify_one();
        return i;
    }

private:
    boost::mutex mutex;
    boost::condition cond;
    unsigned int p, c, full;
    int buf[BUF_SIZE];
};

buffer buf;

void writer()
{
    for (int n = 0; n < ITERS; ++n)
    {
        {
            boost::mutex::scoped_lock lock(io_mutex);
            std::cout << "sending: " << n << std::endl;
        }
        buf.put(n);
    }
}

void reader()
{
    for (int x = 0; x < ITERS; ++x)
    {
        int n = buf.get();
        {
            boost::mutex::scoped_lock lock(io_mutex);
            std::cout << "received: " << n << std::endl;
        }
    }
}

int main(int argc, char* argv[])
{
    boost::thread thrd1(&reader);
    boost::thread thrd2(&writer);
    thrd1.join();
    thrd2.join();
    return 0;
}
http://www.drdobbs.com/architecture-and-design/the-boostthreads-library/184401518?pgno=3
JS Interview Questions And Answers

If you are looking for a job related to JavaScript (JS), then you need to prepare for the 2020 JS interview questions. It is true that every interview is different for different job profiles, but to clear the interview you still need a good, clear knowledge of JS. Here we have prepared important JS interview questions and answers which will help you succeed in your interview. Below are the 12 important 2020 JS interview questions and answers that are frequently asked in an interview, divided into two parts as follows:

Part 1 – JS Interview Questions (Basic)

This first part covers basic JS interview questions and answers.

Q1. What is JS?

Answer: JavaScript is a scripting language primarily designed for creating web pages and adding interactivity to web applications.

Q2. How does JavaScript work?

Answer: This is a common JS interview question. Every browser has three prime components. The first is the DOM (Document Object Model) interpreter, which takes your HTML document, converts it, and displays it in the browser. Another small program that is part of the browser is the CSS interpreter, which styles the page and makes it look better. The last one is a mini-program in the browser called the JS engine.

- The browser loads the HTML file/JS file
- JavaScript is an interpreted language (meaning no compilation is required)
- The browser (JavaScript engine) executes line by line and waits for events (like clicks, mouseovers etc.) to happen

Q3. Mention some of the features of JavaScript?

Answer: Below are the different features of JavaScript:

- JS is a lightweight, interpreted programming language
- JS is open source and cross-platform
- JS is integrated with HTML and Java
- Designed to create network-centric applications

Q4. What are the different types of JavaScript data?
Answer:

- Strings
- Functions
- Boolean
- Object
- Number
- Undefined

Let us move to the next JS interview question and answer.

Q5. Define the common errors which occur in JavaScript?

Answer: In general, there are 3 types of errors we find in JS, which are as follows:

- Runtime error: the outcome of the misuse of commands within the HTML language
- Load time error: a syntax error, generated dynamically when the page loads
- Logical error: occurs when the logic of a function performs badly

Q6. Explain why JS is a case-sensitive language?

Answer: JS is a case-sensitive programming language. In JS we use different types of variables, functions, and various other identifiers, and their capitalization must be consistent throughout.

Part 2 – JS Interview Questions (Advanced)

Let us now have a look at the advanced JS interview questions.

Q7. List down some of the advantages and disadvantages of JavaScript?

Answer:

Advantages:
- Rich user interface
- Increased interactivity (when a mouse hovers on elements like buttons, or keyboard accessibility)

Disadvantages:
- Lacks multithreading capability
- Not suitable for networking applications
- Client-side JavaScript cannot read or write files

Q8. What are the types of objects in JS? Define them.

Answer: There are 2 types of objects in JS:

- Date Object: This is built into JS. Date objects are created with new Date() and, once created, can be operated on with the help of a bunch of available methods covering the year, month, day, hour, minutes, seconds and even milliseconds of the date object. These are set with the help of local or universal time standards.
- Number Object: These include dates solely represented by integers and fractions. Number literals get converted to the Number class automatically.

Let us move to the next JS interview question and answer.

Q9. What is Closure in JavaScript?

Answer: A closure arises when we define a function within another function (the parent function) and access variables that are defined in the parent function. The closure accesses variables in three scopes:

- Variables declared in its own scope
- Variables declared in the parent function's scope
- Variables declared in the global namespace

For example, an innerFunction defined inside an outerFunction is a closure with access to all variables declared and defined in the outer function's scope, in addition to variables declared in the global namespace.

Q10. How to empty an array in JavaScript?

Answer: This is a popular JS interview question. Use any of the following methods:

- arrayList = []; This sets the variable arrayList to a new empty array.
- arrayList.length = 0; This clears the existing array by setting its length to 0. This way is useful when you want to update all the other reference variables pointing to arrayList.
- arrayList.splice(0, arrayList.length); This way of emptying the array also updates all references to the original array.
- while (arrayList.length) { arrayList.pop(); } This is another way to empty the array.

Q11. Mention some of the JavaScript datatypes?

Answer: Datatypes generally hold values. In JS there are two kinds of data types:

- Primitive datatypes: String, Number, Boolean, Undefined, Null
- Non-primitive datatypes: Object, Array, and RegExp

Q12. What do you mean by functions in JavaScript?

Answer: Functions are blocks of reusable code. They allow a user to write a particular piece of code once and use it as many times as needed by calling the function. A JS function is not required to return a value.
JS supports 2 types of functions:

- Anonymous functions
- Named functions

Syntax for a JS function:

function functionName(parameter1, parameter2, ..., parameterN) {
    // statements of the function
}

To declare a function we use the function keyword followed by the function name and parentheses. Within the parentheses we specify the function's parameters (there can be multiple parameters). To call the function we simply write the name of the function and, within parentheses, the values of the parameters (pass the values): addNumbers(x1, x2) passes the values and calls the function.

Note: if a function declares 3 (say) parameters and we pass more parameter values than that, JS will simply ignore the additional values.

Recommended Articles

This has been a guide to the list of JS interview questions and answers. Here we have listed the top interview questions and answers that are commonly asked in interviews, with detailed responses. You may also look at the following articles to learn more –
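As a sketch of the closure idea from Q9 (the names here are illustrative, not from the article):

```javascript
// The three scopes a closure can see: its own, its parent's, and the global.
var globalVar = "global";                    // global namespace

function outerFunction() {
  var outerVar = "outer";                    // parent-function scope
  function innerFunction() {
    var innerVar = "inner";                  // its own scope
    return [innerVar, outerVar, globalVar].join(" ");
  }
  return innerFunction;                      // the returned function is a closure
}

var closure = outerFunction();
console.log(closure());                      // "inner outer global"
```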
https://www.educba.com/js-interview-questions/?source=leftnav
ROUND(3)                 BSD Programmer's Manual                 ROUND(3)

NAME
     round, roundf - round to nearest integral value

LIBRARY
     libm

SYNOPSIS
     #include <math.h>

     double round(double x);
     float roundf(float x);

DESCRIPTION
     The round() and roundf() functions return the nearest integral value
     to x; if x lies halfway between two integral values, then these
     functions return the integral value with the larger absolute value
     (i.e., they round away from zero).

SEE ALSO
     ceil(3), floor(3), math(3), rint(3), trunc(3)

STANDARDS
     The round() and roundf() functions conform to ISO/IEC 9899:1999
     ("ISO C99").

HISTORY
     The round() and roundf() functions appeared in NetBSD 2.0.

MirOS BSD #10-current                                               July.
http://mirbsd.mirsolutions.de/htman/sparc/man3/roundf.htm
Time Series Analysis for Financial Data VI— GARCH model and predicting SPX returns

Download the iPython notebook here. As we saw in the previous post, this was because of volatility clustering or heteroskedasticity. In this post, we will discuss conditional heteroskedasticity, leading us to our first conditional heteroskedastic model, known as ARCH. Then we will discuss extensions to ARCH, leading us to the famous Generalised Autoregressive Conditional Heteroskedasticity model of order p,q, also known as GARCH(p,q). GARCH is used extensively within the financial industry, as many asset prices are conditionally heteroskedastic.

Let's do a quick recap first. We have considered the following models so far in this series (it is recommended to read the series in order if you have not done so already):

- Discrete White Noise and Random Walks
- Autoregressive Models AR(p)
- Moving Average Models MA(q)
- Autoregressive Moving Average Models ARMA(p,q)
- Autoregressive Integrated Moving Average Models ARIMA(p,d,q)

Now we are at the final piece of the puzzle: we need a model to examine conditional heteroskedasticity in financial series that exhibit volatility clustering.

What is conditional heteroskedasticity?

Conditional heteroskedasticity exists in finance because asset returns are volatile. A collection of random variables is heteroskedastic if there are subsets of variables within the larger set that have a different variance from the remaining variables. Consider a day when equities markets undergo a substantial drop. The market gets into panic mode, automated risk management systems start getting out of their long positions by selling, and all of this leads to a further fall in prices. An increase in variance from the initial price drop leads to significant further downward volatility. That is, an increase in variance is serially correlated to a further increase in variance in such a "sell-off" period.
Or, looking at it the other way around, a period of increased variance is conditional on an initial sell-off. Thus we say that such series are conditionally heteroskedastic. Conditionally heteroskedastic (CH) series are non-stationary since their variance is not constant in time. One of the challenging aspects of conditionally heteroskedastic series is that ACF plots of a series with volatility might still appear to be a realisation of stationary discrete white noise.

How can we incorporate CH in our model?

One way could be to create an AR model for the variance itself: a model that actually accounts for the changes in the variance over time using past values of the series. This is the basis of the Autoregressive Conditional Heteroskedastic (ARCH) model.

Autoregressive Conditionally Heteroskedastic Models — ARCH(p)

An ARCH(p) model is simply an AR(p) model applied to the variance of a time series. ARCH(1) is given by:

Var(x(t)) = σ²(t) = a0 + a1*x²(t-1)

The actual time series is given by:

x(t) = w(t)*σ(t) = w(t)*√(a0 + a1*x²(t-1))

where w(t) is white noise.

When To Apply ARCH(p)?

Let's say we fit an AR(p) model and the residuals look almost like white noise, but we are concerned about the decay of the p lag on an ACF plot of the series. If we find that we can fit an AR(p) to the square of the residuals as well, then we have an indication that an ARCH(p) process may be appropriate.

Note that ARCH(p) should only ever be applied to a series that has already had an appropriate model fitted sufficient to leave the residuals looking like discrete white noise. Since we can only tell whether ARCH is appropriate or not by squaring the residuals and examining the ACF, we also need to ensure that the mean of the residuals is zero. ARCH should only ever be applied to series that do not have any trends or seasonal effects, i.e. that have no (evident) serial correlation. ARIMA is often applied to such a series, at which point ARCH may be a good fit.
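A quick simulation sketch of the ARCH(1) recursion above. The parameter values a0 = 2, a1 = 0.5 are illustrative choices (a1 < 1 keeps the unconditional variance finite):

```python
import numpy as np

np.random.seed(13)
a0, a1 = 2.0, 0.5
n = 1000

w = np.random.normal(size=n)   # discrete white noise w(t)
x = np.zeros(n)
for t in range(1, n):
    sigma_t = np.sqrt(a0 + a1 * x[t - 1] ** 2)   # conditional std dev
    x[t] = w[t] * sigma_t

# x itself resembles white noise, but x**2 is autocorrelated at lag 1;
# the unconditional variance should come out near a0 / (1 - a1) = 4.
print(np.var(x))
```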
tsplot(y, lags=30)

Notice the time series looks just like white noise. However, let's see what happens when we plot the square of the series.

tsplot(y**2, lags=30)

Now the ACF and PACF seem to show significance at lag 1, indicating an AR(1) model for the variance may be appropriate.

Generalized Autoregressive Conditionally Heteroskedastic Models — GARCH(p,q)

Just as ARCH(p) is an AR(p) model applied to the variance of a series, GARCH(p,q) is an ARMA(p,q) model applied to the variance. The GARCH(1,1) model is:

σ²(t) = w + a*σ²(t-1) + b*e²(t-1)

Again, notice that overall this process closely resembles white noise; however, take a look when we view the squared eps series.

_ = tsplot(eps**2, lags=30)

There is substantial evidence of a conditionally heteroskedastic process via the decay of successive lags. Fitting the simulated series with the arch library:

am = arch_model(eps)
res = am.fit(update_freq=5)
print(res.summary())

Iteration: 5, Func. Count: 38, Neg. LLF: 12311.7950557
Iteration: 10, Func. Count: 71, Neg. LLF: 12238.5926559
Optimization terminated successfully. (Exit mode 0)
Current function value: 12237.3032673
Iterations: 13
Function evaluations: 89
Gradient evaluations: 13

Constant Mean - GARCH Model Results
====================================================================
Dep. Variable: y                 R-squared: -0.000
Mean Model: Constant Mean        Adj. R-squared: -0.000
Vol Model: GARCH                 Log-Likelihood: -12237.3
Distribution: Normal             AIC: 24482.6
Method: Maximum Likelihood       BIC: 24511.4
                                 No. Observations: 10000
Date: Tue, Feb 28 2017           Df Residuals: 9996
Time: 20:52:48                   Df Model: 4

Mean Model
====================================================================
          coef     std err       t      P>|t|    95.0% Conf. Int.
--------------------------------------------------------------------
mu   -6.7225e-03  6.735e-03   -0.998    0.318  [-1.992e-02,6.478e-03]

Volatility Model
====================================================================
          coef     std err       t      P>|t|    95.0% Conf. Int.
--------------------------------------------------------------------
omega     0.2021  1.043e-02   19.383   1.084e-83   [ 0.182, 0.223]
alpha[1]  0.5162  2.016e-02   25.611  1.144e-144   [ 0.477, 0.556]
beta[1]   0.2879  1.870e-02   15.395   1.781e-53   [ 0.251, 0.325]
====================================================================

Covariance estimator: robust

We can see that the true parameters all fall within the respective confidence intervals.

Application to Financial Time Series

Now let's apply the procedure to a financial time series. Here we're going to use SPX. We first try to fit SPX returns to an ARIMA process and find the best order.

import auquanToolbox.dataloader as dl

end = '2017-01-01'
start = '2010-01-01'
symbols = ['SPX']
data = dl.load_data_nologs('nasdaq', symbols, start, end)['ADJ CLOSE']
# log returns
lrets = np.log(data/data.shift(1)).dropna()

    # (tail of the _get_best_model helper defined earlier in the notebook)
    print('aic: {:6.2f} | order: {}'.format(best_aic, best_order))
    return best_aic, best_order, best_mdl

TS = lrets.SPX
res_tup = _get_best_model(TS)

aic: -11323.07 | order: (3, 0, 3)

order = res_tup[1]
model = res_tup[2]

Since we've already taken the log of returns, we should expect our integrated component d to equal zero, which it does. We find the best model is ARIMA(3,0,3). Now we plot the residuals to decide if they possess evidence of conditionally heteroskedastic behaviour.

tsplot(model.resid, lags=30)

We find the residuals look like white noise. Let's look at the square of the residuals.

tsplot(model.resid**2, lags=30)

We can see clear evidence of autocorrelation in the squared residuals. Let's fit a GARCH model and see how it does.

# Now we can fit the arch model using the best fit arima model parameters
p_ = order[0]
o_ = order[1]
q_ = order[2]
am = arch_model(model.resid, p=p_, o=o_, q=q_, dist='StudentsT')
res = am.fit(update_freq=5, disp='off')
print(res.summary())

Constant Mean - GARCH Model Results
====================================================================
Dep.
Variable: None                   R-squared: -56917.881
Mean Model: Constant Mean        Adj. R-squared: -56917.881
Vol Model: GARCH                 Log-Likelihood: -4173.44
Distribution: Standardized Student's t   AIC: 8364.88
Method: Maximum Likelihood       BIC: 8414.15
                                 No. Observations: 1764
Date: Tue, Feb 28 2017           Df Residuals: 1755
Time: 20:53:30                   Df Model: 9

Mean Model
====================================================================
            coef      std err       t       P>|t|    95.0% Conf. Int.
--------------------------------------------------------------------
mu        -2.3189   9.829e-03  -235.934    0.000    [ -2.338, -2.300]

Volatility Model
====================================================================
            coef      std err       t       P>|t|    95.0% Conf. Int.
--------------------------------------------------------------------
omega     1.2926e-04  2.212e-04    0.584     0.559   [-3.043e-04,5.628e-04]
alpha[1]  0.0170      1.547e-02    1.099     0.272   [-1.332e-02,4.733e-02]
alpha[2]  0.4638      0.207        2.241   2.500e-02 [5.824e-02, 0.869]
alpha[3]  0.5190      0.213        2.437   1.482e-02 [ 0.102, 0.937]
beta[1]   7.9655e-05  0.333      2.394e-04   1.000   [ -0.652, 0.652]
beta[2]   3.8056e-05  0.545      6.980e-05   1.000   [ -1.069, 1.069]
beta[3]   1.6184e-03  0.312      5.194e-03   0.996   [ -0.609, 0.612]

Distribution
====================================================================
            coef      std err       t       P>|t|    95.0% Conf. Int.
--------------------------------------------------------------------
nu         7.7912     0.362      21.531   8.018e-103 [ 7.082, 8.500]
====================================================================

Covariance estimator: robust

Let's plot the residuals again.

tsplot(res.resid, lags=30)

The plot looks like a realisation of a discrete white noise process, indicating a good fit. Let's plot the square of the residuals to be sure.

tsplot(res.resid**2, lags=30)

We have what looks like a realisation of a discrete white noise process, indicating that we have "explained" the serial correlation present in the squared residuals with an appropriate mixture of ARIMA(p,d,q) and GARCH(p,q).
Next Steps — Sample Trading Strategy

We are now at the point where we can use everything we have learned so far to build a sample trading strategy for the S&P500.

import auquanToolbox.dataloader as dl

end = '2016-11-30'
start = '2000-01-01'
symbols = ['SPX']
data = dl.load_data_nologs('nasdaq', symbols, start, end)['ADJ CLOSE']
# log returns
lrets = np.log(data/data.shift(1)).dropna()

Strategy Overview

Let's try to create a simple strategy using our knowledge so far about ARIMA and GARCH models. The idea of this strategy is as below:
- Fit an ARIMA and GARCH model every day on the log of S&P 500 returns for the previous T days
- Use the combined model to make a prediction for the next day's return
- If the prediction is positive, buy the stock, and if negative, short the stock at today's close
- If the prediction is the same as the previous day's then do nothing

Strategy Implementation

Let's start by choosing an appropriate window T of previous days we are going to use to make a prediction. We are going to use T = 252 (1 year), but this parameter should be optimised in order to improve performance or reduce drawdown.

windowLength = 252

We will now attempt to generate a trading signal for len(data) - T days.

foreLength = len(lrets) - windowLength
signal = 0*lrets[-foreLength:]

To backtest our strategy, let's loop through every day in the trading data and fit an appropriate ARIMA and GARCH model to the rolling window of length 252.
We’ve defined the functions to fit ARIMA and GARCH above. (Given that we try 32 separate ARIMA fits and fit a GARCH model for each day, the indicator can take a long time to generate.)

for d in range(foreLength):
    # create a rolling window by selecting
    # values between d+1 and d+T of S&P500 returns
    TS = lrets[(1+d):(windowLength+d)]

    # Find the best ARIMA fit
    # set d = 0 since we've already taken log return of the series
    res_tup = _get_best_model(TS)
    order = res_tup[1]
    model = res_tup[2]

    # now that we have our ARIMA fit, we feed this to the GARCH model
    p_ = order[0]
    o_ = order[1]
    q_ = order[2]
    am = arch_model(model.resid, p=p_, o=o_, q=q_, dist='StudentsT')
    res = am.fit(update_freq=5, disp='off')

    # Generate a forecast of next day return using our fitted model
    out = res.forecast(horizon=1, start=None, align='origin')

    # Set trading signal equal to the sign of forecasted return
    # Buy if we expect positive returns, sell if negative
    signal.iloc[d] = np.sign(out.mean['h.1'].iloc[-1])

Note: The backtest doesn't take commission or slippage into account, hence the performance achieved in a real trading system would be lower than what you see here.

Strategy Results

Now that we have generated our signals, we need to compare the strategy's performance to 'Buy and Hold': what would our returns be if we simply bought the S&P 500 at the start of our backtest period?

returns = pd.DataFrame(index=signal.index, columns=['Buy and Hold', 'Strategy'])
returns['Buy and Hold'] = lrets[-foreLength:]
returns['Strategy'] = signal['SPX']*returns['Buy and Hold']
eqCurves = pd.DataFrame(index=signal.index, columns=['Buy and Hold', 'Strategy'])
eqCurves['Buy and Hold'] = returns['Buy and Hold'].cumsum()+1
eqCurves['Strategy'] = returns['Strategy'].cumsum()+1
eqCurves['Strategy'].plot(figsize=(10,8))
eqCurves['Buy and Hold'].plot()
plt.legend()
plt.show()

We find the model does outperform a naive Buy and Hold strategy.
However, the model doesn’t perform well all the time; you can see that the majority of the gains happened during short durations in 2000–2001 and 2008. It seems there are certain market conditions when the model does exceedingly well. In periods of high volatility, or when the S&P 500 had periods of ‘sell-off’, such as 2000–2002 or the crash of 2008–09, the strategy does extremely well, possibly because our GARCH model captures the conditional volatility well. During periods of uptrend in the S&P 500, such as the bull run from 2002–2007, the model performs on par with the S&P 500. In the current bull run from 2009, the model has performed poorly compared to the S&P 500; the index behaved more like a stochastic trend, and the model's performance suffered over this duration.

There are some caveats here: we don’t account for slippage or trading costs, which would significantly eat into profits. Also, we’ve performed a backtest on a stock market index and not a tradeable instrument. Ideally, we should perform the same modelling and backtest on S&P 500 futures or an Exchange Traded Fund (ETF) like SPY.

This strategy can be easily applied to other stock market indices, other regions, equities or other asset classes. You should try researching other instruments, playing with window parameters, and see if you can improve on the results presented here. Other improvements to the strategy could include buying/selling only if predicted returns are more or less than a certain threshold, incorporating the variance of the prediction into the strategy, etc.

If you do find interesting strategies, participate in our competition, QuantQuest, and earn profit shares on your strategies!
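One of the improvements suggested above, trading only when the forecast clears a threshold, can be sketched as follows (the threshold value and the function names are illustrative assumptions, not part of the original strategy):

```python
def threshold_signal(pred_returns, threshold=0.0005):
    """Map forecasted returns to positions: long (+1) or short (-1) only
    when the forecast magnitude clears the threshold, else stay flat (0)."""
    signals = []
    for r in pred_returns:
        if r > threshold:
            signals.append(1)
        elif r < -threshold:
            signals.append(-1)
        else:
            signals.append(0)
    return signals

def strategy_returns(signals, realised):
    """Daily strategy log-return is the held position times the realised log-return."""
    return [s * r for s, r in zip(signals, realised)]
```

Staying flat on low-conviction days reduces turnover, which matters once the commission and slippage caveats above are taken into account.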
https://medium.com/auquan/time-series-analysis-for-finance-arch-garch-models-822f87f1d755
In this tutorial, we will be discussing a program to understand how vectors work in C/C++.

A vector data structure is an enhancement over the standard array. Unlike arrays, whose size is fixed when they are defined, vectors can be resized easily according to the requirements of the user. This flexibility removes the time-consuming step, necessary with arrays, of allocating a new array and copying the previous elements into it.

#include <iostream>
#include <vector>
using namespace std;
int main(){
   vector<int> myvector{ 1, 2, 3, 5 };
   myvector.push_back(8);
   // now the vector becomes 1, 2, 3, 5, 8
   for (auto x : myvector)
      cout << x << " ";
}

Output:
1 2 3 5 8
https://www.tutorialspoint.com/how-does-a-vector-work-in-c-cplusplus
1. Where is the default location of the alert log?
a) Normally it is under $ORACLE_HOME/rdbms/log, OR
b) show parameter background_dump_dest (after logging in with sqlplus / as sysdba).
[NOTE: $ORACLE_HOME/rdbms/trace for trace files]
2. Explain NOMOUNT, MOUNT, and OPEN?
In NOMOUNT - the instance is started
In MOUNT - the database is started but datafiles are inaccessible
In OPEN - datafiles are put in accessible mode
3. Purpose of a pfile/spfile?
To allocate system memory and to point to the control/admin file locations for the database.
4. Is it possible to change the location of admin/control files with the database in MOUNT state?
It isn't possible, because changing the location requires changing the pfile/spfile, which has already been read by the time the database gets to MOUNT mode.
5. Difference between obsolete and expired backup?
EXPIRED: Displays backup sets, proxy copies, and image copies marked in the repository as expired, that is, "not found." To ensure that LIST EXPIRED shows up-to-date output, issue a CROSSCHECK command periodically. When you issue a CROSSCHECK command, RMAN searches on disk and tape for the backups and copies recorded in the repository. If it does not find them, then it updates their repository records to status EXPIRED.
OBSOLETE: Displays backups that are no longer needed to satisfy the configured retention policy; these can be removed with DELETE OBSOLETE.
6. Default location for spfile/pfile?
$ORACLE_HOME/dbs
7. Default location for the Oracle password file?
$ORACLE_HOME/database
8. How do you set the Oracle database to use a password file?
Create an Oracle password file and you are up and running:
# orapwd file=PWDdb1.ora password=sys entries=6
9. How do you unset the database recovery area?
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='' SCOPE=BOTH;
10. Difference between a trace and an alert.
11. What does the alert log contain?
13. How do you start and stop a listener?
lsnrctl [start | stop | status]
14. What are the key Oracle network files?
tnsnames, sqlnet, listener
15. Difference between sqlnet, tnsnames and listener?
tnsnames > a list of database connection info for client/server. This is a text file that contains information about the Oracle databases. For each database, it must contain the db server name, db listener port number, and instance name.
sqlnet > communication parameter setup
listener > a list of databases to listen for on the machine
16. How do I see PL/SQL procedure output?
SET SERVEROUTPUT ON;
17. How do I execute a SQL file?
@filename.sql
18. How do I see who is currently connected?
SELECT username, program FROM v$session WHERE username IS NOT NULL;
19. How do I recompile invalid objects?
@?/rdbms/admin/utlrp.sql
20. How do I tell which database I am in?
SELECT name FROM v$database;
OR
SELECT instance_name, host_name FROM v$instance;
21. How do you export and import in Oracle?
> Make sure you run the script $ORACLE_HOME/rdbms/admin/catexp.sql
> Import and export use the IMP/EXP utilities, which are both located in the $ORACLE_HOME/bin directory
> c:\>EXP HELP=Y
> c:\>EXP USERID=scott/tiger OWNER=scott FILE=scott.dmp
> c:\>IMP USERID=scott/tiger FILE=scott.dmp FULL=Y (which loads the dmp file)
22. How to add a different service to one Oracle default listener?
Just edit the SID_LIST, i.e. add another SID_DESC entry to SID_LIST_LISTENER in listener.ora.
23. Default location of tnsnames, sqlnet and listener?
$ORACLE_HOME/network/admin
24. Steps to moving admin and control files?
*************** MOVING A CONTROLFILE
- Shutdown the database
- Copy the controlfile to the new location
- Edit the pfile/spfile to point to the new controlfile location
- Restart the instance
*************** MOVING ADMIN FILES
- You can do the same as for the controlfile
- Or use the ALTER SYSTEM command [ALTER SYSTEM SET background_dump_dest='<path>' SCOPE=spfile]
25. Steps to partial movement of log/data files by renaming them?
********************
- Shutdown the database
- Copy the logfiles to the desired new location
- Start the database in mount mode
- Issue the ALTER DATABASE RENAME FILE '<old>' TO '<new>'
- Open the database
- Put the particular tablespace offline [ALTER TABLESPACE <name> OFFLINE]
- Use the OS to move the files to the new location
- Rename the files [ALTER TABLESPACE <name> RENAME DATAFILE '<old>' TO '<new>']
- Put the tablespace back online [ALTER TABLESPACE <name> ONLINE]
26. Steps to full movement of log/data files by recreating the controlfile?
*******************
- Backup the controlfile to trace and edit the trace script to point to the new file locations
- Shutdown the database
- Move the files to their new locations at the OS level
- Startup nomount and run the edited controlfile creation script
- Open the database
27. Steps to manually create an Oracle database?
- Create directories
- Create a pfile
- Startup nomount using the pfile
- Run the create database script
- Execute the catproc and catalog scripts
- Alter the database to put it in mount and open mode
28. Steps to restore an RMAN backup to a different node?
- Define the directories for the files
- Connect to NODE 1 using RMAN and backup the database to an accessible location
- Move the following files to NODE 2 (+ the database backup pieces, + the controlfile backup piece, + the parameter file i.e. init.ora file)
- Edit the PFILE on NODE 2 to change the environment-specific parameters (user_dump_dest=, background_dump_dest=, control_files=)
- Once the PFILE is suitably modified, invoke RMAN on NODE 2 after setting the Oracle environment variables and start the database in nomount mode.
- Restore the controlfile from the backup piece.
- Mount the database. RMAN> alter database mount;
- Now catalog the backup pieces that were shipped from NODE 1.
RMAN> catalog backuppiece '/home/oracle/test/backup/o1_mf_annnn_TAG20070213T002925_2x21m6ty_.bkp';
- Get to know the last sequence available in the archivelog backup using the following command. RMAN> list backup of archivelog all;
- Rename the redo logfiles so that they can be created in new locations when the database is opened with resetlogs.
SQL> alter database rename file '/u01/oracle/product/oradata/ora10g/log/redo01.log' to '/home/oracle/test/log/redo01.log';
- Execute a script to restore the datafiles to the new node and recover:
RMAN> run {
set until sequence <seq no>;
set newname for datafile 1 to '/home/oracle/test/data/sys01.dbf';
set newname for datafile 2 to '/home/oracle/test/data/undotbs01.dbf';
set newname for datafile 3 to '/home/oracle/test/data/sysaux01.dbf';
set newname for datafile 4 to '/home/oracle/test/data/users01.dbf';
set newname for datafile 5 to '/home/oracle/test/data/1.dbf';
set newname for datafile 6 to '/home/oracle/test/data/sysaux02.dbf';
set newname for datafile 7 to '/home/oracle/test/data/undotbs02.dbf';
restore database;
switch datafile all;
recover database;
alter database open resetlogs;
}
29. Steps to creating and running an RMAN backup script?
30. Steps to manual database cloning with controlfile recreation?
- Create directories for the database you are cloning to
- Create an init file for the clone pointing to the desired controlfile location
- Backup the controlfile of the database to clone, and edit the backed-up controlfile into a script pointing to the location of the new datafiles and logfiles.
- Shutdown the database to clone and move the datafiles and redolog files to their desired new location
- Startup the instance using the created pfile that points to the location of the controlfiles for the clone, but startup should be in NOMOUNT mode.
- Now execute the controlfile script to create the controlfiles.
- Open the database with RESETLOGS to avoid errors that may arise as a result of SCN number difference/change.
31. Steps to manual database cloning without controlfile recreation?
32. Difference between cloned and duplicate database?
33. Steps to connect to RMAN using the catalog/auxiliary?
34. Steps to recreating a lost controlfile?
35. Why do we use a resetlogs option when recreating a controlfile?
36. Steps to manually upgrade a database?
- Connect to the database to be upgraded as sysdba
- Analyse the database to be upgraded [by executing the script rdbms/admin/utlu101.sql, which is located in the Oracle home for the new version; the script is executed from the old environment: execute sql>spool info.log, execute the script utlu101.sql, turn off spool with sql>spool off, then read info.log for information about the database and what needs to be upgraded]
- Backup the database to be upgraded while in the old environment [sign on to RMAN: rman "target / nocatalog", issue the backup command:
RUN {
ALLOCATE CHANNEL chan_name TYPE DISK;
BACKUP DATABASE FORMAT '@some backup directory%U@' TAG before_upgrade;
BACKUP CURRENT CONTROLFILE TO '@save_controlfile_location@';
}]
- Shutdown the instances
- Copy configuration files [spfile, password file] from the old environment to your new home and then edit the pfiles accordingly
- Make sure Oracle services are stopped
- Set your environment variables to the new release, i.e. [ORACLE_HOME, PATH, ORACLE_BASE, LD_LIBRARY_PATH and ORACLE_SID]
- Start sqlplus, connecting as SYSDBA in your new release environment
- Startup upgrade with the pfile [sql>startup upgrade pfile='<path>']
- Create the SYSAUX tablespace, and don't forget to specify [CREATE TABLESPACE sysaux DATAFILE '/u01/oradata/sysaux01.dbf' SIZE 700M EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;] because if extent management and segment space management are not set, the SYSAUX tablespace cannot be created.
- Spool the upgrade log [sql>spool upgrade.log]
- Run the upgrade script [sql>@catupgrd.sql] (this determines which upgrade scripts need to be run and then runs each necessary script according to your old release; this creates and upgrades data dictionary tables)
- Put spool off
- Run [sql>@utlu101s.sql TEXT]; this displays the status of the components in the upgraded database.
- Shutdown and restart the instance
- Run [sql>@utlrp.sql]; this recompiles any remaining stored PL/SQL and Java code and also recompiles any invalid objects.
- YOUR DATABASE IS NOW UPGRADED.
*************** In case you need recovery
- Sign on to RMAN [rman "target / nocatalog"]
- Issue the recovery commands:
STARTUP NOMOUNT
RUN {
REPLICATE CONTROLFILE FROM '@save_controlfile_location@';
ALTER DATABASE MOUNT;
RESTORE DATABASE FROM TAG before_upgrade;
ALTER DATABASE OPEN RESETLOGS;
}
*************************************************
37. Explain the steps for database backup and recovery using the OS?
38. Explain the steps for database backup and recovery using RMAN?
39. What do you do in case of a missing library in Oracle?
Copy it to the directory where it's required and then execute RELINK:
$ORACLE_HOME/bin/relink all OR bin# ./relink all
40. Explain the differences between a hot backup and a cold/closed backup and the benefits of each.
41.
42. What is an Oracle instance?
43. Describe what redo logs are and their advantage?
Redo logs are logical and physical structures that are designed to hold all the changes made to a database and are intended to aid in the recovery of a database.
44. What are the background processes in Oracle?
44. Why set db_block_size to 8192?
8192 is the default size but not always the most suitable, because it has to be bigger when it comes to data warehouses.
45. What are the default ports?
For the Oracle Net listener: 1521/1526 [commonly used listener ports: 1522-1540]
Oracle Names server: 1575
Oracle Connection Manager: 1630
HTTP server listen port: 80
EM Agent port: 1831
EM reporting port: 3339
iSQL*Plus: 5560/5580
46. How do you start them?
EM > emctl start dbconsole
iSQL*Plus > isqlplusctl [start/stop]
47. What is the difference between NOMOUNT, MOUNT and OPEN?
In NOMOUNT > the instance starts but doesn't use the controlfile yet
In MOUNT > the controlfile is in use but datafiles are not open
In OPEN > datafiles can be accessed
48. What's the benefit of 'dbms_stats' over 'analyze'?
49. How would you configure your networking files to connect to a database by the name of DSS which resides in the domain icallinc.com?
50. Explain the concept of the DUAL table?
A single-row table provided by Oracle for selecting values and expressions.
51. What are the ways tablespaces can be managed and how do they differ?
52. From the database level, how can you tell under which time zone a database is operating?
SELECT DBTIMEZONE FROM dual;
53. How do you recover a datafile that has not been physically backed up since its creation and has been deleted? Provide a syntax example.
54. You have found corruption in a tablespace that contains static tables that are part of a database that is in NOARCHIVELOG mode. How would you restore the tablespace without losing new data in the other tablespaces?
55. Typically, where is the conventional directory structure chosen for Oracle binaries to reside?
56. Explain undo retention?
57. How does Oracle guarantee data integrity of data changes?
58. Which environment variables are absolutely critical in order to run the OUI?
59. Explain how you would restore a database using RMAN to a point in time?
60. The database crashes. Corruption is found scattered among the file system, neither of your doing nor of Oracle's. What database recovery options are available? The database is in archivelog mode.
61. What query tells you how much space a tablespace named 'test' is taking up, and how much space is remaining?
SQL> SELECT SUM(bytes/(1024*1024)) mb FROM dba_free_space WHERE tablespace_name='TEST';
62. Which dictionary tables and/or views would you look at to diagnose a locking issue?
63. What is the difference between an spfile and a pfile?
63. Would an Oracle instance start in case you lost the controlfile?
Yes it would, but the database wouldn't open, because the controlfile points to the location of the physical files.
64.
65. What command would you use to encrypt a PL/SQL application?
WRAP
66. What is a package?
A collection of functions and procedures that are grouped together based on their function or application.
67. Name three advisory statistics you can collect.
Buffer Cache Advice, Segment Level Statistics, and Timed Statistics.
68. Where in the Oracle directory tree structure are audit traces placed by default?
In Unix, $ORACLE_HOME/rdbms/audit; in Windows, the event viewer.
69. Explain materialized views and how they are used.
Materialized views are objects that are reduced sets of information that has been summarized, grouped, or aggregated from base tables. They are typically used in data warehouse or decision support systems.
70. When a user process fails, what background process cleans up after it?
PMON
71. What background process refreshes materialized views?
The Job Queue Processes.
72. How would you determine what sessions are connected and what resources they are waiting for?
Use of V$SESSION and V$SESSION_WAIT
73. How would you force a log switch?
ALTER SYSTEM SWITCH LOGFILE;
74. Give two methods you could use to determine what DDL changes have been made.
You could use Logminer or Streams.
75. Explain the steps in configuring your database for auditing?
76. What does coalescing a tablespace do?
Coalescing is only valid for dictionary-managed tablespaces and de-fragments space by combining neighboring free extents into large single extents.
77. Name a tablespace automatically created when you create a database.
The SYSTEM tablespace.
78. What is the difference between a TEMPORARY tablespace and a PERMANENT tablespace?
A temporary tablespace is used for temporary objects such as sort structures, while permanent tablespaces are used to store those objects meant to be used as the true objects of the database.
79. How do you add a data file to a tablespace?
ALTER TABLESPACE <tablespace_name> ADD DATAFILE '<datafile_name>' SIZE <size>;
80. How do you resize a data file?
ALTER DATABASE DATAFILE '<datafile_name>' RESIZE <new_size>;
81. What view would you use to look at the size of a data file?
DBA_DATA_FILES
82. What view would you use to determine free space in a tablespace?
DBA_FREE_SPACE
83. How would you determine who has added a row to a table?
Turn on fine-grain auditing for the table.
84.
Explain the steps to setting up a database for fine-grain auditing?
85. How can you rebuild an index?
ALTER INDEX <index_name> REBUILD;
86. Explain what partitioning is and what its benefit is.
Partitioning is a method of taking large tables and indexes and splitting them into smaller, more manageable pieces.
87. You have just compiled a PL/SQL package but got errors. How would you view the errors?
SHOW ERRORS
88. How can you gather statistics on a table?
The ANALYZE command.
89. How can you enable a trace for a session?
Use DBMS_SESSION.SET_SQL_TRACE or use ALTER SESSION SET SQL_TRACE = TRUE;
90.
91. Name two files used for network connection to a database.
TNSNAMES.ORA and SQLNET.ORA

Technical UNIX

92. Why is a WHERE clause faster than a group filter or a format trigger?
Because, in a WHERE clause, the condition is applied during data retrieval rather than after retrieving the data.
93. What is the difference.
94. What is a user account in an Oracle database?
A user account is not a physical structure in the database, but it has an important relationship to the objects in the database and will have certain privileges.
95.
96. What are the dictionary tables used to monitor database space?
DBA_FREE_SPACE
DBA_SEGMENTS
DBA_DATA_FILES
97. What is OCI? What are its uses?
OCI is the Oracle Call Interface. When applications demand the most powerful interface to the Oracle Database Server, they use the Oracle Call Interface (OCI). OCI provides the most comprehensive access to Oracle Database functionality. The newest performance and scalability features appear first in the OCI API. If you write applications for the Oracle Database, you likely already depend on OCI. Some types of applications that depend on OCI are:
(PL/SQL apps executing SQL, C++ apps using OCCI, Java apps using the OCI-based JDBC driver, C apps using the ODBC driver, Pro*C apps, distributed SQL, VB apps using the OLEDB driver)
98. What is a Multi Node System?
A multi node system in Oracle Applications 11i means you have Oracle Applications 11i components on more than one system. A typical example is the database and Concurrent Manager on one machine, and forms and webserver on a second machine, which is an example of a two node system.
99. What is a pseudo column? Give examples.
Information such as row numbers and row descriptions is automatically stored by Oracle and is directly accessible, i.e. not through tables. This information is contained within pseudo columns. These pseudo columns can be retrieved in queries and can be included in queries that select data from tables. [A pseudocolumn behaves like a table column, but is not actually stored in the table. You can select from pseudocolumns, but you cannot insert, update, or delete their values.] The available pseudo columns are:
ROWNUM - the order number in which a row value is retrieved
ROWID - the physical row (memory or disk address) location, i.e. the unique row identification
SYSDATE - the system or today's date
UID - the user identification number indicating the current user
USER - the name of the currently logged-in user
100. What is TWO_TASK in an Oracle database?
TWO_TASK mocks the tns alias which you are going to use to connect to the database. Let's assume you have a database client with a tns alias defined as PROD to connect to database PROD on machine teachmeoracle.com listening on port 1521. The usual way to connect is sqlplus username/passwd@PROD; now if you don't want to use @PROD then you set TWO_TASK=PROD and can simply use sqlplus username/passwd; SQL*Plus will then check that it has to connect to the tns alias defined by the value of TWO_TASK, i.e. PROD.
101. What is the difference between unique and primary key?
A unique key can have nulls whereas a primary key is always not null. Both enforce unique values.
102.
103. What are the disadvantages of having raw devices?
We should depend on the export/import utility for backup/recovery (fully reliable).
The tar command cannot be used for physical file backup; instead we can use the dd command, which is less flexible and has limited recoveries.
104. What is the use of the INCTYPE option in the EXP command?
The type of export to be performed: COMPLETE, CUMULATIVE, INCREMENTAL.
List the sequence of events when a large transaction that exceeds its optimal value wraps and causes the rollback segment to expand into another extent.
105. What is the use of the FILE option in the IMP command?
The name of the file from which the import should be performed.
106. What is a Shared SQL pool?
The data dictionary cache is stored in an area in the SGA called the Shared SQL Pool. This will allow sharing of parsed SQL statements among concurrent users.
107. List the Optimal Flexible Architecture (OFA) tablespaces: how can we organize the tablespaces in an Oracle database for maximum performance?
SYSTEM - Data dictionary tables.
DATA - Standard operational tables.
DATA2 - Static tables used for standard operations.
INDEXES - Indexes for standard operational tables.
INDEXES1 - Indexes of static tables used for standard operations.
TOOLS - Tools tables.
TOOLS1 - Indexes for tools tables.
RBS - Standard operations rollback segments.
RBS1, RBS2 - Additional/special rollback segments.
TEMP - Temporary-purpose tablespace.
TEMP_USER - Temporary tablespace for users.
USERS - User tablespace.
108. How to implement multiple control files for an existing database?
Shutdown the database.
Copy one of the existing control files to a new location.
Edit the config.ora file by adding the new control file name.
Restart the database.
109.
110. How will you force the database to use a particular rollback segment?
SET TRANSACTION USE ROLLBACK SEGMENT rbs_name;
111. Why do queries fail sometimes?
Rollback segments dynamically extend to handle larger transaction entry loads. A single transaction may wipe out all available free space in the rollback segment tablespace. This prevents other users from using rollback segments.
112. What is the use of the RECORDLENGTH option in the EXP command?
Record length in bytes.
113.
114. If any one group fails, the database automatically switches over to the next group. It degrades performance.
115. Which parameter in the storage clause will reduce the number of rows per block?
The PCTFREE parameter. Row size also reduces the number of rows per block.
116. What is meant by recursive hints?
The number of times processes repeatedly query the dictionary tables is called recursive hints. It is due to the data dictionary cache being too small. By increasing the SHARED_POOL_SIZE parameter we can optimize the size of the dictionary cache.

117. What is the use of the PARFILE option in the EXP command?
Name of the parameter file to be passed for export.

118. What is the difference between locks, latches, enqueues and semaphores?
A latch is an internal Oracle mechanism used to protect data structures in the SGA from simultaneous access. Atomic hardware instructions like TEST-AND-SET are used to implement latches. Latches are more restrictive than locks in that they are always exclusive. Latches are never queued, but spin or sleep until they obtain a resource, or time out. Enqueues and locks are different names for the same thing.

119. What is a logical backup?
Logical backup involves reading a set of database records and writing them into a file. The Export utility is used for taking the backup and the Import utility is used to recover from the backup.

120.

121.

122. What are the different kinds of export backups?
Full backup - complete database.
Incremental - only tables affected since the last incremental date.
Cumulative backup - only tables affected since the last cumulative date.

123. How does space allocation take place within a block?
Each block contains entries as follows: Fixed block header, Variable block header, Row header, row data (multiple rows may exist), PCTFREE (% of free space for row updation in future).

124. What are the factors causing the reparsing of SQL statements?
Due to insufficient Shared SQL pool size. Monitor the ratio of reloads taking place while executing SQL statements. If the ratio is greater than 1 then increase the SHARED_POOL_SIZE.

LOGICAL & PHYSICAL ARCHITECTURE OF DATABASE

125. What is dictionary cache?
Dictionary cache is information about the database objects stored in a data dictionary table.
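A hedged sketch of how the reload ratio mentioned in question 124 might be monitored, using the V$LIBRARYCACHE dynamic view (the exact threshold to act on is a judgment call):

SQL> SELECT SUM(reloads) / SUM(pins) AS reload_ratio
  2  FROM v$librarycache;

If the ratio is consistently high, increasing SHARED_POOL_SIZE is one common remedy, as the answer above suggests.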
126. What is Database Buffers?
Database buffers are caches in the SGA used to hold the data blocks that are read from the data segments in the database such as tables, indexes and clusters. The DB_BLOCK_BUFFERS parameter in INIT.ORA decides the size.

127. What is a Control file?
The database's overall physical architecture is maintained in a file called the control file. It will be used to maintain internal consistency and guide recovery operations. Multiple copies of control files are advisable.

128. How will you create multiple rollback segments in a database?
Create a database, which implicitly creates a SYSTEM rollback segment in the SYSTEM tablespace. Create a second rollback segment named R0 in the SYSTEM tablespace. Make the new rollback segment available (after shutdown, modify the init.ora file and start the database). Create other tablespaces (RBS) for rollback segments. Deactivate rollback segment R0 and activate the newly created rollback segments.

130. What is cold backup? What are the elements of it?
Cold backup is taking a backup of all physical files after a normal shutdown of the database. We need to take:
All data files.
All control files.
All on-line redo log files.
The init.ora file (optional).

131.

132. How will you monitor the space allocation?
By querying the DBA_SEGMENTS table/view.

133. What is meant by free extent?
A free extent is a collection of continuous free blocks in a tablespace. When a segment is dropped its extents are reallocated and are marked as free.

134. What are the different methods of backing up an Oracle database?
- Logical Backups
- Cold Backups
- Hot Backups (Archive log)

135. The database instance crashed because of power failure. The database files are not affected by the crash. Which files will be used for the instance recovery at the next database instance startup? (Choose all that apply)
Data files
Control file
Redo log files

136. The database instance is started using SPFILE and the database is in MOUNT state.
Which two operations can you perform in the MOUNT state of the database? (Choose two)
Renaming the data files
Configuring the database in the ARCHIVELOG mode
AND NEVER NEVER Creating new tablespace, Adding database user.

137. Which two statements are true about the Flashback Database feature?
The restoration of files is not required for accomplishing the recovery. The unsolicited or wrongly committed transactions can be recovered. AND NEVER NEVER The accidental loss of schema can be recovered, The accidental loss of tablespace can be recovered.

138. Which statement is true for the database configured in the ARCHIVELOG mode?
You can take backup of the database without shutting down the database. AND NEVER NEVER Archiving information is written to the data files and redo log files, Flash recovery area must be used to store the archived redo log files, The online redo log files must be multiplexed before putting the database in the ARCHIVELOG mode.

139. The abnormal termination of the database background process causes the database instance to shut down without synchronizing the database files. Arrange the steps required for instance recovery:
i) SMON applies redo to the database
ii) The database opens
iii) Uncommitted transactions are rolled back by SMON
NB: Perform recovery manually AND SMON reads the archived redo log files and online redo log files are not required.

140. The database is running and users are connected to the database instance using the LSNR1 listener. You observe that LSNR1 has stopped. Which two statements are true in this situation? (Choose all that apply)
- New connections are not allowed and
- The connected sessions are not affected

141. UNDO_RETENTION is set to 900 in your database, but undo retention guarantee is not enabled. Which statement is true in this scenario?
Undo data is overwritten if no space is available for the new undo data. AND NEVER NEVER Undo data is retained even if no space is available and new undo generated is stored in the temporary tablespace, Undo data becomes obsolete if a long-running transaction takes more time than that specified by UNDO_RETENTION, All undo data is purged if no space is available to accommodate new undo generated.

142.

143. What are mutating tables?
When a table is in a state of transition it is said to be mutating. e.g.: If a row has been deleted then the table is said to be mutating and no operations can be done on the table except select.

144. Can you disable a database trigger? How?
Yes. With respect to a table:
ALTER TABLE table_name DISABLE ALL TRIGGERS;

145. How many columns can a table have?
The number of columns in a table can range from 1 to 1000.

146. Is space acquired in blocks or extents?
In extents.

147. What are attributes of cursor?
%FOUND, %NOTFOUND, %ISOPEN, %ROWCOUNT

148. What are the various types of Exceptions?
User defined and Predefined Exceptions.

149. Can we define exceptions twice in same block?
No.

150. Can you have two functions with the same name?
Yes.

151. Can you have two stored functions with the same name?
Yes.

152. What are the various types of parameter modes in a procedure?
IN, OUT AND INOUT.

153. What is Overloading and what are its restrictions?
Overloading means an object performing different functions depending upon the number of parameters or the data type of the parameters passed to it.

154. Can functions be overloaded?
Yes.

155. Can two functions have same name & input parameters but differ only by return data type?
No.

156. What are the constructs of a procedure, function or a package?
The constructs of a procedure, function or a package are: variables and constants, cursors, exceptions.

157. Why Create or Replace and not Drop and recreate procedures?
So that Grants are not dropped.

158. Can you pass parameters in packages? How?
Yes.
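As a minimal sketch of passing parameters to a packaged procedure (the package, procedure and table names here are made up for illustration):

SQL> CREATE OR REPLACE PACKAGE emp_pkg AS
  2    PROCEDURE give_raise(p_empno IN NUMBER, p_amount IN NUMBER);
  3  END emp_pkg;
  4  /
SQL> CREATE OR REPLACE PACKAGE BODY emp_pkg AS
  2    PROCEDURE give_raise(p_empno IN NUMBER, p_amount IN NUMBER) IS
  3    BEGIN
  4      UPDATE emp SET sal = sal + p_amount WHERE empno = p_empno;
  5    END give_raise;
  6  END emp_pkg;
  7  /
SQL> EXECUTE emp_pkg.give_raise(7369, 500);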
You can pass parameters to procedures or functions in a package.

159. What are the parts of a database trigger?
The parts of a trigger are:
A triggering event or statement
A trigger restriction
A trigger action

160. What are the various types of database triggers?
There are 12 types of triggers; they are combinations of Insert, Delete and Update Triggers; Before and After Triggers; Row and Statement Triggers. (3*2*2=12)

161. What is the advantage of a stored procedure over a database trigger?
We have control over the firing of a stored procedure but we have no control over the firing of a trigger.

162. What is the maximum number of statements that can be specified in a trigger statement?
One.

163. Can views be specified in a trigger statement?
No.

164. What are the values of :new and :old in Insert/Delete/Update Triggers?
INSERT: new = new value, old = NULL
DELETE: new = NULL, old = old value
UPDATE: new = new value, old = old value

165. What are cascading triggers? What is the maximum number of cascading triggers at a time?
When a statement in a trigger body causes another trigger to be fired, the triggers are said to be cascading. Max = 32.

166. What are mutating triggers?
A trigger giving a SELECT on the table on which the trigger is written.

167. Describe Oracle database's physical and logical structure.
Physical: Data files, Redo log files, Control file.
Logical: Tables, Views, Tablespaces, etc.

168. Can you increase the size of a tablespace?
Yes, by adding datafiles to it.

169. What are constraining triggers?
A trigger giving an Insert/Update on a table having a referential integrity constraint on the triggering table.

170. What is the use of Control files?
Contains pointers to locations of various data files, redo log files, etc.

171. What is the use of Data Dictionary?
Used by Oracle to store information about various physical and logical Oracle structures e.g. Tables, Tablespaces, datafiles, etc.

172. What are the disadvantages of clusters?
The time for Insert increases.

173. What are the advantages of clusters?
Access time reduced for joins.
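A brief sketch of creating a cluster for the join-heavy access pattern mentioned in questions 172-173 (the cluster, index and table names here are hypothetical):

SQL> CREATE CLUSTER emp_dept_cluster (deptno NUMBER(2));
SQL> CREATE INDEX idx_emp_dept ON CLUSTER emp_dept_cluster;
SQL> CREATE TABLE dept2
  2  (deptno NUMBER(2), dname VARCHAR2(14))
  3  CLUSTER emp_dept_cluster (deptno);

Rows of clustered tables sharing the same deptno are stored in the same blocks, which speeds up joins on that key at the cost of slower inserts, as the answers above note.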
174. What are the minimum extents allocated to a rollback segment?
Two.

175. An insert statement followed by a create table statement followed by rollback? Will the rows be inserted?
No.

176. What are the states of a rollback segment? What is the difference between partly available and needs recovery?
The various states of a rollback segment are: ONLINE, OFFLINE, PARTLY AVAILABLE, NEEDS RECOVERY and INVALID.

177. What is the significance of the & and && operators in PL SQL?
The & operator means that the PL SQL block requires user input for a variable. The && operator means that the value of this variable should be the same as inputted by the user previously for this same variable.

If a transaction is very large, and the rollback segment is not able to hold the rollback information, then will the transaction span across different rollback segments or will it terminate?
It will terminate (Please check).

178. Can you pass a parameter to a cursor?
Explicit cursors can take parameters, as the example below shows. A cursor parameter can appear in a query wherever a constant can appear.
CURSOR c1 (median IN NUMBER) IS
SELECT job, ename FROM emp WHERE sal > median;

179. What are the various types of RollBack Segments?
Public - Available to all instances
Private - Available to specific instance

180. Can you use %RowCount as a parameter to a cursor?
Yes.

181. What is Optimal Flexible Architecture?
A directory structure. It is a standard or a set of configuration guidelines created to ensure fast and reliable oracle databases that require little maintenance. The main purposes thereof are organising large amounts of complicated software and data on disk to prevent hardware bottlenecks and poor performance, facilitating routine administrative tasks that are highly vulnerable to data corruption, facilitating switching between databases, managing database growth, avoiding fragmentation of free space in the data dictionary, minimizing resource contention.
In short, Oracle has designed the OFA as a file system directory structure that helps in maintaining multiple versions of multiple Oracle products.

182. Which other protocols can Oracle Net in 11g use?
In release 11g Oracle's network support is limited to TCP/IP sockets, the newer Sockets Direct Protocol (SDP) and Named Pipes.

183. Explain ORACLE NET.
Oracle Net is a software component that enables connectivity across Oracle networks. It can also be defined as a suite of networking components that provide enterprise-wide connectivity solutions in distributed computing environments. Oracle Net Services consists of Oracle Net, Listener, Oracle Net Configuration Assistant, Oracle Net Manager.

184. Explain DBUA.
Database Upgrade Assistant is a GUI tool used to upgrade an existing database. Oracle recommends using DBUA while upgrading a database and DBCA while creating a database.

185. Explain DBCA.
Database Configuration Assistant is a GUI-based Oracle tool, which can run as part of OUI or as a standalone application, and is used to create, configure, manage and delete a database. DBCA runs during database installation when the create a database option is selected. The DBCA utility is able to examine database settings. It is also optionally used to export the current data of a database. DBCA stores templates in a file with a .dbc extension; variables such as ORACLE_BASE and DB_NAME allow DBCA to install files into an appropriate directory for an Oracle installation. While exporting data from the database, DBCA creates a zip file having extension .dfj, which consists of raw images of each data file. These images are automatically written directly to disk instead of using SQL statements. Therefore, it is appropriate to use .dfj files for database creation, and not as a backup.

186. Explain OEM.
Oracle Enterprise Manager is a graphical system management tool that is used to manage components of Oracle and administer databases.
OEM comprises a graphical console, management server, Oracle Intelligent Agent, repository database, and tools to provide an integrated and comprehensive system management platform for managing Oracle products. Tools integrated with OEM can be used to examine the performance of an Oracle database and tune the database accordingly.

187. Scenario: you work as a DBA for Infotech Inc. The company uses Oracle as its database. You have opened the database to perform updates. Which files if lost can cause the database to crash?
A datafile from the SYSTEM tablespace and the control file. Remember - if any of the control files or a datafile from the SYSTEM tablespace is missing, the instance will immediately abort, the reason being that these files are required for the database to remain open. AND NEVER NEVER an archivelog file or a datafile from SYSAUX, the reason being the instance can take the datafile from the SYSAUX tablespace offline automatically, and is therefore capable of remaining open even if it is lost; also, if anything happens to an archived log file, it cannot have any effect on the instance.

188. Why do you have to open the database with RESETLOGS after recreating a controlfile?

189. Which views are available in NOMOUNT mode?
It's the V$INSTANCE and V$SESSION. AND NEVER NEVER views prefixed with DBA, which are the data dictionary views only seen in OPEN mode; also you can NEVER find V$DATABASE and V$DATAFILE, because those V$ views are dynamic views that are populated from the control file and therefore are only available in MOUNT mode or OPEN mode.

190. Explain parameters.
USER_DUMP_DEST - an init parameter that specifies the destination in which the user (foreground) processes will write trace files.
BACKGROUND_DUMP_DEST - specifies the location for trace files of the background processes to be written, and also the location for the alert.log.
DB_CREATE_FILE_DEST - parameter to specify the default location for Oracle-managed datafiles, with properties [parameter type: string; syntax: DB_CREATE_FILE_DEST = directory | disk group; Modifiable: ALTER SESSION, ALTER SYSTEM]. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is not specified then DB_CREATE_FILE_DEST is also the default location for Oracle-managed control files and online redo log files.
REMEMBER: there is no such parameter as ba...

191. Give types of segments.
Table, data dictionary, index, cluster.

192. Explain the concept of segments, datablocks, extents.
SEGMENT: is a set of extents allocated for a specific database object such as mentioned above. Each time a database object is created, Oracle allocates a segment to it; this segment contains at least one extent that in turn contains at least one block. A single segment holds all the data of the corresponding database object. A segment can belong to only one tablespace, but can be associated with multiple datafiles. Extents allocated to a segment can belong to multiple datafiles, but datablocks allocated to an extent can belong to only one datafile.

193. Which command is used to exit from the Listener command utility without saving changes to the listener.ora file?
The QUIT command.

194. Explain the Listener utility.
The LSNRCTL utility enables a database administrator to manage one or more listeners. It cannot be used to create or configure listeners.
It only provides commands to control various listener functions such as starting, stopping and getting the status of listeners, plus also changing parameter settings in the listener.ora file:
START - starts a listener
STOP - stops a listener
STATUS - sees the status of a listener
SERVICES - checks services offered by the listener
VERSION - shows the version of a listener
RELOAD - forces a listener to re-read its entry in the listener.ora file
SAVE_CONFIG - used to write any changes made online to the listener.ora file
TRACE - enables tracing of a listener's activity
CHANGE_PASSWORD - sets the password for a listener's administration
EXIT - used to exit from the tool and save changes to the listener.ora file
QUIT - used to exit the tool without saving changes made to the listener.ora file
SET - used to set various options such as tracing and timeouts
SHOW - used to show options that have been set for a listener
SERVICES and STATUS commands are used to see the status of a listener.
REMEMBER - ABORT is not a listener command.

195. Explain the Data Pump utility and the files belonging to it.
Data Pump is a new feature introduced in Oracle 10g to move data between databases and to or from operating system files very efficiently. It provides parallel import and export utilities (impdp, expdp) on the command line as well as the web-based Oracle Enterprise Manager export/import interface. It is ideally beneficial for large databases and data warehousing environments. The Oracle Data Pump facility runs on the server. Some of the functions performed by Oracle Data Pump are: it is used to copy data from one schema to another between two databases or within a single database, and it can also be used to extract a logical copy of the entire database, a list of tablespaces, a list of schemas or a list of tables.
Types of files managed by Data Pump are:
- dump files: containing the data and metadata that is being moved;
- log files: these record messages associated with an operation;
- SQL files: these record the output of a SQLFILE operation.
A SQLFILE operation is invoked using the Data Pump import SQLFILE parameter and results in all of the SQL DDL that Import would be executing, based on other parameters, being written to a SQL file.
The Data Pump utility uses the export parameter CONTENT to filter the exported objects such as data, metadata or both.

196. Explain SQL*Loader?

197. When SGA_TARGET is set to a non-zero value, apart from the Shared pool, Database buffer cache and Large pool, which other memory structures are automatically configured by the Automatic Shared Memory Management feature in Oracle?
It is the Java pool and Streams pool, because if SGA_TARGET is specified then the memory pools that are automatically sized are: Buffer cache (DB_CACHE_SIZE), Shared pool (SHARED_POOL_SIZE), Large pool (LARGE_POOL_SIZE), Java pool (JAVA_POOL_SIZE), Streams pool (STREAMS_POOL_SIZE).
NOTE: Streams pool is an optional SGA structure.

198. Explain the flash recovery area.
The flash recovery area is a specific location on disk that stores and manages files for backup and recovery purposes. It is an automatic feature available in Oracle 10g. Oracle Managed Files (OMF) configures the flash recovery area and utilises the disk resources managed by Automatic Storage Management (ASM). RMAN performs the task of automatic cleaning up. The flash recovery area acts as a cache area for the backup components that are to be copied to tape. The flash recovery area contains the following database files: control files, archive log files, flashback logs, control file and SPFILE auto-backups, and datafile image copies. It is necessary to configure the flash recovery area when the database is set up for the very first time.

199. When the database is in NOARCHIVELOG mode and you used RMAN to take a backup, which types of backups are possible?
Only closed and incremental backups.
INCREMENTAL BACKUP is an RMAN backup in which only data blocks that were modified since the last incremental backup are backed up. Incremental backups are classified by levels.
The baseline backup for an incremental backup is a level 0 incremental backup. A level 0 incremental backup, like a full backup, backs up all data blocks that have ever been used. However, a full backup cannot be used as the baseline backup for subsequent incremental backups. Incremental backups at levels greater than 0 back up only data blocks modified since the last incremental backup. For example, a level 1 incremental backup will back up only those data blocks that have changed since the level 0 incremental backup. Furthermore, a level 2 incremental backup will back up only those data blocks that have changed since the level 1 incremental backup. Incremental backups are quicker, and they occupy less space because they do not contain all data blocks. Incremental backups can be applied to the baseline backup, when required, to form a complete backup.
A closed backup is a backup of one or more database files, which is taken while the database is closed. Usually, a closed backup is also a whole database backup (a backup of the control file and all datafiles of the database). A closed backup can be either consistent or inconsistent. A backup of a closed database that was cleanly shut down (NORMAL, IMMEDIATE, TRANSACTIONAL) is a closed consistent backup. On the other hand, if a database is shut down using the ABORT command or the associated instance terminates abnormally before the backup is taken, the backup will be a closed inconsistent backup. A closed backup is also called a cold backup and is the only type of backup that can be taken for a database running in NOARCHIVELOG mode.
NOTE: an open or online backup is one done while the datafiles are open and is always an inconsistent one that requires recovery (application of redo data) before it can be made consistent.

200. Which files are created by default by the CREATE DATABASE command even if not specified?
Control file, online redo log files, SYSAUX tablespace datafile.

201. Explain Normalisation?
NOTE: Many-to-many is a cardinality that is avoided in relational designs.

202. Which background process is used to write dirty buffers to the datafiles?
DBWn (it doesn't change).

203. How long does it take for MMON to gather a snapshot?
60 minutes.

204. Which view shows all the tables in a database?
It is the DBA_TABLES view, AND NEVER NEVER USER_TABLES.

205. What is an aggregate function?

206. How do you convert a date to a string?
TO_CHAR. A bonus would be that they always include a format mask.

207. Describe the block structure of PL/SQL.
Declaration, Begin, Exception, End.

208. What is an anonymous block?
Unnamed PL/SQL block.

209. Why would you choose to use a package versus straight procedures and functions?
I look for maintenance, grouping logical functionality, dependency management, etc. I want to believe that they believe using packages is a "good thing".

210. Explain what happens when you issue a commit command?

211. Your developers asked you to create an index on the PROD_ID column of the SALES_HISTORY table, which has 100 million rows. The table has approximately 2 million rows of new data loaded on the first day of every month. For the remainder of the month, the table is only queried. Most reports are generated according to the PROD_ID, which has 96 distinct values. Which type of index would be appropriate?

212. You examine the alert log file and notice that errors are being generated from an SQL*Plus session. Which files are best for providing you with more information about the nature of the problem?

213. Users pward and psmith have left the company. You no longer want them to have access to the database. You need to make sure that the objects they created in the database remain. What do you need to do?

214. Which two actions cause a log switch?

215. Which facts do you know about a shared pool?

216. You need to create an index on the PASSPORT_RECORDS table. It contains 10 million rows of data. The key columns have low cardinality.
The queries generated against this table use a combination of multiple WHERE conditions involving the OR operator. Which type of index would be best for this type of table?

217. Explain the concept of locally managed tablespace.

218. You decided to use multiple buffer pools in the database buffer cache of your database. You set the sizes of the buffer pools with the DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE parameters and restarted your instance. What else must you do to enable the use of the buffer pools?

219. How do you create a hierarchical query?
By using a CONNECT BY.

220. How would you generate XML from a query?
The answer here is "A lot of different ways". They should know that there are SQL functions: XMLELEMENT, XMLFOREST, etc. and PL/SQL functions: DBMS_XMLGEN, DBMS_XMLQUERY, etc.

221. How can you tell if a SELECT returned no rows?
By using the NO_DATA_FOUND exception.

222. What is an autonomous transaction?
Identified by PRAGMA AUTONOMOUS_TRANSACTION. A child transaction that MUST be committed or rolled back independently of its parent.

223. The DBA has decreased LOG_CHECKPOINT_INTERVAL and changed LOG_CHECKPOINT_TIMEOUT from 0 to 5. What best identifies the impact of these settings on the Oracle database?

224. Which SQL*Loader parameter enables you to load ... stored in the data file?

225. You are attempting to create a robust backup and recovery strategy using Oracle Enterprise Manager 2.0. Which two tools allow you to start the Oracle instance?

226. What are the things that Oracle 10g improved against 9i?
Oracle 9i didn't have DB_RECOVERY_FILE_DEST so had no flash recovery area, and there was no SYSAUX tablespace in 9i, which is important for recovery.

227. Which roles must be granted to the recovery catalog owner in order for RMAN to work properly when you create a recovery catalog for a production database?

228. Explain RMAN commands. How do you connect to an auxiliary and catalog database?
rman target sys/pwd@db1 catalog rman/pwd@catdb
rman target sys/pwd@db1 auxiliary sys/pwd@auxdb

229. You are backing up an Oracle database on the UNIX platform while the database is still open. Before issuing the tar command, which statements should be issued?

230. You are evaluating the complexity of an existing recovery strategy in your organisation. Offline backups on NOARCHIVELOG mode databases offer which of the following benefits over offline backups on ARCHIVELOG databases?

231. When a DBA attempts to back up the Oracle database control file, after issuing the ALTER DATABASE BACKUP CONTROLFILE TO TRACE command, where can the DBA find the backup control file creation materials Oracle created for him?

232. You are trying to determine the status of a database backup. In order to determine the data files involved in the backup, which queries are appropriate?
SELECT file# FROM V$BACKUP WHERE STATUS='ACTIVE'; -- this lists files being backed up (backup in progress)

233. You are attempting to identify synchronisation between files on a mounted database that was just backed up. Which dictionary view may offer assistance in this task?

234. Which background process does an Oracle database use to ensure the detection of the need for database recovery by checking for synchronisation?

235. Explain base tables.
A table used to define a view.

236. Explain the concept of a large pool.
The large pool is created to avoid contention in the shared pool in the case of a shared server environment. The UGA allocation takes place in the shared pool in the case of a shared server environment, so to avoid contention all the user processes are shifted to the large pool.

237. The DBA is configuring use of RMAN with a recovery catalog. When using a recovery catalog in conjunction with RMAN, where can the actual backup copies of database information be found?

238. The DBA is assessing maintenance of recovery catalog creation.
Once created, where does the recovery catalog draw much of its information from when maintenance is performed on it? [options: production database file, recovery catalog control file, production database control file, archived redo logs]

239. The DBA is configuring RMAN for use as the backup/recovery tool for the Oracle database. After creating the production database and the recovery catalog, which command is used for establishing the baseline information required in the recovery catalog for proper RMAN usage? [options: RESET, REGISTER, RESYNC, REPORT]

240. The DBA has shut down the database and made copies of a benchmark backup for stress testing. Which of the following commands will make that backup usable in the context of the rest of the recovery catalog? [options: RESYNC, CATALOG, REPORT, LIST]

241. The DBA needs to identify available backups belonging to the SYSTEM tablespace. Which command assists in this process?

242. The DBA is developing scripts in RMAN. In order to run commands in a script in RMAN for database backup, which choices are appropriate? [options: RUN{..}, EXECUTE SCRIPT{...}, CREATE SCRIPT{...}, ALLOCATE CHANNEL{...}]

243. The DBA is developing a script for backups that will ensure that no block corruption is permitted in a datafile. Which of the following commands is appropriate for that script? [options: SET MAXCORRUPT 0, SET DBVERIFY ON, SET LOG_BLOCK_CHECKSUM ON]

244. Explain the trade-off a DBA makes when considering dropping and re-creating indexes using NOLOGGING vs LOGGING.

245. What are the factors that would significantly contribute to a rapid recovery time when backing up an ARCHIVELOG database? [options: (frequency of backups, size of datafiles & I/O speed), (available memory, I/O speed and size of datafiles), (available memory), (number of multiplexed control files)]

246. The DBA has just finished making OS copies of datafiles for the backup. Which of the following choices identifies how the DBA should complete the backup?
Take tablespaces out of backup mode, backup control file to trace, switch online redo logs (OR THE REVERSE).

247. Which privilege has to be given to users to select from the data dictionary?
sql> grant select_catalog_role to user;

248. Difference between TEST/DEV/PROD Oracle databases?

249. Why is the instance started in NOMOUNT mode when manually creating a database?
The reason is, there are no control files yet for the database to be in OPEN mode, and you cannot even use MOUNT mode, because at mount mode the control file has already pointed to the location of the redo logs and datafiles but just that access to them is not granted yet. So we start the instance in NOMOUNT because we don't have control files yet but memory usage should be defined.

250. The different ways of restoring a controlfile?
********************* Restore of the Control File From a Known Location
RMAN> RESTORE CONTROLFILE FROM 'filename';
********************* Restore of the Control File to a New Location
RESTORE CONTROLFILE TO '/tmp/my_controlfile';
********************* Restore from autobackup
RMAN> SET DBID 320066378;
RMAN> RUN {
SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'autobackup_format';
RESTORE CONTROLFILE FROM AUTOBACKUP;
}
RMAN> RESTORE CONTROLFILE TO '/tmp/ctl_file.ctl' FROM AUTOBACKUP;
NOTE: to restore the controlfile from autobackup, the database must be in NOMOUNT mode and you must first set the DBID. After restoring the controlfile from autobackup, you must run RECOVER DATABASE and also perform an OPEN RESETLOGS on the database.

251. Different ways of restoring spfiles?
********************* Restore the server parameter file from autobackup
RMAN> RESTORE SPFILE FROM AUTOBACKUP;
RMAN> RESTORE SPFILE TO '/tmp/spfileTEMP.ora' FROM AUTOBACKUP;

252. Steps for recovery?
********************* Tablespace recovery:
- connect to the target database
- take the affected tablespace offline
- run SHOW ALL to see current configurations
- restore the tablespace or datafile with RESTORE, then run RECOVER
RMAN> SQL 'alter tablespace users offline immediate';
RMAN> restore tablespace users;
RMAN> recover tablespace users;
RMAN> sql 'alter tablespace users online';
********************* Database recovery with the current control file available:
- RMAN> startup mount; [best to recover a database/datafile when in MOUNT mode]
- RMAN> restore database;
- RMAN> recover database;
- RMAN> alter database open;
253. Restore and recover of datafiles to a new location. The procedure shown here is a convenient way to restore a datafile to a new location and perform media recovery on it.
RUN {
  SET NEWNAME FOR DATAFILE 3 TO 'new location';
  RESTORE DATAFILE 3;
  SWITCH DATAFILE 3;
  RECOVER DATAFILE 3;
}
254. What is the default location of portlist and PUPBLD.SQL?
********** portlist.ini is located in $ORACLE_HOME/install/portlist.ini
********** PUPBLD.SQL is located in $ORACLE_HOME/sqlplus/admin/PUPBLD.SQL
255. How to back up incrementally with block change tracking:
SQL> alter database enable block change tracking using file '/home/oracle/bc.ora';
This command brings up the background process Change Tracking Writer (CTWR), which maintains a change tracking file. This file will contain the database block addresses of Oracle blocks modified after the next level 0 full backup. Because of this file, RMAN no longer has to scan all productive datafiles and compare their blocks with the blocks of the level 0 backup in order to determine which blocks have changed. That was a very time-consuming process before 10g. Now we will invoke RMAN with the following command:
256. What is the default password for the SYS user in Oracle? Ans: CHANGE_ON_INSTALL
257. What would be the first thing you would do if an end user complains that performance is poor?
258. Explain different startups?
* STARTUP NOMOUNT: Starts the instance but does not mount the database.
* STARTUP MOUNT: Starts the instance and mounts the database but does not open the database.
* STARTUP OPEN: Starts the instance and mounts and opens the database.
* STARTUP RESTRICT: Starts the instance; however, access is restricted to users with the RESTRICTED SESSION privilege.
* STARTUP RECOVER: Starts the instance and begins recovery for whatever failure scenario occurred.
* STARTUP FORCE: Forces the instance to shutdown abort and immediately startup open. This option should only be used for instances having problems either starting or stopping.
259. How do you create a TEMPORARY tablespace using the Oracle-Managed Files (OMF) technique?
First, connect to SQL*Plus as the system/manager user.
SQL> CONNECT system/manager@school AS SYSDBA
Define a create-file destination. Let's first make sure that the DB_CREATE_FILE_DEST value is set to a valid sub-directory.
SQL> ALTER SYSTEM SET db_create_file_dest='c:' /
Create a temporary tablespace (OMF). Now, create a temporary tablespace with Oracle-Managed Files (OMF). Users create temporary segments in a tablespace when a disk sort is required to support their use of select statements containing GROUP BY, ORDER BY, DISTINCT, or UNION, or the CREATE INDEX statement.
SQL> CREATE TEMPORARY TABLESPACE mytemp
260. Give one method for transferring a table from one schema to another.
Level: Intermediate
Expected answer: There are several possible methods: export-import, CREATE TABLE ... AS SELECT, or COPY.
261. What is the purpose of the IMPORT option IGNORE? What is its default setting?
Level: Low
Expected answer: The IMPORT IGNORE option tells import to ignore "already exists" errors. If it is not specified, the tables that already exist will be skipped. If it is specified, the error is ignored and the table's data will be inserted. The default value is N.
263. You have a rollback segment in a version 7.2 database that has expanded beyond optimal; how can it be restored to optimal?
Level: Low
Expected answer: Use the ALTER TABLESPACE ..... SHRINK command.
264. If the DEFAULT and TEMPORARY tablespace clauses are left out of a CREATE USER command, what happens? Is this bad or good? Why?
Level: Low
Expected answer: The user is assigned the SYSTEM tablespace. This is bad because it can cause user objects and temporary segments to be placed into the SYSTEM tablespace, resulting in fragmentation and improper table placement (only data dictionary objects and the system rollback segment should be in SYSTEM).
265.. 266.. 267..
268. What is the proper method for disabling and re-enabling a primary key constraint?
Level: Intermediate
Expected answer: You use the ALTER TABLE command for both. However, for the enable clause you must specify the USING INDEX and TABLESPACE clauses for primary keys.
269..
270. (On UNIX) When should more than one DB writer be used?
Level: High
Expected answer: If the UNIX system being used is capable of asynchronous IO then only one is required; if the system is not capable of this, then up to twice the number of disks used by Oracle, specified by use of the db_writers initialization parameter.
271. You are using hot backup without being in archivelog mode; can you recover in the event of a failure? Why or why not?
Level: High
Expected answer: You can't use hot backup without being in archivelog mode. So no, you couldn't recover.
272..
273. How can you tell if a database object is invalid?
Level: Low
Expected answer: By checking the status column of the DBA_, ALL_ or USER_OBJECTS views, depending upon whether you own or only have permission on the view or are using a DBA account.
274. ;) 275..
276. If you have an example table, what is the best way to get sizing data for the production table implementation?
Level: Intermediate
Expected answer: The best way is to analyze the table and then use the data provided in the DBA_TABLES view to get the average row length and other data for the calculation. The quick and dirty way is to look at how many blocks the table is actually using and ratio the number of rows in the table to its number of blocks against the number of expected rows.
277. How can you find out how many users are currently logged in? How can you find their operating system id?
Level: High
Expected answer: There are several ways. One is to look at the v$session or v$process views. Another way is to check the current_logins parameter in the v$sysstat view. Another, if you are on UNIX, is to do a "ps -ef|grep oracle|wc -l" command, but this only works against a single instance installation.
278.. 279. ...is nearing 0.3.
280. A tablespace has a table with 30 extents in it. Is this bad? Why or why not?
Level: Intermediate
Expected answer: Multiple extents in and of themselves aren't bad. However, if you also have chained rows this can hurt performance.
281. How do you set up tablespaces during an Oracle installation?
Level: Low
Expected answer: You should always attempt to use the Optimal Flexible Architecture (OFA) standard or another partitioning scheme to ensure proper separation of SYSTEM, ROLLBACK, REDO LOG, DATA, TEMPORARY and INDEX segments.
282. You see multiple fragments in the SYSTEM tablespace; what should you check first?
Level: Low
Expected answer: Ensure that users don't have the SYSTEM tablespace as their TEMPORARY or DEFAULT tablespace assignment by checking the DBA_USERS view.
283. What are some indications that you need to increase the SHARED_POOL_SIZE parameter?
Level: Intermediate
Expected answer: Poor data dictionary or library cache hit ratios, getting error ORA-04031. Another indication is steadily decreasing performance with all other tuning parameters the same.
284..
285. What is the fastest query method for a table?
Level: Intermediate
Expected answer: Fetch by rowid.
286.. 287..
288. How can you tell which initialization parameters are set on your system? How about whether a value is the default?
Level: Low
Expected answer: You can look in the init.ora file for manually set parameters. For all parameters, their current value, and whether the current value is the default value, look in the v$parameter view.
290.. 291.. 292..
293. If you see contention for library caches how can you fix it?
Level: Intermediate
Expected answer: Increase the size of the shared pool.
294. If you see statistics that deal with "undo" what are they really talking about?
Level: Intermediate
Expected answer: Rollback segments and associated structures.
295.
If a tablespace has a default pctincrease of zero, what will this cause (in relationship to the SMON process)?
Level: High
Expected answer: The SMON process won't automatically coalesce its free space fragments.
296. If a tablespace shows excessive fragmentation...
297. How can you tell if a tablespace has excessive fragmentation?
Level: Intermediate
Expected answer: If a select against the dba_free_space table shows that the count of a tablespace's extents is greater than the count of its data files, then it is fragmented.
298..
299. What can cause a high value for recursive calls? How can this be fixed?
Level: High
Expected answer: A high value for recursive calls is caused by improper cursor usage, excessive dynamic space management actions, and/or excessive statement reparses. You need to determine the cause and correct it by either relinking applications to hold cursors, using proper space management techniques (proper storage and sizing), or ensuring repeat queries are placed in packages for proper reuse.
300. If you see a pin hit ratio of less than 0.8 in the estat library cache report is this a problem? If so, how do you fix it?
Level: Intermediate
Expected answer: This indicates that the shared pool may be too small. Increase the shared pool size.
301. If you see the value for reloads is high in the estat library cache report, is this a matter for concern?
Level: Intermediate
Expected answer: Yes, you should strive for zero reloads if possible. If you see excessive reloads then increase the size of the shared pool.
302.. 303..
304. In a system with an average of 40 concurrent users you get the following from a query on rollback extents: ROLLBACK CUR EXTENTS: R01 ..., R02 11 ... action is needed.
305. You see multiple extents in the temporary tablespace. Is this a problem?
Level: Intermediate
Expected answer: As long as they are all the same size this isn't a problem. In fact, it can even improve performance since Oracle won't have to create a new extent when a user needs one.
306. Define OFA.
Level: Low
Expected answer: OFA stands for Optimal Flexible Architecture. It is a method of placing directories and files in an Oracle system so that you get the maximum flexibility for future tuning and file placement.
307.
...pace on another and still have two for DATA and INDEXES. They should indicate how they will handle archive logs and exports as well. As long as they have a logical plan for combining or further separation, more or fewer disks can be specified.
308. What should be done prior to installing Oracle (for the OS and the disks)?
Level: Low
Expected answer: Adjust kernel parameters or OS tuning parameters in accordance with the installation guide. Be sure enough contiguous disk space is available.
309..
310. When configuring SQLNET on the server what files must be set up?
Level: Intermediate
Expected answer: INITIALIZATION file, TNSNAMES.ORA file, SQLNET.ORA file.
311. When configuring SQLNET on the client what files need to be set up?
Level: Intermediate
Expected answer: SQLNET.ORA, TNSNAMES.ORA.
312. What must be installed with ODBC on the client in order for it to work with Oracle?
Level: Intermediate
Expected answer: SQLNET and PROTOCOL (for example: TCPIP adapter) layers of the transport programs.
313. You have just started a new instance with a large SGA on a busy existing server. Performance is terrible, what should you check for?
Level: Intermediate
Expected answer: The first thing to check with a large SGA is that it isn't being swapped out.
314. What OS user should be used for the first part of an Oracle installation (on UNIX)?
Level: Low
Expected answer: You must use root first.
315. When should the default values for Oracle initialization parameters be used as is?
Level: Low
Expected answer: Never.
316. How many control files should you have? Where should they be located?
Level: Low
Expected answer: At least 2 on separate disk spindles. Be sure they say on separate disks, not just file systems.
317..
318. You have a simple application with no "hot" tables (i.e. uniform IO and access requirements). How many disks should you have assuming standard layout for SYSTEM, USER, TEMP and ROLLBACK tablespaces?
Expected answer: At least 7, see the disk configuration answer above.
319. Describe third normal form.
Level: Low
Expected answer: Something like: in third normal form all attributes in an entity are related to the primary key and only to the primary key.
320. Is the following statement true or false: "All relational databases must be in third normal form"?
Level: Intermediate
Expected answer: False. While 3NF is good for logical design, most databases, if they have more than just a few tables, will not perform well using full 3NF. Usually some entities will be denormalized in the logical-to-physical transfer process.
321. What is an ERD?
Level: Low
Expected answer: An ERD is an Entity-Relationship-Diagram. It is used to show the entities and relationships for a database logical model.
322..
323. What does a hard one-to-one relationship mean (one where the relationship on both ends is "must")?
Level: Low to intermediate
Expected answer: This means the two entities should probably be made into one entity.
324. How should a many-to-many relationship be handled?
Level: Intermediate
Expected answer: By adding an intersection entity table.
325. What is an artificial (derived) primary key? When should an artificial (or derived) primary key be used?
Level: Intermediate
Expected answer: A derived key comes from a sequence. Usually it is used when a concatenated key becomes too cumbersome to use as a foreign key.
326. When should you consider denormalization?
Level: Intermediate
Expected answer: Whenever performance analysis indicates it would be beneficial to do so without compromising data integrity.
UNIX Interview Questions
327. How can you determine the space left in a file system?
Level: Low
Expected answer: There are several commands to do this, for example df or du.
328. How can you determine the number of SQLNET users logged in to the UNIX system?
Level: Intermediate
Expected answer: SQLNET users will show up with a process unique name that begins with oracle; if you do a "ps -ef|grep oracle|wc -l" you can get a count of the number of users.
329. What command is used to type files to the screen?
Level: Low
Expected answer: cat, more, pg.
330. What command is used to remove a file?
Level: Low
Expected answer: rm.
331. Can you remove an open file under UNIX?
Level: Low
Expected answer: Yes.
332.
How do you create a decision tree in a shell script?
Level: Intermediate
Expected answer: Depending on the shell, usually a case-esac or an if-endif or if-fi structure.
333. What is the purpose of the grep command?
Level: Low
Expected answer: grep is a string search command that parses the specified string from the specified file or files.
334. The system has a program that always includes the word nocomp in its name; how can you determine the number of processes that are using this program?
Level: Intermediate
Expected answer: ps -ef|grep *nocomp*|wc -l
335. What is an inode?
Level: Intermediate
336.
Level: High
Expected answer: Maybe. Some UNIX systems don't clean up well after themselves. Inode problems and dead user processes can accumulate, causing possible performance and corruption problems. Most UNIX systems should have a scheduled periodic reboot so file systems can be checked and cleaned and dead or zombie processes cleared out.
337. What is redirection and how is it used?
Level: Intermediate
Expected answer: Redirection is the process by which input or output to or from a process is redirected to another process. This can be done using the pipe symbol "|", the greater-than symbol ">" or the "tee" command. This is one of the strengths of UNIX, allowing the output from one command to be redirected directly into the input of another command.
338. How can you find dead processes?
Level: Intermediate
Expected answer: ps -ef|grep zombie -- or -- who -d, depending on the system.
339. How can you find all the processes on your system?
Level: Low
Expected answer: Use the ps command.
340. How can you find your id on a system?
Level: Low
Expected answer: Use the "who am i" command.
341. What is the finger command?
Level: Low
Expected answer: The finger command uses data in the passwd file to give information on system users.
342. What is the easiest method to create a file on UNIX?
Level: Low
Expected answer: Use the touch command.
343. What does >> do?
Level: Intermediate
Expected answer: The ">>" redirection symbol appends the output from the command specified into the file specified. The file must already exist.
344. If you aren't sure what command does a particular UNIX function, what is the best way to determine the command?
Expected answer: The UNIX man -k command will search the man pages for the keyword specified. Review the results from the command to find the command of interest.
Oracle Troubleshooting Interview Questions
345. How can you determine if an Oracle instance is up from the operating system level?
Level: Low
Expected answer: There are several base Oracle processes that will be running on multi-user operating systems; these will be smon, pmon, dbwr and lgwr. Any answer that has them using their operating system process showing feature to check for these is acceptable. For example, on UNIX a ps -ef|grep dbwr will show what instances are up.
346. Users from the PC clients are getting messages indicating:
Level: Low
ORA-06114: (Cnct err, can't get err txt. See Servr Msgs & Codes Manual)
What could the problem be?
Expected answer: The instance name is probably incorrect in their connection string.
347. Users from the PC clients are getting the following error stack:
Level: Low
ERROR: ORA-01034: ORACLE not available
ORA-07318: smsget: open error when opening sgadef.dbf file.
HP-UX Error: 2: No such file or directory
What is the probable cause?
Expected answer: The Oracle instance they are trying to access is shut down; restart the instance.
348. How can you determine if the SQLNET process is running for SQLNET V1? How about V2?
Level: Low
Expected answer: For SQLNET V1 check for the existence of the orasrv process. You can use the command "tcpctl status" to get a full status; other protocols have similar command formats. For SQLNET V2 check for the existence of the LISTENER process(es), or you can issue the command "lsnrctl status".
349. What file will give you Oracle instance status information? Where is it located?
Level: Low
Expected answer: The alert.ora log. It is located in the directory specified by the background_dump_dest parameter in the v$parameter table.
350.. 351..
352. You attempt to add a datafile and get:
Level: Intermediate
ORA-01118: ...
353.. 354.
Your users get the following error:
Level: Intermediate
ORA-00055: maximum number of DML locks exceeded
What is the problem and how do you fix it?
Expected answer: The number of DML locks is set by the initialization parameter DML_LOCKS. If this value is set too low (which it is by default) you will get this error. Increase the value of DML_LOCKS. If you are sure this is just a temporary problem, you can have them wait and the error should clear.
355. You get a call from your backup DBA while you...
261. How do I know when the database was up?
SQL> SELECT instance_name, version, to_char(startup_time, 'DD-MON-YYYY HH24:MI:SS') "DB Startup Time" FROM v$instance;
262. How do I know when the database was created?
SQL> select name, created, log_mode from v$database;
263. What happens if archivelog dest is full?
If the archive log file system is full, your database will hang (ORA-00257) and no one can log in to the database until you move the archive logs to some other file system.
264. How do I know if an Oracle account has been locked, and how do I unlock it?
SQL> select username, lock_date from dba_users where ...
SQL> alter user scott account unlock;
[You can unlock the account with the above command.]
265. How do I know all the database user names?
SQL> select username from dba_users order by username;
267. How to find out what objects are in a particular tablespace?
SQL> select segment_name, segment_type, bytes from dba_segments where tablespace_name = 'EXAMPLE';
268. How to find out what tablespace an object is in?
SQL> select segment_name, tablespace_name from dba_segments where segment_name = 'SALES';
269. How do I know if my datafile is autoextensible (YES/NO)?
SQL> select file_name, autoextensible, maxbytes/(1024*1024*1024) MAX_GB from dba_data_files;
270. How to increase sga_max_size?
SQL> alter system set sga_max_size=130m scope=spfile;
SQL> create pfile from spfile;
271. How do I know if my database is using a PFILE or SPFILE?
show parameter spfile (if NULL, then your db was started up using pfile initSID.ora)
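Several of the UNIX answers above lean on the "ps -ef | grep oracle | wc -l" counting pipeline. As a rough sketch of the same filter-and-count logic in Python (the ps output lines below are fabricated for illustration only):

```python
# Sketch: the "ps -ef | grep oracle | wc -l" counting idiom in Python.
# The sample ps output is made up for illustration.
sample_ps = """\
oracle   2301     1  0 09:00 ?  00:00:01 ora_pmon_ORCL
oracle   2303     1  0 09:00 ?  00:00:02 ora_smon_ORCL
root      812     1  0 08:55 ?  00:00:00 /usr/sbin/sshd
oracle   2310     1  0 09:01 ?  00:00:00 ora_dbw0_ORCL
"""

# keep only lines mentioning "oracle", then count them (the wc -l step)
oracle_lines = [line for line in sample_ps.splitlines() if "oracle" in line]
print(len(oracle_lines))  # prints 3
```

As the answers note, this only works against a single instance installation; with several instances you would filter on the instance name instead.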
https://www.scribd.com/doc/98692876/Basic-to-Know-in-Oracle
On Fri, Mar 02, 2007 at 02:58:21PM +0000, Paul Moore wrote: > On 02/03/07, Bayley, Alistair <Alistair_Bayley at invescoperpetual.co.uk> > wrote: > >There's a big difference between getContents and Takusen: getContents > >has a non-trivial implementation (using unsafeInterleaveIO) that allows > >it to return data lazily. Takusen has no such implementation. > > ... ie, there's deep dark magic involved in the seemingly simple > getContents, which isn't easily available to mere mortals (or even > semi-immortal library designers). That figures. It's a shame, but not > totally unsurprising. I think I understand it ... here a some illustrative (I hope!) examples: stefan at stefans:~$ ghci ___ ___ _ / _ \ /\ /\/ __(_) / /_\// /_/ / / | | GHC Interactive, version 6.7.20070223, for Haskell 98. / /_\\/ __ / /___| | \____/\/ /_/\____/|_| Type :? for help. Loading package base ... linking ... done. Prelude> :m + System.IO. System.IO.Error System.IO.Unsafe Prelude> :m + System.IO.Unsafe Prelude System.IO.Unsafe> foo <- unsafeInterleaveIO (putStr "foo") -- note that IO does NOT happen immediately Prelude System.IO.Unsafe> show foo -- but forcing it causes the IO to happen (unsafely interleaved with printing (pun intentional)) "foo()" Prelude System.IO.Unsafe> show foo -- but now that it is in WHNF, forcing it again has no effect (laziness) "()" Prelude System.IO.Unsafe> -- a more interesting case is using unsafeInterleaveIO in list recursion Prelude System.IO.Unsafe> let myGetContents = unsafeInterleaveIO $ do { ch <- getChar; chs <- myGetContents ; return (ch:chs) } Prelude System.IO.Unsafe> -- simplified by omitting support for EOF handling and block reads Prelude System.IO.Unsafe> print . map reverse . lines =<< myGetContents f["oo? ?oof"Interrupted. Prelude System.IO.Unsafe> mapM_ putStrLn . map reverse . lines =<< myGetContents foo? ?oof bar! !rab muahahaha. .ahahahaum ^D^? Interrupted. Prelude System.IO.Unsafe> > > >> > >implications. > > Oh, well. 
It's mostly irrelevant for me anyway, as the data sets I'm > actually playing with are small enough that slurping them into memory > isn't an issue - so I just choose between a simple and decoupled > implementation or a more complex and scalable one, which is a fairly > standard optimisation choice. > > Thanks for clarifying. > Paul. > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > HTH Stefan
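The lazy pattern in Stefan's myGetContents has a rough analogue in other languages. As a hedged illustration (not from the thread itself), a Python generator defers IO the same way unsafeInterleaveIO does: nothing is read until a value is demanded.

```python
# Sketch (not from the thread): a generator version of myGetContents.
# IO happens only when a character is demanded, much like the
# unsafeInterleaveIO recursion in the Haskell example.
import io

def my_get_contents(stream):
    """Yield characters lazily; nothing is read until iteration forces it."""
    while True:
        ch = stream.read(1)
        if not ch:        # EOF handling, which the Haskell sketch omitted
            return
        yield ch

stream = io.StringIO("foo\nbar\n")
chars = my_get_contents(stream)   # no IO has happened yet
first_line = []
for ch in chars:                  # forcing values drives the reads
    if ch == "\n":
        break
    first_line.append(ch)
print("".join(reversed(first_line)))  # prints "oof"
```

The generator, like the Haskell version, interleaves reading with consumption; the difference is that Python makes the suspension explicit rather than hiding it behind ordinary pure values.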
http://www.haskell.org/pipermail/haskell-cafe/2007-March/023042.html
The QWSWindow class encapsulates a top-level window in Qt for Embedded Linux. More...

#include <QWSWindow>

The QWSWindow class encapsulates a top-level window in Qt for Embedded Linux.

When you run a Qt for Embedded Linux application, it either runs as a server or connects to an existing server. As applications add and remove windows, the server process maintains information about each window. In Qt for Embedded Linux, top-level windows are encapsulated as QWSWindow objects. Note that you should never construct the QWSWindow class yourself; the current top-level windows can be retrieved using the QWSServer::clientWindows() function.

With a window at hand, you can retrieve its caption, name, opacity and ID using the caption(), name(), opacity() and winId() functions, respectively. Use the client() function to retrieve a pointer to the client that owns the window. Use the isVisible() function to find out if the window is visible. You can find out if the window is completely obscured by another window or by the bounds of the screen, using the isFullyObscured() function. The isOpaque() function returns true if the window has an alpha channel equal to 255. Finally, the requestedRegion() function returns the region of the display the window wants to draw on.

See also QWSServer, QWSClient, and Qt for Embedded Linux Architecture.

See also isVisible().

Returns true if the window is opaque, i.e., if its alpha channel equals 255; otherwise returns false. See also opacity().

Returns true if the window is visible; otherwise returns false. See also isFullyObscured().

Returns the window's name, which is taken from the objectName() at the time of show(). See also caption() and winId().

Returns the window's alpha channel value. See also isOpaque().

Returns the region that the window has requested to draw onto, including any window decorations. See also client().
http://doc.trolltech.com/4.5-snapshot/qwswindow.html
In this tutorial, we will attempt to generate an amazing and interactive network graph from a pandas data frame to take things up a notch!

Also Read: NetworkX Package – Python Graph Library

Without any delay, let's begin!

This section is focused on loading and pre-processing the dataset. The dataset chosen for this tutorial is the OpenFlights Airport dataset available on Kaggle. As of January 2017, the OpenFlights Airports Database contains data for over 10,000 airports across the globe.

Read More: Working with DataFrame Rows and Columns in Python

In the code below, we import the pandas module and load the routes.csv file into the program. Out of all the columns in the dataset, we only require the source and destination airports.

import pandas as pd
df = pd.read_csv('routes.csv')
df = df[['Source airport','Destination airport']]
df = df[:500]
df.head()

To make the processing easier and the computation less complex, we only take the top 500 rows from the dataset, and we display the first five rows using the head function.

We separate the source and destination nodes into two separate lists using the Python code below.

sources = list(df['Source airport'])
destinations = list(df['Destination airport'])

Now we will move on to the generation of the network graph using the networkx and pyvis libraries in the next section.

Generation of Network Graph

We will start off by creating an empty graph using the net.Network function and passing a number of attributes of the empty network graph. The next step is to iterate over the sources list and add nodes along with their labels and titles. After this, we will be adding edges using the add_edge function. We will be making use of exception handling to make sure all errors are taken into consideration (if any).

Also Read: Python Exception Handling – Try, Except, Finally

Look at the code mentioned below.
# imports needed for the snippet below
from pyvis import network as net
from IPython.display import display, HTML

g_from_data = net.Network(height='600px', width='50%',
                          bgcolor='white', font_color='black',
                          heading='A Networkx Graph from DataFrame',
                          directed=True)

for i in range(len(sources)):
    try:
        g_from_data.add_node(sources[i], label=sources[i], title=sources[i])
    except:
        pass

for (i, j) in zip(sources, destinations):
    try:
        g_from_data.add_edge(i, j)
    except:
        pass

g_from_data.show_buttons(['physics'])
g_from_data.show('A_Complete_Networkx_Graph_From_DataFrame.html')
display(HTML('A_Complete_Networkx_Graph_From_DataFrame.html'))

Have a look at the network graph generated below. It's amazing how interesting and fun to check out the graph looks.

Conclusion

I hope you were able to understand how to generate network graphs from a pandas data frame using the pyvis library in the Python programming language. Thank you for reading!

I would recommend you have a read of the following tutorials:

- Network Analysis in Python – A Complete Guide
- Neural Networks in Python – A Complete Reference for Beginners
- An Ultimate Guide On Insider Threat Risks And Their Prevention
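If pandas is not available, the same source/destination edge list used throughout the tutorial can be built with just the standard library. A minimal sketch, with made-up sample rows but the tutorial's real column names:

```python
# Sketch: building the (source, destination) edge list with only the
# standard library. The column names mirror the routes.csv dataset;
# the sample rows here are fabricated for illustration.
import csv
import io

sample = """Source airport,Destination airport
AER,KZN
ASF,KZN
ASF,MRV
"""

edges = []
reader = csv.DictReader(io.StringIO(sample))
for row in reader:
    edges.append((row["Source airport"], row["Destination airport"]))

# unique airports become the graph's nodes
nodes = sorted({n for edge in edges for n in edge})
print(nodes)      # ['AER', 'ASF', 'KZN', 'MRV']
print(edges[0])   # ('AER', 'KZN')
```

From here, the nodes and edges lists can be fed to add_node and add_edge exactly as in the pyvis code above.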
https://www.askpython.com/python/examples/network-graphs-from-pandas-dataframe
How do I get the value of a variable that I have defined previously (using addVar) in Gurobi Python? I need to compare the value of the Gurobi variable and then perform calculations to reach my objective variable. The same has to be done before optimization.

Best answer

You have two options. The most straightforward is to save a reference to the Var object returned by Model.addVar. Another way is to give your variable a name with the name parameter in addVar, then retrieve the variable with Model.getVarByName.

from gurobipy import *

m = Model()
a_var = m.addVar(name="variable.0")
m.update()  # make the newly added variable visible to queries
# ...
a_var_reference = m.getVarByName("variable.0")
# a_var and a_var_reference refer to the same object

m.optimize()

# obtain the value of a_var in the optimal solution
if m.Status == GRB.OPTIMAL:
    print(a_var.X)
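The two access patterns in the answer (keep the returned handle, or look the variable up by name later) can be sketched without Gurobi installed. The toy registry below is only an analogy, not the gurobipy API; all class and method names here are invented:

```python
# Toy analogy (not gurobipy): a model that hands out variable objects
# and can also find them again by name, like Model.getVarByName does.
class ToyVar:
    def __init__(self, name):
        self.name = name
        self.X = None          # filled in by "optimize"

class ToyModel:
    def __init__(self):
        self._vars = {}

    def add_var(self, name):
        v = ToyVar(name)
        self._vars[name] = v   # the model keeps a name -> object map
        return v               # the caller may keep this reference

    def get_var_by_name(self, name):
        return self._vars[name]

    def optimize(self):
        for v in self._vars.values():
            v.X = 1.0          # pretend every variable solves to 1.0

m = ToyModel()
a_var = m.add_var("variable.0")
same = m.get_var_by_name("variable.0")
m.optimize()
print(a_var is same, a_var.X)  # True 1.0
```

Either handle reaches the same object, which is why reading a_var.X after optimize works regardless of how the reference was obtained.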
https://pythonquestion.com/post/gurobi-python-get-value-of-the-defined-variable/
How can my code discover the name of an object?

Generally speaking, it can't, because objects don't really have names. Essentially, assignment always binds a name to a value; the actual name is only known by the namespace it's in, and a single value can be referenced from many different namespaces (as well as containers). The same is true of def and class statements, but in that case the value is a callable. Consider the following code:

class A:
    pass

B = A
a = B()
b = a

print b
<__main__.A instance at 016D07CC>
print a
<__main__.A instance at 016D07CC>
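As the answer says, the best you can do is search a namespace for names bound to the object. A minimal sketch (the names_of helper and the explicit namespace dict are illustrative, not part of any standard API):

```python
# Sketch: finding which names in a namespace are bound to a given object.
# This inverts the name -> value mapping; an object can have zero, one,
# or many names, which is why objects carry no canonical name.
def names_of(obj, namespace):
    return sorted(name for name, value in namespace.items() if value is obj)

class A:
    pass

B = A            # two names, one class object
instance = B()

scope = {"A": A, "B": B, "instance": instance}
print(names_of(A, scope))         # ['A', 'B']
print(names_of(instance, scope))  # ['instance']
```

In real code the namespace would typically be globals() or vars(some_module); the result still depends entirely on which namespace you search.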
http://www.effbot.org/pyfaq/how-can-my-code-discover-the-name-of-an-object.htm
The Problem

Despite how useful and cool the control is, there is no way OOB to parse the repeating section data if you need to do something with it. Why would you do that? Here are some scenarios / use cases: the attachments below. The SharePoint 2010 one will be released soon.

Update (Aug 13th, 2015): The SharePoint 2010 version is now available, you can download it from here.

Notes:

Excellent resource. This is a great article!!! We had a few nightmares with repeating sections. I am sure this article is going to end them soon.

Awesome! Thank you for sharing (I will wait for the 2010 version).

Thank you. Just what I was looking for. :-)

Thanks! Coming in 1 week max.

Thanks .. Glad you liked it

Great! No more loops and other different actions!

Thanks Glenda, have you tested it already?

Great Job Ayman! Have you tried the JSLink instead of a custom column? This can enable the end user to customize how the data looks in the list. Also, this way it can work in O365.

Hi Osama,

Well, I wanted to create something that works for both SP2010 and SP2013, so a Custom Field Type was my only choice. If you want to further customize how the repeating section is rendered, the WSP creates a folder under Layouts called "NintexRepeatingSectionView"; this folder contains two files:

- GetRepeatingSections.js --> This is the file which is referred to from the XSLT of the custom field type. It does all the heavy lifting like querying the FormData hidden field, reading its XML to retrieve the specific repeating section the end user chose while creating the column, parsing the repeating section XML and extracting its records. It is also responsible for rendering the values in a table.

- NintexRepeatingSection.css --> This is the css file used within GetRepeatingSections.js to style the table.

As you can see, you have full control over both if you want to customize how the repeating section records are rendered.
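The heavy lifting GetRepeatingSections.js does (reading the FormData XML and extracting the repeating-section records) can be sketched outside the browser too. As an illustration only, with invented element names rather than Nintex's actual FormData schema, the extraction step looks like:

```python
# Sketch: pulling repeating-section records out of an XML blob.
# The tag names (<Items>, <Item>, and the field elements) are
# hypothetical; the real Nintex FormData schema may differ.
import xml.etree.ElementTree as ET

form_data = """
<Items>
  <Item><Product>Widget</Product><Quantity>2</Quantity></Item>
  <Item><Product>Gadget</Product><Quantity>5</Quantity></Item>
</Items>
"""

records = []
for item in ET.fromstring(form_data).findall("Item"):
    # each child element becomes a column -> value pair
    records.append({field.tag: field.text for field in item})

print(records[0])    # {'Product': 'Widget', 'Quantity': '2'}
print(len(records))  # 2
```

Once the records are in a list of dicts like this, rendering them as a table (which the JavaScript does in the view) is straightforward.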
Not as powerful as JSLink, but it works on any view and, most importantly, works for both SP2010 and SP2013. Next weekend, I will just change the mapping from [15] to [14] and hence have a working version of the WSP for SharePoint 2010. I will definitely consider JSLink for Office 365, it's obviously the only option for me.

Thanks, Ayman El-Hattab

Great, thanks for the information!

The SharePoint 2010 version has been released .. The "Nintex Repeating Section Data" Field Type for SharePoint 2010

Getting an error when trying to deploy as a solution. SP 2013
This solution contains invalid markup or elements that cannot be deployed as part of a sandboxed solution. Solution manifest for solution 'guid' failed validation, file manifest.xml, line 6, character 4: The element 'Solution' in namespace '' has invalid child element 'TemplateFiles' in namespace ''. List of possible elements expected: 'ActivationDependencies, FeatureManifests' in namespace ''.

Hi Cory,
This is a farm solution, not a sandboxed one.
Ayman

Since this is a farm solution, what is the name of the feature to activate? On my dev box I installed it as a farm solution but do not see it in my site collection's features.

Hi Cory,
You don't need to activate any features. The custom field type is available for you to use once you deploy the WSP to your farm.
Ayman

This worked great! Is there a similar solution if you need to print or provide reporting of some type?

Can you please explain the use case in more detail? If it's something of value, I can definitely implement it.

Hmmmm. I am new to Nintex so I will see if my vision makes sense. I will be designing a form in the near future that will have a repeating section on it. The client will want a printable (or exportable) version to send hard copy or email. I tried to export a test and the column with the repeating section comes over blank. Just wondering if including the repeating section is possible. Thanks

Hi Stuart,
Printing forms is coming soon as far as I know.
Ayman

Any chance I could get the source code Visual Studio solution for this? Also, can it be modified to be only site collection scoped?

Hello, this looks great and it may help me out with the form I have just created (I am only new at Nintex Forms). Could you tell me where I need to save the .wsp file on my SharePoint server so that I have the option to add the Nintex Repeating Section Data field type in columns? Thanks, Nicole

Hi Nicole Simpson, you need to deploy the WSP like any other WSP. Check this quick tutorial for more info.

Hi Scott, we're working now on Version 2 of the solution. Once published, feel free to reach out for the source code. And no, it cannot be site collection scoped because it's not a feature; that's a SharePoint limitation. Ayman

Hi, thanks so much for this. It has been deployed and now I can add the repeating section data field when I create new columns. Thanks again, Nicole

Great!!!

Quick side question: is the metadata stored in repeating sections searchable?

How do you control the order in which the repeating section controls (columns) show up in the list view? I have a repeating section with a person control named AccountName and a calculated value control named DisplayName. No matter what order I add the controls to the repeating section, the person control data (AccountName) is always on the left and the calculated value data (DisplayName) always appears on the right in the list view: AccountName, DisplayName. I would like the list view to show DisplayName on the left and AccountName on the right, which is the same order they show up on the form inside the repeating section, with DisplayName on the left and AccountName on the right. In your example you have four controls, and the order they appear in the repeating section is the same order they appear in the list view: Product, Quantity, ListPrice, SubTotal.

No problem, glad it worked for you.

Very nice post. Has anyone tried to export a list with the Nintex column to Excel?
The Nintex column that you created is empty when I export the list to Excel. Any idea why?

Ayman El-Hattab: Hi Ibrahim, yes, this is not supported.

Thank you Ayman, good work. But how can I do it without using a UDA? Because I am using Nintex WF for Office 365.

Hi, this post is very useful, but I'm new to Nintex. I don't know how to use this field, because after deploying I can't find any new feature and I can't see this Nintex repeating section. I also want to know whether the value of this field can be crawled by SharePoint search, and whether I can use this field in a content hub. Thank you, Naeemeh

Hi Naeemeh, you don't need to activate a feature. Once you deploy the WSP, you should find a new field type called "Nintex Repeating Section Data" in the column creation pages. Indexing is not supported at this stage but will be added in later versions of the solution. Thanks

Hi Ibrahim, unfortunately, the solution is only available for SharePoint on-prem. If there is enough demand, I will create something for Office 365.

Hi El-Hattab, I deployed this solution, but after installing it the field wasn't added to my columns. What can I do? Thanks

Hi Naeemeh, how did you deploy the solution?

Hi El-Hattab, yes, I deployed it. But after I removed this solution, the field appeared and I can use it. I removed the solution for testing; I don't know why this happened on my server. Thanks

Hi, I have not yet tried this (not allowed to install the WSP yet) but it seems pretty nice. One question: I need to have a view that presents the data in a way that is "Export to Excel" friendly. How would I go about doing such a thing? Regards, Leif

Hi Leif, this solution does not support export to Excel; if you add the column to the list view and export it to Excel, it will appear as blank. If you don't have a complex form, check this blog: Repeated Section to Plain Text.

Ran into an issue where I'm not seeing the option to activate on the site collection.
I have it deployed globally to all sites, but I'm not seeing it as an option to activate at the site collection level. I retracted and deployed again, and it is still not showing up. Have you come across this? SP Foundation 2013.

Hi Leif, SharePoint custom field types are not supported in the Excel view, unfortunately.

Hi Michael, you don't have to activate any features for a custom field type. Once the WSP is deployed, you can create columns based on the new custom field type (Nintex Repeating Section Data).

Hi Ayman, I have tried your solution and it works perfectly when adding new items. But when I edit an item, the repeating section always shows the values of the last set of controls for all the child controls. Please find the screenshots below. Can you tell me how the form gets rendered for the repeating section?

Hi, first of all, thank you so much for taking the time to make this great solution. I got it to display on the list. However, is there any way to pass these repeated section entries into a SQL Server database (or another database/file) using a SQL query?

This is a great solution, especially the .wsp file. If I can get my enterprise SharePoint support to apply it to the farm/Central Admin, is this field available for insertion into a Nintex workflow notification action email template?

This worked great for one of my lists; however, when I attempted to do the same on another list, it failed to display the table. Have you seen this error before?

Hi, nice solution you have there. I followed the instructions, but when I add a new item, the column shows nothing. Is there anything else to do? Kindly advise. -willy-

The same thing happens to me. Simple repeating table with a few fields, and my list column is completely empty after a new form is submitted.
https://community.nintex.com/t5/Community-Blogs/Displaying-Repeating-Section-Data-in-List-Views-The-Easy-Way/bc-p/79471/highlight/true
I have a project that uses an I2C sensor and an Arduino Mega 2560 REV3. I have included the code that is running below; it reads 16 bits from the I2C sensor and concatenates them together.

#include <Wire.h>

int readSensor(int address) {
  int c;
  int c1;
  int c2;

  // start the communication with the IC at the provided address
  Wire.beginTransmission(address);
  // send a byte and ask for register zero
  Wire.write(0);
  // end transmission
  Wire.endTransmission();

  // request 2 bytes from the address
  Wire.requestFrom(address, 2);
  // wait for a response
  while (Wire.available() == 0);

  // put the temperature in variable c
  c1 = Wire.read();
  c2 = Wire.read();
  c = (c1 << 8) + c2;

  return c;
}

I have found that the issue occurs during the while() loop: the sketch will sometimes get stuck there, due to what I assume is a communication issue with the sensor. This appears to be somewhat random; if I unplug and replug the wires and restart the board, sometimes it works and sometimes it does not. I do not have an external pull-up resistor, and maybe that is the issue. Does anyone have experience with this, and is there an alternative potential solution?
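A common fix for exactly this kind of hang is to bound the wait with a timeout instead of spinning forever on Wire.available(). The sketch below shows the pattern in plain C++ with the clock and the availability check passed in as callables, so the same logic can be adapted to the Arduino sketch (by passing millis and a small lambda around Wire.available()). The names and the timeout budget are illustrative, not taken from the original code.

```cpp
#include <functional>

// Wait until available() reports data, or until timeoutMs elapses
// according to now() (a millis()-style monotonic clock). Returns true
// if data arrived in time, false on timeout, so the caller can bail
// out and retry instead of hanging the whole sketch.
bool waitForData(const std::function<int()>& available,
                 const std::function<unsigned long()>& now,
                 unsigned long timeoutMs) {
  const unsigned long start = now();
  while (available() == 0) {
    // Unsigned subtraction stays correct across millis() rollover.
    if (now() - start >= timeoutMs) {
      return false;
    }
  }
  return true;
}
```

In readSensor() this would replace the bare while loop; on timeout the function can return a sentinel such as -1 and let the caller retry or reinitialize the bus. Separately, flaky behavior that changes when the wires are reseated often points at the physical bus itself: stronger external pull-up resistors on SDA/SCL (4.7 kOhm is a common choice) are worth trying.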
https://forum.arduino.cc/t/i2c-hangs-on-arduino-mega-platform/672820
Many Python frameworks and modules have dependencies and require installation in their respective folders in order to work. If you're using any Python module, you'll find it confusing to install it under Python's working directory on Windows. This is where easy_install is going to make things easy for you. It installs Python modules and their respective dependencies easily, without you having to do anything.

Installation

First, you need to download the ez_setup.py script. Second, you need to place this file in a directory where you have setup.py. That place is usually tools/scripts, or maybe some other path depending on your platform. Once you copy this file into that folder, simply run it.

Note: In the case of Python 2.7, the setup.py file is in "C:/python2.7/tools/scripts". Your drive letter may vary between C: or D: or whatever.

After running the ez_setup script, you'll find some files inside the folder "/python2.7/scripts". Now add the following code to the top of your setup.py file.

from ez_setup import use_setuptools
use_setuptools()

You have to add the path "c:/python2.7/scripts", or whatever applies in your case, to the PATH environment variable. Follow these steps to add the path to the PATH field.

- Right-click on My Computer, then click 'Properties'.
- Click on the "Advanced" tab and then "Environment Variables".
- Check the second section, namely "System Variables".
- Find the "PATH" field by scrolling through the variables.
- Click on the "PATH" field to highlight it and then click Edit.
- Now copy the path "C:/python2.7/scripts" and paste it into the text area.

Note: Make sure you have a semicolon at the start and the end of the pasted path.

Now that we have the path to the scripts folder, let's test it. Go to the command prompt and enter "easy_install" (without quotes, of course). In the case of Windows 7 or Vista, you'll have to go through all that admin privileges stuff. If you got that sorted, the script will run the dependencies check and will download the required files.
Once it is finished, you're free to use easy_install for installing Python modules on your Windows machine. Hope this tutorial helps you with easy_install on Windows. If you have any questions, feel free to post here or send me a tweet at @maheshkale.
https://onecore.net/how-to-install-python-modules-with-easy_install-on-windows.htm
Opening a Project

As opposed to creating a new project, you can open a project that either you or someone else created. To open an existing project: This action would display the Open Project dialog box. This allows you to select a project and open it.

Using Various Projects in the Same Solution

With Microsoft Visual Studio, you can create many projects in one solution. To add a project to a solution: Any of these actions would display the Add New Project dialog box. You can then select the type of project in the middle list, give a name to the project, and click OK. In the same way, you can add as many projects as you judge necessary to your solution. When a solution possesses more than one project, the first node in the Solution Explorer becomes Solution 'ProjectName' (X Projects). ProjectName represents the name of the first project and X represents the current number of projects.

When you are using more than one project in the same solution, one of the projects must be set as the startup. The project that is set as the startup has its name in bold characters in the Solution Explorer. You can change and use any project of your choice as the startup. To do this, in the Solution Explorer, you can right-click the desired project and click Set As StartUp Project.

When a solution possesses more than one project, you can build any project of your choice and ignore the others. To build one particular project, you can right-click it in the Solution Explorer and click Build.

The Project Interface

Besides the windows and functionalities we reviewed earlier, when you work on a project, there are other features that become available.

The Server Explorer

The Server Explorer is an accessory that allows you to access SQL Server databases without using the physical server and without opening Microsoft SQL Server Management Studio: The items of this window display in a tree. To expand a node, you can click its + button. To collapse it, click its - button.
The Solution Explorer

The Solution Explorer is a window that displays the file names and other items used in your project: The items of this window display in a tree. To expand a node, you can click its + button. To collapse it, click its - button. To explore an item, you can double-click it. The result depends on the item you double-clicked.

The Solution Explorer can be used to create a new class, a new folder, or a reference. To perform any of these operations, you can right-click a folder node, such as the name of the project, position the mouse on Add, and select the desired operation. You can also perform any of these operations from the Project category of the main menu. Besides adding new items to the project, you can also use the Solution Explorer to build the project or change its properties. If you add one or more other projects to the current one, one of the projects must be set as the default. That project would be the first to come up when the user runs the application.

Application: Using the Solution Explorer

The Class View

The Class View displays the various classes used by your project, including their ancestry. The items of the Class View are organized as a tree list with the name of the project on top: The Class View shares some of its functionality with the Solution Explorer. This means that you can use it to build a project or to add a new class. While the Solution Explorer displays the items that are currently being used by your project, the Class View allows you to explore the classes used in your applications, including their dependencies. For example, sometimes you will be using a control of the .NET Framework and you may wonder from what class that control is derived. The Class View, rather than the Solution Explorer, can quickly provide this information. To find it out, expand the class by clicking its + button.
Application: Using the Class View

using System;
using System.Windows.Forms;

namespace Exercise3
{
    // Minimal form so that the snippet compiles on its own; the
    // tutorial's Exercise form is defined in a separate exercise.
    public class Exercise : Form
    {
    }

    public class Central
    {
        public static int Main()
        {
            Application.Run(new Exercise());
            return 0;
        }
    }
}

Fundamentals of Control Addition

The Client Area

On a form, the client area is the body of the form without the title bar, its borders, and other sections we have not mentioned yet, such as the menu, scroll bars, etc.:

Besides the form, every control also has a client area. The role of the client area is to specify the bounding section where the control can be accessed by other controls positioned on it. Based on this, a control can be visible only within the client area of its parent. Not all controls can be parents.

Design and Run Times

Application programming primarily consists of adding objects to your project. Some of these objects are what the users of your application use to interact with the computer. As the application developer, one of your jobs will consist of selecting the necessary objects, adding them to your application, and then configuring their behavior. There are various ways you can get a control into your application. If you are using Notepad or a text editor to add the objects, you can write code. If you are using Microsoft Visual C#, you can visually select an object and add it.

To create your applications, there are two settings you will be using. If a control is displaying on the screen and you are designing it, this is referred to as design time. This means that you have the ability to manipulate the control. You can visually set the control's appearance, its location, its size, and other necessary or available characteristics. The design view is usually the most used and the easiest, because you can glance at a control, have a realistic display of it, and configure its properties. Visual design is the technique that allows you to visually add a control and manipulate its display.
This is the most common, the most regularly used, and the easiest technique. The other technique you will be using to control a window is code, that is, writing the program. This is done by typing commands or instructions using the keyboard. The time when the program is actually executing is considered, or referred to, as run time. Code is the only way you can control an object's behavior while the user is interacting with the computer and your program.
http://www.functionx.com/vcsharp/fundamentals/gaf.htm
I just downloaded Entity Framework 6 and created a brand new project to test it. We currently use EF 5. After adding all my tables and stored procedures, I tried to build the project, but I get these errors:

Value of type 'System.Data.Objects.ObjectParameter' cannot be converted to 'System.Data.Entity.Core.Objects.ObjectParameter'.

Value of type 'System.Data.Entity.Core.Objects.ObjectResult(Of DataLibrary.MyStoredProc_Result)' cannot be converted to 'System.Data.Objects.ObjectResult(Of DataLibrary.MyStoredProc_Result)'.

I cannot figure out why this will not work out of the box; EF 5 had no such issues. I am using VS 2012, .NET 4.5, and VB.NET (also tried with a C# project: same issue). Any ideas?

EDIT: The answer was to install the EF6 Tools for VS 2012. I did not know I had to do this, since I thought they were installed when I added the Entity Framework package.

Answer: I guess you are using the EF tools from VS 2012, which are still bound to the original EF distribution (part of the .NET Framework). EF6 uses out-of-band distribution and doesn't work with the previous tooling; that is the reason those types have slightly different namespaces and cannot be converted from one to another. The solution is to download and install the EF6 tools for VS 2012, or to use VS 2013, where the tools should be included.

You can also overcome this scenario by replacing:

using System.Data.Objects;

with:

using System.Data.Entity.Core.Objects;

You may need to update the using statements in your T4 templates, like your Context.tt file, so that auto-generated files continue to work when re-generating.
https://entityframeworkcore.com/knowledge-base/19661494/entity-framework-6-cannot-build-after-adding-stored-procedures-to-data-model
Dailypapers

Dailypapers is a small extension for Google Chrome that changes the wallpaper of the new tab every day. The source of the pictures is the subreddit r/EarthPorn.

Filed in: Videos

"Knife is Life" is the successor to the classic "Knife" series. The filming took place at the WA Design studio in old town Seattle. This is a concept that explores the ethics of duality, human nature, and the notion of chaos. Knife is Life attempts to rewrite the idea of a myth in a modern context, and to explore the possibilities of violence in today's society. This series of videos will mainly focus on the sculptural, action-packed fight sequences that take place within the fictional, live-action story. I am interested in exploring how knife fights fit into the context of a contemporary action story, and what the interplay is between the action and artistry of a fight.

Folded Knives:
- Folded Knife #1: Single Bevel Knife, Double Bevel Knife
- Folded Knife #2: Double Bevel Knife, Striking Knife
- Folded Knife #3: Single Bevel Knife
- Folded Knife #4: Single Bevel Knife

Chosen Resource: Knife is Life by Daniel Scheinert, Parts I through VIII.

Knife is Life is an Open-Source project; the shared video files are made freely available online. If you enjoy this project, please consider donating to the project!

Thanks for watching! If you like what you see, visit: What can you make from scratch, with just a few ingredients.

Beautiful landscape pictures directly in Chrome's new tab page. You just read: if you're a frequent Redditor, then Dailypapers might be a good extension for you.

Articles

The Forest

by The Forest

The tree is a powerful symbol, a tree shape. This shape holds such different meaning depending on who needs to perceive the tree, or does it? It's not about what the tree is, but who the person is who wants to understand the tree.
Let's talk about the people who come to understand the tree, the people who have always asked questions about the tree. Who are these people? The same is true for every living thing in life.

If you feel happy, then you will want to show this happiness to the people around you. You will want to spread your happiness around because you want people to feel good or happy. The relationships in your life will be strong. People will do things for you without expecting anything in return. People will accept you into their lives because of your happiness, attitude, and kindness.

The world will start to change. People who are unhappy with their lives will seek to end their own lives. People who are unhappy with their life seek ways to change their thoughts so that they can be happy. This is the beginning of a new trend in the world. The hopeless might become alive; the hopeless will take hold of the world and lead it. The hopeless will change the world to be a better place. It's believed that this is the beginning of the beginning.

So what is the best way to make sure the relationships in your life are strong? Strengthen the relationships so that everything and everyone around you is happy. You will begin to understand the forest, trees, and plants.

It's true that this is about a great learning. You can go into the forest as early as you want. You can do it as late as you want, because it doesn't matter. It is the process of learning that matters; it is the process of change that will help people. The forest is the beginning to the end.

At first, when you start out in the forest, you will feel lost and confused. You might feel tired because the forest is overwhelming. The way to find your way is with the human beings that walk with you. If you get lost, you will find your way to where you need to be. The forest will bring a lot of different things with it.
If you are walking in the forest

Dailypapers Full Version:
- Turns your new tab page into a beautiful new wallpaper each day
- No need to dig around for the picture, or manually set it as the new tab page background any longer
- Download full-resolution images from the original Reddit post

Dailypapers installation instructions:
- Download the Dailypapers extension from the Chrome Web Store
- Update the Dailypapers extension
- Click on the little blue button in the lower-right corner of the new tab page
- Chrome will bring up a wizard that will ask you to select the extension
- Accept, allow, and install the extension
- The extension will ask you to restart the browser (which you might not have to do)
- If you don't want the extension to change your new tab page background each day, you'll have to un-check its checkbox

Bonus recommendation

Have you noticed that the photos on Chrome's new tab page are always landscape pictures? This is actually kind of cool, and this is a good reason to use this extension. If you like to customize your new tab page even more, I recommend that you have a look at Gravity (official extension). It adds a colorful, stylish view to your Chrome new tab page, and the great thing about Gravity is that it allows you to choose which sites you want to see on your new tab page.

I recently started using the Dailypapers extension. It's fantastic. I like to change my new tab as well. I set Chrome's new tab page to change every day. You can see the image here: currently the image shows the Day 0 image. Clicking on it opens a link to the post of the day, the Day 0 image on Reddit. Loved the extension and have used it for 1 week so far. I'm assuming that it's a great result for 12 euros/EUR.
What's New In?

For some time, I wanted to build something really simple, fast, and useful for myself. So, after pondering a lot, I realized that I should use a Chrome extension to make all my Google searches faster. I read a lot about extensions which do different things for me, and most of them are available in the Chrome Web Store. When I used some of them, I understood that for this kind of extension, you need to build for many different browsers (for some, you need to build for Chrome, Safari, and Internet Explorer). So, I thought of doing it the other way. I'll do it for myself and then I'll share it with everyone. So, I created a Chrome extension that helps me make my searches faster and saves them for me. If you want to know more about the features and the philosophy behind it, you can visit my product page.

Extension Tutorial: How To Install the Dailypapers Extension

As always, before you can install an extension, make sure you're running version 70.0 or newer of Chrome. You can verify your version of Chrome by clicking the menu button in the upper-right corner of the Chrome window and clicking on the More Tools option. When you're certain you're running the latest version of Chrome, go to the Chrome Web Store. Hit the Menu icon (it looks like three horizontal lines) in the top-right corner of the screen, and then search for Dailypapers. Install the extension. When it's installed, Dailypapers should automatically show up in the Quick Access widget on the far-right side of the Chrome toolbar. Make sure to activate the extension. The extension enables Dailypapers as you browse the web. There are several ways to enable the extension. This tutorial shows you how to enable the extension as you search using Google.

Finding a good domain name to match your brand has been proven to be a difficult task.
You need a domain that contains your name, but it also needs to have a solid market value. Luckily, there is a service that helps you find great domain names in minutes. TLDr is a site that compares the current and past registries of your target name, and finds matches that you can purchase. Check out the following domain to see how many

System Requirements For Dailypapers:

Minimum:
- Windows 7 SP1 (32-bit or 64-bit)
- 4GB of RAM

Recommended:
- 8GB of RAM
https://www.luthierdirectory.co.uk/dailypapers-free/
#include "avcodec.h"
#include "ac3.h"
#include "get_bits.h"

Go to the source code of this file.

Definition in file ac3.c.

Definition at line 86 of file ac3.c. Referenced by ff_ac3_bit_alloc_calc_mask().

Definition at line 76 of file ac3.c. Referenced by calc_lowcomp(), and ff_ac3_bit_alloc_calc_mask().

Calculate the masking curve. Definition at line 123 of file ac3.c. Referenced by bit_alloc_masking(), and decode_audio_block().

Calculate the log power-spectral density of the input signal. Definition at line 97 of file ac3.c. Referenced by bit_alloc_masking(), and decode_audio_block().

Initialize some tables. Note: this function must remain thread safe because it is called by the AVParser init code. Definition at line 220 of file ac3.c. Referenced by ac3_decode_init(), and ff_ac3_encode_init().

Initial value:
{ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
  14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27,
  28, 31, 34, 37, 40, 43, 46, 49, 55, 61, 67, 73, 79, 85,
  97, 109, 121, 133, 157, 181, 205, 229, 253 }
http://ffmpeg.org/doxygen/0.10/ac3_8c.html
#include <deftypes.h>

Inheritance diagram for Label:

Get the name of this label.
Get the detail for this label.
Return true if this label uses a (non-zero) mask.
Construct and add a label from a byte array.
Find a label with a given value, from a ULPtr.
Find a label with a given value, from the label bytes.
The value of this label.
Mask of ignore bits; each set bit flags a bit to be ignored when comparing.
True if there is a non-zero mask.
The XML-Tag-valid name for this label.
The human-readable description for this label.
Map of all existing labels that don't use masking.
Map of all existing labels that use masking - this is a multimap, to allow the same base with different masks.
http://freemxf.org/mxflib-docs/mxflib-1.0.0-docs/classmxflib_1_1_label.html
Creating Feature Environments

Over the last couple of chapters we looked at how to work on Lambda and API Gateway locally. However, besides Lambda and API Gateway, your project will have other AWS services. To run your code locally, you have to simulate all the AWS services. Similar to serverless-offline, there are plugins like serverless-dynamodb-local and serverless-offline-sns that can simulate DynamoDB and SNS. However, mocking only takes you so far, since they do not simulate IAM permissions and they are not always up to date with the services' latest changes. You want to test your code with the real resources.

Serverless is really good at creating ephemeral environments. Let's look at what the workflow looks like when you are trying to add a new feature to your app. As an example we'll add a feature that lets you like a note. We will add a new API endpoint /notes/{id}/like. We are going to work on this in a new feature branch and then deploy this using Seed.

Create a feature branch

We will create a new feature branch called like.

$ git checkout -b like

Since we are going to be using /notes/{id}/like as our endpoint, we need to first export the /notes/{id} API path. Open the serverless.yml in the services/notes-api service, and append to the resource outputs:

    ApiGatewayResourceNotesIdVarId:
      Value:
        Ref: ApiGatewayResourceNotesIdVar
      Export:
        Name: ${self:custom.stage}-ExtApiGatewayResourceNotesIdVarId

Our resource outputs should now look like:

    ...
    - Outputs:
        ApiGatewayRestApiId:
          Value:
            Ref: ApiGatewayRestApi
          Export:
            Name: ${self:custom.stage}-ExtApiGatewayRestApiId
        ApiGatewayRestApiRootResourceId:
          Value:
            Fn::GetAtt:
              - ApiGatewayRestApi
              - RootResourceId
          Export:
            Name: ${self:custom.stage}-ExtApiGatewayRestApiRootResourceId
        ApiGatewayResourceNotesIdVarId:
          Value:
            Ref: ApiGatewayResourceNotesIdVar
          Export:
            Name: ${self:custom.stage}-ExtApiGatewayResourceNotesIdVarId

Let's create the like-api service.

$ cd services
$ mkdir like-api
$ cd like-api

Add a serverless.yml.
service: notes-app-ext-like-api

plugins:
  - serverless-bundle
  - serverless-offline

custom: ${file(../../serverless.common.yml):custom}

package:
  individually: true

provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-east-1
  tracing:
    lambda: true
  apiGateway:
    restApiId: !ImportValue ${self:custom.stage}-ExtApiGatewayRestApiId
    restApiRootResourceId: !ImportValue ${self:custom.stage}-ExtApiGatewayRestApiRootResourceId
    restApiResources:
      /notes/{id}: !ImportValue ${self:custom.stage}-ExtApiGatewayResourceNotesIdVarId
  environment:
    stage: ${self:custom.stage}
  iamRoleStatements:
    - ${file(../../serverless.common.yml):lambdaPolicyXRay}

functions:
  like:
    handler: like.main
    events:
      - http:
          path: /notes/{id}/like
          method: post
          cors: true
          authorizer: aws_iam

Again, the like-api will share the same API endpoint as the notes-api service.

Add the handler file like.js.

import { success } from "../../libs/response-lib";

export async function main(event, context) {
  // Business logic code for liking a post
  return success({ status: true });
}

Now before we push our Git branch, let's enable the branch workflow in Seed.

Enable branch workflow in Seed

Go to your app on Seed and head over to the Pipeline tab and hit Edit Pipeline. Enable Auto-deploy branches. Select the dev stage, since we want the stage to be deployed into the Development AWS account. Click Enable. Click Pipeline to head back.
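The like.js handler added above intentionally stubs out the business logic. As a sketch of what it might grow into, here is one way to record a like by bumping a counter on the note's item. The table name, key shape, and the injected DynamoDB document client are assumptions for illustration, not part of the chapter's code.

```javascript
// Sketch: increment a like counter for a note. `dynamoDb` is expected
// to look like an AWS DocumentClient (update(...).promise()), injected
// as a parameter so the logic is easy to test. All names are illustrative.
async function likeNote(dynamoDb, tableName, userId, noteId) {
  const result = await dynamoDb
    .update({
      TableName: tableName,
      Key: { userId: userId, noteId: noteId },
      UpdateExpression: "ADD likeCount :one",
      ExpressionAttributeValues: { ":one": 1 },
      ReturnValues: "UPDATED_NEW",
    })
    .promise();
  return result.Attributes.likeCount;
}
```

The handler's main function would then call likeNote with the real client and return the updated count in the success response.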
Git push to deploy new feature Now we are ready to create our new feature environment. Go back to our command line, and then push the code to the like branch. $ git add . $ git commit -m "Add like API" $ git push --set-upstream origin like Back in Seed, a new stage called like is created and is being deployed automatically. After the new stage successfully deploys, you can get the API endpoint in the stage’s resources page. Head over to the Resources tab. And select the like stage. You will see the API Gateway endpoint for the like stage and the API path for the like handler. You can now use the endpoint in your frontend for further testing and development. Now that our new feature environment has been created, let’s quickly look at the flow for working on your new feature. Working on new feature environments locally Once the environment has been created, we want to continue working on the feature. A common problem people run into is that serverless deploy takes very long to execute. And running serverless deploy for every change just does not work. Why is ‘serverless deploy’ slow? When you run serverless deploy, Serverless Framework does two things: - Package the Lambda code into zip files. - Build a CloudFormation template with all the resources defined in serverless.yml. The code is uploaded to S3 and the template is submitted to CloudFormation. There are a couple of things that are causing the slowness here: - When working on a feature, most of the changes are code changes. It is not necessary to rebuild and resubmit the CloudFormation template for every code change. - When making a code change, a lot of the times you are only changing one Lambda function. In this case, it’s not necessary to repackage the code for all Lambda functions in the service. Deploying individual functions Fortunately, there is a way to deploy individual functions using the serverless deploy -f command. Let’s take a look at an example. 
Say we change our new like.js code to:

```javascript
import { success } from "../../libs/response-lib";

export async function main(event, context) {
  // Business logic code for liking a post
  console.log("adding some debug code to test");
  return success({ status: true });
}
```

To deploy the code for this function, run:

```shell
$ cd services/like-api
$ serverless deploy -f like -s like
```

Deploying an individual function should be much quicker than deploying the entire stack.

Deploy multiple functions

Sometimes a code change can affect multiple functions at the same time. For example, if you changed a shared library, you have to redeploy all the services importing that library. However, there isn't a convenient way to deploy multiple Lambda functions. If you can easily tell which Lambda functions are affected, deploy them individually. If there are many functions involved, run serverless deploy -s like to deploy all of them, just to be on the safe side.

Now let's assume we are done working on our new feature and we want our team lead to review our code before we promote it to production. To do this, we are going to create a pull request environment. Let's look at how to do that next.
```java
/*
 * 2004 Sun Microsystems, Inc. All Rights Reserved.
 */

package org.netbeans.modules.junit;

import java.awt.Dimension;
import javax.swing.JPanel;
import javax.swing.SwingUtilities;

/**
 * Panel that changes its height automatically if the components inside
 * it cannot fit in the current size. The panel checks and changes only its
 * height, not width. The panel changes not only height of its own but also
 * height of the toplevel <code>Window</code> it is embedded into. The size
 * change occurs only after this panel's children are <em>painted</em>.
 * <p>
 * This panel is supposed to be used as a replacement for a normal
 * <code>JPanel</code> if this panel contains a wrappable text and the panel
 * needs to be high enough so that all lines of the possibly wrapped text
 * can fit.
 * <p>
 * This class overrides method <code>paintChildren(Graphics)</code>.
 * If overriding this method in subclasses of this class,
 * call <code>super.paintChildren(...)</code> so that the routine which
 * performs the size change is not skipped.
 *
 * @author Marian Petras
 */
public class SelfResizingPanel extends JPanel {

    /**
     * <code>false</code> until this panel's children are painted
     * for the first time
     */
    private boolean painted = false;

    /** Creates a new instance of SelfResizingPanel */
    public SelfResizingPanel() {
        super();
    }

    /**
     * Paints this panel's children and then displays the initial message
     * (in the message area), if any.
     * This method is overridden so that this panel receives a notification
     * immediately after the children components are painted - it is necessary
     * for computation of the space needed by the message area for displaying
     * the initial message.
     * <p>
     * The first time this method is called, method
     * {@link #paintedFirstTime} is called immediately after
     * <code>super.paintChildren(...)</code> finishes.
     *
     * @param g the <code>Graphics</code> context in which to paint
     */
    protected void paintChildren(java.awt.Graphics g) {

        /*
         * This is a hack to make sure that window size adjustment
         * is not done sooner than the text area is painted.
         *
         * The reason is that the window size adjustment routine
         * needs the text area to compute the height necessary for displaying
         * the given message. But the text area does not return correct
         * data (Dimension getPreferredSize()) until it is painted.
         */

        super.paintChildren(g);
        if (!painted) {
            paintedFirstTime(g);
            painted = true;
        }
    }

    /**
     * This method is called the first time this panel's children are painted.
     * By default, this method just calls {@link #adjustWindowSize()}.
     *
     * @param g <code>Graphics</code> used to paint this panel's children
     */
    protected void paintedFirstTime(java.awt.Graphics g) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                adjustWindowSize();
            }
        });
    }

    /**
     * Checks whether the dialog is large enough for the message (if any)
     * to be displayed and adjusts the dialog's size if it is too small.
     * <p>
     * Note: Resizing the dialog works only once this panel and its children
     * are {@linkplain #paintChildren(java.awt.Graphics) painted}.
     */
    protected void adjustWindowSize() {
        Dimension currSize = getSize();
        int currHeight = currSize.height;
        int prefHeight = getPreferredSize().height;
        if (currHeight < prefHeight) {
            int delta = prefHeight - currHeight;
            java.awt.Window win = SwingUtilities.getWindowAncestor(this);
            Dimension winSize = win.getSize();
            win.setSize(winSize.width, winSize.height + delta);
        }
    }

    /**
     * Has this panel's children been already painted?
     *
     * @return <code>true</code> if
     *         {@link #paintChildren paintChildren(Graphics)} has already
     *         been called; <code>false</code> otherwise
     * @see #paintedFirstTime
     */
    protected boolean isPainted() {
        return painted;
    }

}
```
Scaling Down The BEM Methodology For Small Projects

By Maxim Shirshin - July 17th, 2014

Front-end development is no longer about individual frameworks. Tools are available — we merely have to choose. To make the right choices for your project, you need to start with a general approach, or methodology. But most methodologies have been created by big companies. Are they still useful for small companies, or do we need to reinvent them at a small scale?

You probably already know of BEM, one of those methodologies developed by a big company — namely, Yandex. BEM posits that three basic entities (blocks, elements and modifiers) are enough to define how to author HTML and CSS, structure code and components, describe interfaces and scale a project up to an industry-leading service.

I've spent some time with Yandex and BEM, and I know that this methodology works for large projects. Yandex uses BEM to develop CSS and JavaScript components; it also optimizes templates and tracks dependencies in BEM, develops BEM utilities, supports code experiments and researches the field. On a large scale, this investment pays off and allows Yandex to develop hundreds of its services faster.

Would smaller teams benefit from BEM? I wasn't sure. BEM is a layer of abstraction, offered with other tools and technologies. For a small agile team, switching to a full BEM stack would be questionable. Could the idea — the approach itself — be useful?

I had to revisit this question when my career recently took me from Yandex to Deltamethod, a mid-sized startup in Berlin. Facing ambitious development plans, we decided to try BEM on a smaller scale. We wanted the same benefits that Yandex gets from BEM: code sharing, a live style guide, scalability and faster development. We also wanted to keep our toolchain and upgrade the existing code base gradually, rather than start from scratch.
For some time, we've been focusing on architecture and the basics, trying aspects of BEM one by one, assessing the results, then moving forward. We keep writing down ideas, guidelines, useful tips and short tutorials. I am now convinced that BEM applies to small projects as well. I've written down my findings, in case you find them useful. Let's start by reviewing the basics.

BEM 101

While semantics is considered the foundation of web development, the various front-end technologies do not share the same semantic model. The HTML of a modern app is mostly a div soup. CSS by itself does not offer any structured model at all. High-level JavaScript components use abstractions that are not consistently tied to styles or markup. At the UX level, interfaces are described in terms that have nothing in common with the technical implementation.

Enter BEM, a unified semantic model for markup, styles, code and UX. Let's take a closer look.

Blocks

A block is an independent entity with its own meaning that represents a piece of interface on a page. Examples of blocks include:

- a heading,
- a button,
- a navigation menu.

To define a block, you'd give it a unique name and specify its semantics. Several instances of the same block definition (such as various buttons or multiple menus) might exist in the interface.

Any web interface can be represented as a hierarchical collection of blocks. The simplest representation is the HTML structure itself (tags as blocks), but that is semantically useless because HTML was designed for structured text, not web apps.

Elements

An element is a part of a block, tied to it semantically and functionally. It has no meaning outside of the block it belongs to. Not all blocks have elements. Examples of elements include:

- a navigation menu (block) that contains menu items;
- a table (block) that contains rows, cells and headings.
Elements have names, too, and similar elements within a block (such as cells in a grid or items in a list) go by the same name. Elements are semantic entities and not exactly the same as HTML layout; a complex HTML structure could constitute just a single element.

Modifiers

Modifiers are flags set on blocks or elements; they define properties or states. They may be boolean (for example, visible: true or false) or key-value pairs (size: large, medium, small) — somewhat similar to HTML attributes, but not exactly the same. Multiple modifiers are allowed on a single item if they represent different properties.

Blocks and the DOM

How do you work with BEM while still using HTML? You do it by mapping DOM nodes to BEM entities using a naming convention. BEM uses CSS class names to denote blocks, elements and modifiers.

Blocks, elements and modifiers cannot claim any "exclusive ownership" of DOM nodes. One DOM node may host several blocks. A node may be an element within one block and (at the same time) a container for another block. A DOM node being reused to host more than one BEM entity is called a "BEM mixin." Please note that this is just a feature of convenience: only combine things that can be combined — don't turn a mix into a mess.

The BEM Tree

By consistently marking up a document with BEM entities, from the root block (i.e. <body> or even <html>) down to the innermost blocks, you form a semantic overlay on the DOM's existing structure. This overlay is called a BEM tree.

The BEM tree gives you the power to manipulate the whole document in BEM terms consistently, focusing on semantics and not on a DOM-specific implementation.

Making Your First Move

You might be thinking, "I'll give BEM a try. How do I start migrating my project to BEM? Can I do it incrementally?" Sure. Let's start by defining some blocks. We will cover only semantics; we'll proceed to the specific technologies (like CSS and JavaScript) later on.
As you'll recall, any standalone thing may be a block. As an example, document headings are blocks. They go without inner elements, but their levels (from the top-most down to the innermost) may be defined as key-value modifiers. If you need more levels later, define more modifiers.

I would say that HTML4 got it wrong with <h1> to <h6>. It made different blocks (tags) out of what should have been just a modifier property. HTML5 tries to remedy this with sectioning elements, but browser support is lagging. For example, we get this:

```
BLOCK heading
  MOD level: alpha, beta, gamma
```

As a second example, web form input controls can be seen as blocks (including buttons). HTML didn't get it exactly right here either. This time, different things (text inputs, radio buttons, check boxes) were combined under the same <input> tag, while others (seemingly of the same origin) were defined with separate tags (<select> and <textarea>). Other things, such as <label> and the auto-suggestion datalist, should be (optional) elements of these blocks because they bear little to no meaning on their own. Let's see if we can fix this:

```
BLOCK text-input
  MOD multiline
  MOD disabled
  ELEMENT text-field
  ELEMENT label
```

The essential feature of a text input is its ability to accept plain text. When we need it to be multiline, nothing changes semantically — that's why multiline is just a modifier. At the HTML level, this is represented by different markup for technical reasons, which is also fine because we're only defining semantics, not the implementation. The text field itself is an element, and label is another element; later, we might need other elements, like a status icon, error-message placeholder or auto-suggestion.

```
BLOCK checkbox
  ELEMENT tick-box
  ELEMENT label

BLOCK radio
  ELEMENT radio-button
  ELEMENT label
```

These two blocks are pretty straightforward. Still, <label> is an element, and the "native" <input> tags are elements, too.
```
BLOCK select
  MOD disabled
  MOD multiple
  ELEMENT optgroup
  ELEMENT option
    MOD disabled
    MOD selected
```

Select boxes don't really need labels, and anything else here is more or less similar to a normal select box control. Technically, we can reuse the existing <select> tag with all of its structure.

Note that both the select block and its option element have a disabled modifier. These are different modifiers: the first one disables the whole control, while the second one (being a perfect example of an element modifier) disables just an individual option.

Try to find more examples of blocks in your web projects. Classifying things according to BEM takes some practice. Feel free to share your findings, or ask the BEM team your questions!

Let Your CSS Speak Out Loud

Perhaps you've heard a lot about BEM as a way to optimize CSS and are wondering how it works? As mentioned, BEM uses CSS class names to store information about blocks, elements and modifiers. With a simple naming convention, BEM teaches your CSS to speak, and it adds meaning that makes it simpler, faster, more scalable and easier to maintain.

BEM Naming Conventions for CSS

Here are the prerequisites:

- Keep the names of blocks, elements and modifiers short and semantic.
- Use only Latin letters, dashes and digits.
- Do not use underscores (_), which are reserved as "separator" characters.

Block containers get a CSS class consisting of a prefix and the block name:

```css
.b-heading
.b-text-input
```

That b- prefix stands for "block" and is the default in many BEM implementations. You can use your own — just keep it short. Prefixes are optional, but they emulate the much-anticipated (and missing!) CSS namespaces.

Element containers within a block get CSS classes consisting of their block class, two underscores and the element's name:

```css
.b-text-input__label
.b-text-input__text-field
```

Element names do not reflect the block's structure.
Regardless of the nesting level within, it's always just the block name and the element name (so, never .b-block__elem1__elem2).

Modifiers belong to a block or an element. Their CSS class is the class name of their "owner," one underscore and the modifier's name:

```css
.b-text-input_disabled
.b-select__option_selected
```

For a "boolean" modifier, this is enough. Some modifiers, however, are key-value pairs with more than one possible value. Use another underscore to separate the value:

```css
.b-heading_level_alpha
```

Modifier classes are used together with the block and element classes, like so:

```html
<div class="b-heading b-heading_level_alpha">BEM</div>
```

Why Choose BEM CSS Over Other Approaches

One Class to Rule Them All

CSS sometimes depends a lot on the document's structure — if you change the structure, you break the CSS. With BEM, you can drop tag names and IDs from your CSS completely, using only class names. This mostly frees you from structural dependencies.

Specificity Problems Solved

Big chunks of CSS are hard to maintain because they keep redefining themselves unpredictably. This issue is called CSS specificity. The problem is that both tag names and element IDs change selector specificity in such a way that, if you rely on inheritance (the most common thing to expect from CSS), you can only override it with selectors of the same or higher specificity.

BEM projects are least affected by this problem. Let's see why. Say you have a table with these style rules:

```css
td.data { background-color: white }
td.summary { background-color: yellow }
```

However, in another component, you need to redefine the background of a particular cell:

```css
.final-summary { background-color: green }
```

This wouldn't work, because tag.class always has a higher specificity than just .class. You would have to add a tag name to the rule to make it work:

```css
td.final-summary { background-color: green }
```

Because BEM provides unique class names for most styles, you depend only on the order of the rules.
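The naming convention described above is mechanical enough to generate. Here is a toy helper (my own, not part of any BEM library) that builds class names following these rules:

```javascript
// Toy class-name builder following the BEM naming convention described
// above: .b-block, .b-block__element, _modifier and _key_value suffixes.
function bemClass(block, element, modifier) {
  let name = "b-" + block;
  if (element) {
    name += "__" + element;
  }
  if (modifier) {
    // Pass a string for a boolean modifier, or a [key, value] pair
    name += Array.isArray(modifier)
      ? "_" + modifier[0] + "_" + modifier[1]
      : "_" + modifier;
  }
  return name;
}

console.log(bemClass("text-input", "label")); // b-text-input__label
console.log(bemClass("select", "option", "selected")); // b-select__option_selected
console.log(bemClass("heading", null, ["level", "alpha"])); // b-heading_level_alpha
```

A helper like this is handy in templates, because it keeps the separator characters in one place instead of scattered across string literals.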
Bye-Bye Cascade?!

Nested CSS selectors aren't fast enough in old browsers and can create unintended overrides that break the styles of other elements. With BEM, it is possible to eliminate a lot of the cascade from CSS. How is this possible, and why is it important? Isn't the cascade supposed to be there? Isn't it the "C" in CSS?

As you know, every BEM CSS class is unique and self-sufficient. It does not depend on tags or IDs, and different blocks never share class names. That's why you need only a single class-name selector to do the following:

- style a block container,
- style any block element,
- add style extras and overrides with a modifier.

This covers most of your styling needs, all with just one class selector. So, it's mostly about single-class selectors now, and they are extremely fast. To apply a selector, the browser starts with an initial (broader) set of elements (usually determined by the rightmost part of the selector), and then gradually reduces the set by applying the other parts until only the matching elements remain. The more steps needed, the more time it takes, which is why you can hardly beat single-class selectors for speed.

CSS is rarely a performance bottleneck on small pages, but CSS rules must be reapplied with every document reflow. So, when your project grows, things will get slower at some point. According to usability research, 250 milliseconds is the perception limit for "instant." The faster your selectors are, the more room you have to manoeuvre to keep that "blazing fast" feeling for your users.

So, no cascade?! Well, almost. In some cases, you might need two class names in a selector — for example, when a block modifier affects individual elements:

```css
.b-text-input_disabled .b-text-input__label {
  display: none;
}
```

The nice thing is that any rule that redefines this one will likely depend on another modifier (because of the unified semantics!), which means that the specificity is still the same and only the rule order matters.
Surely, we could invent more cases that require even more cascading (internal element dependencies, nested modifiers, etc.). While the BEM methodology allows for that, you'll hardly ever need it in real code.

Absolutely Independent Blocks

If blocks depend on each other's styles, how do we express that in CSS? The answer is: they shouldn't. Each block must contain all the styles necessary for its presentation. The overhead is minimal, but this ensures that you can move blocks freely within a page or even between projects without extra dependencies. Avoid project-wide CSS resets for the same reason.

This is not the case for elements, because they are guaranteed to stay within their parent block and, thus, inherit block styles accordingly.

Alternative BEM Naming Conventions

A number of alternative BEM naming conventions exist. Which should we use? BEM's "official" naming convention for CSS is not the only one possible. Nicolas Gallagher once proposed some improvements, and other adopters have, too. One idea is to use attributes to represent modifiers, and the CSS prefixes aren't "standardized" at all.

The biggest advantage of the syntax proposed by the team behind BEM is that it's the one supported by the open-source tools distributed by Yandex, which you might find handy at some point. In the end, the methodology is what matters, not the naming convention; if you decide to use a different convention, just make sure you do it for a reason.

Semantic JavaScript: BEM-Oriented Code

Many publishers and authors view BEM as a naming convention for CSS only, but that brings only half of the benefits to a project. The BEM methodology was designed to fix (i.e. polyfill) non-semantic DOM structures at all levels (HTML, CSS, JavaScript, templates and UX design), similar to how jQuery "fixes" broken DOM APIs. HTML was designed as a text markup language, but we use it to build the most interactive interfaces around.
Experimental efforts such as Web Components strive to bring semantics back into our markup and code, but BEM can be used in the full range of browsers right now, while retaining compatibility with future approaches, because it does not depend on any particular API or library.

How do you apply the BEM model to JavaScript code? We'll go through a development paradigm using as little code as possible. It will be really high-level and abstract, but the abstractness will help us understand the idea more clearly. You'll notice another term in the heading above: "BEM-oriented code." Before explaining what's behind that, let's go over some ideas that are useful to know when applying BEM to JavaScript.

Learning to Declare

The first step is to embrace the declarative paradigm. Declarative programming is an approach that concentrates on the "what," not the "how." Regular expressions, SQL and XSLT are all declarative; they specify not the control flow, but rather the logic behind it. When doing declarative programming, you start by describing a set of conditions, each of them mapped to specific actions. In BEM, conditions are represented by modifiers, and any action can only happen on a block or an element.

The code examples in this article use the i-bem.js framework, written and open-sourced by Yandex, but your favorite framework might be able to do similar or better things, because declarative programming is not tied to a specific implementation.

```javascript
BEM.DOM.decl('b-dropdown', {
    onSetMod: {
        disabled: function(modName, modVal) {
            this.getLabel().setMod('hidden', 'yes');
            if (modVal === 'yes') {
                this.getPopup().hide();
            }
        },
        open: {
            yes: function() {
                this.populateList();
            }
        }
    },
    /* … */
```

The code snippet above defines actions for two modifiers on a b-dropdown block. These are similar to event handlers, but all states get immediately reflected in the CSS: modifiers are still stored as class names on the corresponding block and element entities.
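The onSetMod declaration above is specific to i-bem.js, but the dispatch pattern behind it is tiny. Here is a framework-free sketch of the same idea, with names of my own choosing:

```javascript
// Minimal sketch of declarative modifier dispatch in the spirit of
// i-bem.js's onSetMod. Handlers are declared per modifier name, either
// as a function (called for any value) or as a map of value-specific
// functions.
function declare(handlers) {
  return function onSetMod(mod, val) {
    const h = handlers[mod];
    if (typeof h === "function") {
      h(val);
    } else if (h && typeof h[val] === "function") {
      h[val]();
    }
    // If nothing is declared for this modifier/value, nothing happens.
  };
}

const log = [];
const onSetMod = declare({
  disabled: function (val) {
    log.push("disabled:" + val);
  },
  open: {
    yes: function () {
      log.push("populating list");
    },
  },
});

onSetMod("disabled", "yes");
onSetMod("open", "yes");
onSetMod("open", "no"); // nothing declared for open:no
console.log(log.join(", ")); // disabled:yes, populating list
```

The point of the pattern is that state transitions are data: you describe the conditions once, and the dispatcher decides what to run.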
Enabling and disabling different key bindings on a b-editor block is another example of how to use modifiers:

```javascript
BEM.DOM.decl('b-editor', {
    onSetMod: {
        hotkeys: {
            windows: function() {
                this.delMod('theme');
                this.loadKeyMap('windows');
            },
            emacs: function() {
                this.setMod('theme', 'unix');
                this.loadKeyMap('emacs');
                enableEasterEgg();
            }
        }
    },
    onDelMod: {
        hotkeys: function() {
            this.clearKeyMaps();
            this.delMod('theme');
        }
    }
    /* … */
```

In this example, we see how modifiers bring logic to our state transitions.

Methods

With a declarative approach, methods are not always "tied" to a component automatically. Instead, they, too, can be declared to belong to some instances under certain circumstances:

```javascript
BEM.DOM.decl({ name: 'b-popup', modName: 'type', modVal: 'inplace' }, {
    appear: function() {
        // makeYouHappy();
    }
});
```

This method is defined only for blocks that have the specific type modifier: inplace. As in classic object-oriented programming, you can extend semantically defined methods by providing even more specific declarations and reuse the original code if necessary. So, both overrides and extensions are possible. For example:

```javascript
BEM.DOM.decl({ name: 'b-link', modName: 'pseudo', modVal: 'yes' }, {
    _onClick: function() {
        // runs the basic _onClick defined
        // for all b-link instances
        this.__base.apply(this, arguments);

        // redefine the appearance from within CSS;
        // this code only gives you a semantic basis!
        this.setMod('status', 'clicked');
    }
});
```

As specified by this definition, the extended _onClick method runs only on b-link instances with a _pseudo_yes modifier. In all other cases, the "original" method is implemented.

Semantics will slowly migrate from your markup (where it's not needed anymore) to your code (where it supports modularity and readability, making it easier to work with).

"… Sitting in a (BEM) Tree"

What is the practical use of the declarative approach if it is way too abstract?
The idea is to work with the BEM tree, which is semantic and controlled by you, instead of the DOM tree, which is tied to the markup and the specifics of the implementation:

```javascript
BEM.DOM.decl('b-checkbox-example', {
    onSetMod: {
        js: {
            inited: function() {
                var checkbox = this.findBlockInside({
                    blockName: 'b-form-checkbox',
                    modName: 'type',
                    modVal: 'my-checkbox'
                });
                this.domElem.append('Checkbox value: ' + checkbox.val());
            }
        }
    }
});
```

Other APIs exist, like this.elem('name') and this.findBlockOutside('b-block'). Instead of providing a complete reference, I'd just highlight BEM trees as the API's foundation.

Modify Modifiers to Control Controls

The previous section leaves the important subject of application state changes unaddressed. When app states are declared, you need a way to perform transitions. This should be done by operating on the BEM tree, with the help of modifiers.

BEM modifiers can be set directly on DOM nodes (as class names), but we cannot effectively monitor that (for technical reasons). Instead, i-bem.js provides a simple API that you can use as inspiration:

```javascript
// setter
this.setMod(modName, modVal);

// getter
this.getMod(modName);

// check for presence
this.hasMod(modName, modVal);

// toggle
this.toggleMod(modName, modVal);

// remove modifier
this.delMod(modName);
```

Thus, we can internally hook into the modifier change call and run all of the actions specified for this particular case.

BEM-Oriented Code Explained

Many JavaScript libraries provide enough power to support the BEM methodology without introducing a completely new tool chain. Here's a checklist to see whether the one you're looking at does so:

- Embraces a declarative approach.
- Defines your website or app in BEM's terms: can many of the project's existing entities be "mapped" to blocks, elements and modifier properties?
- Allows you to drop the DOM tree for the BEM tree: regardless of any particular framework API, wipe out as much of the raw DOM interaction as you can, replacing it with BEM tree interaction. During this process, some of the nodes you work with will be redefined as blocks or elements; name them, and see how the true semantic structure of your application reveals itself.
- Uses modifiers to work with state transitions: obviously, you shouldn't define all states with modifiers. Start with the ones that can be expressed in CSS (to hide and reveal elements, to change style based on states, etc.), and clean your code of any direct manipulation of style.

If your framework of choice can do this, then you are all set for BEM-oriented code. jQuery users could try these lightweight plugins to extend their code with BEM methods:

- jQuery BEM plugin
- jQuery BEM Helpers (setMod and getMod)

From A Naming Convention To A Style Guide

If you work a lot with designers, your team would also benefit from a BEM approach.

Imagine you had a style guide created by a Real Designer™. You would usually get it as a PDF file and be able to learn everything about the project's typefaces, color schemes, interface interaction principles and so on. It serves perfectly as a graphic book that is interesting to look at in your spare time. However, it would be of little to no use to most front-end developers — at the level of code, front-end developers operate with totally different entities.

But what if you and the designer could speak the same language? Of course, this would require some training, but the benefits are worth it. Your style guide would be an interactive block library, expressed in BEM terms. Such a library would consist of blocks that are ready to be used to build your product.
Once the designer is familiar with BEM's terms, they can iterate towards designing blocks and elements instead of "screens." This will also help them to identify similar UI parts and unify them. Modifiers help to define visual variations (which apply to all blocks) and states (for interactive blocks only).

The blocks would be granular enough to enable you to make an early estimate of the amount of work that needs to be done. The result is a specification that fully covers all the important states and that can be reused with other screens or pages. This eventually allows you to mock up interfaces as wireframes or sketches, because all of the building blocks have already been defined.

More importantly, this model maps directly to the code base, because the blocks, elements and modifiers defined by the designer are essentially the same blocks, elements and modifiers that the developer will implement. If you have been using BEM in your project for some time, then certain blocks are probably already available.

The biggest change, however, is closing the gap between screen and code by operating on the same entities in UI design and development. Like the famous Babel fish, BEM enables you to understand people who have no idea how your code works.

On a bigger team, working on individual blocks is easier because it can be done in parallel, and big features do not end up being owned by any one developer. Instead, you share the code and help each other. The more you align the JavaScript, HTML and CSS with BEM, the less time you need to become familiar with new code.

BEM As High-Level Documentation

Despite all advice, developers still don't write enough documentation. Moving projects between developers and teams is non-trivial. Code maintenance is all about minimizing the time a developer needs to grasp a component's structure. Documentation helps a lot but, let's be honest, it usually doesn't exist.
When it does exist, it usually covers methods, properties and APIs, but hardly anything about the flow of components, states or transitions. With minimally structured BEM-oriented code, you will immediately see the following:

- the elements you're dealing with,
- other blocks you depend on,
- states (modifiers) that you need to be aware of or support,
- element modifiers for fine-grained control.

Explaining with an example is easier. What would you say about the following block?

```
b-popup
  _hidden
  _size
    _big
    _medium
    _large
  _dir
    _left
    _right
    _top
    _bottom
  _color-scheme
    _dark
    _light
  __anchor-node
  __popup-box
  __close-btn
  __controls
  __ok
  __cancel
```

By now, you can tell me what this block is about! Remember, you've seen zero documentation. This outline could be a structure that you've defined in a CSS preprocessor or a YAML meta description.

BEM And File Structure

In a growing project, an inconsistent file structure can slow you down, and the structure will only become more complex and less flexible with time. Unfortunately, tools and frameworks do not solve the problem, because they either deal with their own internal data or offer no specific structure at all. You and only you must define a structure for the project. Here, BEM can help as well.

Block Library

A block's folder is the basis of any BEM-based file structure. Block names are unique within the project, as are folder names. Because blocks do not define any hierarchies, keep block folders as a flat structure:

```
/blocks
  /b-button
  /b-heading
  /b-flyout
  /b-menu
  /b-text-field
```

Libraries and other dependencies may be defined as blocks, too.
For example:

/blocks
    …
    /b-jquery
    /b-model

Inside each folder, the easiest arrangement would be to give each “technology” a distinct file:

/b-menu
    b-menu.js
    b-menu.css
    b-menu.tpl

A more advanced approach would be to store some definitions of elements and modifiers in separate subfolders and then implement in a modular way:

/b-menu
    /__item
        b-menu__item.css
        b-menu__item.tpl
    /_horizontal
        b-menu_horizontal.css
    /_theme
        /_dark
            b-menu_theme_dark.css
        /_light
            b-menu_theme_light.css
    b-menu.css
    b-menu.js
    b-menu.tpl

This gives you control, but it also requires more time and effort to support the structure. The choice is yours.

Redefinition Levels

What if you need to extend the styles and functionality of components or share code between projects without changing (or copying and pasting) the original source? Big web apps, sections and pages could be significantly different, as could be the blocks they use. At the same time, a shared block library often has to be extended, individual items redefined and new items added. BEM addresses this with the concept of redefinition levels.

As long as you’ve chosen a file structure, it should be the same for any block. That’s why several block libraries can be on different levels of an application. For example, you could have a common block library as well as several specific libraries for individual pages:

/common
    /blocks
        /b-heading
        /b-menu
        …
/pages
    /intro
        /blocks
            /b-heading
                b-heading_decorated.css
            /b-demo
            /b-wizard
            …

Now, /common/blocks will aggregate blocks used across the whole app. For each page (as for /pages/intro in our example), we define a new redefinition level: a specific library, /pages/intro/blocks, adds new blocks and extends some common ones (see the extra _decorated modifier for the common b-heading block). Your build tool can use these levels to provide page-specific builds.
Separation of libraries can be based on the form factors of devices:

/common.blocks
/desktop.blocks
/mobile.blocks

The common library stays “on top,” while the mobile or desktop block bundle extends it, being the next redefinition level. The same mechanism applies when several different projects need to share blocks or when a cross-project common block library exists to unify the design and behavior across several services.

The Build Process

We’ve ended up with many small files, which is good for development but a disaster for production! In the end, we want all of the stuff to be loaded in several big chunks. So, we need a build process. Yandex has an open-source build tool, Borschik, which is capable of building JavaScript and CSS files and then compressing and optimizing them with external tools, such as UglifyJS and CSS Optimizer. Tools like RequireJS can also facilitate the building process, taking care of dependency tracking. For a more comprehensive approach, have a look at bem-tools.

The clearest lesson I’ve learned from BEM is not to be afraid of granularity, as long as you know how to build the whole picture.

Beyond Frameworks

For a while, I was pretty skeptical that BEM is suitable for small projects. My recent experience in a startup environment proved me wrong. BEM is not just for big companies. It works for everyone by bringing unified semantics across all of the front-end technologies that you use. But that is not the biggest impact of the BEM methodology on my projects. BEM enables you to see beyond frameworks. I remember times when people seriously discussed the best ways to bind event handlers to elements, and when DOM libraries competed for world dominance, and when frameworks were the next big buzz. Today, we can no longer depend on a single framework, and BEM takes the next step by providing a design foundation, giving us a lot of freedom to implement.
Visit the BEM website for extra resources, GitHub links, downloads and articles.

Long story short, BEM it! (al, il)
http://www.smashingmagazine.com/2014/07/bem-methodology-for-small-projects/
On Wed, Mar 2, 2011 at 6:54 AM, Antoine Pitrou <solipsis at pitrou.net> wrote: >> A remark: Having all clones created under a dedicated namespace (say >> sandbox) could make the hg.python.org listing clearer, since all user >> clones would be grouped. > > Sure, we can change the enforced convention depending on the majority's > preference. I chose that one because other devs thought it would be bad > to let people create many repos at the top-level. Having user clones flagged by the two-level names should be more than enough when it comes to what the server enforces. That way we can be flexible about additional namespaces (although using "sandbox" by convention should cover most use cases). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
https://mail.python.org/pipermail/python-dev/2011-March/108538.html
Windows Server - Network Infrastructure - Lesson 5/6 Practice Exam (50 terms)

802.1X provides port-based security through the use of all of the following components with the exception of a(n) __________.
    verifier

A hub examines the destination and source address of an incoming data frame and forwards the frame to the appropriate destination port according to the destination address.
    False

A static-routed IP environment is best suited to small, single-path, static IP internetworks.
    True

An NPS Network Policy, which is a rule for evaluating remote connections, consists of which of the following items?
    all of the above

By default, the Callback Options setting is configured as __________.
    No Callback

By default, the Callback Options setting is configured as No Callback.
    True

By using the Routing and Remote Access service, Windows Server 2008 can be configured as a router and remote access server.
    True

For best results, the internetwork should be limited to fewer than how many subnets with an easily predicted traffic pattern (such as arranged consecutively in a straight line)?
    10

How can you view the IP routing table?
    both B & C

Of the four types of routes that can be found in a routing table, which of the following cannot be found?
    client routes

One example of an NPS Policy Setting includes IP properties that specify IP address assignment behavior. Which of the following is not an option?
    Client Must Supply an IP Address

Routers are considered components of which layer?
    layer 3

What encryption type is used for dial-up and PPTP-based VPN connections with a 40-bit key?
    basic encryption

What term refers to the device that is seeking access to the network, such as a laptop attempting to access a wireless access point?
    supplicant

What type of connectivity creates a secure point-to-point connection across either a private network or a public network, such as the Internet?
    virtual private network

When most traffic is synchronous, as in voice and video transmissions, VPN is your best option.
    False

Which column of the IP Routing Table indicates the gateway value for each routing table entry?
    third

Which entries refer to a separate multicast route?
    224.0.0.0

Which generic authentication method does not encrypt authentication data?
    Shiva Password Authentication Protocol (SPAP)

Which generic authentication method offers encryption of authentication data through the MD5 hashing scheme?
    Challenge Handshake Authentication Protocol (CHAP)

Which mutual authentication method offers encryption of both authentication data and connection data?
    MS-CHAPv2

Which of the following is the limited broadcast address that is general for all networks and routers?
    255.255.255.255

Which one-way authentication method offers encryption of both authentication data and connection data (the same cryptographic key is used in all connections; this method supports older Windows clients, such as Windows 95 and Windows 98)?
    MS-CHAPv1

Which option enables internal clients to connect to the Internet using a single, external IP address?
    network address translation

Windows Server 2008 includes all of the following routing protocols that can be added to the Routing and Remote Access service with the exception of __________.
    OSPF

A period of planning and design is unnecessary before you start implementation of a file server deployment.
    False

By default, what topology do replication groups use?
    full mesh

For network users to be able to access a shared folder on an NTFS drive, you must grant them __________ permissions.
    both A & B

Generally speaking, a well-designed sharing strategy provides each user with all of the following resources except __________ storage space.
    virtual

How many active partitions can you have per hard drive?
    1

If your organization has branch offices scattered around the world and uses relatively expensive wide area networking (WAN) links to connect them, it would probably be more economical to install a file server at each location rather than having all of your users access a single file server using the WAN links.
    True

Most personal computers use basic disks because they are the easiest to manage. A basic disk uses what type of partitions and drives?
    primary partitions, extended partitions, and logical disks

Regardless of the size of your network, your strategy for creating shared folders should consist of all the following information except what __________.
    online file settings you will use for the shares

The Distributed File System (DFS) implemented in the Windows Server 2008 File Services role includes two technologies: DFS Namespaces and __________.
    DFS Replication

The File Services role and other storage-related features included with Windows Server 2008 provide tools that enable system administrators to address many problems on a scale appropriate to a large enterprise network. However, before you implement the role or begin using these tools, what should you spend some time thinking about?
    your users' needs and how these needs affect their file storage and sharing practices

The process of deploying and configuring a simple file server using Windows Server 2008 includes many of the most basic server administration tasks, including all of the following except __________.
    partitioning drives

The system partition contains what types of files?
    hardware-related files that the computer uses to boot

What file system provides the most granular user access control and also provides other advanced storage features, including file encryption and compression?
    NTFS

What is the first step in designing a file-sharing strategy?
    projecting anticipated storage needs and procuring the correct server hardware and disk arrays to meet your needs

What server is responsible for maintaining the list of DFS shared folders and responding to user requests for those folders?
    namespace server

What tool can you use to perform disk-related tasks such as initializing disks, selecting a partition style, converting basic disks to dynamic disks, and more?
    Disk Configuration MMC snap-in

What volume type consists of space on three or more physical disks, all of which must be dynamic disks?
    RAID-5

What volume type is essentially a method for combining the space from multiple dynamic disks into a single large volume?
    spanned

When preparing a hard disk for use, Windows Server 2008 file servers can use the same settings as workstations.
    False

When you install additional hard disk drives on a file server, the Windows Server 2008 setup program automatically performs all of the preparation tasks.
    False

When you work with basic disks in Windows Server 2008, how many primary partitions can you create?
    4

Windows Server 2008 can support dynamic volumes as large as __________ terabytes.
    64

Windows Server 2008 has several sets of permissions that operate independently of each other. Which permissions control access to folders over a network?
    share permissions

Within each site, the number of file servers you need can depend on __________.
    all of the above

You can mark an existing dynamic disk as an active partition.
    False
https://quizlet.com/5419238/windows-server-network-infrastructure-lesson-56-practice-exam-flash-cards/
Description

Including files such as rp_header.h and rp_element.h in axis2c/neethi/include generates the following error:

../../../include/axis2-1.1/rp_header.h:57: error: expected ',' or '...' before 'namespace'

This is due to the fact that the word 'namespace' is reserved in C++. It can be fixed by simply renaming the variable to some other value, as will be shown by the example diffs I will attach.

Activity

Manjula Peiris added a comment - Fixed in the latest svn. Thanks for pointing this.

Attachments: Example diffs showing the simple change that will cause the code to compile when included into C++ code; there may be other files that need changing also, these are just the two I had to use.
https://issues.apache.org/jira/browse/AXIS2C-673
# Static Analysis: baseline VS diff If you use static analyzers, you will have, sooner or later, to address the task of making their integration into existing projects easier, where fixing all warnings on legacy code is unfeasible. The purpose of this article is not to help with integration but rather to elaborate on the technicalities of the process: the exact implementations of warning suppression mechanisms and pros and cons of each approach. ![image1.png](https://habrastorage.org/r/w1560/webt/vv/lz/bl/vvlzbldy1lk6u3a4zpoyjqropm4.png) baseline, or the so-called suppress profile ------------------------------------------- This approach is known by various names: baseline file in [Psalm](https://github.com/vimeo/psalm) and [Android Lint](https://developer.android.com/studio/write/lint), suppress base (or profile) in [PVS-Studio](https://habr.com/ru/company/pvs-studio/), code smell baseline in [detekt](https://arturbosch.github.io/detekt/baseline.html). This file is generated by the linter when run on the project: ``` superlinter --create-baseline baseline.xml ./project ``` Inside, it stores all the warnings produced at the creation step. When running a static analyzer with the baseline.xml file, all the warnings contained in it will be ignored: ``` superlinter --baseline baseline.xml ./project ``` A straightforward approach, where warnings are kept in full along with line numbers, won't be working well enough: adding new code to the beginning of the source file will result in shifting the lines and, therefore, bringing back all the warnings meant to stay hidden. We typically seek to accomplish the following goals: * All warnings on new code must be issued * Warnings on existing code must be issued only if it was modified * (optional) Allow moving files or code fragments What we can configure in this approach is which fields of the warning to include to form its hash value (or "signature"). 
To avoid problems related to the shifting of line numbers, don't include the line number in the list of these fields. Here is an example list of fields that can form a warning signature: * Diagnostic name or code * Warning message * File name * Source line that triggers the warning The more properties used, the lower the risk of collision, but also the higher the risk of getting an unexpected warning due to signature invalidation. If any of the specified properties changes, the warning will no longer be ignored. Along with the warning-triggering line, PVS-Studio stores the line before and the line after it. This helps better identify the triggering line, but with this approach, you may start getting the warning after modifying a neighboring line. Another – less obvious – property is the name of the function or method the warning was issued for. This helps reduce the number of collisions but renaming the function will cause a storm of warnings on it. It was empirically found that using this property allows you to weaken the filename field and store only the base name rather than the full path. This enables moving files between directories without the signature getting invalidated. In languages like C#, PHP, or Java, where the file name usually reflects the class name, such moves may have sense. A well composed set of properties makes the baseline approach more effective. Collisions in a baseline method ------------------------------- Suppose we have a diagnostic, **W104**, that detects calls to **die** in the source code. The project under analysis has the file *foo.php*: ``` function legacy() { die('test'); } ``` The properties we use are {file name, diagnostic code, source line}. 
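As an illustration, a signature built from exactly that property set ({file name, diagnostic code, source line}) could be sketched like this in Python. The warning dictionary format is invented for the example; every real analyzer has its own.

```python
import hashlib

def signature(warning):
    """Build a signature from shift-tolerant fields: diagnostic code,
    file base name (so moving the file keeps the signature valid), and
    the trimmed source line that triggered the warning."""
    base = warning["filename"].replace("\\", "/").rsplit("/", 1)[-1]
    parts = (warning["diag"], base, warning["line"].strip())
    return hashlib.sha1("\x00".join(parts).encode("utf-8")).hexdigest()

old = {"filename": "src/foo.php", "diag": "W104", "line": "die('test');"}
moved = {"filename": "lib/foo.php", "diag": "W104", "line": "  die('test');"}

# Moving the file or re-indenting the line leaves the signature intact,
# while changing the diagnostic code or the source line invalidates it.
print(signature(old) == signature(moved))  # True
```

Note how each field trades precision for stability: dropping the directory from the file name tolerates moves, and trimming the line tolerates re-indentation, at the cost of more potential collisions.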
When creating a baseline, the analyzer adds the call *die('test')* to its ignore base:

```
{
    "filename": "foo.php",
    "diag": "W104",
    "line": "die('test');"
}
```

Now, let's add some more code:

```
+ function newfunc() {
+     die('test');
+ }
  function legacy() {
      die('test');
  }
```

All properties of the new call *die('test')* are exactly the same as those forming the signature of the fragment to be ignored. That's what we call a collision: a coincidence of warning signatures for potentially different code fragments.

One way of solving this issue is to add an additional field to distinguish between the two calls – say, "name of containing function". But what if the new *die('test')* call is added to the same function? Neighboring lines may be the same in both cases, so including the previous and next lines in the signature won't help.

This is where the counter of signatures with collisions comes in handy. It will let us know that we get two or more warnings when only one was expected inside a function – then all warnings but the first must be shown. With this solution, however, we somewhat lose in precision: you can't determine which line is newly added and which already existed. The analyzer will report the line following the ignored ones.

Approach based on diff capabilities of VCSs
-------------------------------------------

The original goal was to have warnings issued only on "newly written" code. This is where version control systems can be useful.

The [revgrep](https://github.com/bradleyfalzon/revgrep) utility receives a flow of warnings at *stdin*, analyzes the *git diff*, and outputs only warnings produced for new lines.

> [golangci-lint](https://github.com/golangci/golangci-lint) *employs revgrep's fork as a library, so it uses just the same algorithms for diff calculations.*

If you take this path, you'll have to answer the following questions:

* What commit range to choose for calculating the diff?
* How are you going to process commits coming from the master branch (merge/rebase)?

Also keep in mind that sometimes you still want to get warnings outside the scope of the diff. For example, suppose you deleted a meme director class, *MemeDirector*. If that class was mentioned in any doc comments, you'd like the linter to tell you about that. We need not only to get a correct set of affected lines but also to expand it so as to trace the side effects of the changes throughout the whole project.

The commit range can also be different. You wouldn't probably want to check the last commit only, because in that case you'd be able to push two commits at once: one with the warnings, and the other for CI traversal. Even if done unintentionally, this poses a risk of overlooking a critical defect. Also keep in mind that the previous commit can be taken from the master branch, in which case it shouldn't be checked either.

diff mode in NoVerify
---------------------

[NoVerify](https://github.com/VKCOM/noverify) has two working modes: diff and full diff. The regular diff can find warnings on files affected by modifications within the specified commit range. It's fast, but it doesn't provide thorough analysis of dependencies, and so new warnings on unaffected files can't be found.

The full diff runs the analyzer twice: first on existing code and then on new code, with subsequent filtering of results. This is similar to generating a baseline file on the fly based on the ability to get the previous version of the code using git. As you would expect, execution time increases almost twofold in this mode.

The initially suggested scenario was to run the faster analysis on pre-push hooks – in diff mode – so that feedback comes as soon as possible, then run the full diff mode on CI agents. As a result, people would ask why issues were found on agents but none were found locally.
It's more convenient to have identical analysis processes, so that passing a pre-push hook guarantees passing the linter's CI phase.

full diff in one pass
---------------------

We can make an analog of full diff but without having to run double analysis. Suppose we've got the following line in the diff:

```
- class Foo {
```

If we try to classify this line, we'll tag it as "Foo class deletion". Each diagnostic that in any way depends on the class being present must issue a warning if this class got deleted. Similarly, when deleting variables (whether global or local) and functions, we have a collection of facts generated about all changes that we can classify.

Renames don't require additional processing. We view a rename as deleting the symbol with the old name and adding a symbol with the new one:

```
- class MemeManager {
+ class SeniorMemeManager {
```

The biggest difficulty here is to correctly classify lines with changes and reveal all their dependencies without slowing down the algorithm to the speed of full diff with double traversal of the code base.

Conclusions
-----------

**baseline**: a simple approach used in many analyzers. The obvious downside to it is that you will have to place this baseline file somewhere and update it every now and then. The more appropriate the collection of properties forming the warning signature, the higher the accuracy.

**diff**: simple in basic implementation but complicated if you want to achieve the best result possible. Theoretically, this approach can provide the highest accuracy. Your customers won't be able to integrate the analyzer into their process unless they use a version control system.
| **baseline** | **diff** |
| --- | --- |
| + can be easily made powerful | + doesn't require storing an ignore file |
| + easy to implement and configure | + easier to distinguish between new and existing code |
| − collisions must be resolved | − takes much effort to prepare properly |

Hybrid approaches are also used: for example, you first take the baseline file and then resolve collisions and calculate line shifts using git.

Personally, I find the diff approach more elegant, but the one-pass implementation of full analysis may be too problematic and fragile.
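To make the diff side concrete, here is a minimal revgrep-style filter sketched in Python (revgrep itself is written in Go; the unified-diff handling here is simplified, and the warning tuple format is invented):

```python
import re

def added_lines(diff_text):
    """Map each file in a unified diff to the set of its added line numbers."""
    added, current, lineno = {}, None, 0
    for line in diff_text.splitlines():
        if line.startswith("+++ "):
            current = line[4:].removeprefix("b/")  # new-file header
        elif line.startswith("@@"):
            # Hunk header "@@ -a,b +start,count @@": reset the line counter.
            lineno = int(re.search(r"\+(\d+)", line).group(1))
        elif line.startswith("+"):
            added.setdefault(current, set()).add(lineno)
            lineno += 1
        elif not line.startswith("-"):
            lineno += 1  # context line advances the new-file counter
    return added

diff = """\
--- a/foo.php
+++ b/foo.php
@@ -1,3 +1,6 @@
+function newfunc() {
+    die('test');
+}
 function legacy() {
     die('test');
 }
"""

new = added_lines(diff)  # {'foo.php': {1, 2, 3}}
warnings = [("foo.php", 2, "W104"), ("foo.php", 5, "W104")]
fresh = [w for w in warnings if w[1] in new.get(w[0], set())]
# Only the warning on line 2 (the newly added die) survives the filter.
```

The W104 warning on the legacy *die('test')* at line 5 is silently dropped, which is exactly the behavior the baseline approach needs a signature database to achieve.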
https://habr.com/ru/post/513952/
Buzzer utility for pi

Project description

Summary

This is a beeping utility for the common buzzer. This package comes with a variety of beeps that can be called from a command-line interface or imported from Python.

Installation

To install, simply install the pip package:

pip install beep

You can then look at the examples on how to run it as a CLI. You can also import it into your project using:

from beep import pulseBeep, beepDuration

beepDuration(pin=12, duration=.33)  # beeper on pin 12, on for .33 sec
pulseBeep(pin=12, freq=25, duration=1)  # pulse beep at 25 Hz for 1 second

Beeps

- short: duration 0.05 sec
- medium: duration 0.25 sec
- long: duration 1.00 sec
- warning: pulses 8 Hz for 1.5 sec duration
- confirmed: pulses 16 Hz for .5 sec duration
- brr: pulses 50 Hz for .5 sec duration

Example

The CLI can use any of the beeps listed above, in the format of the following commands:

beep.py warning --pin 12
beep.py short --pin 12
beep.py brr --pin 12

Project details

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/pibeep/0.0.1.1/
William P O'Sullivan wrote:
What type of "project" did you start? When you select "new project", Eclipse does most of the setup for you. You then need to add dependent jars to your Java Build Path in the project properties. Eclipse adjusts the .classpath file automatically. I sense you are trying to do too much too quickly, even your comment about "random" examples. Start small; there are in fact a ton of examples in the posts here. I simply copy them to a workspace and help!
WP

Arnold Strong wrote:
I am not really a beginner anymore. This is a "hello world" for Google Analytics using the Java API. The code below is from the Google website.

public class HelloAnalytics {

  private static final HttpTransport HTTP_TRANSPORT = new NetHttpTransport();
  private static final JsonFactory JSON_FACTORY = new JacksonFactory();

  private static Analytics initializeAnalytics() throws Exception {
    Credential credential = OAuth2Native.authorize(
        HTTP_TRANSPORT, JSON_FACTORY, new LocalServerReceiver(),
        Arrays.asList(AnalyticsScopes.ANALYTICS_READONLY));
    // ...
  }
}

Aniruddh Joshi wrote:
- You need not copy all referenced libraries into lib.

Aniruddh Joshi wrote:
- A valid JDK or JRE is not on your classpath, since the class Arrays is not identified. Check if it's a Google version of Arrays. Posting your import statements at the top of the file will help. They should be marked in red by Eclipse too. When you work with Java in Eclipse (or any IDE for that matter), first see if anything is red in your import statements.

Aniruddh Joshi wrote:
- Google jars are NOT on your classpath, since classes like HttpTransport and Credentials are not identified. The simplest way to check is to open the .classpath file.
http://www.coderanch.com/t/585246/vc/Frustrating-eclipse-resolved-type
Using CIM Studio to Understand the IIS Metabase

This topic explains the relationship between the IIS metabase and the structure of WMI, and compares it with the structure of ADSI. A tool called CIM Studio walks you through the IIS WMI provider and the IIS metabase at the same time. CIM Studio comes with the WMI Administrative Tools, which can be downloaded from MSDN. For more information about downloading CIM Studio, see Information. It is strongly recommended that you install CIM Studio and use it as instructed to learn to use the IIS WMI provider more quickly. If you are already familiar with WMI, it is recommended that you skim through this section.

IIS 5.1 and earlier: The IIS WMI provider is not available.

To familiarize yourself most efficiently with the IIS WMI provider, review the information on managing IIS in IIS Manager, the IIS metabase schema and configuration file, the IIS Managed Object Format (MOF) file, and CIM Studio. Most systems have management GUIs that are tedious to use if you have large amounts of data or frequent management tasks, but the GUI is the most modular way to learn how to manage a system, so it is used here as an orientation. The IIS metabase schema and configuration files are used to emphasize the difference between the structure of the IIS metabase and the data it contains that is unique to your machine. The IIS MOF file and CIM Studio give you a pictorial representation of the IIS WMI provider. If you have the tools for the IIS Resource Kit installed, another useful tool to have open is the Metabase Editor, which acts like a CIM Studio for the IIS ADSI interface; however, this tutorial does not reference the Metabase Editor tool.

To open IIS Manager

1. From the Start menu, click Run.
2. In the Open box, type inetmgr, and click OK.

For alternate methods to open IIS Manager, see IIS Manager.

To open the IIS metabase schema file

1. From the Start menu, click Run.
2. In the Open box, type Notepad %systemroot%\System32\Inetsrv\mbschema.xml.
3. From the File menu in Notepad, click Save As.
4. In the File name box, type C:\mbschema, and click Save. This saves the file to C:\mbschema.txt so you can safely modify the file without altering the configuration of IIS.

To open the IIS metabase configuration file

1. From the Start menu, click Run.
2. In the Open box, type Notepad %systemroot%\System32\Inetsrv\metabase.xml, and click OK.
3. From the File menu in Notepad, click Save As.
4. In the File name box, type C:\metabase, and click Save. This saves the file to C:\metabase.txt so you can safely modify the file without altering the configuration of IIS.

To open the IIS WMI provider MOF file

1. From the Start menu, click Run.
2. In the Open box, type Notepad %systemroot%\System32\wbem\iiswmi.mof, and click OK.
3. From the File menu in Notepad, click Save As.
4. In the File name box, type C:\iiswmi, and click Save. This saves the file to C:\iiswmi.txt so you can safely modify the file without altering the configuration of IIS.

After you download the WMI Administrative Tools from MSDN, WMI Tools should appear in the Start menu. For more information about downloading CIM Studio, see Information.

To open CIM Studio

1. From the Start menu, click All Programs, point to WMI Tools, then click WMI CIM Studio. This opens an instance of Internet Explorer containing the CIM Studio tool. You may get an error message that says "VBScript: Possible Version Incompatibility" if the tool is a pre-release version, but that is just a disclaimer until the final version is released. Click OK.
2. When the Connect to Namespace dialog box appears, click Cancel. This dialog box appears only the first time you open CIM Studio. Anytime thereafter that you want to browse to a WMI namespace, you can click the button with the computer icon to the right of the Classes in box in the main Internet Explorer window. In this tutorial, the button with the computer icon is referred to in order to reduce confusion.
3. Click the button with the computer icon to the right of the Classes in box. The Browse for Namespace dialog box opens.
4. In the Machine name box, type \\ and the name of the machine on which IIS 6.0 is installed. The default is the local machine name.
5. In the Starting namespace box, type root\MicrosoftIISv2. This tells CIM Studio that you want to connect to the IIS WMI provider on the above machine. If you want to see a list of other providers that are accessible, leave the starting namespace as root.
6. Click Connect. The WMI CIM Studio Login dialog box opens. Click Options to view the entire dialog box.
7. If the machine you are connecting to lists you as an administrator, you can leave the Login as current user check box selected. If you aren't listed as an administrator, clear the Login as current user check box and enter an administrator user name and password in the appropriate fields.
8. In the Impersonation level box, verify that Impersonate is selected.
9. In the Authentication level box, verify that Packet is selected.
10. Select the Enable all privileges check box only if you want to be able to change properties in the IIS metabase using CIM Studio.
11. Click OK to connect. If an Access Denied error appears, check your user name and password to make sure they are correct.
12. If you left the Starting namespace box as root in step 5, the Browse for Namespace dialog box opens again. Expand the root directory to see all the available WMI providers, select MicrosoftIISv2, and click OK to connect to the IIS WMI provider.

The CIM Studio window loads all the classes for the IIS WMI provider in the left frame. All the classes that start with a "__" are inherited from the root namespace. The other five classes are CIM_ManagedSystemElement, CIM_Setting, IIsStructuredDataClass, CIM_Component, and CIM_ElementSetting.

Now that you have three views of IIS open, this lesson helps you make connections between the content of each as you browse through properties and methods in CIM Studio.
To view the default Web site

In IIS Manager, expand the Web Sites folder, right-click Default Web Site, and click Properties to view the configuration properties for the default Web site. Click the Web Site tab; in the Description box, Default Web Site is selected by default. Click the Home Directory tab; in the Local path box, C:\Inetpub\Wwwroot is selected by default if you installed Windows on the C: drive.

Open Notepad, open the IIS configuration file, MetaBase.txt, and from the Edit menu, click Find. In the Find what box, type Default Web Site. You should find the following block of text:

The metabase files use Extensible Markup Language (XML) format, which is similar to HTML. The Default Web Site is an instance of the IIsWebServer class. The ServerComment property matches the Description box in IIS Manager. Where is the property that matches the Local path box in IIS Manager? In the MetaBase.txt file, search for C:\Inetpub\Wwwroot to find the following text:

<IIsWebVirtualDir ...>
</IIsWebVirtualDir>

The Default Web Site appears to also be a virtual directory, or in other words, an instance of the IIsWebVirtualDir class. This is because all Web sites must have a root virtual directory (notice the difference between the Location properties of each class). The Path property matches Local path in IIS Manager. The property sheet for the Default Web Site shows both the IIsWebServer properties and the IIsWebVirtualDir properties.

Open the IIS schema file, MBSchema.txt, in Notepad and search for the schema of the IIsWebServer class until you find the following line:

Under that line are all the properties that belong to the IIsWebServer class. ServerComment is listed among them. In the MBSchema.txt file, search for IIsWebVirtualDir until you find the following line:

Under that line is the schema for the Path property. So far, you are looking at the schema and configuration of the IIS metabase, without reference to WMI.
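The IIsWebServer excerpt that the step above tells you to find ("You should find the following block of text:") does not survive in this copy. As a hedged reconstruction, a representative IIsWebServer block in the metabase file looks roughly like this; the Location value and the abridged attributes are illustrative, not quoted from the file:

```xml
<IIsWebServer
    Location="/LM/W3SVC/1"
    ServerComment="Default Web Site"
    ...
>
</IIsWebServer>
```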
WMI is, after all, only a tool to access the metabase programmatically. Now open IisWmi.txt in Notepad, and search for IIsWebServer until you find the following line:

This is the definition of the IIsWebServer class, which WMI uses to access any instance of an IIS Web site. IIsWebServer is a subclass of the CIM_LogicalElement class, which in turn is a subclass of the CIM_ManagedSystemElement class. Notice that there are far fewer properties listed as members of this class than are listed under the metabase schema <Collection> tag for IIsWebServer. Where are all the other properties? Where is ServerComment?

The CIM_ManagedSystemElement class contains only the read-only properties and methods for an object. Look at the above excerpt from the IisWmi.txt file to see lines that begin with [Implemented. These lines define the methods contained in the IIsWebServer class. All the writable properties are under the matching element class in the CIM_Setting class, in this case, IIsWebServerSetting. In the IisWmi.txt file, search for IIsWebServerSetting until you find the following line:

Under that line is the definition for the ServerComment property. All properties start with [Key or [read. [Key indicates that the property is the identification key for the element, and is usually the Name property.

CIM Studio graphically displays the IIS MOF file (or any service's MOF file). As a bonus, CIM Studio can also display instances of any class so that you can view current configuration settings on your Web server. View the Internet Explorer window that contains CIM Studio. Where is the IIsWebServer class? As you saw in the IisWmi.txt file, it is a subclass of the CIM_LogicalElement class, which in turn is a subclass of the CIM_ManagedSystemElement class. Expand CIM_ManagedSystemElement (click the plus sign beside it), then expand CIM_LogicalElement. All the IIS object classes are listed. Click IIsWebServer.
The read-only properties, methods, and associations are displayed in the right frame. The Name property is marked with a key icon to identify it as the key of the element.

To view instances of the IIsWebServer class (to view the Web sites that are on your server)

Click the Instances button above the right frame; it looks like a window with two frames. If you are unsure which button you are looking at, let the mouse cursor hover over the button and Instances should pop up. On a default installation, only one instance appears: W3SVC/1. All the fields in the row correspond to the read-only properties for IIsWebServer, and in the fields are the current values in the metabase configuration file.

When you write administration scripts to read or change properties in the metabase, it helps to use the metabase files, the IIS MOF file, or CIM Studio to know which properties are available to an object. An advantage of CIM Studio is that you can view the parameters that you need to pass to a method.

To view parameters of a method

Click Instances again to go back to the schematic view. Click the Methods tab. Right-click a method name and click Edit Method Parameters. A window appears listing the parameters of the method. There are no parameters for the methods of the IIsWebServer object, but there are for the IIsWebService object.

Organization in WMI is dictated by the needs of inheritance. The IIsWebService class is a prime example. The IIS services take advantage of inheritance of base service classes by being subclasses of those base service classes. To find IIsWebService, look under CIM_ManagedSystemElement, CIM_Service, Win32_BaseService, Win32_Service. This level of nesting does not make programmatic access difficult because you can directly refer to the IIsWebService class when writing a script.

To view the parameters of the IIsWebService object

Click IIsWebService in the left frame. Click the Methods tab in the right frame.
Right-click CreateNewSite and click Edit Method Parameters. If you write a script that creates a new Web site using this method, you need to call the method with the following syntax:

Is the ServerId parameter (fourth parameter) missing? No, it is optional, as you can see by right-clicking ServerId in the right frame of CIM Studio and clicking Parameter Qualifiers. The OPTIONAL qualifier is set to true.

To view the writable properties of the IIsWebServer object

In the left frame of CIM Studio, expand CIM_Setting, expand IIsSetting, and click IIsWebServerSetting. Again, you can click Instances in the right frame to view the current configuration settings of all of the Web sites on your server.

Go back to the schematic view of IIsWebServerSetting and click the Associations tab. This is how WMI determines that IIsWebServerSetting is related to IIsWebServer of the CIM_ManagedSystemElement class. This association is represented under the CIM_ElementSetting class as the IIsWebServer_IIsWebServerSetting class.

Go back to the IIsWebServer class and click the Associations tab. The association to IIsWebServerSetting is there, but so are several others. The other associations represent containment of one class in another, allowing for inheritance of some properties and methods. Each containment association is represented under the CIM_Component class.

Finally, there is the IIsStructuredDataClass. This class contains elements that were represented in a complicated, error-prone manner in ADSI. You should become familiar with the elements of this class and their properties. Notice that they are not associated with any other class. They can exist on their own. As we saw in the parameters of the CreateNewSite method, an array of instances of these elements can be passed to methods.
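The CreateNewSite call syntax referenced above ("you need to call the method with the following syntax:") is missing from this copy. Based on the parameter list visible in CIM Studio and the IIS 6.0 documentation, the call looks roughly like the sketch below; treat the exact parameter names as an assumption rather than a quotation:

```
IIsWebService.CreateNewSite(
    ServerComment,          ' friendly name shown in IIS Manager
    ServerBindings(),       ' array of ServerBinding instances (IP, port, host header)
    PathOfRootVirtualDir,   ' filesystem path for the root virtual directory
    [ServerId])             ' optional numeric site identifier
```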
http://msdn.microsoft.com/en-us/library/ms525342(v=vs.90).aspx
Last night I produced the plot below and was very surprised at the jagged spike. I knew the curve should be smooth and strictly increasing. My first thought was that there must be a numerical accuracy problem in my code, but it turns out there's a bug in SciPy version 0.8.0b1. I started to report it, but I saw there were similar bug reports and one such report was marked as closed, so presumably the fix will appear in the next release.

The problem is that SciPy's erf function is inaccurate for arguments with imaginary part near 5.8. For example, Mathematica computes erf(1.0 + 5.7i) as -4.5717×10^12 + 1.04767×10^12 i. SciPy computes the same value as -4.4370×10^12 + 1.3652×10^12 i. The imaginary component is off by about 30%.

Here is the code that produced the plot.

from scipy.special import erf
from numpy import linspace, exp, sqrt  # sqrt added; the original listing omitted it
import matplotlib.pyplot as plt

def g(y):
    z = (1 + 1j*y) / sqrt(2)
    temp = exp(z*z)*(1 - erf(z))
    u, v = temp.real, temp.imag
    return -v / u

x = linspace(0, 10, 101)
plt.plot(x, g(x))

4 thoughts on "Bug in SciPy's erf function"

In [31]: sp.__version__
Out[31]: '0.9.0.dev'

In [32]: sp.special.erf(1.0 + 5.7j)*1e-12
Out[32]: (-4.5717045780553551+1.0476748318787288j)

The fix is also included in the current SciPy 0.8.0 release.

Hello, with the error function defined as

erf(x) = (2/sqrt(pi)) * integral from 0 to x of exp(-t^2) dt    (1)

and the rational approximation

erf(x) ≈ 1 - 1/(1 + 0.278393x + 0.230389x^2 + 0.000972x^3 + 0.078108x^4)^4 + e(x)    (2)

I did not understand how to obtain (2) from (1). Thanks.
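An independent way to sanity-check an erf implementation, in the spirit of the Mathematica comparison above, is to evaluate the Maclaurin series directly. The sketch below is not from the post; it checks real arguments against the standard library's math.erf. Note that a naive series like this is exactly the kind of approach that loses accuracy for complex arguments with a large imaginary part, because the partial sums suffer catastrophic cancellation.

```python
import math

def erf_series(x, terms=40):
    # Maclaurin series: erf(x) = (2/sqrt(pi)) * sum_{n>=0} (-1)^n x^(2n+1) / (n! * (2n+1))
    total = 0.0
    power = x  # holds (-1)^n * x^(2n+1)
    for n in range(terms):
        total += power / (math.factorial(n) * (2 * n + 1))
        power *= -x * x
    return (2.0 / math.sqrt(math.pi)) * total

for v in (0.5, 1.0, 2.0):
    print(v, erf_series(v), math.erf(v))
```

For moderate real arguments the series agrees with math.erf to near machine precision; the disagreement the post describes only shows up for complex inputs, which math.erf does not accept.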
http://www.johndcook.com/blog/2010/09/02/bug-in-scipys-erf-function/
In this article, we will learn numerous ways to execute system commands in Python.

➥ Problem: Given an external command that can run on your operating system, how do you call the command using a Python script?

➥ Example: Say you want to ping a remote server using your operating system's ping command, all from within your Python program. How will you do it?

Before learning to execute system commands with Python, you should have an idea of what system commands are.

👨💻System Commands In Python

System administration in Python involves coordinating features that handle various tasks: finding files, running shell commands, doing high-level file handling, and more. To do so, we need an interface between the operating system and the Python interpreter. Whether you are a developer or a system administrator, you will regularly write automation scripts that involve system commands.

Now that we know what a system command is, let's dive into the different methods to execute system commands with Python.

❖ The OS Module

Python's standard library is loaded with valuable tools, and the os module is one of the most powerful among them. It is a standard utility module that helps you communicate with the operating system.

➠ In other words, the os module in Python provides several functions to interact with the shell and the system directly. Let us have a look at a couple of methods to invoke system commands using the os module.

(1) os.system() – Runs the command in a subshell and returns its exit status, where 0 indicates successful execution.

(2) os.popen() – Opens a pipe to the command and returns a file-like object, so the command's output can be read with read().

💻os.system()

The os.system() function helps you interact directly with the shell by passing commands and arguments to the system shell.
It returns an exit code upon completion of the command: an exit code of 0 denotes successful execution, while any other value indicates an error.

Note: You must import the os module in order to utilize it.

✨Example:

# Importing the os module
import os

# System command to check the Python version
ver = "python --version"

# This method will return the exit status of the command
status = os.system(ver)
print('The returned Value is: ', status)

# System command to run Notepad
editor = 'notepad'
os.system(editor)

Output:

The above example returns the Python version and opens Notepad on Windows using command-line execution. It also returns the exit code (status = 0), which implies that the program executed the system command successfully.

💻os.popen()

os.popen() does the same thing as os.system() except that it returns a file-like object that you can use to access the standard input or output for that process. If you call read() on that object, the command's output is returned to Python as a single string, which may contain multiple lines separated by the newline character \n.

Syntax: os.popen(command[, mode[, bufsize]])

Here,

- command is what you'll execute, and its output will be accessible through an open file.
- mode characterizes whether this output file is readable 'r' or writable 'w'. To recover the exit code of the command executed, you must use the close() method of the file object.
- bufsize tells popen how much data it can buffer. It accepts one of the following values:
  - 0 – unbuffered
  - 1 – line buffered
  - N – approximate buffer size, when N > 0.

✨Example 1: Let us have a look at the following program, which uses popen to print a string as output by invoking the shell command echo:

import os
print(os.popen("echo Hello FINXTER!").read())

Output:

Hello FINXTER!

✨Example 2: The following example illustrates how you can use the os.popen() command to create a new folder in Windows using the mkdir command.
# Importing the os module
import os

# This method will store the output
p = os.popen('mkdir new_folder')

# Printing the read value
print(p.read())

Output:

❖ The Subprocess Module

The subprocess module comes with various methods and functions to spawn new processes, connect to their input, output, and error pipes, and obtain their return codes. Just like the os module, you have to import the subprocess module in order to utilize it.

💻subprocess.call()

The subprocess.call() method takes in the command-line arguments. The arguments are passed as a list of strings, or as a single string with the shell argument set to True. The subprocess.call() function returns an exit code, which can then be used in the script to determine whether the command executed successfully or returned an error. Any return code other than 0 means that there was an error in execution.

✨Example: Let us have a look at the following program using the call() function to check if the ping test is successful in the system:

import subprocess

return_code = subprocess.call(['ping', 'localhost'])
print("Output of call() : ", return_code)

Output:

💻subprocess.Popen

subprocess.Popen allows the execution of a program as a child process. Since this is executed by the operating system as a separate process, the results are platform-dependent. subprocess.Popen is a class and not simply a method. Consequently, when we call subprocess.Popen, we're really calling the constructor of the class Popen.

Note: The Python documentation suggests using subprocess.Popen only in advanced cases, when other strategies such as subprocess.call can't satisfy our needs.

✨Example:

# Importing the subprocess module
import subprocess

# path will vary for you!
subprocess.Popen(r'C:\Program Files (x86)\Microsoft Office\Office14\EXCEL.EXE', shell=True)

Output:

The above code will execute the command and start MS Excel, given that shell is set to True.
💻subprocess.run()

🖊️Post Credits: SHUBHAM SAYON and RASHI AGARWAL
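The subprocess.run() section is truncated in this copy. As a hedged sketch of what it covers: subprocess.run() (available since Python 3.5) is the recommended high-level interface; it runs a command, waits for it to finish, and returns a CompletedProcess object carrying the return code and, optionally, the captured output.

```python
import subprocess
import sys

# Run the current Python interpreter as a child process and capture its output.
result = subprocess.run(
    [sys.executable, "-c", "print('Hello FINXTER!')"],
    capture_output=True,  # Python 3.7+; on 3.5/3.6 use stdout=subprocess.PIPE instead
    text=True,            # decode stdout/stderr from bytes to str
)

print("Return code:", result.returncode)  # 0 means success
print("Output:", result.stdout.strip())
```

Unlike call(), which only returns the exit code, run() bundles the exit code and the captured streams in one object, which makes it easier to both check for errors and consume the output.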
https://blog.finxter.com/how-to-execute-system-commands-with-python/
tests fixed. Thanks!

I am curious to know more about the issues with JAXB that caused the switch to XMLBeans. Can anyone explain the details around this issue? While the default JAXB binding configuration does not retain namespace information for the HTD expression nodes, I believe the JAXB binding configuration can be modified to do so. It seems that most of the HTD expressions are of types TExpression and TExtensibleMixedContentElements. Both of these types are XML mixed types, and the preferred JAXB binding for mixed types is a W3C DOM. If the HTD expressions were represented as DOM objects, then the namespaces could easily be retrieved by invoking the DOM lookupNamespaceURI method, and the prefixes/namespaces could then be programmatically declared to Saxon. Having the expression nodes as DOM objects would probably be more efficient than the previous conversion of JAXB-generated strings back to DOM nodes for XQuery evaluation.

Another option would be to create a custom JAXB javaType binding in the JAXB custom binding file and specify parse and print methods, as well as setting the hasNsContext JAXB attribute to true so that the current NamespaceContext could be passed in and retained in the generated Java object.

If desired I could demonstrate either of these approaches using a recent revision of the trunk prior to the conversion to XMLBeans.

Why do you think JAXB is better? At the beginning I did not want to remove JAXB, but I encountered difficulties:

1. A JAXB object, and also a marshalled JAXB object, does not retain all declared namespaces. I do not know how I can change this using the binding configuration.
2. I think that by using <jaxb:dom/> I could get only a fragment of the document tree. That fragment doesn't contain "unused" namespace declarations from parent nodes.
3. hasNsContext is not supported by the JAXB RI.

Now, HISE has reached stability after the change.
I do not know whether we want to revert to JAXB. Rafał decides.

I'm curious about a solution to this problem, so let me know if you have one.

The main reason I would like HISE to continue to use JAXB is that it is an official JCP standard and it is fully integrated into JAX-WS. A couple months ago I made a contribution to remove the direct dependency on Spring so that HISE could be used in other frameworks such as Java EE 6 and SCA. I am currently working on a Java EE 6 deployment framework for HISE, but unfortunately I have not been able to complete it due to recent increases in work and travel for my job. The switch to XMLBeans may halt my work, since I am unfamiliar with XMLBeans and I don't believe there is native support for an XMLBeans binding for JAX-WS outside of CXF. There was no discussion on the HISE development list on the challenges of using JAXB or the decision to switch to XMLBeans, otherwise I would have offered to contribute to related issues earlier. It appears you have already looked into JAXB DOM bindings and didn't have any success, and due to time constraints switching to XMLBeans was the only option. I will see if I can find an adequate JAXB solution and report back.

After reverting the trunk to use JAXB again on my local machine, I was able to retrieve the full NamespaceContext for the tExpression elements. I had to attach the jaxb:dom binding to both the tExpression instances (priority, etc.) and the TTask's any declaration. After unmarshalling the WS-HumanTask document I am able to pull the DOM element from the any property, and then I can use the Apache Commons XMLSchema NodeNamespaceContext utility to rebuild the NamespaceContext just like how it is done currently. I used the following example:

<htd:humanInteractions xmlns:htd="..." ...>
  <htd:tasks>
    <htd:task ...>
      <htd:priority xmlns:cla="..." xmlns:cla2="..." xmlns:cla3="...">
        xs:integer(htd:getInput("ClaimApprovalRequest")/cla:prio*(cla2:t2+cla3:t3))
      </htd:priority>

and I was able to pull back the prefix mappings for the three cla declarations, so I am fairly confident this solution is adequate. Because the DOM content is being attached to the any property instead of the corresponding named property, I will need to further enhance the binding customization to compensate for this.

Now that I have concluded that it is technically possible to revert back to JAXB, I was wondering if there is any interest in doing so. The change would be quite extensive, and due to the amount of work left I would like to get feedback from the project on whether it is worth pursuing before I commit to completing the task.

Here is a patch that re-introduces JAXB marshalling to the current trunk. It uses the JAXB Binder class to map JAXB classes to DOM elements and vice versa. As a future enhancement, a custom-coded JAXB class for the tExpression type can be inserted into the JAXB code generation process, and an unmarshaller listener can be created to populate the custom tExpression type with a NamespaceContext value. This way, after unmarshalling, the Binder and DOM document can be discarded while the necessary expression objects would still have the namespaceContext property available for expression evaluation.

Another enhancement would be to create custom-coded JAXB classes for the lax types that contain content that needs to be unmarshalled properly. More specifically, the tFrom type. Without the JAXB binding customization generateMixedExtensions="true", the tFrom children remain as DOM objects and do not get unmarshalled. Even with generateMixedExtensions="true" the content does get marshalled, but it gets placed in a protected field that can only be accessed via reflection. This is rather inelegant and should be remedied.

Namespace context of expressions is available with XMLBeans.
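The DOM-based approach discussed above, keeping expression content as DOM nodes and recovering in-scope prefixes via lookupNamespaceURI, can be sketched with plain JAXP, independent of JAXB or HISE. The element and namespace names below are made up for illustration and are not taken from the WS-HumanTask schema:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class NsLookupDemo {

    // Parse a small document and resolve a prefix from an inner element,
    // the way an HTD expression node would resolve "cla" at evaluation time.
    public static String resolve() {
        try {
            String xml =
                "<htd:task xmlns:htd='urn:example:htd' xmlns:cla='urn:example:claims'>"
              + "<htd:priority>htd:getInput(\"req\")/cla:prio</htd:priority>"
              + "</htd:task>";

            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true); // required, or lookupNamespaceURI returns null
            Document doc = dbf.newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

            Node priority = doc.getElementsByTagNameNS("urn:example:htd", "priority").item(0);

            // A namespace declared on an ancestor is visible from the expression node.
            return priority.lookupNamespaceURI("cla");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(resolve()); // urn:example:claims
    }
}
```

Because lookupNamespaceURI walks the ancestor chain, this works even when the declaration sits far above the expression node, which is exactly the information a string-based binding loses.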
https://issues.apache.org/jira/browse/HISE-68?focusedCommentId=12897673&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
Dynamic help

I had an attendee come up to me today asking if they could talk to someone from VS-Core. 'Core' is the name that we use for the team that is responsible for the VS infrastructure that is shared amongst all languages. An example of this would be the editor. While the C# team is responsible for the C# editing experience, we fulfill that experience by sitting on top of the interfaces provided by Core. Of course, this isn't a hard rule. The debugger is shared by all languages, however they're contained within the C# team.

He was happy with VS as a whole, but wanted to give some critical feedback about Dynamic Help. The basic problem was that while he used (and liked) DH, he didn't like how having it open could affect the start-up time of VS. I agreed that that sort of a delay is quite frustrating and that we should work hard to fix up that issue. I also pointed him to Stephanie (a PM on the Core team) to let him talk to her directly about these issues.

It got me thinking, and I started asking attendees what they used DH for. It turns out that a big part of it is to get information about the structure of a class. I.e. if they type "IList list", then they'll get the help on IList (using DH) so that they can know all about that class in a way that completion lists fail to provide. They can see all the information at once, rather than just 8 lines at a time. They can see inheritance relationships and casts. They can see all the docs at once rather than method by method.

Well, it turns out that we've added something into the C# editing experience to help accomplish all these needs. It's called the "Code Definition" window, and it works like this: when your cursor is on an identifier, we automatically determine the definition of that identifier and we show the code where that thing is defined in this window.
For example if you have:

IList snarf;
//lots of code
Console.WriteLine(snarf);

If you then have your cursor on the 'snarf' identifier in the last line, then the code definition window will show the source code for this file centered on the line containing 'IList snarf'. So right then and there you know what snarf is. Another case is where you have your cursor on 'IList' in the case above. We'll then show you the code for IList in the definition window. Now, chances are you don't have the source code for IList, so what we'll do instead is take the metadata for System.Collections.IList and we'll convert it into pseudo-C#. We'll even take the XML doc comments and we'll insert them above all the class/method definitions. I call it pseudo-C# because while it's grammatically correct, it's uncompilable. For example, say you're looking at the definition of ArrayList. We'll spit out something like:

public class ArrayList : //some bases and interfaces
{
    //lots of stuff
    int Count { get; set; }
    //lots of stuff
}

i.e. we'll spit out the signatures of methods, but not the bodies (although decompiling IL would be a fun project). Having this capability seemed to meet all the needs of how people tended to use DH. The places where it doesn't are when you want help to stay up in a separate window and you don't want it to change when you move around in your file. It also doesn't help when you are looking for things that are help-only (like examples).

So I was wondering how people here felt about DH. Do you love it/hate it? Either way, why? I'll work on getting some images up of the Code Definition window, but before I do that, feel free to give me some thoughts on whether you like the idea, or are looking for something else.
https://docs.microsoft.com/en-us/archive/blogs/cyrusn/dynamic-help
Developing a 2D game in Java ME - Part 3

This article shows how to use Java ME's low-level interface classes to create the game screen and access key presses. It is the third article in the series.

The main game loop redraws the screen and throttles the update rate:

    // ...
    updateGameScreen(getGraphics()); // redraw screen (call name reconstructed; the listing is garbled here)
    this.flushGraphics();
    // controls at which rate the updates are done
    try {
        Thread.sleep(10);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    // ...
    midlet.endGame(lifes, score, time);

The game elements are:

- The ball moves around the screen; when hit with the pad it bounces in the other direction.
- The bricks are static blocks in the upper part of the screen. When they are hit by the ball they disappear.

The goal of the game is to make the bricks disappear as fast as possible, and not let the ball get away through the bottom of the screen.

Each game element extends a common Entity class, which includes:

- paint(): where the Graphics class is used to draw the entity.
- collided(): helper function that allows checking collisions between entities.

Next, extend the class Entity for each game element and implement the update and paint methods. For the ball:

public class Ball extends Entity {

    public int radius = 2;

    public Ball(int radius) {
        this.radius = radius;
        width = radius * 2;
        height = radius * 2;
        // red color
        this.color = 0x00FF0000;
    }

    /**
     * paints the ball
     */
    public void paint(Graphics g) {
        g.setColor(color);
        g.fillArc(x, y, radius * 2, radius * 2, 0, 360); // trailing arguments reconstructed: full 360-degree arc
    }
}

For the pad, update() moves it according to its speed and clamps it at the world bounds:

    public void update() {
        // update position according to speed
        x += speedX;
        // check if the world bounds are reached
        if (x < minLimit) {
            x = minLimit;
        }
        if (x + width > maxLimit) {
            x = maxLimit - width;
        }
    }

And finally for the bricks:

public class Brick extends Entity {

    boolean active = true;

    public Brick(int color) {
        this.color = color;
    }

    public void paint(Graphics g) {
        // only paints brick if it's still active
        if (active) {
            g.setColor(color);
            g.fillRect(x, y, width, height);
        }
    }

    public void update() {
        // the bricks don't move
    }
}

// create a ball
ball = new Ball(4);
ball.x = getWidth() / 2;
ball.y = getHeight() / 2;
ball.speedX = 1;
ball.speedY = 1;

// set collision limits
wallMinX = 0;
wallMaxX = getWidth();
wallMinY = 0;
// to allow ball to get out of screen
wallMaxY = getHeight() + 4 * ball.radius;
public void updateGameState() {
    pad.update();
    ball.update();
    checkBallCollisionWithWalls();
    checkBallCollisionWithPad();
    checkBallCollisionWithBricks();
    checkBallOutOfReach();
    // check if all bricks are hit
    if (bricksHit == bricks.size()) {
        run = false;
    }
}

The game needs to react to keypad events to move the player pad.
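The collided() helper described above is essentially an axis-aligned bounding-box overlap test between two entities. Here is a hedged plain-Java sketch; the article's exact implementation is not shown in this copy, and the class and parameter names below are illustrative:

```java
public class Aabb {

    // Returns true when two axis-aligned rectangles overlap.
    // (x, y) is the top-left corner, as in the Entity fields above.
    public static boolean collided(int x1, int y1, int w1, int h1,
                                   int x2, int y2, int w2, int h2) {
        return x1 < x2 + w2 && x1 + w1 > x2
            && y1 < y2 + h2 && y1 + h1 > y2;
    }

    public static void main(String[] args) {
        // Ball of size 8x8 at (10, 10) vs. brick of size 20x10 at (15, 12): overlap
        System.out.println(collided(10, 10, 8, 8, 15, 12, 20, 10)); // true
        // Same ball vs. a brick far to the right: no overlap
        System.out.println(collided(10, 10, 8, 8, 100, 10, 20, 10)); // false
    }
}
```

The same test covers ball-vs-pad and ball-vs-brick checks in updateGameState(); only the rectangles passed in differ.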
http://developer.nokia.com/community/wiki/index.php?title=Developing_a_2D_game_in_Java_ME_-_Part_3&oldid=168762
Help: Mojo Canvas, Glyph Navigator Idea

Hi all, I am trying to make a zoomed-in preview that has similar functionality to Illustrator's Navigator panel. Its purpose is so that I can have a permanent crop of an enlarged area of a glyph whilst I am editing it. I should be able to control the level of zoom, and it should be draggable (you might call this panning) so I can inspect other areas of the glyph at the same magnification. Are there native ways of the canvas to achieve this? Perhaps there's something like canvas.scale()?

When it comes to moving around the canvas, I assumed it would scroll if the canvas is bigger than the window, but this does not seem to be the case even with the relevant keyword args set to true, at least in my test. I also tried acceptsMouseMoved=True (as it sounds useful) but this is an unexpected keyword. I also tried to do something with the mouseDragged() method but I am not sure how to use it correctly. I would appreciate some guidance on this. I am receiving the NSEvent containing dragging data, but I don't know how to do something useful with it.

I have attached my first test of implementing this idea. If I have done something wrong, please let me know. I haven't changed much from the canvas API example it is based on.

My apologies, I cannot attach the .py file. Here is a gist:

- you can transform the canvas with a drawBot-style API; see scale(value), that will do it
- acceptsMouseMoved is very intensive, as the canvas would send notifications whenever the mouse has moved, instead of the sequence mouseDown mouseDrag mouseUp; but you will need mouseMoved too
- you will need to check for the notifications glyphWindowWillOpen, glyphWindowWillClose and currentGlyphChanged to be able to make such a panning panel
- each glyph window has a glyph view, and a handy method is _getMousePosition(nsEventObj), now private, but I can open that in next versions... this is converting the NSEvent mouse positions to glyph space coordinates
I guess from here you should be able to make the puzzle, otherwise ask. Enjoy!

And for acceptsMouseMoved, that seems to be a typo in the docs. Your controller class needs:

def acceptsMouseMoved(self):
    return True

Hi Frederik, thank you for your response. I have been trying your suggestions and have had success with the use of scale, but am still struggling to make sense of mouseDown mouseDrag mouseUp relative to the canvas. From what I have tried, using those observers in the usual way works for the mouse position relative to the current glyph window, but not relative to my canvas window. I can get the 'raw' NSEvent of mouseDown relative to my canvas, but I have no idea how to turn that into useful tuples etc.

I updated the Gist with comments and questions that explain what I mean, please take a look. I didn't understand what _getMousePosition(nsEventObj) is or how to use it. Could you explain further? I hope to learn a lot from this, thanks for your guidance.

PS: sorry I can't find the formatting instructions to make this post look as pretty as yours!

No problem, here are some hints and tricks to get it working. I hope this sets you in the right direction, good luck:

from mojo.canvas import Canvas
from mojo.drawingTools import *
import mojo.drawingTools as mojoDrawingTools
from vanilla import *
from mojo.events import addObserver, removeObserver
from defconAppKit.windows.baseWindow import BaseWindowController
import AppKit


class ExampleWindow(BaseWindowController):

    def __init__(self):
        self.size = 50
        self.offset = 0, 0
        self.w = Window((400, 400), minSize=(200, 200))
        self.w.slider = Slider((10, 5, -10, 22), value=self.size, callback=self.sliderCallback)
        self.w.canvas = Canvas((0, 30, -0, 400), canvasSize=(1000, 1000), delegate=self,
                               hasHorizontalScroller=True, hasVerticalScroller=True)  # ,acceptsMouseMoved=True)
        # acceptsMouseMoved=True sounds useful, but it is an unexpected keyword?
        # ----->>>> 'acceptsMouseMoved' is a method, return True to receive a mouseMoved callback in the delegate (see below)
        self.setUpBaseWindowBehavior()
        self.w.open()

        print help(mojoDrawingTools)  # module docs link broken
        # ----->>>>> euh funny :)
        print help(Canvas)

        ## HELP
        ## if I use these observers, I can only get information about the mouse when clicking in a glyph window,
        ## I want the information when I click in the canvas I created.
        ## If I don't use these observers and use the methods directly, I do get the event when clicking on the canvas,
        ## but it is an NSEvent; the information, for instance the location, is relative to this window, which is what I want.
        ## How can I get this information as python dicts or tuples or lists?
        addObserver(self, "_mouseMoved", "mouseMoved")
        addObserver(self, "_mouseDown", "mouseDown")
        # ----->>>>>
        # this object acts as a delegate for the canvas object
        # every event can be redirected to this object from the canvas object
        # this will provide you info about events within the canvas object
        # observers just observe a specific action and send a notification to a given method
        # <<<<<<-----

    def windowCloseCallback(self, sender):
        # when this window w is closed, remove the observers
        removeObserver(self, "mouseMoved")
        removeObserver(self, "mouseDown")
        # super I do not know what it's for, but it is in the example
        # ------>>>>> you need this as the BaseWindowController also does stuff while closing the window
        super(ExampleWindow, self).windowCloseCallback(sender)

    def sliderCallback(self, sender):
        self.size = sender.get()
        self.w.canvas.update()

    def draw(self):
        # offset the canvas
        x, y = self.offset
        translate(x, y)
        # set scale of canvas
        scale(self.size * 0.1)
        # print CurrentGlyph()
        if CurrentGlyph():
            # draw current glyph to canvas (will need to observe when current glyph changes, but this is just to test)
            drawGlyph(CurrentGlyph())

    def acceptsMouseMoved(self):
        return False

    # mouse down that is not from an observer gives an NSEvent for a click relative to the window
    def mouseDown(self, event):
        # ----->>>> see
        print type(event)
        view = self.w.canvas.getNSView()
        print view.convertPoint_fromView_(event.locationInWindow(), None)

    # mouse down from an observer gives info as a dict
    def _mouseDown(self, notification):
        print notification["event"]
        print notification["glyph"]
        print notification["point"]

    # mouse moved that is not from an observer gives an NSEvent for the mouse relative to the window
    def mouseMoved(self, event):
        print event.locationInWindow()

    # mouse moved from an observer gives info as a dict
    def _mouseMoved(self, notification):
        print notification

    def mouseDragged(self, event):
        # how to grab and drag the canvas? from sender I get information in an NSEvent,
        # how can I access that information and do something useful with it?
        print 'drag?', event
        x, y = self.offset
        self.offset = x + event.deltaX(), y - event.deltaY()
        self.w.canvas.update()


ExampleWindow()
http://forum.robofont.com/topic/422/help-mojo-canvas-glyph-navigator-idea/7
iostream and files larger than 4GB - Discussion in 'C++' started by Robert Kochem
http://www.thecodingforums.com/threads/iostream-and-files-larger-than-4gb.626902/
💬 Long Range Vehicle Controller - openhardware.io

Can you explain more of what this would be used for? I am guessing that it wouldn't be for directional motor control, as I don't see any kind of H-bridge drivers. H-bridge drivers would give it differential steering, though you do say "Includes tilt-compensated compass for fully automated hold-heading and/or course correction." I also don't see any programming header.

- CORTEX Systems (Hardware Contributor): It can be used to engage up to 3 thrusters, solenoids, or other unidirectional actuators. Speed is controlled by pulse width modulation. Directional control is achieved by varying the speeds of two identical motors rather than reversing motor direction. There is no programming header. The Atmega chip is in a socket, and can be easily removed for programming on a breadboard or Arduino socket. I am trying to keep the price and number of parts as low as possible because I will be using this with robotic ocean craft that may have a high rate of loss due to extreme weather.

@CORTEX-Systems My reason for the question was an old project I am resurrecting. I just started a forum post about it. I am basically looking to redo all of the electronics, and one of the things I have to think about is navigation. You mention "fully automated hold-heading and/or course correction". Do you do this using the LSM303? I would be curious to see some code.

- CORTEX Systems (Hardware Contributor): The mowbot looks pretty interesting -- I like the variety of sensors you incorporated. Here is some basic code you can use to test the course-correcting ability of the LSM303. The code will print "turn right" or "turn left" to the screen. After a set amount of time, the code will print "reverse" and set you on a new heading.
```cpp
#include <Wire.h>
#include <LSM303.h>

int target = 350;
int target2 = 180;
unsigned long timer = 60000;
boolean reverse = false;

LSM303 compass;

void setup() {
  Serial.begin(9600);
  Wire.begin();
  compass.init();
  compass.enableDefault();

  /* Calibration values; the default values of +/-32767 for each axis
     lead to an assumed magnetometer bias of 0. Use the Calibrate example
     program to determine appropriate values for your particular unit. */
  compass.m_min = (LSM303::vector<int16_t>){-485, -508, -291};
  compass.m_max = (LSM303::vector<int16_t>){+476, +456, +588};
}

void loop() {
  compass.read();

  /* When given no arguments, the heading() function returns the angular
     difference in the horizontal plane between a default vector and north,
     in degrees. The default vector is chosen by the library to point along
     the surface of the PCB, in the direction of the top of the text on the
     silkscreen. This is the +X axis on the Pololu LSM303D carrier and the
     -Y axis on the Pololu LSM303DLHC, LSM303DLM, and LSM303DLH carriers.

     To use a different vector as a reference, use the version of heading()
     that takes a vector argument; for example, use

       compass.heading((LSM303::vector<int>){0, 0, 1});

     to use the +Z axis as a reference. */
  int heading = compass.heading();
  float deviation = abs(target - heading);

  if (deviation > 1) {
    if (deviation >= 180) {
      if (heading < target) {
        Serial.println("turn left");
      }
      else {
        Serial.println("turn right");
      }
    }
    else {
      if (heading < target) {
        Serial.println("turn right");
      }
      else {
        Serial.println("turn left");
      }
    }
  }
  else {
    Serial.println("On Course");
  }

  Serial.println(heading);

  if (reverse == false) {
    if (millis() >= timer) {
      target = target2;
      reverse = true;
      Serial.println("Reverse");
    }
  }

  delay(100);
}
```

You can change the variables target, target2 and timer, the delay interval, and also the deviation threshold in the line of code:

```cpp
if (deviation > 1) {
```

You can further modify the code so that instead of (or in addition to) printing commands, the commands are sent to the motors.
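The shortest-turn decision in the sketch above is easy to test off-device. Here is the same logic rewritten as a plain Python function (an illustration only, not code for the ATmega):

```python
def turn_direction(heading, target, tolerance=1):
    """Decide which way to turn to reach `target` heading (0-359 degrees).

    Mirrors the Arduino sketch: when the absolute difference is 180 degrees
    or more, the shorter turn is the "wrapped" direction, so the left/right
    decision flips.
    """
    deviation = abs(target - heading)
    if deviation <= tolerance:
        return "on course"
    if deviation >= 180:
        # shorter to go the "wrong way" across the 0/360 boundary
        return "turn left" if heading < target else "turn right"
    return "turn right" if heading < target else "turn left"
```

For example, at heading 10 with a target of 350 the shorter turn is to the left, through north, rather than almost a full circle to the right.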
For smoother operation, you can average the deviation over several loops before course-correcting.

Nice, Thanks
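The thread above mentions that steering is done by varying the speeds of two identical motors rather than reversing them. Purely as an illustration of that mixing (a hypothetical helper, not part of the board's firmware), the idea can be sketched in Python:

```python
def differential_drive(throttle, steering):
    """Mix throttle (0..1) and steering (-1 = full left, +1 = full right)
    into two unidirectional PWM duty cycles (0..255).

    Turning is done by slowing the motor on the inside of the turn,
    never by reversing it, matching the no-H-bridge design.
    """
    throttle = max(0.0, min(1.0, throttle))
    steering = max(-1.0, min(1.0, steering))
    left = throttle * (1.0 + steering) if steering < 0 else throttle
    right = throttle * (1.0 - steering) if steering > 0 else throttle
    return int(255 * left), int(255 * right)
```

So full throttle with full right steering stops the right motor entirely while the left motor keeps pushing, which swings the craft to the right.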
https://forum.mysensors.org/topic/7850/long-range-vehicle-controller
The last major React-related topic we are going to look at is less about React and more about setting up your development environment to build a React app. Up until now, we've been building our React apps by including a few script files:

```html
<script src=""></script>
<script src=""></script>
<script src=""></script>
```

These script files not only loaded the React libraries, but they also loaded Babel to help our browser do what needs to be done when it encounters bizarre things like JSX:

To review what we mentioned earlier when talking about this approach, the downside is performance. As part of your browser doing all of the page-loading related things it normally does, it is also responsible for turning your JSX into actual JavaScript. That JSX to JavaScript conversion is a time-consuming process that is fine during development. It isn't fine if every user of your app has to pay that performance penalty. The solution is to set up your development environment so that your JSX to JS conversion is handled as part of your app getting built:

With this solution, your browser is loading your app and dealing with an already converted (and potentially optimized) JavaScript file. Good stuff, right?

Now, the only reason why we delayed talking about all of this until now is for simplicity. Learning React is difficult enough. Adding the complexity of build tools and setting up your environment as part of learning React is just not cool. Now that you have a solid grasp of everything React does, it's time to change that with this tutorial. In the following sections, we're going to look at one way to set up your development environment using a combination of Node, Babel, and webpack. If all of this sounds bizarre to you, don't worry. You'll be on a first-name basis with all of these tools by the end of it. Onwards!

Meet the Tools

Ok, it is time to move further away from generalities (and sweet diagrams). It is time to get serious...er.
It is time to meet the tools that we are going to be relying on to properly set up our development environment.

Node.js

For the longest time, JavaScript was something you wrote to have things happen in your browser. Node.js changes all of this. Node.js allows you to use JavaScript to create applications that run on the server and have access to APIs and system resources that your browser couldn't even dream of. It is basically a full-fledged application development runtime whose apps (instead of being written in Java, C#, C++, etc.) are built and run entirely on JavaScript. For our purposes, we are going to be relying on Node.js to manage dependencies and tie together the steps needed to go from JSX to JavaScript. Think of Node.js as the glue that makes our development environment work.

Babel

This one should be familiar to us! Simply put, Babel is a JavaScript transformer. It turns your JavaScript into…um...JavaScript. That sounds really bizarre, so let me clarify. If you are using the latest JavaScript features, older browsers might not know what to do when they encounter a new function/property. If you are writing JSX, well...no browser will know what to do with that! What Babel does is take your new-fangled JS or JSX and turn it into a form of JS that most browsers can understand. We've been using its in-browser version to transform our JSX into JavaScript all this time. In a few moments, you'll see how we can integrate Babel as part of our build process to generate an actual browser-readable JS file from our JSX.

Webpack

The last tool we will be relying on is webpack. It is known as a module bundler. Putting the fancy title aside, a lot of the frameworks and libraries your app includes have a lot of dependencies, where different parts of the functionality you rely on might only be a subset of larger components.
You probably don't want all of that unnecessary code, and tools like webpack play an important role in allowing you to include only the relevant code needed to have your app work. They often bundle all of the relevant code (even if it comes from various sources) into a single file:

We'll be relying on webpack to bundle up the relevant parts of the React library, our JSX files, and any additional JavaScript into a single file. This also extends to CSS (LESS/SASS) files and other types of assets your app uses, but we'll focus on just the JavaScript side here.

Your Code Editor

No conversation about your development environment can happen without talking about the most important tool in all of this, your code editor:

It doesn't matter whether you use Sublime, Atom, Visual Studio Code, TextMate, Coda, or any other tool. You will spend some non-trivial amount of time in your code editor to not just build your React app, but to also configure the various configuration files that Node, Babel, and webpack need.

It is Environment Setup Time!

At this point, you should have a vague idea of what we are trying to do...the dream we are trying to achieve! We even looked at the various tools that will play a role in making this dream a reality. Now, it is time for the hard work to actually make everything happen.

Setting up our Initial Project Structure

The first thing we are going to do is set up our project. Go to your Desktop and create a new folder called MyTotallyAwesomeApp. Inside this folder, create two more folders called dev and output. Your folder arrangement will look a little bit like the following:

What we are doing here is pretty simple.
Inside our dev folder, we will place all of our unoptimized and unconverted JSX, JavaScript, and other script-related content. In other words, this is where the code you are writing and actively working on will live.

Inside our output folder, we will place the result of running our various build tools on the script files found inside the dev folder. This is where Babel will convert all of our JSX files into JS. This is also where webpack will resolve any dependencies between our script files and place all of the important script content into a single JavaScript file.

The next thing we are going to do is create the HTML file that we will point our browser to. Inside the MyTotallyAwesomeApp folder, use your code editor to create a new HTML file called index.html with the following contents:

```html
<!DOCTYPE html>
<html>
<head>
  <title>React! React! React!</title>
</head>
<body>
  <div id="container"></div>
  <script src="output/myCode.js"></script>
</body>
</html>
```

Be sure to save your file after adding this content in. Now, speaking of the content, our markup is pretty simple. Our document's body is just an empty div element with an id value of container and a script tag that points to the final JavaScript file (myCode.js) that will get generated inside the output folder:

```html
<script src="output/myCode.js"></script>
```

Besides those two things, our HTML file doesn't have a whole lot going for it. If we had to visualize the relationship of everything right now, it looks a bit like this:

I've drawn a dotted outline around the myCode.js file in our output folder because that file doesn't exist there yet. We are pointing to something in our HTML that currently is non-existent, but that won't stay that way for long.
Once you have Node.js installed, test to make sure it is truly installed by launching the Terminal (on Mac), Command Prompt (on Windows), or equivalent tool of choice and typing in the following and pressing Enter:

```
node -v
```

If everything worked out properly, you will see a version number displayed that typically corresponds to the version of Node.js you just installed. If you are getting an error for whatever reason, follow the troubleshooting steps listed here.

Next, we are going to initialize Node.js on our MyTotallyAwesomeApp folder. To do this, first navigate to the MyTotallyAwesomeApp folder using your Terminal or Command Prompt. On OS X, this will look like the following:

Now, go ahead and initialize Node.js by entering the following:

```
npm init
```

This will kick off a series of questions that will help set up Node.js on our project. The first question will ask you to specify your project name. Hitting Enter will allow you to accept the default value that has already been selected for you. That is all great, but the default name is our project folder, which is MyTotallyAwesomeApp. If you hit Enter, because it contains capital letters, it will throw an error:

Go ahead and enter the lowercase version of the name, mytotallyawesomeapp. Once you've done that, press Enter. For the remaining questions, just hit Enter to accept all the default values.
The end result of all of this is a new file called package.json that will be created in your MyTotallyAwesomeApp folder:

If you open the contents of package.json in your code editor, you'll see something that looks similar to the following:

```json
{
  "name": "mytotallyawesomeapp",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}
```

Don't worry too much about the contents of this file, but just know that one of the results of you calling npm init is that you have a package.json file created with some weird properties and values that Node.js totally knows what to do with.

Installing the React Dependencies

What we are going to do next is install our React dependencies so that we can use the React and React DOM libraries in our code. If you are coming from a pure web development background, this is going to sound strange. Just bear with me on this. In your Terminal or Command Prompt, enter the following to install our React dependencies:

```
npm install react react-dom --save
```

Once you enter this, a lot of weird stuff will show up on your screen. What is happening is that the React and React DOM libraries (and the stuff that they depend on) are getting downloaded from a giant repository of Node.js packages. If you take a look at your MyTotallyAwesomeApp folder, you'll see a folder called node_modules. Inside that folder, you'll see a bunch of various modules (aka what Node.js calls what we mere mortals just call libraries). Let's update our visualization of our current file/folder structure to look as follows:

The list of modules you see right now is just the beginning. We'll be adding a few more by the time you reach the end of this, so don't get too attached to the number of items you see inside our node_modules folder :P

Adding our JSX File

Things are about to get (more!) interesting.
Now that we've told Node.js all about our interest in React, we are one step closer towards building a React app. We are going to further enter these waters by adding a JSX file that is a modified version of the example we saw in the Components tutorial. Inside our dev folder, using the code editor, create a file called index.jsx with the following code as its contents:

```jsx
import React from "react";
import ReactDOM from "react-dom";

var HelloWorld = React.createClass({
  render: function() {
    return (
      <p>Hello, {this.props.greetTarget}!</p>
    );
  }
});

ReactDOM.render(
  <HelloWorld greetTarget="everyone"/>,
  document.querySelector("#container")
);
```

Notice that the bulk of the JSX we added is pretty much unmodified from what we had earlier. The only difference is that what used to be script references for getting the React and React DOM libraries into our app has now been replaced with import statements pointing to the react and react-dom Node.js packages we added a few moments ago:

```jsx
import React from "react";
import ReactDOM from "react-dom";
```

Now, you are probably eagerly wondering when we can build our app and get it all working in our browser. Well, there are still a few more steps left. This is what the current visualization of our project looks like:

Our index.html file is looking for code from the myCode.js file, which still doesn't exist. We added our JSX file, but we know that our browser doesn't know what to do with JSX. We need to go from index.jsx in our dev folder to myCode.js in the output folder. Guess what we are going to do next?

Going from JSX to JavaScript

The missing step right now is turning our JSX into JavaScript that our browser can understand. This involves both webpack and Babel, and we are going to configure both of them to make this all work.

Setting up Webpack

Since we are in Node.js territory and both webpack and Babel exist as Node packages, we need to install them both just like we installed the React-related packages.
To install webpack, enter the following in your Terminal / Command Prompt:

```
npm install webpack --save
```

This will take a few moments while the webpack package (and its large list of dependencies) gets downloaded and placed into our node_modules folder. After you've done this, we need to add a configuration file to specify how webpack will work with our current project. Using your code editor, add a file called webpack.config.js inside our MyTotallyAwesomeApp folder:

Inside this file, we will specify a bunch of JavaScript properties to define where our original, unmodified source files live and where to output the final source files. Go ahead and add the following JavaScript into it:

```js
var webpack = require("webpack");
var path = require("path");

var DEV = path.resolve(__dirname, "dev");
var OUTPUT = path.resolve(__dirname, "output");

var config = {
  entry: DEV + "/index.jsx",
  output: {
    path: OUTPUT,
    filename: "myCode.js"
  }
};

module.exports = config;
```

Take a few moments to see what this code is doing. We defined two variables called DEV and OUTPUT that refer to folders of the same name in our project. Inside the config object, we have two properties called entry and output that use our DEV and OUTPUT variables to help map our index.jsx file to become myCode.js.

Setting up Babel

The last piece in our current setup is to transform our index.jsx file into regular JavaScript in the form of myCode.js. This is where Babel comes in. To install Babel, let's go back to our trusty Terminal / Command Prompt and enter the following Node.js command:

```
npm install babel-core babel-loader babel-preset-es2015 babel-preset-react --save
```

With this command, we install the babel-core, babel-loader, babel-preset-es2015, and babel-preset-react packages. Now, we need to configure Babel to work with our project. This is a two-step process. The first step is to specify which Babel presets we want to use.
There are several ways of doing this, but my preferred way is to modify package.json and add the following highlighted content:

```json
{
  "name": "mytotallyawesomeapp",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "babel-loader": "^6.2.4",
    "babel-preset-es2015": "^6.9.0",
    "babel-preset-react": "^6.5.0",
    "react": "^15.1.0",
    "react-dom": "^15.1.0",
    "webpack": "^1.13.1"
  },
  "babel": {
    "presets": [
      "es2015",
      "react"
    ]
  }
}
```

In the highlighted lines, we specify our babel object and set the es2015 and react preset values. The second step is to tell webpack about Babel. In our webpack.config.js file, go ahead and add the following highlighted lines:

```js
var webpack = require("webpack");
var path = require("path");

var DEV = path.resolve(__dirname, "dev");
var OUTPUT = path.resolve(__dirname, "output");

var config = {
  entry: DEV + "/index.jsx",
  output: {
    path: OUTPUT,
    filename: "myCode.js"
  },
  module: {
    loaders: [{
      include: DEV,
      loader: "babel",
    }]
  }
};

module.exports = config;
```

We added the module and loaders objects that tell webpack to pass the index.jsx file defined in our entry property through Babel to turn it into JavaScript. With this change, we've pretty much gotten our development environment set up for building a React app.

Building and Testing Our App

The last (and hopefully most satisfying) step in all of this is building our app and having the end-to-end workflow work. To build our app, go back to our Terminal / Command Prompt and enter the following:

```
./node_modules/.bin/webpack
```

This command runs webpack and does all the things we've specified in our webpack.config.js and package.json configuration files. Your output in your Terminal / Command Prompt will look something like the following:

Besides seeing something that vaguely looks like a successful build displayed in cryptic text form, go to your MyTotallyAwesomeApp folder.
Open your index.html file in your browser. If everything was set up properly, you'll see our simple React app displaying:

If you venture into the output folder and look at myCode.js, you'll see a fairly hefty (~700Kb) file with a lot of JavaScript made up of the relevant React, ReactDOM, and your app code all organized there.

Conclusion

Well...so that just happened! In the preceding many sections, we followed a bunch of bizarre and incomprehensible steps to get our build environment set up to build our React app. What we've seen is just a very small part of everything you can do when you put Node, Babel, and webpack together. The unfortunate thing is that covering all of that goes well beyond the scope of learning React, but if you are interested in this, you should definitely invest time in learning the ins and outs of all of these build tools. There are a lot of cool things you can do.
https://www.kirupa.com/react/setting_up_react_environment.htm
User:Pongo Version 2/Implementor of the Month/Archive
From Uncyclopedia, the content-free encyclopedia
< User:Pongo Version 2 | Implementor of the Month

July 2007

70.189.115.41 (Talk • Contribs (del) • Block (rem-lst-all) • Whois • City • Proxy? • WP Edits)
Score: 1 IP lover
- Nom and For adding good stuff to TextAdventurer. Marshal Uncyclopedian! Talk to me!
- Comment — I don't think s/he's made enough reasonable edits to justify giving him/her the award. S/he's made two okay-ish pages, six edits in total. I'm not sure if that's enough. Still, if you think s/he deserves the award, that's fine. ?|COMRADE_PONGO_V2|RUN_CMD|RUN_GSLYR 09:36, 1 July 2007 (UTC)
- Comment - TextAdventurer is gone. Just wanted you to know. The sub-pages, however, are another story. Maybe it could be moved to a userspace, and re-create the front article? --Trar (talk|contribs|grueslayer) 16:25, 25 July 2007 (UTC)

Zork Implementor L (Talk • Contribs (del) • Editcount • Block (rem-lst-all) • Logs • Groups)
Score: 5 retro zorkers
- Nomination (and for) — while he hasn't done a huge amount of implementing recently, he's the master of the art. ?|COMRADE_PONGO_V2|RUN_CMD|RUN_GSLYR 09:36, 1 July 2007 (UTC)
- For! Yeah! -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 18:31, 3 July 2007 (UTC)
- For --Sir OCdt Jedravent CUN UmP VFH PLS ACS WH 02:43, 5 July 2007 (UTC)
- F☭R, 10:26, 11 July 2007
- T-T-T-TIE BREAKER Ж Kalir, Crazy Indie Gamer (missile ponies for everyone!) 16:13, 19 July 2007 (UTC)
- Strong For -- Brigadier General Sir Zombiebaron 14:59, 29 July 2007 (UTC)

Conniving (Talk • Contribs (del) • Editcount • Block (rem-lst-all) • Logs • Groups)
Score: 4 Grueslayer PVP players being UTTERLY FUCKING ANNIHILATED
- Nomination & For - He created Grueslayer PVP, an idea I actually liked. 18:48, 1 July 2007 (UTC)
- N+4 MY VOTE STILL STANDS! --Lt. High Gen. Grue The Few The Proud, The Marines 16:18, 2 July 2007 (UTC)
- For! The first person to take an interest in The adventures of a grue. --Naughty Budgie 05:41, 3 July 2007 (UTC)
- For That bastard. Creating Grueslayer PvP and sending attack guards on me. I didn't even do anything to that piece of shit. If I ever see that dickhead in real life, i'll rip his heart out. Maybe i'll get him to be the first Implementor of the Month, bringing him unwanted attention and making vandals FLOCK to him. That'll teach him to do stuff like making final bosses look pretty and putting Pokemon and Oblivion references in, that fucking nerdy FAGGOT! Conniving 18:50, 4 July 2007 (UTC)
- Comment: When does voting end? Conniving 19:24, 4 July 2007 (UTC)
- Three Years. --Lt. High Gen. Grue The Few The Proud, The Marines 19:25, 4 July 2007 (UTC)
- That is a long way away. :( But it is your award. Conniving 04:00, 7 July 2007 (UTC)

August 2007

No nominees.

September 2007

Zombiebaron (Talk • Contribs (del) • Editcount • Block (rem-lst-all) • Logs • Groups)
- NF Truly a staple of the Game namespace, if not for entirely the reasons one would hope. However, he's had more contact with said namespace than most this past month, and it's undeniable that "deletion" is a form of "implementation." -Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 02:05, 29 September 2007 (UTC)
- For Well, the reason IS out of the box... I really hope that it doesn't set a precedent though. Conniving 12:11, 1 October 2007 (UTC)

October 2007

Todd Lyons (Talk • Contribs (del) • Editcount • Block (rem-lst-all) • Logs • Groups)
Score: Todd LyNONE
- Nomination and For, he's active, helpful, and tottally does whatever this is an award for. -- Brigadier General Sir Zombiebaron 01:40, 29 October 2007 (UTC)
- Sure, why not? -- 01:49, 29 October 2007 (UTC)
- Comment, he's a Game Implementor? He's only got less than ten edits. Conniving 02:29, 29 October 2007 (UTC)
- Against - you know why -:37, 29 October 2007 (UTC)
- Against - Forgive me. I just can't support this. There's got to be someone else out there that's done a bit more for our games other than reverting twice and using a bot to revert spelling mistakes. Conniving 23:59, 29 October 2007 (UTC)
- Conditional For If you vote for ME next time. Or nominate me. Look at New Zork i made. --Sir General Minister G5 FIYC UPotM [Y] #21 F@H KUN 16:15, 28 November 2007 (UTC)

Punyhuman (Talk • Contribs (del) • Editcount • Block (rem-lst-all) • Logs • Groups)
Score: +4 needles found
- Nomination - For making a fun game which can't be classified as just another text "adventure". Conniving 01:15, 30 October 2007 (UTC)
- Fore! Oh no! I can't find my golf ball in the haystack...OUCH!! Marshal Uncyclopedian! Talk to me!
- Comment - Conniving's "for" vote struck due to the fact Conniving's already voted. However, under new rules, nomination is still valid. —Comrade Pongo (V2) GS Implementor (Talk | Contribs | Award) 09:54, 1 November 2007 (UTC)
- Comment - I just want to point out that Zombiebaron admitted that he only voted for Todd Lyons because he wanted SOMEONE to win the award, and he doesn't care about his own nomination anymore since I pointed out my case and "against" vote. Conniving 15:33, 1 November 2007 (UTC)
- For Even though the game isn't that hard (and I didn't cheat), it's still a good time waster. -Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 16:23, 1 November 2007 (UTC)
- Yeah -RAHB 02:55, 5 November 2007 (UTC)
- Sure. I'll go with the bandwagon. --Hotadmin4u69 [TALK] 02:08, 24 November 2007 (UTC)

November 2007

No nominees.

December 2007

Cajek (Talk • Contribs (del) • Editcount • Block (rem-lst-all) • Logs • Groups)
Score: 2 cheaters
- Nom and For well, he did do Is Your Man Cheating On You? -- 19:22, 12 December 2007 (UTC)
- UPDATE and Game:Rules. • <-> (Dec 24 / 21:00)
- Well, if I'm the only one running Maybe that'll be my next text adventure, Game:Win • <-> (Dec 12 / 19:34)

January 2008

Pongo Version 2 (Talk • Contribs (del) • Editcount • Block (rem-lst-all) • Logs • Groups)
Score: 2 person who didn't follow skansam
- For -- Now PLEASE finish Game:MMORPG. Conniving 04:40, 1 February

February 2008

No nominees.

March 2008

No nominees.
http://uncyclopedia.wikia.com/wiki/User:Pongo_Version_2/Implementor_of_the_Month/Archive?oldid=4340432
I should be preparing my Revit API training in Warsaw, which is coming up next week, and instead I still find myself working on other more interesting topics such as performance profiling at the same time. I hope the training participants will find this interesting as well.

New Filtering Motivates Benchmarking

The reason why I am so urgently interested in this right now is that the Revit 2011 API provides a whole new filtering API, i.e. for extracting certain specific groups of elements from the Revit database. This is a task that almost every application faces. The new API is extremely powerful and flexible and provides a huge number of classes and methods that can be used and put together in many different ways to achieve the same result. I am very interested in discovering how to make optimal use of this, and you should probably be too. One important tool in order to be able to measure and compare the performance of different approaches is a profiler, which leads me to this post.

Quick and Slow Filters

Actually, I am rather getting ahead of myself here, because there are so many basic issues that should really be addressed and discussed first. One of the most fundamental ones is that the Revit 2011 filtering API provides a number of filter classes, and they are divided into quick and slow filters. A quick filter is fast and can process an element without fully expanding it in memory, whereas a slow filter needs to read the entire element data into memory before it can be processed and thus requires more time. Obviously, the trick in performant filtering in Revit 2011 is to apply as many and as specific quick filters as possible before resorting to the slow ones, if your search requires them at all. Once the filter has done its job, you have a collection of elements, and in some cases, you may want to postprocess these further to search for characteristics that are not directly supported by any filters, or harder to implement using them.
Anyway, we will discuss these topics more in depth real soon now. To give you a quick first impression of what Revit 2011 API filters can look like, here are two helper methods used in the code presented below. The first one returns the first family symbol found with the built-in category OST_StructuralColumns, which we use to create lots of new column instances in the model:

FamilySymbol GetColumnType()
{
  FilteredElementCollector columnTypes
    = new FilteredElementCollector( m_document );

  columnTypes.OfCategory( BuiltInCategory.OST_StructuralColumns );
  columnTypes.OfClass( typeof( FamilySymbol ) );

  return columnTypes.FirstElement() as FamilySymbol;
}

The second returns a list of all levels in the model:

IList<Level> GetLevels()
{
  FilteredElementCollector a
    = new FilteredElementCollector( m_document );

  a.OfCategory( BuiltInCategory.OST_Levels );
  a.OfClass( typeof( Level ) );

  return a.ToElements().Cast<Level>().ToList<Level>();
}

Both of these use quick filters exclusively.

Profiling Tool

So in order to enable you to immediately do some research on and profiling of the new filtering on your own, I want to get this basic profiling tool set up and available to you as soon as possible. This post was prompted by Marcelo Quevedo of hsbSOFT, starting with the following conversation:

[M] I am investigating performance, because we received a huge Revit file and our framing Revit command is spending too much time on it. I am using a very manual mode to identify the delays. I created a timer by using the QueryPerformanceCounter and QueryPerformanceFrequency methods from the Windows Kernel32.dll. I call the timer and add the seconds for each call to some of the Revit API functions. I tried to use the JetBrains dotTrace profiling tool for .NET, but it doesn't work with Revit 2011. If you know of a profiling tool that works with Revit 2011, please let me know.

[J] Thank you very much for your nice manual profiling tools and examples!
No, I do not know of a profiling tool that works for Revit, which is why I was curious. By the way, I am sure that you can simplify the calling and usage of the timer very significantly by some clever use of constructors and destructors.

[M] I followed your recommendations and changed the timer. Instead of using a clever destructor, I am using the System.IDisposable interface so that you can use the using statement to identify the delay of a source code portion. I attached a C# project in which this manual profiling tool is used. This project defines a simple Revit command that creates hundreds of structural columns and groups them in order to test the delay in various Revit API methods. Here are the resulting two groups: Here are the columns, which are only visible individually when we zoom in a bit closer: The performance timer implementation makes use of the CodeProject High-Performance Timer in C#.

[J] I cleaned it up a bit more:

- Modified the collection to use a dictionary instead of manually searching for entries by key.
- Rewrote the GetColumnType and GetLevels methods.
- Sorted the output by percentage of time.
Here is the implementation of the basic Timer class that we use:

public class Timer
{
  [DllImport( "Kernel32.dll" )]
  private static extern bool QueryPerformanceCounter(
    out long lpPerformanceCount );

  [DllImport( "Kernel32.dll" )]
  private static extern bool QueryPerformanceFrequency(
    out long lpFrequency );

  private long startTime, stopTime;
  private long freq;

  /// <summary>
  /// Constructor
  /// </summary>
  public Timer()
  {
    startTime = 0;
    stopTime = 0;

    if( !QueryPerformanceFrequency( out freq ) )
    {
      throw new Win32Exception(
        "high-performance counter not supported" );
    }
  }

  /// <summary>
  /// Start the timer
  /// </summary>
  public void Start()
  {
    Thread.Sleep( 0 ); // let waiting threads work
    QueryPerformanceCounter( out startTime );
  }

  /// <summary>
  /// Stop the timer
  /// </summary>
  public void Stop()
  {
    QueryPerformanceCounter( out stopTime );
  }

  /// <summary>
  /// Return the duration of the timer in seconds
  /// </summary>
  public double Duration
  {
    get
    {
      return ( double ) ( stopTime - startTime ) / ( double ) freq;
    }
  }
}

Marcelo implemented a PerfTimer class to add some syntactic sugar and an IDisposable wrapper to the Timer class, making it easier and more automatic to start and stop the timer for a specific call with a minimum of effort and coding. Here is the PerfTimer implementation:

public class PerfTimer : IDisposable
{
  private string _key;
  private Timer _timer;
  private double _duration = 0;

  /// <summary>
  /// Gets time in seconds
  /// </summary>
  public double Duration
  {
    get { return _duration; }
  }

  /// <summary>
  /// Performance timer
  /// </summary>
  /// <param name="what_are_we_testing_here">
  /// Key describing code to be timed</param>
  public PerfTimer( string what_are_we_testing_here )
  {
    _key = what_are_we_testing_here;
    _timer = new Timer();
    _timer.Start(); // starts the timer
  }

  void IDisposable.Dispose()
  {
    // When the using statement block finishes,
    // the timer is stopped, and the time is
    // registered
    _timer.Stop();
    _duration = _timer.Duration;
    TimeRegister.AddTime( _key, _duration );
  }
}

After preparing all this, I noticed the following comment on the CodeProject Timer class: "System.Diagnostics.Stopwatch class: .NET 2.0 now provides this functionality as part of the framework. See class: System.Diagnostics.Stopwatch in System.dll."
I rewrote the PerfTimer class to make use of the built-in stopwatch instead of reinventing the wheel, and it now looks like this:

public class PerfTimer : IDisposable
{
  #region Internal TimeRegistry class
  // . . .
  #endregion // Internal TimeRegistry class

  string _key;
  Stopwatch _timer;
  double _duration = 0;

  /// <summary>
  /// Performance timer
  /// </summary>
  /// <param name="what_are_we_testing_here">
  /// Key describing code to be timed</param>
  public PerfTimer( string what_are_we_testing_here )
  {
    _key = what_are_we_testing_here;
    _timer = Stopwatch.StartNew();
  }

  /// <summary>
  /// Automatic disposal when the using statement
  /// block finishes: the timer is stopped and the
  /// time is registered.
  /// </summary>
  void IDisposable.Dispose()
  {
    _timer.Stop();
    _duration = _timer.Elapsed.TotalSeconds;
    TimeRegistry.AddTime( _key, _duration );
  }

  public void Report()
  {
    TimeRegistry.WriteResults( _duration );
  }
}

The internal TimeRegistry class was initially defined by Marcelo and manages a collection of individual timer instances for measuring the time required by the various different Revit API methods. I pretty much rewrote it from scratch in various iterations.
At the end of the session, it reports the total times of the various operations, for instance like this:

-----------------------------------------------------------
 Percentage Seconds Calls Process
-----------------------------------------------------------
      2.76%    0.90  7200 Parameter.Set
      3.29%    1.07  1200 NewFamilyInstance
     15.44%    5.04     1 Creation.Document.NewGroup
     18.02%    5.89  1200 Document.Rotate
     59.19%   19.33  1201 Document.Regenerate
    100.00%   32.67     1 TOTAL TIME
-----------------------------------------------------------

The command mainline Execute method driving the whole operation now looks like this:

public Result Execute(
  ExternalCommandData commandData,
  ref string message,
  ElementSet elements )
{
  Result result = Result.Failed;

  try
  {
    m_document = commandData.Application.ActiveUIDocument.Document;
    m_createDoc = m_document.Create;

    // Keep track of the total time used:

    PerfTimer ptTotalTime = new PerfTimer( "TOTAL TIME" );

    using( ptTotalTime )
    {
      ModelColumns();

      using( PerfTimer pt = new PerfTimer( "Document.Regenerate" ) )
      {
        m_document.Regenerate();
      }
    }

    // Report all resulting time delays in a text file:

    ptTotalTime.Report();

    result = Result.Succeeded;
  }
  catch( System.Exception e )
  {
    message = e.Message;
    result = Result.Failed;
  }
  return result;
}

As you can see, it defines a top level performance timer measuring the total time, which includes all the detailed time measurements for the individual methods. All the real action takes place in the ModelColumns method. Some of the actions performed and timed by ModelColumns include:

- Creating a large number of column instances.
- Setting parameters on the column instances.
- Rotating the column instances.
- Creating groups.

You can see some of the others from the performance report displayed above.
Here is an example showing how easy it is to add a performance timer for a specific method, and which generates the corresponding entry for Document.Rotate in the report listed above:

using( PerfTimer profiler = new PerfTimer( "Document.Rotate" ) )
{
  rotated = m_document.Rotate( element, axis, dAngle );
}

Manual Regeneration Mode

Another interesting issue that Marcelo encountered concerns the regeneration mode. The regeneration mode for this command is set to Manual, which implies that Revit does not automatically regenerate the model after each modification. Instead, we are responsible for doing this ourselves manually by calling doc.Regenerate when required from within our plug-in:

[M] If I set the regeneration mode to Manual, I need to call SetLocationPoint a second time after the call to Rotate, otherwise the column is modelled in a wrong position.

[J] Try calling doc.Regenerate after modifying things before trying to read the results, or use automatic regeneration.

[M] If I call doc.Regenerate after creating each column it works fine, so it is not necessary to set the start point twice anymore. I cleaned it up a bit more and deleted the unused SetLocationPoint method. But doc.Regenerate is now called hundreds of times, which causes a new delay. Is it ok to do this, or do you suggest some other alternative?

[J] Yes, of course you should avoid calling doc.Regenerate hundreds of times if possible. You should only call it at all if you have the following situation:

- You have modified the model AND
- You need to query the modified model.

I would assume that it is better to avoid this, or, if not possible, to make all the modifications in one go, then call doc.Regenerate once only, and then perform all the queries at once in a single second step. This is a pure untested assumption at this time and needs testing and benchmarking.
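The batched alternative suggested in that last exchange can be sketched as follows; this is a hedged illustration only, and the variable names and the NewFamilyInstance overload are assumptions based on the scenario described, not code from the sample project:

```csharp
// Make all the modifications in one go ...
List<FamilyInstance> created = new List<FamilyInstance>();

foreach( XYZ p in columnLocations )
{
  created.Add( m_createDoc.NewFamilyInstance(
    p, columnType, level, StructuralType.Column ) );
}

// ... then call Regenerate once only ...
m_document.Regenerate();

// ... and finally perform all the queries in a single second step.
foreach( FamilyInstance column in created )
{
  LocationPoint lp = column.Location as LocationPoint;
  // the regenerated location data should be safe to read here
}
```

Whether this actually beats regenerating per element is exactly the kind of question the profiling tool above is meant to answer.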
Generic Programming with Anonymous Methods

I am really getting into these neat little anonymous methods that go so well with the new LINQ and generic methods. If you look at the little report above, you will note that I print the percentage of the total time used at the beginning of the line. I put it in that order to make it easy to sort the lines by percentage, since that is the most important aspect of the profiling results. I also wanted to add those separator lines and make them long enough to span the longest lines, but not too long. So what is the maximum line length? How can I ask the compiler to calculate the longest line length for me? Using a generic template algorithm and an anonymous method, this is really easy:

string header = " Percentage Seconds Calls Process";

int n = Math.Max( header.Length,
  lines.Max<string>( x => x.Length ) );

Rather sweet, isn't it? The Max template algorithm takes a functor which transforms each string element into an integer, and then returns the maximum integer value as its result. In this case, the functor is implemented using an anonymous or lambda function. You will be seeing a lot more of these in upcoming posts, since I like to use them for succinctly post-processing Revit API filter collector results. I have already benchmarked some variations and established that they are neither more nor less effective than other mechanisms. But once again I am getting ahead of myself and will slow down a bit. More on this anon.

Download

Here is the complete PerformanceProfiling source code and Visual Studio solution after the numerous iterations back and forth between Marcelo and Jeremy to perfect various aspects. Very many thanks to Marcelo for initiating the topic, sharing his code, the fruitful discussion and the many code improvement iterations!
I am very much looking forward to making use of this system to benchmark and compare the numerous different approaches that can be chosen to make use of the new Revit 2011 API functionality. I hope it will be useful for you as well, and may all your solutions be optimal.
http://thebuildingcoder.typepad.com/blog/2010/03/performance-profiling.html
Hi! I just started digging into Dragonruby. I'm really liking the straightforwardness of the API so far. The first thing I wanted to do was try out some simple spritesheet-based animation. So I had a look in mygame/documentation/05-sprites.md and found the "Sprite Sub Division / Tile" example there. But when I tried it out, with my own spritesheet, nothing got rendered to the screen. After some experimentation I found that if I don't set source_h, or I set it to `-1` like other parts of the docs suggest, then I get the all the way to the bottom edge of the image rendered, as expected. But then If I set it to a positive number like 32 then the image does not show up at all. Here's a small code example that demonstrates the issue for me: def tick args args.outputs.sprites << { x: 100, y: 100, w: 100, h: 100, path: "dragonruby.png", source_x: 0, source_y: 0, source_w: 32, source_h: 32 } end As is, this shows a blank screen, but if I remove the source_h line, then I see part of the dragonruby logo as expected.
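For context, the spritesheet animation this was building toward only needs source_x to slide across the sheet once the source rectangle renders correctly. A hedged sketch, assuming 32x32 frames laid out horizontally (the frame counts and file name are made up for illustration):

```ruby
def tick args
  # 4 frames, advancing every 8 ticks; plain integer division
  frame = (args.state.tick_count / 8) % 4

  args.outputs.sprites << {
    x: 100, y: 100, w: 100, h: 100,
    path: "spritesheet.png",
    source_x: frame * 32, source_y: 0,
    source_w: 32, source_h: 32
  }
end
```

Each tick picks one 32-pixel-wide column of the sheet, which is the usual frame-selection arithmetic for a horizontal strip.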
https://itch.io/t/857307/setting-source-h-makes-the-image-not-get-rendered
unlink, unlinkat - remove a directory entry relative to directory file descriptor

#include <unistd.h>

int unlink(const char *path);
int unlinkat(int fd, const char *path, int flag);

The unlink() function shall remove a link to a file. If path names a symbolic link, unlink() shall remove the symbolic link named by path and shall not affect any file or directory named by the contents of the symbolic link. Otherwise, unlink() shall remove the link named by the pathname pointed to by path and shall decrement the link count of the file referenced by the link. When the file's link count becomes 0 and no process has the file open, the space occupied by the file shall be freed and the file shall no longer be accessible. If one or more processes have the file open when the last link is removed, the link shall be removed before unlink() returns, but the removal of the file contents shall be postponed until all references to the file are closed.

The path argument shall not name a directory unless the process has appropriate privileges and the implementation supports using unlink() on directories.

Upon successful completion, unlink() shall mark for update the last data modification and last file status change timestamps of the parent directory. Also, if the file's link count is not 0, the last file status change timestamp of the file shall be marked for update.

The unlinkat() function shall be equivalent to the unlink() or rmdir() function except in the case where path specifies a relative path. In this case the directory entry to be removed is located relative to the directory associated with the file descriptor fd instead of the current working directory. Values for flag are constructed by a bitwise-inclusive OR of flags from the following list, defined in <fcntl.h>:

- AT_REMOVEDIR - Remove the directory entry specified by fd and path as a directory, not a normal file.

If unlinkat() is passed the special value AT_FDCWD in the fd parameter, the current working directory shall be used and the behavior shall be identical to a call to unlink() or rmdir() respectively, depending on whether or not the AT_REMOVEDIR bit is set in flag.

These functions shall fail and shall not unlink the file if:

- [EPERM] or [EACCES] - The S_ISVTX flag is set on the directory containing the file referred to by the path argument and the process does not satisfy the criteria specified in XBD Directory Protection.
- [EROFS] - The directory entry to be unlinked is part of a read-only file system.
The unlinkat() function shall fail if:

- [EEXIST] or [ENOTEMPTY] - The flag parameter has the AT_REMOVEDIR bit set and the path argument names a directory that is not an empty directory, or there are hard links to the directory other than dot or a single entry in dot-dot.

These functions may fail and not unlink the file if:

- [EBUSY] - [XSI] The file named by path is a named STREAM.
- [ETXTBSY] - The entry to be unlinked is the last directory entry to a pure procedure (shared text) file that is being executed.

The unlinkat() function may fail if:

- [EINVAL] - The value of the flag argument is not valid.

The following example fragment creates a temporary password lock file named LOCKFILE, which is defined as /etc/ptmp, and gets a file descriptor for it. If the file cannot be opened for writing, unlink() is used to remove the link between the file descriptor and LOCKFILE.

#include <sys/types.h>
#include <stdio.h>
#include <fcntl.h>
#include <errno.h>
#include <unistd.h>
#include <sys/stat.h>

#define LOCKFILE "/etc/ptmp"

int pfd;    /* Integer for file descriptor returned by open call. */
FILE *fpfd; /* File pointer for use in putpwent(). */
...
/* Open password lock file. If it exists, this is an error. */
if ((pfd = open(LOCKFILE, O_WRONLY | O_CREAT | O_EXCL,
    S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH)) == -1)
{
    fprintf(stderr, "Cannot open /etc/ptmp. Try again later.\n");
    exit(1);
}

/* Lock file created; proceed with fdopen of lock file so that
   putpwent() can be used. */
if ((fpfd = fdopen(pfd, "w")) == NULL)
{
    close(pfd);
    unlink(LOCKFILE);
    exit(1);
}

Replacing Files

The following example fragment uses unlink() to discard links to files, so that they can be replaced with new versions of the files. The first call removes the link to LOCKFILE if an error occurs. Successive calls remove the links to SAVEFILE and PASSWDFILE so that new links can be created, then remove the link to LOCKFILE when it is no longer needed.

#include <sys/types.h>
#include <stdio.h>
#include <fcntl.h>
#include <errno.h>
#include <unistd.h>
#include <sys/stat.h>

#define LOCKFILE "/etc/ptmp"
#define PASSWDFILE "/etc/passwd"
#define SAVEFILE "/etc/opasswd"
...
/* If no change was made, assume error and leave passwd unchanged.
*/
if (!valid_change)
{
    fprintf(stderr, "Could not change password for user %s\n", user);
    unlink(LOCKFILE);
    exit(1);
}

/* Change permissions on new password file. */
chmod(LOCKFILE, S_IRUSR | S_IRGRP | S_IROTH);

/* Remove saved password file. */
unlink(SAVEFILE);

/* Save current password file. */
link(PASSWDFILE, SAVEFILE);

/* Remove current password file. */
unlink(PASSWDFILE);

/* Save new password file as current password file. */
link(LOCKFILE, PASSWDFILE);

/* Remove lock file. */
unlink(LOCKFILE);

exit(0);

Applications should use rmdir() to remove a directory. Unlinking a directory is restricted to the superuser in many historical implementations for reasons given in link() (see also rename()). The meaning of [EBUSY] in historical implementations is "mount point busy". Since this volume of POSIX.1-2008 does not cover the system administration concepts of mounting and unmounting, the description of the error was changed to "resource busy". (This meaning is used by some device drivers when a second process tries to open an exclusive use device.) The wording is also intended to allow implementations to refuse to remove a directory if it is the root or current working directory of any process.

The standard developers reviewed TR 24715-2006 and noted that LSB-conforming implementations may return [EISDIR] instead of [EPERM] when unlinking a directory. A change to permit this behavior by changing the requirement for [EPERM] to [EPERM] or [EISDIR] was considered, but decided against since it would break existing strictly conforming and conforming applications. Applications written for portability to both POSIX.1-2008 and the LSB should be prepared to handle either error code.

The purpose of the unlinkat() function is to remove directory entries in directories other than the current working directory without exposure to race conditions. Any part of the path of a file could be changed in parallel to a call to unlink(), resulting in unspecified behavior.
By opening a file descriptor for the target directory and using the unlinkat() function it can be guaranteed that the removed directory entry is located relative to the desired directory.

None.

close, link, remove, rename, rmdir, symlink

XBD Directory Protection, <fcntl.h>, <unistd.h>

XSH/TC1-2008/0693 [461], XSH/TC1-2008/0694 [324], XSH/TC1-2008/0695 [278], and XSH/TC1-2008/0696 [278] are applied.
http://pubs.opengroup.org/onlinepubs/9699919799/functions/unlink.html
Opened 7 years ago
Closed 6 years ago
Last modified 5 years ago

#12787 closed (fixed)

TemplateDoesNotExist exception does not report the correct template_name

Description

When calling templateA, which {% include %}s templateB that does not exist, the exception reports that templateA does not exist instead of templateB. Version (1, 1, 0, 'final', 0) reported the correct template_name.

### This is how 1.1.0 reported templates that did not exist when called by {% include %} in another template. _order_detail.html included in order_detail.html located at /orders/650/

TemplateDoesNotExist at /orders/650/
_order_detail.html
Request Method: GET
Request URL: somewhere/outside/orders/650/
Exception Type: TemplateDoesNotExist
Exception Value: _order_detail.html
Exception Location: /usr/lib/python2.4/site-packages/django/template/loader.py in find_template_source, line 74
Python Executable: /usr/bin/python
Python Version: 2.4.3

## This is how trunk reports the same situation. Notice here that it reports the actual template being called by the view as not existing, instead of the included template.
Here I included _order_detail.html (does not exist) in po_form.html

pos/po_form.html
Request Method: GET
Request URL: somewhere/else/pos/new/
Exception Type: TemplateDoesNotExist
Exception Value: pos/po_form.html
Exception Location: /home/trigeek38/lib/python2.5/django/template/loader.py in find_template, line 125
Python Executable: /usr/local/bin/python
Python Version: 2.5.4
Python Path: ['/home/trigeek38/lib/python2.5/html5lib-0.11.1-py2.5.egg', '/home/trigeek38/lib/python2.5/pisa-3.0.30-py2.5.egg', '/home/trigeek38/lib/python2.5/django_pagination-1.0.5-py2.5.egg', '/home/trigeek38/lib/python2.5', '/home/trigeek38/webapps/django_trunk/lib/python2.5', '', '/usr/local/lib/python2.5/site-packages/PIL', '/home/trigeek38/webapps/django_trunk/projects/', '/home/trigeek38/webapps/django_trunk/projects/']
Server time: Thu, 4 Feb 2010 21:43:40 -0500

Template-loader postmortem

Django tried loading these templates, in this order:

* Using loader django.template.loaders.filesystem.Loader:
* Using loader django.template.loaders.app_directories.Loader:

Attachments (4)

Change History (18)

comment:1 Changed 7 years ago by

comment:2 Changed 7 years ago by

comment:3 Changed 7 years ago by

comment:4 Changed 7 years ago by

this patch fixes it, but my testing suite is currently broken otherwise on this machine. if someone else tests it should work, if not then i will rerun tomorrow after fixing my testing suite. the problem is that code after version 1.1 throws away the real exception and creates its own based on the template name being used. this patch is slightly uglier, but it's simple and it works. and it's my first real python code!!! (even though it's only one line)

Changed 7 years ago by

patch file to fix the bug

comment:5 Changed 7 years ago by

scrapped the old patch.... the new patch(es) overload the previous TemplateDoesNotExist exception, to add whether or not that error should be thrown out. it is worded as to whether or not a super function calling it "should_keep_trying" in regards to this. any place that previously raised a TemplateDoesNotExist error needs to be rewritten to have an extra arg... this also is the case in the testing file, which has been modified accordingly... please excuse my code as i am not fully familiar with python and the coding conventions used by the django community.

Changed 7 years ago by

template patches (complete)

Changed 7 years ago by

comment:6 Changed 7 years ago by

jkatzer: In the future please attach a single diff file created by svn diff from the root of the tree. That's much easier to deal with than a bunch of individual .diff files zipped up.

I'm uncomfortable with the originally proposed way of fixing this, so I've attached an alternative approach. The root cause of this bug is similar to #12992. During load_template, the load of the requested template may succeed but then get_template_from_string may raise TemplateDoesNotExist if the loaded template can't be compiled due to some other template not existing. That TemplateDoesNotExist, raised by load_template, is interpreted by its caller to mean that the requested template doesn't exist, leading to an erroneous report of the actual problem on the debug page.

Attached patch changes the loader code to fall back to returning the template source and name, as it used to, if it was able to find the requested template but attempting to compile it raises a TemplateDoesNotExist. Thus the loader will only raise TemplateDoesNotExist if the specifically-requested template does not exist. That prevents this code:

for loader in template_source_loaders:
    try:
        source, display_name = loader(name, dirs)
        return (source, make_origin(display_name, loader, name, dirs))
    except TemplateDoesNotExist:
        pass
raise TemplateDoesNotExist(name)

from moving on and trying other loaders, then ultimately ending with TemplateDoesNotExist(name), which mis-identifies the template that does not exist.

That change alone is not sufficient to fix the problem, though. The extends node get_parent method catches TemplateDoesNotExist and turns it into a TemplateSyntaxError stating that the template specified to be extended does not exist. Prior to r11862, the call covered by the try/except was a find_template_source call, which would only raise TemplateDoesNotExist if that specific template does not exist. In r11862 the call inside the try/except was changed to get_template, which both finds the source and compiles it, and so may raise TemplateDoesNotExist if some other template needed to compile the extended template does not exist. We could either put the code here back the way it was before r11862 or remove this try/except entirely. The attached patch does the latter, because I don't really see what additional value is added by the different message ("... can't be extended because it doesn't exist") over a plain template does not exist message.

There is a test in the patch, but it's not really sufficient since it does not test both the base loader case and the cached loader case. But it does illustrate when the problem arises.

Changed 7 years ago by

comment:7 Changed 7 years ago by

comment:8 Changed 7 years ago by

thanks for the advice... i was at a code sprint, and asked around for the best way to fix it and that's what we came up with....

comment:9 Changed 6 years ago by

This fix doesn't work - the original problem remains.
The original exception (found in loader.py, also in changeset:12792):

raise TemplateDoesNotExist(name)

that's the correct line of code; however, in the same file (and same changeset:12792):

except TemplateDoesNotExist:
    # If compiling the template we found raises TemplateDoesNotExist, back off to
    # returning the source and display name for the template we were asked to load.
    # This allows for correct identification (later) of the actual template that does
    # not exist.
    return source, display_name

the exception is caught but the value is not used, and so we "lose" the exception. However, since we try to render it again, it's thrown again at the same place. This time, it's caught, but there's an error there: the original call looks like this:

def select_template(template_name_list):
    "Given a list of template names, returns the first that can be loaded."
    for template_name in template_name_list:
        try:
            return get_template(template_name)
        except TemplateDoesNotExist:
            continue
    # If we get here, none of the templates could be loaded
    raise TemplateDoesNotExist(', '.join(template_name_list))

and so the exception caught there is not because the original template was not found, but because templateB was not found; that is never recorded and identified. A possible solution:

def select_template(template_name_list):
    "Given a list of template names, returns the first that can be loaded."
    error_list = []
    for template_name in template_name_list:
        try:
            return get_template(template_name)
        except TemplateDoesNotExist, e:
            if e.message not in template_name_list:
                error_list.append(e.message)
            continue
    # If we get here, none of the templates could be loaded
    template_name_list += error_list
    raise TemplateDoesNotExist(', '.join(template_name_list))

we need to make sure the reason for the TemplateDoesNotExist exception is not "just" looking for templateA in a wrong directory (the external loop). A different approach would be to create a SubTemplateDoesNotExist exception type.

comment:10 Changed 6 years ago by

comment:11 follow-up: 12 Changed 6 years ago by

The problem identified in this ticket was fixed, really. The fix includes tests, which failed before the code change and are still passing on current trunk:

~/django/trunk/tests --> ./runtests.py --settings=testdb.sqlite -v2 templates.Templates.test_extends_include_missing_baseloader templates.Templates.test_extends_include_missing_cachedloader
Importing application templates
Creating test database for alias 'default' (':memory:')....
Creating test database for alias 'other' ('other_db')...
Destroying old test database 'other'....
test_extends_include_missing_baseloader (regressiontests.templates.tests.Templates) ... ok
test_extends_include_missing_cachedloader (regressiontests.templates.tests.Templates) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.004s

OK
Destroying test database for alias 'default' (':memory:')...
Destroying test database for alias 'other' ('other_db')...

Further, if you doubt the tests: if I change one of my current projects to have the problem noted in the original description -- one existing template including a 2nd existing template that in turn attempts to include a 3rd template that does not exist -- the debug page I get correctly identifies the template that is missing. You may be seeing a bug, but it is not this exact bug. Better than trying to show how the code change made here is wrong would be a test case that shows the case you are running into where a problem exists. And that should go into its own new ticket (that perhaps references this one), because this specific one is fixed.

comment:12 Changed 6 years ago by

The issue that was fixed here + the tests are for the wrong thing: an extended template that is missing an include, and not a normal template that is missing an include - the tests are also testing a different scenario.

The tested template has:

{% extends "broken_base.html" %}

while it should have:

{% include "just/something.html" %}

comment:13 Changed 6 years ago by

Re-reading now, I find the original description inconclusive as to which case it was reporting -- depends on what was meant by "call": render or extend. I don't remember if I tried both while initially looking at the ticket and chose to focus on the extends case because it was a superset of the other, or if I just assumed "call" meant extend. In some brief tests now, running against r12791/r12792 (right before and after the fix), if I switch things around to be just a plain include of a missing template, I see the wrong template reported before the fix and the correct one after the fix, so the case you mention here was also fixed by this code change. I do also believe that case is a subset of the extends case that is actually tested, so I don't believe either the tests or the fix here are "wrong". Perhaps the problem you are encountering is the one identified in #15502, which involves a similar problem when a list of templates is provided. Sorry, I did not consider that case when looking at this ticket (there was no mention of lists of templates being tried), so that case was not fixed by this ticket.

comment:14 Changed 5 years ago by

Milestone 1.2 deleted

(Reformatted description. Please use preview.)
https://code.djangoproject.com/ticket/12787
Our focus here is the difference between Python lists and tuples. Often confused due to their similarities, these two structures are substantially different. A tuple is an assortment of data, separated by commas, which makes it similar to the Python list, but a tuple is fundamentally different in that a tuple is "immutable." This means that it cannot be changed, modified, or manipulated. A tuple is typically used specifically because of this property. A popular use for this is sequence unpacking, where we want to store returned data to some specified variables. Something like:

def example():
    return 15, 12

x, y = example()
print(x, y)

# in the above case, we have used a tuple and cannot modify it... and
# we definitely do not want to!

If you notice, the tuple had no brackets around it at all. If there are no enclosing brackets or braces of any type, then Python will recognize the data as a tuple. Tuples can also be written with parentheses, "(" and ")".

Next, we have the far more popular Python list. To define a list, we use square brackets. A Python list acts very much like an array in other languages like PHP. Here's an example of a list and an example use:

x = [1, 3, 5, 6, 2, 1, 6]

'''
You can then reference the whole list like:
'''
print(x)

# or a single element by giving its index value.
# index values start at 0 and go up by 1 each time
print(x[0], x[1])
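The immutability difference is easy to see directly: item assignment works on a list but raises a TypeError on a tuple.

```python
point = (3, 4)      # tuple: cannot be changed after creation
coords = [3, 4]     # list: fully mutable

coords[0] = 10      # fine -- lists support item assignment

try:
    point[0] = 10   # tuples do not
except TypeError as error:
    print("tuples are immutable:", error)
```

This is why tuples suit fixed records like returned coordinate pairs, while lists suit collections you expect to grow or edit.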
https://pythonprogramming.net/python-lists-vs-tuples/?completed=/making-modules/
After completing no small amount of refactoring, I'm pleased to announce a new release of Kamelopard, a Ruby gem for generating KML. KML, as with most XML variants, requires an awful lot of typing to write by hand; Kamelopard makes it all much easier by mechanically generating all the repetitive XML bits and letting the developer focus on content. An example of this appears below, but first, here's what has changed most recently: - All KML output comes via Ruby's REXML library, rather than simply as string data that happens to contain XML. This not only makes it much harder for Kamelopard developers to mess up basic syntax, it also allows examination and modification of the KML data using XML standards such as XPath. - Kamelopard classes now live within a module, preventing namespace collisions. This is important for any large-ish library, and probably should have been done all along. Previous to this, some classes had awfully strange names designed to prevent namespace collisions; these classes have been changed to simpler, more intuitive names now that collisions aren't a problem. - Perhaps the biggest change is the incorporation of a large and (hopefully) comprehensive test suite. I'm a fan of test-driven development, but didn't start off on the right foot with Kamelopard. It originally shipped with a Ruby script that tried a few examples and hoped it didn't crash; that has been replaced with a full RSpec-based test suite, including tests for each class and in particular, extensive test of the KML output to ensure it meets the KML specification. Run these tests from the Kamelopard source with the command rspec spec/* Now for some code. We recently got a data set containing several thousand locations, describing the movement of an aircraft on final approach and landing, with the request that we turn it into a Google Earth tour, where the viewer would follow the aircraft's path, flight simulator style. 
The actual KML result is over 56,000 lines, but the Ruby code is fairly simple:

require 'rubygems'
require 'kamelopard'
require 'csv'

CSV.foreach(ARGV[0]) do |row|
    time = row[0]
    lon = row[1].to_f
    lat = row[2].to_f
    alt = row[3].to_f
    p = Kamelopard::Point.new lon, lat, alt, :absolute
    c = Kamelopard::Camera.new(p, get_heading, get_tilt, get_roll, :absolute)
    f = Kamelopard::FlyTo.new c, nil, pause, :smooth
end

puts Kamelopard::Document.instance.get_kml_document.to_s

Along with some trigonometry and linear algebra to calculate the heading, tilt, and roll, and a CSV file of data points, the script above is all it took; the KML result runs correctly in Google Earth without further modification. Kamelopard has been published to RubyGems.org, so installation is simply:

gem install kamelopard

Give it a try!
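Kamelopard is Ruby, but the underlying point -- that mechanically generating repetitive XML beats typing it by hand -- holds in any language. As a rough, hypothetical illustration using only Python's standard library (this is not Kamelopard, and the helper function here is made up for the example):

```python
# A hypothetical mini-generator for KML Placemarks, showing how a short
# loop can produce arbitrarily many repetitive XML elements.
import xml.etree.ElementTree as ET

def kml_placemark(parent, name, lon, lat, alt):
    """Append a <Placemark> containing a <Point> to the given parent element."""
    pm = ET.SubElement(parent, "Placemark")
    ET.SubElement(pm, "name").text = name
    point = ET.SubElement(pm, "Point")
    # KML coordinates are written "longitude,latitude,altitude"
    ET.SubElement(point, "coordinates").text = f"{lon},{lat},{alt}"
    return pm

kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
doc = ET.SubElement(kml, "Document")
for i, (lon, lat, alt) in enumerate([(-122.1, 37.4, 30.0), (-122.2, 37.5, 45.0)]):
    kml_placemark(doc, f"point-{i}", lon, lat, alt)

print(ET.tostring(kml, encoding="unicode"))
```

Scale the input list to several thousand CSV rows and you get the same "56,000 lines of KML from a dozen lines of code" effect described above.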
http://blog.endpoint.com/2011_11_01_archive.html
I've been (re)searching for a PRNG on the web and came up with the following (excuse my syntax -- I'm a newbie in shading).

A few that depend on bitwise operations:

1. It supposedly returns normalized values:

#extension GL_EXT_gpu_shader4: enable

float rnd(vec2 v)
{
    int n = int(v.x * 40.0 + v.y * 6400.0);
    n = (n << 13) ^ n;
    return 1.0 - float((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0;
}

2.

#extension GL_EXT_gpu_shader4: enable

int LFSR_Rand_Gen(int n)
{
    n = (n << 13) ^ n;
    return (n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff;
}

3. This one belongs to George Marsaglia (hope I've written the code right):

#extension GL_EXT_gpu_shader4: enable

int rando(ivec2 p)
{
    p.x = 36969 * (p.x & 65535) + (p.x >> 16);
    p.y = 18000 * (p.y & 65535) + (p.y >> 16);
    return (p.x << 16) + p.y;
}

There's also xorshift, but it uses static vars. I can't enable the appropriate extension in my software, and I didn't want it anyway, as I like to keep things simple. I don't need such a large period / perfect distribution, either.

4. Here's one of unknown origin, the "one-liner", which looks suspicious because of the sin(), so I'll refrain from using it:

float rand(vec2 co)
{
    return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
}

5. Finally, the PRNG best fitted to my needs was (*rolling drums*) the Lehmer random number generator, which accepts different setup values, from which I chose:

int lcg_rand(int a)
{
    return (a * 75) % 65537;
}

The trouble with this last PRNG (and not only this one) is that the returned value must be saved as the seed for the next call. So how can it be saved? Is there a static qualifier or some workaround, so that the function could be called from within the fragment shader and each call would set the "static" seed for the next?

OR

Does anybody know of a fast, short PRNG which produces its random number based on, say, one of the texCoord floats? Or any other ideas? The whole idea is to generate fast 1D / 2D noise in under 10 lines of code.
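For what it's worth, outside GLSL the usual answer to the seed question is simply to thread the state through explicitly: each call returns both the value and the new seed, and the caller passes the seed back in. A minimal sketch in Python (not GLSL -- fragment shaders have no persistent statics, so there you would typically derive the seed from gl_FragCoord or a texCoord instead), using the same constants as the shader snippet above:

```python
# Minimal Lehmer (multiplicative congruential) generator with the state
# threaded explicitly instead of hidden in a static variable.
# Constants (multiplier 75, modulus 65537) match lcg_rand() above.

def lcg_rand(seed):
    """Return (value, new_seed); the caller must feed new_seed back in."""
    new_seed = (seed * 75) % 65537
    return new_seed, new_seed

seed = 12345
values = []
for _ in range(5):
    value, seed = lcg_rand(seed)
    values.append(value)

print(values)  # [8357, 36942, 18096, 46460, 11039]
```

The same pattern works in any stateless setting: the "static" seed is just an extra argument and an extra return value.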
http://www.opengl.org/discussion_boards/showthread.php/182830-simple-fast-PRNG-s?p=1255215