text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
device_find_child -- search for a child of a device #include <sys/param.h> #include <sys/bus.h> device_t device_find_child(device_t dev, const char* name, int unit); This function looks for a specific child of dev. with the given name and unit. If it exists, the child device is returned, otherwise NULL. device_add_child(9) This manual page was written by Doug Rabson. FreeBSD 5.2.1 June 16, 1998 FreeBSD 5.2.1
https://nixdoc.net/man-pages/FreeBSD/man9/device_find_child.9.html
CC-MAIN-2020-34
en
refinedweb
Remove an element from a vector Is there a simple way to remove an element from a Vec<T>? There's a method called remove(), and it takes an index: usize, but there isn't even an index_of() method that I can see. I'm looking for something (hopefully) simple and O(n). This is what I have come up so far (that also makes the borrow checker happy): let index = xs.iter().position(|x| *x == some_x).unwrap(); xs.remove(index); I'm still waiting to find a better way to do this as this is pretty ugly. Note: my code assumes the element does exist (hence the .unwrap()). vector::erase - C++ Reference, All the elements of the vector are removed using clear() function. erase() function on the other hand, is used to remove specific elements from the container or a vector::erase () 1. Run a loop till the size of the vector. 2. Check if the element at each position is divisible by 2, if yes, remove the element and decrement iterator. 3. Print the final vector. You can use the retain method but it will delete every instance of the value: fn main() { let mut xs = vec![1, 2, 3]; let some_x = 2; xs.retain(|&x| x != some_x); println!("{:?}", xs); // prints [1, 3] } vector erase() and clear() in C++, If you need to remove multiple elements from the vector, the std::remove will copy each, not removed element only once to its final location, while the vector::erase Removes from the vector either a single element (position) or a range of elements ([first,last)). This effectively reduces the container size by the number of elements removed, which are destroyed. Because vectors use an array as their underlying storage, erasing elements in positions other than the vector end causes the container to relocate all the elements after the segment erased to their new positions. Your question is under-specified: do you want to return all items equal to your needle or just one? If one, the first or the last? And what if there is no single element equal to your needle? And can it be removed with the fast swap_remove or do you need the slower remove? To force programmers to think about those questions, there is no simple method to "remove an item" (see this discussion for more information). Remove first element equal to needle // Panic if no such element is found vec.remove(vec.iter().position(|x| *x == needle).expect("needle not found")); // Ignore if no such element is found if let Some(pos) = vec.iter().position(|x| *x == needle) { vec.remove(pos); } Remove last element equal to needle Like the first element, but replace position with rposition. Remove all elements equal to needle vec.retain(|x| *x != needle); ... or with swap_remove Remember that remove has a runtime of O(n) as all elements after the index need to be shifted. Vec::swap_remove has a runtime of O(1) as it swaps the to-be-removed element with the last one. If the order of elements is not important in your case, use swap_remove instead of remove! Difference between std::remove and vector::erase for vectors , In this article, we will go through multiple ways to delete elements from a vector container in C++ STL like pop_back, pop_front, erase, clear, remove and more. Since std::vec.begin() marks the start of container and if we want to delete the ith element in our vector, we can use: vec.erase(vec.begin() + index); There is a position() method for iterators which returns the index of the first element matching a predicate. Related question: Is there an equivalent of JavaScript's indexOf for Rust arrays? And a code example: fn main() { let mut vec = vec![1, 2, 3, 4]; println!("Before: {:?}", vec); let removed = vec.iter() .position(|&n| n > 2) .map(|e| vec.remove(e)) .is_some(); println!("Did we remove anything? {}", removed); println!("After: {:?}", vec); } Different ways to remove elements from vector in C++ STL, Erasing an element from a vector. Ex: v.erase(v.begin()+4); (erases the fifth element of the vector v). erase(int start,int end): Removes the elements in the range std::remove does not actually erase the element from the container, but it does return the new end iterator which can be passed to container_type::erase to do the REAL removal of the extra elements that are now at the end of the container: std::vector<int> vec; // .. put in some values .. int int_to_remove = n; vec.erase(std::remove(vec.begin(), vec.end(), int_to_remove), vec.end()); If your data is sorted, please use binary search for O(log n) removal, which could be much much faster for large inputs. match values.binary_search(value) { Ok(removal_index) => values.remove(removal_index), Err(_) => {} // value not contained. } Vector-Erase, Since calling the erase() method on the vector element invalidates the iterator, special care needs to be taken while erasing an element. We can do that in many I just want to remove all the appearances of elements of A from the data vector. I gave the example [1,2] . The element 1 for example might appear 1000 (or not at all) times in data vector, and I want to delete all of them, and same for 2. Remove elements from a vector inside a loop in C++, vector::erase() is a library function of "vector" header, it is used to erase/delete elements from the vector, it either removes one element from a delete element from vector. Learn more about delete element from vector, cheat sheets vector::erase() function with example in C++ STL, Remove an element from C++ std::vector<> by index can be done by following way −Example Live Demo#include #include using namespace Removing an element from C++ std::vector<> by index?, pos, -, iterator to the element to remove. first, last, -, range of elements to remove. Type requirements. -. T must meet the requirements of The java.util.vector.remove (int index) method is used to remove an element from a Vector from a specific position or index.
http://thetopsites.net/article/53636248.shtml
CC-MAIN-2020-34
en
refinedweb
Java Exception is a mechanism to handle abnormal situations that may occur during program execution. Exceptional codes are encompassed within a try...catch block so that, if exception occurs in the course of program flow, it may be tackled programmatically rather than resulting in crashes. This article walks through some concepts, codes, and scenarios to broadly understand how it can be used effectively in Java programs. Java Exception The exception handling mechanism of Java is heavily influenced by the work Exception Handling for C++ of Andrew Koenig and Bjarne Stroustrup. The simplest way to realize the real concept behind the existence of the exception handling mechanism in programming languages is to understand 'divide by zero exception'. Simply put, what the program is supposed to do or behave in the course of the code if there is a situation where a number is divided by zero. Note that 'divide by zero' is a well-known mathematical concept which states that when any number is divided by zero, the result is undefined. So, if such a situation occurs in the code, the program is bound to terminate abnormally. But, not so, if there is a way to handle the situation and give the code, say, another chance, an ability to recover or terminate gracefully. Grossly, this is all about the concept of exception handling in Java. Now, let's try a very rudimentary example to illustrate a 'divide by zero' program without handling any exceptions. package com.mano.examples; import java.util.Scanner; public class Main { public static void main(String[] args) { do { Scanner input = new Scanner(System.in); System.out.printf("\nEnter numerator: "); int num = input.nextInt(); System.out.printf("\nEnter denominator: "); int denom = input.nextInt(); int quo = num / denom; System.out.printf("\nResult = " + quo); System.out.printf("\nDo you want to continue (y/n):"); String choice = input.next(); if(!choice.trim().equals("y")) break; }while(true); } } The output is as follows: Enter numerator: 10 Enter denominator: 0 Exception in thread "main" java.lang.ArithmeticException: / by zero at com.mano.examples.Main.main(Main.java:15) As we execute the program and give an input value of 0 as the denominator, the program terminates abnormally with ArithmeticException. The ArithmeticException is an extension of RuntimeException thrown when an exceptional arithmetic condition occurs; as we can see in our case, we tried to divide a number by zero. There are many such exception classes to take care of exceptional situations in Java program, ArithmeticException is just one among them for that specific purpose. Now, the question is: Can we improve the code and give it another chance to execute? The answer is yes, we can do so by appropriately handling the exception. package com.mano.examples; import java.util.InputMismatchException; import java.util.Scanner; public class Main { public static void main(String[] args) { do { Scanner input = new Scanner(System.in); try { System.out.printf("\nEnter numerator: "); int num = input.nextInt(); System.out.printf("\nEnter denominator: "); int denom = input.nextInt(); int quo = num / denom; System.out.printf("\nResult = " + quo); }catch (InputMismatchException ex1){ System.err.println("Sorry, input does not match. Try again."); input.nextLine(); System.err.println("Enter only numbers."); }catch(ArithmeticException ex2){ System.err.println("Please enter a non-zero denominator. Try again."); ex2.printStackTrace(); }finally { System.out.printf("\nDo you want to continue (y/n):"); String choice = input.next(); if (!choice.trim().equals("y")) break; } }while(true); } } Notice that here in the code we handled multiple exceptions: one, ArithmeticException for the 'divide by zero' problem and another, InputMismatchException, to handle the situation where the input given is a strict integer and not a string or float value. Multiple catches take care of both the situation and displays appropriate messages. The finally block executes in either case, whether an exception occurs or not does not matter. Therefore, here we put the choice whether the user wants to try again or not. A 'y' input would make the loop continue again and any other input would cause the code to break from the infinite loop. The crux of the matter is that with, appropriate handling of the exception, we are able to get an abnormal situation under control and let the program terminate in a graceful manner. Ignoring Exceptions Exceptions are nothing but events signaled during a program's execution. The goal of this event is to obstruct the normal execution flow. In earlier programming languages such as C, a very rudimentary version of this kind was error codes and execution status checking. Exception handling took the idea further and provided a solid mechanism to tackle the situation. Java does not make it compulsory to encompass code within a try...catch block. We can literally throw every exception that occurs in the program without handling any of them as follows. public class Main { public static void main(String[] args) throws Throwable{ // ... } } What it does is simply ignore any exceptions that occur in the program and throws them. But, this is not a good idea. If we are to write a robust code, we should never ignore an exception. If possible, we must log them so that later on, when inspecting the log file, we may get crucial clues to pinpoint what is wrong with the code. There is really a rare circumstance where we may ignore any exceptions. Exceptions vs Errors All Java exception classes are a direct or indirect subclass of Exception, which is the top class in the exception hierarchy. We can extend this hierarchy to create our own custom exception classes. Throwable is the superclass of Exception and has two subclasses: Exception and Error. The Exception class, and its subclasses, is used to represent exceptional situations in a Java program that may be handled appropriately. The Error class and its subclasses, on the other hand, represent abnormal conditions that occur in the JVM. Exceptions are more frequent than Errors and the later should not be caught by applications. Exceptions are recoverable, whereas Errors are fatal. Checked and Unchecked Exceptions There are two classes of exceptions in Java: checked and unchecked. Checked exceptions represent exceptional situations that are beyond immediate control of the program such as memory, file system, or network exceptions. They are subclasses of Exception and must be handled explicitly. The most common unchecked exception is the IOException. Unchecked exceptions are those which signal error conditions about program logic, assumptions made such as null pointers, unsupported operations, invalid arguments, and so forth. They typically are a subclass of RuntimeException and may not be handled explicitly. The most common unchecked exception is the NullPointerException. Structurally, Java does not make any distinction between checked and unchecked exceptions. Try-with-resource The try-with-resource is better than the try...catch and finally blocks. In any major try...catch block, there is a block called finally, which typically contains the cleanup code. For every socket, file, or database connection opened, it must be closed appropriately, whether an exception occurs or not does not matter. Therefore, these cleanup codes are usually written within the finally block. As a result, the code in the finally block often looks very cluttered and dirty. Connection con = null; try { con = DriverManager.getConnection(DB_URL, USER_NAME,PASSWORD); // ... }catch(Exception ex){ ex.printStackTrace(); }finally{ if(con!=null){ try { con.close(); }catch (SQLException ex){ } } } In the preceding code snippet, we tried to close the database connection in the finally block. There is a better way to write the same code. The try-with-resource construct significantly simplifies the code structure and can be used to manage the resource appropriately as follows. try(Connection con = DriverManager.getConnection (DB_URL, USER_NAME,PASSWORD)){ Class.forName("com.mysql.jdbc.Driver"); // ... }catch(Exception ex){ ex.printStackTrace(); } Note that Java does many things behind the scenes with the try-with-resource to make the code work perfectly, including resource management. The code looks much cleaner and should be used more often than the try...catch and finally blocks. Lambdas and Exceptions Lambda expression in Java makes the code concise and clean, with less boilerplate code. But, an exception with the try...catch expression actually professes an elaborate description of the code. Hence, exceptions do not align well with lambda expression. However, there are provisions where Java code written in the form of lambda expression also can throw an exception, provided that the exception is compatible with the specification of the functional interface. The functional interface method must specify the exception type or its supertype. Therefore, exception handling with lambda expression is solely dependent upon how the functional interface decides to handle the exception. If it does not specify the exception, a lambda expression cannot throw it. Let's try a very rudimentary and simple example to prove the point. Try to compile the following code. It will not compile and will complain about Unhandled Exception:java.lang.Exception. package com.mano.examples; public interface FunctInt { int add(int a, int b); } package com.mano.examples; public class Main { public static void main(String[] args) { FunctInt f = (a,b)->{ int c = a + b; throw new Exception(); }; //... } } But, as we add throws Exception in the functional interface method, the code compiles without error. package com.mano.examples; public interface FunctInt { int add(int a, int b) throws Exception; } Conclusion These are some of the key points to ponder when writing exception handling effectively. Understand that exceptions are an indication of a problematic situation that may occur in the course of program execution. Handling them programmatically gives you an opportunity to resolve them at the earliest, possibly without causing an adverse domino effect or program crash. If handled appropriately, an application in execution may continue to execute without major consequence. Therefore, exception handling is a major step in writing fault-tolerating programs that can not only deal with its own problems and carry on its affairs but also terminate in a graceful manner.
https://mobile.developer.com/java/how-to-use-exceptions-effectively-in-java.html
CC-MAIN-2020-34
en
refinedweb
ReminderAlertNotification Class Contains information related to the reminder alert. Namespace: DevExpress.XtraScheduler Assembly: DevExpress.XtraScheduler.v20.1.Core.dll Declaration public class ReminderAlertNotification : ReminderBaseAlertNotification Public Class ReminderAlertNotification Inherits ReminderBaseAlertNotification Related API Members The following members accept/return ReminderAlertNotification objects: Remarks For each appointment, you can enable a reminder. A reminder can be invoked at a specific time period before an appointment's start time. When the reminder alerts the SchedulerStorageBase.ReminderAlert, the event fires. You can handle the event to perform specific actions when a reminder is activated. Multiple reminders can raise at the same time. To get these reminders use the event's ReminderEventArgs.AlertNotifications parameter. It holds a collection of notifications represented by ReminderAlertNotification objects. Each notification refers to the associated reminder (ReminderAlertNotification.Reminder) and has the ReminderBaseAlertNotification.Handled property. Setting the ReminderBaseAlertNotification.Handled property to true indicates that this reminder is handled, and no default processing is required. Otherwise, the reminder will be automatically switched off via the ReminderBase.Dismiss method. The ReminderAlertNotification.ActualAppointment property represents an appointment for which an alert is fired. If there is a series of recurrent appointments, a reminder is assigned to the recurrence pattern, and is invoked for each occurrence in a chain. This property points to the actual occurrence for which the reminder alerts.
https://docs.devexpress.com/CoreLibraries/DevExpress.XtraScheduler.ReminderAlertNotification
CC-MAIN-2020-34
en
refinedweb
IN THIS ARTICLE Disclaimer: This post was written using PubNub's React SDK V1. Check out our React SDK docs for an up-to-date reference. A crucial part of developing applications is focusing on user experience. Delivering a truly progressive app experience, companies have come to realize that their overall growth increases in every category, from engagement to conversions. We know this by looking at the relatively new outline set out by Google that says a progressive web application should follow these three simple characteristics: - Reliable - Fast - Engaging The key thing that we’re going to look at is how we can make our web app more engaging. Of course, we can’t make it perfect but we can start off by creating a project that can become a skeleton for a bigger project. Under the umbrella of creating an engaging web app, we have a couple of other examples as defined by Google. - Worthy of being on the home screen - Work reliably, no matter the network conditions - Increased engagement - Improved conversions At the moment we won’t be trying to cover all of these sections of a web app, instead, we’ll focus on how we can increase engagement by integrating push notifications into our web app. If you want to learn how to make your site more engaging and improve conversions, check out my article on creating a chat assistant. Another way of making an app worthy of being on a home screen is by having beautiful components, I go through how to make beautiful chart component with PubNub in this article. The full GitHub repo for this article is available here. Increasing Engagement with Push Notifications Let’s begin by creating a React template and changing into that directory. npx create-react-app pubnub-pwa && cd pubnub-pwa Now, inside of our project, we’re going to install the react client for PubNub. This is going to streamline our development. npm i pubnub-react@1 --save Now we can go src/App.js and import the PubNub react SDK. import PubNubReact from 'pubnub-react'; From here we’re going to now initiate our connections to our channels by adding in our publish and subscribe keys. But first, you’ll need to signup for PubNub. constructor(props) { super(props); this.pubnub = new PubNubReact({ publishKey: 'your pub key', subscribeKey: 'your sub key' }); this.pubnub.init(this); } componentWillMount() { this.pubnub.subscribe({ channels: ['PWA'] }); this.pubnub.getMessage('PWA', (msg) => { this.notifyMe(msg.message.text); }); } On top of this, we’re going to add a notification function which displays the incoming notifications. notify(message) { if (!("Notification" in window)) { alert("This browser does not support system notifications"); } } Currently, the function takes in a message and checks if the browser has the ability to show push notifications. The next step is to ask permissions if notifications are available or do something in the case that they aren’t. Adding that functionality then leads our function to look like this. notify(message) { if (!("Notification" in window)) { alert("This browser does not support system notifications"); } else if (Notification.permission === "granted") { if(typeof message === 'string' || message instanceof String){ var notification = new Notification(message); }else{ var notification = new Notification("Hello World"); } } else if (Notification.permission !== 'denied') { Notification.requestPermission(function (permission) { if (permission === "granted") { var notification = new Notification("Hello World"); } }); } } Now we can add a button to test out our functionality before we hook check out our notifications on PubNub. render() { return ( <div className="App"> <button onClick={this.notify}> Send Notification </button> </div> ); } Now when we refresh our page, we should get an alert that asks us to allow notifications. Let’s hit the allow button so we can see our beautiful notification. And just like that, we’ve got our notification showing up on the top right of our screen. So how does this work on PubNubs side? We can head over to the admin panel and go to the debug console to see our custom notification. Wrapping Up! Just like that we have our application be able to send push notifications. The next steps would be to be able to send these notifications in the background. You can do this by creating a service worker and pushing messaged from PubNub Functions.
https://www.pubnub.com/blog/add-realtime-web-notifications-to-your-progressive-web-app/
CC-MAIN-2020-34
en
refinedweb
I am working on n-tier application, where I have Data Access Layer. which is independent to any other application. I have create Class Library for .NET Core 1.1, I can see dependencies folder but not any config/JSON file. I want to know, Can I add AppSetting.JSON file in class library project? followed by how can I add connection string. I am aware to override ConfigureBuilder in DbContext class --> SQLServerConnection but I want to keep in separate config file and refer to these connection strings. Also to note, This class library will not link to ASP.NET application directly While searching on google, I have found following link but my question remains, where and how I add connection string for Class Library project? You've kind of got this the wrong way round. You dont want to have a config file for your DAL assembly, you simply want your DAL to know nothing at all about configuration! Typically, all you configure for a DAL is a conection string and this can easily be passed in on a constructor: public class MyRepository : IMyRepository { public MyRepository(string connectionString){ ... } } When you use this DAL in something like a webapi chances are you'll be using Dependency Injection - and it's here you would read the conn string from the web api's configuration public void ConfigureServices(IServiceCollection services) { var connectionString = Configuration.GetConnectionString("MyConnectionString"); services.AddScoped<IMyRepository>(sp => new MyRepository(connectionString)); }
https://entityframeworkcore.com/knowledge-base/45412366/how-to-add-config-file-in-class-library--followed-by-connection-string-for--net-core-1-1
CC-MAIN-2020-34
en
refinedweb
import "github.com/mjibson/go-dsp/window" Package window provides window functions for digital signal processing. Apply applies the window windowFunction to x. Bartlett returns an L-point Bartlett window. Reference: Blackman returns an L-point Blackman window Reference: FlatTop returns an L-point flat top window. Reference: Hamming returns an L-point symmetric Hamming window. Reference: Hann returns an L-point Hann window. Reference: Rectangular returns an L-point rectangular window (all values are 1). Package window imports 1 packages (graph) and is imported by 19 packages. Updated 2019-04-20. Refresh now. Tools for package owners.
https://godoc.org/github.com/mjibson/go-dsp/window
CC-MAIN-2020-34
en
refinedweb
I am trying to export a 3d Revit model family image for a thumbnail using the Revit API. I have tried to turn on the model edges so that they are displayed as darker lines and I have tried to turn on anti-aliasing so that lines are smoothed. I realise it is probably hopeless to switch on the shadows as this option isn't available in a family doc. I have exhausted all the image export options properties. The code below has the export image options and the enumerated Revit API properties I have managed to set so far. if (view3D != null) { views.Add(view3D.Id); var graphicDisplayOptions = view3D.get_Parameter(BuiltInParameter.MODEL_GRAPHICS_STYLE); // Settings for shaded with edges graphicDisplayOptions.Set(3); var detailLevelOptions = view3D.get_Parameter(BuiltInParameter.VIEW_DETAIL_LEVEL); //Settings for view detail, 3 = fine, 2=med, 1=coarse detailLevelOptions.Set(3); } } catch (Autodesk.Revit.Exceptions.InvalidOperationException) { } var ieo = new ImageExportOptions { //Export image file configuration settings FilePath = ImageFamModelFileName, FitDirection = FitDirectionType.Horizontal, HLRandWFViewsFileType = ImageFileType.BMP, ShadowViewsFileType = ImageFileType.BMP, ImageResolution = ImageResolution.DPI_600, ShouldCreateWebSite = false }; At this blog post there is a test case for Family documents and the View, please take a look. Below is a piece of it. #if !VERSION2014 var direction = new XYZ(-1, 1, -1); var view3D = doc.IsFamilyDocument ? doc.FamilyCreate.NewView3D(direction) : doc.Create.NewView3D(direction); #else var collector = new FilteredElementCollector( doc ); var viewFamilyType = collector .OfClass( typeof( ViewFamilyType ) ) .OfType<ViewFamilyType>() .FirstOrDefault( x => x.ViewFamily == ViewFamily.ThreeDimensional ); var view3D = ( viewFamilyType != null ) ? View3D.CreateIsometric( doc, viewFamilyType.Id ) : null; #endif // VERSION2014
http://m.dlxedu.com/m/askdetail/3/56b288b00d207beb7e1599483abe9002.html
CC-MAIN-2018-22
en
refinedweb
) geniusconnect 4.0.0.9 beta | metacafe video grabber | scratch video dvd | als scan angelina | command line clearmem | google magic photo | related searches email | flash player for | wit and wisdom | nokia multimedia converter | mobile antivirus software | contra 24.nes | shooting games for | 3gp compressor in Please be aware that Brothersoft do not supply any crack, patches, serial numbers or keygen for Yhalematik,and please consult directly with program authors for any problem with Yhalematik. Advertisement vue vs freemind | all in one converter | lg optimus chat c550 o3 | jpeg to wmv converter | ironman magazine for cheap | contra games for n95 8gb | password finder on mac | self extract spanned | anyvision webcam driver | good old sudoku download | latest gpass proxy software | copy navigation cd becker | unprotect rhapsody | sms animated icon | vbhtml namespace models | firefox 1.6.7.0 portable | spss 13 download | optimize tar command | glass mp3 player download mobile | microsoft sticky notes | vcd to mp3 converter | winzip evaluation version
http://www.brothersoft.com/yhalematik-download-191939.html
CC-MAIN-2018-22
en
refinedweb
Hello, would like to execute a script during CE startup . In manual it says to put a script named startup.py into current CityEngine workspace. So I made a simple script that prints something in CE's console to check if it is working: startup.py: from scripting import * ce = CE() if __name__ == '__main__': print ("hello world from startup") but when I startup CE it doesn't print anything to console. However, when I execute it, it works perfectly so it shouldn't contain any errors. but it doesn't get executed automatically during startup. Am I missing something? also where exactly should this script be located by meaning "current CityEngine workspace" ? is it <workspace_root>/scripting.py <workspace_root>/<project>/scripting.py <workspace_root>/<project>/<scripts>/scripting.py I placed it on all loactions and it doesn't work. Any help appreciated. Mark Hello Marek Dekys Thank you for your question. On startup the __name__variable is set to '__startup__' Please change your script to:
https://community.esri.com/thread/203258-startuppy-special-script-not-working
CC-MAIN-2018-22
en
refinedweb
In the last post, we showed how to create a simple XML document in which the data were entered from a DataGrid. We gave the code for constructing the XML as follows:; } The ‘library’ is fetched from a Windows resource defined in the XAML, with this resource being bound to the DataGrid (see earlier post for full details). In writing this code, we glossed over some of the details of how the XElement is built. In fact, we used several techniques in this code that could do with further explanation. The basic form of an XElement constructor is XElement(XName name, params object[] content); The first parameter gives the name of the XElement, which is used as the tag when writing out the XML. Usually, we’ll just enter a string here, and rely on the fact that the XElement constructor will convert this into an XName internally so we don’t need to worry about it. The second parameter uses C#’s params keyword, which allows a variable number (one or more) of arguments to be passed to the constructor. As the data type of the content is just ‘object’, any data type can be passed as the content of an XElement, and it’s here that the richness of the XElement class comes into play. There are 8 specific data types that are handled in special ways when passed in as the content. - A string is, as you might expect, just used as is as the content of the XML tag. (In fact, a string is converted into an XText object before it is used.) - XText: This is a special class which is added as a child node of the XElement, but its value, which is a string, is used as the XElement’s text content. - XCData: This allows insertion of the XML CData type, which consists of unparsed character data. Such strings may contain characters such as > and &, which ordinarily have a special meaning in XML syntax, but would be ignored here. - XElement: The content can be another XElement, which is added as a child node to the parent XElement. - XAttribute: This object is added as a child node, and represents an attribute of the parent node. - XComment: Allows a comment to be attached to the XElement. - IProcessingInstruction: Allows a processing instruction to be added to the XElement. (You don’t need to worry about these for most XML that you’ll write, but I may get back to them at some point.) - IEnumerable: This is the magic data type, since it allows collections of data, such as those produced by LINQ query operations, to be passed in as content. The elements in the collection are iterated over, and each element is treated as a separate parameter. We used this feature in the code above to insert a list of Book objects into the XML using a LINQ Select() call. In addition, you can also pass a null as the content (which does have its uses, though we won’t go into that here). Finally, if the content is any other data type, the XElement will call the ToString() for that data type and use that as the content. This can cause some confusion, since there are some other LINQ to XML classes (such as XDocument) that are used to attach properties to the XML file that will be accepted as content for XElement, but rather than having the expected effect, XElement will just call its ToString() method and use that as content. As a simple example, here’s some code that creates an XElement using most of the data types above as content: using System; using System.Xml.Linq; namespace LinqXml03 { class Program { static void Main(string[] args) { XElement document = new XElement("Library", new XComment("This is a test library"), new XElement("Program", new Program()), new XElement("Book", new XElement("Author", "Isaac Asimov"), new XElement("Title", "I, Robot"), new XAttribute("Pages", 357)), new XElement("Book", new XElement("Author", "Samuel R. Delaney"), new XElement("Title", "Nova"), new XAttribute("Pages", 293)), new XCData("This contains a > and a & character"), new XText("This also contains a > and a & character")); Console.WriteLine(document); } } } This produces the output: <Library> <!--This is a test library--> <Program>LinqXml03.Program</Program> <Book Pages="357"> <Author>Isaac Asimov</Author> <Title>I, Robot</Title> </Book> <Book Pages="293"> <Author>Samuel R. Delaney</Author> <Title>Nova</Title> </Book><![CDATA[This contains a > and a & character]]>This also contains a > and a & character</Library> The top level XElement has the name ‘Library’. Its first content is a comment, which is written with the <!–…–> delimiters. Next, we’ve added a content object of type Program (that is, the class in which this program is written). The output is produced as a normal XElement tag, but the ToString() method is called from the Program class since it’s not one of the data types that has special meaning as an XElement content. The default ToString() method for a class just produces that class’s full pathname, which in this case is LinqXml03.Program. Next, we add a couple of Book elements, each of which contains a couple of other XElements for the author and title. We’ve also added an XAttribute for the number of pages in the book. The last two lines demonstrate the difference between XCData and XText. The XCData reproduces the given text exactly, and encloses it within the <![…]]> delimiters used for CData. The XText places the text as the content of the Library tag, and translates special characters into the XML code, so that > become > and & becomes &. We’ve already seen an example of using IEnumerable in the code fragment at the top of this post.
https://programming-pages.com/tag/xelement/
CC-MAIN-2018-22
en
refinedweb
When building React applications, one thing developers don’t like to utilize is routing in React - usually because of the assumed learning curve involved. In this article, we are going to debunk that myth and show you how easy it is to implement routing and serving responsive routes in your React applications. In layman’s terms, responsive routing is pretty much serving different routes to users based on the viewport of their device. CSS media queries are usually used to achieve this. The only issue with this is the fact that with media queries, you’re restricted to either showing or not showing different elements by using the CSS props. With responsive routes, you can now serve entire separate views our React applications to different users based directly on their screen sizes. What we’ll build In this tutorial, we will see how to build a simple user dashboard application that serves different routes to users based on the size of their device screens. Table of Contents Prerequisites To follow through adequately, you’ll need the following: - NodeJS installed on your machine - NPM installed on your machine To confirm your installations, run the following command: node --version npm --version If you get their version numbers as results, then you’re good to go Getting Started Installing React This article is based on React you need to have it installed on your machine. To install React, run the following command: npm install -g create-react-app Once this is done, your have successfully installed React on your machine. And we can go ahead to create our new each application by running the command: create-react-app responsive-routing cd responsive-routing Next thing to do is to install the necessary modules we would need to successfully build this demo. These modules are the react-router-dom and react-media. We install this by running the command: npm install react-router-dom react-media Now, we can start the application by running the command: npm start Creating Navigation Component The Github logo in the center of the page serves as the navigation part of our application. Let's see how to make that. In your src/ folder, create a new folder called Nav and the following files as follows: cd src mkdir Nav cd Nav && touch index.js Nav.css You’ll need to add the Github logo and save it as logo.svgyou can download it from here Now, update the src/Nav/index.js file to look like this: // src/Nav/index.js import React from 'react'; import './Nav.css'; import logo from './logo.svg'; const Nav = () => ( <nav> <img src={logo} </nav> ); export default Nav; The navigation component has the following styling: /** src/Nav/Nav.css **/ nav { display: flex; justify-content: center; height: 50px; margin-bottom: 10px; } nav > img { display: block; width: 50px; height: auto; } Now, let render the Nav component. To do this, let’s edit the default src/App.js file to look like this: // src/App.js import React, { Component } from 'react'; import Nav from './Nav'; = App extends Component { render() { return ( <div> <Nav /> </div> ); } } export default App; Now, when we run our application using npm start and head over to the browser, we get the following: Creating the User Cards The user cards are responsible for displaying details of the user. Now, let’s see how to create this. In the src/ directory of your app, create a new Users folder and create the following files: mkdir Users cd Users && touch UsersCard.js UsersCard.css Edit the UsersCard.js file to look like this: // src/Users/UsersCard.js import React from 'react'; import {Link} from 'react-router-dom'; import './UsersCard.css' const UsersCard = ({ user, match }) => <Link to={`${match.url}/${user.id}`} <img src={user.avatar} <p className="users-card__name">{user.name}</p> <p className="users-card__username">@{user.username}</p> <div className="users-card__divider"></div> <div className="users-card__stats"> <div> <p>{user.followers}</p> <span>Followers</span> </div> <div> <p>{user.following}</p> <span>Following</span> </div> <div> <p>{user.repos}</p> <span>Repositories</span> </div> </div> </Link>; export default UsersCard; If you pay attention to the code snippet above, we used the Link component from the react-router-dom to allow the user navigate to view details of a single user when the card is clicked. So, for a given user card with an id of 10009 and the Link component will generate a URL like this: existing URL 10009represents the user id. All this information will be passed when the component is rendered. The component has the following styling: /** src/Nav/UsersCard.css **/ .card { border-radius: 2px; background-color: #ffffff; box-shadow: 0 1.5px 3px 0 rgba(0, 0, 0, 0.05); max-width: 228px; margin: 10px; display: flex; flex-direction: column; align-items: center; padding: 0; } .card img { width: 50px; height: auto; border-radius: 50%; display: block; padding: 15px 0; } .users-card__name { font-weight: 400; font-size: 16.5px; line-height: 1.19; letter-spacing: normal; text-align: left; color: #25292e; } .users-card__username { font-size: 14px; color: #707070; } .users-card__divider { border: solid 0.5px #efefef; width: 100%; margin: 15px 0; } .users-card__stats { display: flex; } .users-card__stats p { font-size: 20px; } .users-card__stats div { margin: 10px; text-align: center; } .users-card__stats span { color: #707070; font-size: 12px; } Listing all Users What we see above is a listing of users. To get our application to look like this, we need to first create a UsersList component. In the src/Users directory, create the following files: touch UsersList.js UsersList.css To display the UserCards in the above format, we need to do some heavy lifting - don’t worry, i’ll be your hypeman. Let’s edit the UsersList.js as follows. First, we make the necessary imports: // src/Users/UsersList.js import React from 'react'; import UsersCard from './UsersCard'; import './UsersList.css'; const listOfUsersPerRow = (users, row, itemsPerRow, match) => users .slice((row - 1) * itemsPerRow, row * itemsPerRow) .map(user => <UsersCard user={user} key={user.id} match={match} />); const listOfRows = (users, itemsPerRow, match) => { const numberOfUsers = users.length; const rows = Math.ceil(numberOfUsers / itemsPerRow); return Array(rows) .fill() .map((val, rowIndex) => ( <div className="columns"> {listOfUsersPerRow(users, rowIndex + 1, itemsPerRow, match)} </div> )); }; //... The listOfUsersPerRow and listOfRows functions work hand in hand to ensure that we have not more than the specified number cards on each row. Next thing to do is then use the functions to create the listing of the users as follows //src/Users/UsersList.js //... const UsersList = ({ users, itemsPerRow = 2, match }) => ( <div className="cards"> <h3 className="is-size-3 has-text-centered">Users</h3> {listOfRows(users, itemsPerRow, match)} </div> ); export default UsersList; The UsersList.css will look like this: /** src/Users/UsersList.css **/ .cards { margin-left: 20px; } .columns { margin-top: 0; } Creating the User Details View When a single user card is clicked from the listing of users, the single user card is displayed under a details section. Let’s see how to make this component. Create the following files in the src/Users directory: touch UsersDetails.js UsersDetails.css Now, let’s add the following to the UsersDetails.js file: // src/Users/UsersDetails.js import React from 'react'; import UsersCard from './UsersCard'; const UsersDetails = ({ user, match }) => <div> <h3 className="is-size-3 has-text-centered">Details</h3> <UsersCard user={user} match={match} /> </div>; export default UsersDetails; Creating the Dashboard Component The dashboard component is simple. We display the UserList and when a card is clicked, display the details on the side of the screen without having to reload the page. Let’s see how to make it work. Create a UsersDashboard.js file in the Users directory: touch UserDashboard.js Edit the UserDashboard.js to look as follows: // src/Users/UsersDashboard.js import React from 'react'; import { Route } from 'react-router-dom'; import UsersList from './UsersList'; import UsersDetails from './UsersDetails'; const UsersDashboard = ({ users, user, match }) => ( <div className="columns"> <div className="column"> <UsersList users={users} match={match} /> </div> <div className="column"> <Route path={match.url + '/:id'} render={props => ( <UsersDetails user={ users.filter( user => user.id === parseInt(props.match.params.id, 10) )[0] } match={match} /> )} /> </div> </div> ); export default UsersDashboard; In the above, we used the Route component provided by react-router-dom as a component to display the specific user detail when the card is clicked. Now, lets put this all together. Update the src/App.js file to look as follows: // src/App.js import React, { Component } from 'react'; import { Route, Redirect } from 'react-router-dom'; import Nav from './Nav'; import UsersList from './Users/UsersList'; import UsersDetails from './Users/UsersDetails'; import UsersDashboard from './Users/UsersDashboard'; import './App.css'; class App extends Component { state = { users: [ { id: 39191, avatar: '', name: 'Paul Irish', username: 'paulirish', followers: '12k', following: '1k', repos: '1.5k' }, //... other user data ] }; render() { return ( <div className="App"> <Nav /> <Route path="/dashboard" render={props => ( <UsersDashboard users={this.state.users} {...props} /> )} /> <Redirect from="/" to="/dashboard"/> <Redirect from="/users" to="/dashboard"/> </div> ); } } export default App; When we go back to the browser, we get the following: Note the difference in the URL when the user details are displayed Responsive Routing Here’s where it all gets interesting, when users visit this application, no matter the screen size, they get this same view and functionality. In full-blown applications, it’s good to give the users experiences they can enjoy properly and one way to do that is to serve them views that match their exact device sizes. We are now going to take a look at how to do this in our application. When visiting the application on a wide screen, the user is redirected to the /dashboard route of the application and when viewing on a smaller screen, the user should be directed to the /users route of the application. Let’s see how to do this. Update the src/App.js file to look like this: // src/App.js import React, { Component } from 'react'; import { Route, Switch, Redirect } from 'react-router-dom'; import Media from 'react-media'; import Nav from './Nav'; import UsersList from './Users/UsersList'; import UsersDetails from './Users/UsersDetails'; import UsersDashboard from './Users/UsersDashboard'; import './App.css'; class App extends Component { //set application state [...] render() { return ( <div className="App"> <Nav /> <Media query="(max-width: 599px)"> {matches => matches ? ( <Switch> <Route exact <Redirect from="/dashboard" to="/users"/> </Switch> ) [...] In the snippet above, we use the Media component to check the size of the screen. If the screen width is less than 599px, we set the set what we want to be displayed for different routes and also redirect the / and /dashboard routes to the /users route. Now, if the screen size is greater than 599px we go ahead to display the full user dashboard as we did earlier // src/App.js [...] : ( <Switch> <Route path="/dashboard" render={props => ( <UsersDashboard users={this.state.users} {...props} /> )} /> <Redirect from="/" to="/dashboard"/> <Redirect from="/users" to="/dashboard"/> </Switch> ) } </Media> </div> ); } } export default App; Now, when we visit our application, our app works like this: At this point, we can see that is a lot better to serve different routes based on screen sizes than using just media queries because you can now serve specially designed components to users based on their device sizes. Conclusion In this article, we saw an introduction to component-based routing with React and how to implement conditional rendering in your React applications. Here’s a link to the full Github repository. Feel free to leave a comment or suggestion below
https://scotch.io/tutorials/conditional-routing-with-react-router-v4
CC-MAIN-2018-22
en
refinedweb
. Simply put, an attacker can coerce a victims as described in HTTP filters: import play.api.http.HttpFilters import play.filters.csrf.CSRFFilter import javax.inject.Inject class Filters @Inject() (csrfFilter: CSRFFilter) extends HttpFilters { def filters = Seq getToken method. It takes an implicit RequestHeader, so ensure that one is in scope. import play.filters.csrf.CSRF val> The form helper methods all require an implicit token or request to be available in scope. This will typically be provided by adding an implicit RequestHeader parameter to your template, if it doesn’t have one already. = CSRFCheck { Action { = CSRFAddToken { Action { implicit req =>]) = CSRFCheck(action) } object GetAction extends ActionBuilder[Request] { def invokeBlock[A](request: Request[A], block: (Request[A]) => Future[Result]) = { // authentication code here block(request) } override def composeAction[A](action: Action[A]) = CSRFAdd: Custom Valid.
https://www.playframework.com/documentation/tr/2.4.x/ScalaCsrf
CC-MAIN-2018-22
en
refinedweb
Thanks Lucas, this makes sense. There is something that this patch is fixing and I'm not sure why. Maybe someone can shed some light: Advertising Using datapath from OVS master, and a setup where we have a physical interface connected to an OVS bridge (br-ex) connected to another OVS bridge (br-int) through a patch port, there's a lot of retransmissions of TCP packets when connecting from the host to a VM connected to br-int. The retransmissions seem to be due to a wrong checksum from the VM to the host and only after a few attempts, the checksum is correct and the host sends the ACK back. The packets I am sending using netcat are very small so there shouldn't be a problem with the MTU. However, could it be a side effect of this patch that the checksum gets now correctly received at the host? As a side not: if instead from connecting to the VM from the host I do it from a namespace where I have an OVS internal port connected to br-ex, then I don't see the checksum problems. Acked-by: Daniel Alvarez <dalva...@redhat.com> Tested-by: Daniel Alvarez <dalva...@redhat.com> On Thu, May 17, 2018 at 1:27 PM, <lucasago...@gmail.com> wrote: > From: Lucas Alvares Gomes <lucasago...@gmail.com> > > The commit [0] partially fixed the problem but in RHEL 7.5 neither > .{min, max}_mtu or 'ndo_change_mtu' were being set/implemented for > vport-internal_dev.c. > > As pointed out by commit [0], the ndo_change_mtu function pointer has been > moved from 'struct net_device_ops' to 'struct net_device_ops_extended' > on RHEL 7.5. > > So this patch fixes the backport issue by setting the > .extended.ndo_change_mtu when necessary. > > [0] 39ca338374abe367e28a2247bac9159695f19710 > --- > datapath/vport-internal_dev.c | 4 +++- > 1 file changed, 3 insertions(+), 1 deletion(-) > > diff --git a/datapath/vport-internal_dev.c b/datapath/vport-internal_dev.c > index 3cb8d06b2..16f4aaeee 100644 > --- a/datapath/vport-internal_dev.c > +++ b/datapath/vport-internal_dev.c > @@ -88,7 +88,7 @@ static const struct ethtool_ops internal_dev_ethtool_ops > = { > .get_link = ethtool_op_get_link, > }; > > -#if !defined(HAVE_NET_DEVICE_WITH_MAX_MTU) && > !defined(HAVE_RHEL7_MAX_MTU) > +#ifndef HAVE_NET_DEVICE_WITH_MAX_MTU > static int internal_dev_change_mtu(struct net_device *dev, int new_mtu) > { > if (new_mtu < ETH_MIN_MTU) { > @@ -155,6 +155,8 @@ static const struct net_device_ops > internal_dev_netdev_ops = { > .ndo_set_mac_address = eth_mac_addr, > #if !defined(HAVE_NET_DEVICE_WITH_MAX_MTU) && > !defined(HAVE_RHEL7_MAX_MTU) > .ndo_change_mtu = internal_dev_change_mtu, > +#elif !defined(HAVE_NET_DEVICE_WITH_MAX_MTU) && > defined(HAVE_RHEL7_MAX_MTU) > + .extended.ndo_change_mtu = internal_dev_change_mtu, > #endif > .ndo_get_stats64 = (void *)internal_get_stats, > }; > -- > 2.17.0 > > _______________________________________________ > dev mailing list > d...@openvswitch.org > > _______________________________________________ dev mailing list d...@openvswitch.org
https://www.mail-archive.com/ovs-dev@openvswitch.org/msg21495.html
CC-MAIN-2018-22
en
refinedweb
sub ret_list { return $_[0..$#_]; } [download] I'd like to give you a good trouting, but I'm not certain if you did this on purpose or not. :-) The "proper" way to take a slice is as follows: sub ret_slice { return @_[0..$#_]; } [download] This will return the desired result. I think a better example to illustrate your point (lists vs. arrays) would have been this: # the data our @zot = qw(apples Ford perl Jennifer); # the output print "Func Context RetVal \n", "---- ------- ------ \n"; { # our function my @list = &ret_std( @zot ); my $scalar = &ret_std( @zot ); print "Std LIST @{list} \n", # prints 'apples Ford perl' "Std SCALAR ${scalar} \n\n"; # prints 3 } { # a poorly-written function my @list = &ret_bad( @zot ); my $scalar = &ret_bad( @zot ); print "Bad LIST @{list} \n", # prints 'apples Ford perl' "Bad SCALAR ${scalar} \n\n"; # prints 'perl' } { # a better function my @list = &ret_good( @zot ); my $scalar = &ret_good( @zot ); print "Good LIST @{list} \n", # prints 'apples Ford perl' "Good SCALAR ${scalar} \n\n"; # prints 'apples Ford perl' } # the functions # returns full list, or number of elements sub ret_std { my @foo = @_[0..2]; return @foo; } # returns a list each time, but how long, and which parts?? sub ret_bad { return @_[0..2]; } # the "proper" function (from perldoc -f wantarray) # returns the full list, as a space-delimited scalar or list sub ret_good { my @bar = @_[0..2]; return (wantarray()) ? @bar : "@bar"; } [download] I apologize for the length and relative messiness of the code (this would be a good place to use write formats) but I hope I get my point across. Essentially, I follow what you're saying and you raise several crucial issues. Most importantly, PAY ATTENTION to A) where the return value(s) from your function are being used, and B) how your function is delivering those return values. Is it clear, or at least documented? &ret_bad() in particular scares me, I would hate to have a library full of functions like that. "Huh, I got the LAST element of the list? WTF?" I hope I don't come across as being snide. I understand what you are trying to say and it was definitely a very thoughtful post, and should serve as a warning to us all. Thank you. :-) Patience, meditation, and good use of the scalar function will see us through. Alakaboo In reply to (Slice 'em and Dice 'em) RE: Arrays are not lists by mwp in thread Arrays are not lists by tilly Throwing nuclear weapons into the sun Making everybody on Earth disappear A threat from an alien with a mighty robot A new technology or communications medium Providing a magic fish to a Miss Universe Establishing a Group mind Results (150 votes). Check out past polls.
http://www.perlmonks.org/?parent=26326;node_id=3333
CC-MAIN-2018-22
en
refinedweb
In my app i am using AlarmManager public class AlarmService extends Service { @Override public int onStartCommand(Intent intent, int flags, int startId) { //do something //setting new alarm AlarmManager alarmMng = (AlarmManager)getSystemService(Context.ALARM_SERVICE); Intent i = new Intent(this,AlarmService.class); PendingIntent alarmIntent = PendingIntent.getService(this, 0, i, 0); Calendar c = Calendar.getInstance(); if(something) alarmMng.set(AlarmManager.RTC_WAKEUP, c.getTimeInMillis()+1000*60*60*24,alarmIntent); else alarmMng.set(AlarmManager.RTC_WAKEUP, c.getTimeInMillis()+1000*60*60*24*7,alarmIntent); return START_STICKY; } @Nullable @Override public IBinder onBind(Intent intent) { return null; } Is it considered bad programming practice? No - this is a fine use case for creating alarm events. If you look at the documentation, use of AlarmManager is intended to send events to your app even if it is not running. Receiving those events in a Service that then schedules another alarm event is perfectly fine. The rest of my answer is intended to explain how to answer the other question you ask: Is it a good idea to create new alarm from service that was just called by one? To determine if you need a Service really depends on the "do something" portion of your code more than setting the alarm. For example, you might be fine using a IntentService or even a BroadcastReceiver. EDIT: In other words, you will need a background process to handle this. Determining the appropriate background process ( Receiver or Service) depends on how much processing needs to be done. Generally, setting an alarm all by itself could probably be handled in a Receiver but if it takes too long to process (e.g. more than 10 seconds) you will get an ANR (Application Not Responding) crash. That's when you need a service. END EDIT.. This is a good post about services: Service vs IntentService Specifically, the concern you should have is if your service is called multiple times, you should probably include code to cancel any previous alarms created by it, before setting a new alarm. EDIT: Also, you are not "creating a new service" or "new alarm" each time. Services will have onStartCommand called each time an intent is sent to it (by the AlarmManager or by any other means). A new instance is not created unless it is not already instantiated.
https://codedump.io/share/g6bl0NeU2MXV/1/starting-new-alarm-from-service-started-by-alarmmanager
CC-MAIN-2016-50
en
refinedweb
0 alright, so i'm trying to import text from a file into an array so that I have the ability to edit data, find certain functions, etc. This is currently what my program looks like....(I haven't gotten very far) from scipy import * from numpy import * def Radiograph_data: try: file('c:/users/ross/desktop/radiograph_data.txt') #checks desktop for file except IOerror, e: e = urlopen('') #downloads file from internet data = numpy.loadtxt(e, dtype = float32, comments = '!') #places text in array without lines starting with '!' Thanks, Ross
https://www.daniweb.com/programming/software-development/threads/90745/question-about-arrays
CC-MAIN-2016-50
en
refinedweb
Odoo Help This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers. How to set conditional deleletion in one2many tree in odoo 9? I'd like to set conditional deletion on my one2many tree. Static solution works perfect: <tree string="My Tree" delete="false"> ... </tree> but I can't find a solution to set it dynamically ie. if invoice type equals 'out_refund' than delete="false", else delete="true" Hi, I think another solution is to define an unlink method in that class. Here I think, you need to inherit the account.invoice class and define the unlink method. for eg: class AccountInvoice(models.Model): _inherit = "account.invoice" @api.multi def unlink(self): for rec in self: if rec.type == 'out_refund': raise UserError(_('You cannot delete these type of invoices!')) return super(AccountInvoice, self).unlink() This will allow deletion of records only after checking the condition. Hope this helps. Thanks for your help. It works and gives even more possibilities than I need for my use case. Problem is, this works on a different layer - model, so error is raised when record is saved. It gives strange user experience: when user deletes record in list and when whole form is saved then error appears. I'll use it right now (thanks again) and I'll try to find solution which works on presentation!
https://www.odoo.com/forum/help-1/question/how-to-set-conditional-deleletion-in-one2many-tree-in-odoo-9-101544
CC-MAIN-2016-50
en
refinedweb
I'm trying to build a method that queries a SQL table and assigns the values it finds to a new list of objects. Here's a quick example of how it works (assume the reader and connection are set up and working properly): List<MyObject> results = new List<MyObject>(); int oProductID = reader.GetOrdinal("ProductID"); int oProductName = reader.GetOrdinal("ProductName"); while (reader.Read()) { results.Add(new MyProduct() { ProductID = reader.GetInt32(oProductID), ProductName = reader.GetString(oProductName) }); } MyObject if DbNull if ProductID = reader.GetInt32(oProductID) ?? null, ProductName = reader.GetString(oProductName) ?? null string Operator '??' cannot be applied to operands of type 'int' and '<null>' string int if Null from a database is not "null", it's DbNull.Value. ?? and ?. operators won't work in this case. GetInt32, etc. will throw an exception if the value is null in the DB. I do a generic method and keep it simple: T SafeDBReader<T>(SqlReader reader, string columnName) { object o = reader[columnName]; if (o == DBNull.Value) { // need to decide what behavior you want here } return (T)o; } If your DB has nullable ints for example, you can't read those into an int unless you want to default to 0 or something like. For nullable types, you can just return null or default(T). Shannon's solution is both overly complicated and will be a performance issue (lots of over the top reflection) IMO.
https://codedump.io/share/ZPtwuqT7rSu8/1/inline-null-check-for-sqldatareader-objects
CC-MAIN-2016-50
en
refinedweb
* Note: updated new source-code (SpeedEvent does NOT need to extend EventObject, my fault!) Download: Source-code [NEW] Source-code [OLD] * If you are already familiar with standard Java GUI event-listener pair, you might skip this and go to Define custom source, event and listener. With Java (especially GUI) programs, the user might interact with a button, checkbox, car, or bank account. The program decides either to ignore or to respond to when such cases happen. For example, if we want to do something when a user clicks a button, then we need to identify 3 things: (1) the button itself, (2) the event when the button is clicked, and (3) the responding code that is interested in the button-clicked event (called 'listener'). In java, the event when the button is clicked is called ActionEvent. When such the event happens, the listener that is interested in that ActionEvent, which is called ActionListener, contains the method (called 'handler') to be invoked. Again, Java pre-defines this handler method as actionPerformed(ActionEvent e). It was possible because the ActionListener had been registered to 'listen' to the ActionEvent by the button. The Event-Driving Programming Model in Java consists of 3 type objects: The source object (a JCheckbox, for example) fires an event object (of type ItemEvent, for example) when the user checks or un-checks the checkbox. The event object contains information about its source object. The listener object is an interface that must be registered as a 'listener' by the source object to be able to respond when the event object was fired, and invoke the the handler method of the listener object. * Note: a source object can fire one or multiple events, and one or multiple listeners can be registered by a source object. Also, a listener can declare one or multiple handlers. For example, Java has provided standard models: - A JCheckBox can fire both ActionEvent and ItemEvent when the user checks or un-checks the checkbox. The corresponding listener interface is ActionListener (with the handler method actionPerformed(ActionEvent e)), and ItemListener (with the handler itemStateChanged(ItemEvent e)). - A Component (JTextArea, or JLabel) can fire MouseEvent or KeyEvent when the user has pressed, clicked, moved, exited mouse, pressed or typed key on/from that component. Corresponding listener interface is MouseLister (which defines several handlers, such as mousePressed(MouseEvent e), and mouseEntered(MouseEvent e)), and KeyListener (with pre-defined handlers). Java has already provided many standard components with event-listener pair. But we want to define our own because it is better suited with our application. For example, a bank account can fire a custom event BalanceEvent when the customer widthraws too much from his/her account. For our custom BalanceListener's handler balanceViolated(BalanceEvent e) responds appropriately, it should be able to know how much the current balance is, the minimum amount that the account holder to maintain, and the date and time of that attempted transaction. That means the BalanceEvent contains the data needed for the handler. This is not possible with pre-defined Java event-listener pairs. When defining our own event-listener pair, we should definitely follow the naming convention: XEvent - XListener (where X can be 'Action', 'Mouse', 'Account', 'Balance', 'Speed', etc.) class XEvent { // extends java.util.EventObject // Data that are associated with the source object String customer; int balance; Date dateWidthraw; // an optional object argument, representing the source object // Other optional arguments are information related to the source object public XEvent(/*Object source, */ String customer, int balance, Date date,...) { // super(source); // code to set the associated data here } } * Note: the constructor of the custom XEvent: it's how the source object pass the data to the fired event. interface XListener { // extends java.util.EventListener // It's up to the client code to decide how to implement the handler(s) void handlerMethod1(XEvent e); void handlerMethod2(XEvent e); } class Source { private XListener listener; // or more flexible, the Source should maintain a list (collection) of XListeners // The Source must have code to register/de-register the XListener void addXListener(XListener myListener) // code to add myListener to the list of XListeners // the same for removeXListener method // When a condition happens, source object fires the event XEvent myEvent = new XEvent(this, param1, param2,...) // 'this' is the source object that fires myEvent // param1, param2,... are date describe the status, what is going on to the event listener.handlerMethod(myEvent) // invoke the handler(s) } As we see, the source object can maintain a list of listeners. When a certain condition happends (such as low balance on the bank account, or a biology class is full and cannot take more students), the source object invokes the handler(s) method on its listeners. Since we pass the XEvent object to these handler(s), the handler(s) can respond appropriately. For example, informing the client code the current balance, attempted ammount to widraw on what date. // The client code that makes use of custom Source, XEvent-XListener pair public class Program { // Create a source object that can fire an XEvent // source object can be a bank account, a course in college, or a car Source source = new Source(); XListener listener = new ClassThatImplementsXListener(); // listener(s) interested in the XEvent, and contains the handler(s) // to be invoked when such event occurs // ClassThatImplementsXListener is an 'inner' class of the class Program // to implement the handler(s) in a custom way based on our need source.addXListener(listener); // Without this registering, nothing will happend!! // method on the source object here that causes source to fire the event // for example: source.widthDraw(9000); // for bank account object or source.speedUp(100); // for car object // After this method, the source delegates the event fired // to the listener for processing (invoking the handle(s)) // Inner class private class ClassThatImplementsXListener implements XListener { @Override handler(s) void handlerMethod1(XEvent e) { // code } void handlerMethod2(XEvent e) { // code } } } Now, we are modeling a simple car object, which can fire a SpeedEvent. The listener is SpeedListener interface. Max speed is temporarily set to 60MPH, min speed 40, and the default 50 when the car starts driving on the highway. When the car runs too fast or too slow, it fires SpeedEvent. the SpeedEvent is defined as: public class SpeedEvent { private int maxSpeed; private int minSpeed; private int currentSpeed; public SpeedEvent(/*Object source, */ int maxSpeed, int minSpeed, int currentSpeed) { // super(source); this.maxSpeed = maxSpeed; this.minSpeed = minSpeed; this.currentSpeed = currentSpeed; } // and some getters here } Corresponding SpeedListener declares 2 handlers: public interface SpeedListener { public void speedExceeded(SpeedEvent e); public void speedGoneBelow(SpeedEvent e); } Come back to the Car, it maintains a list of SpeedListeners: private ArrayList<speedlistener> speedListenerList = new ArrayList<speedlistener>(); </speedlistener></speedlistener> The register method for the SpeedListener: // Register an event listener public synchronized void addSpeedListener(SpeedListener listener) { if (!speedListenerList.contains(listener)) { speedListenerList.add(listener); } } When the car speeds up: public void speedUp(int increment) { this.currentSpeed += increment; if (this.currentSpeed > this.maxSpeed) { // fire SpeedEvent processSpeedEvent(new SpeedEvent(this.maxSpeed, this.minSpeed, this.currentSpeed)); } } We see that when the current speed exceeds the max speed, an EventSpeed object is created with relevant information: the source, max and current speed. And then fire that EventSpeed object: private void processSpeedEvent(SpeedEvent speedEvent) { ArrayList<speedlistener> tempSpeedListenerList; synchronized (this) { if (speedListenerList.size() == 0) return; tempSpeedListenerList = (ArrayList<speedlistener>) speedListenerList.clone(); } for (SpeedListener listener : tempSpeedListenerList) { listener.speedExceeded(speedEvent); listener.speedGoneBelow(speedEvent); } } </speedlistener></speedlistener> This processSpeedEvent(SpeedEvent e) method is to be executed when the SpeedEvent is fired. It calls each handler of each SpeedListener object in the speedListenerList, to notify and do something about that event. That we has created and fired the event, and delegated to the listener to process the event. It's up to the client code to implement the concrete handler(s). * Note: the use of 'synchronized' and 'clone' is because a new listener might be added to, or the current listener might be removed from the speedListenerList while the processSpeedEvent() method is running. That leads to the corruption of speedListenerList. It seems all done. Now is the client code (with main method) to make use of above custom Car and SpeedEvent-SpeedListener pair. public static void main(String[] args) { Car myCar = new Car(60, 40, 50); SpeedListener listener = new MySpeedListener(); myCar.addSpeedListener(listener); // Add more listeners if you want myCar.speedUp(50); // fires SpeedEvent myCar.speedUp(50); // fires SpeedEvent myCar.slowDown(70); myCar.slowDown(70); // fires SpeedEvent } The inner class MySpeedListener defines custom concrete handler(s) as follow: // Inner class private static class MySpeedListener implements SpeedListener { @Override public void speedExceeded(SpeedEvent e) { if (e.getCurrentSpeed() > e.getMaxSpeed()) { System.out.println("Alert! You have exceeded " + (e.getCurrentSpeed() - e.getMaxSpeed() + " MPH!")); } } @Override public void speedGoneBelow(SpeedEvent e) { if (e.getCurrentSpeed() < e.getMinSpeed()) { System.out.println("Uhm... you are driving " + e.getCurrentSpeed() + " MPH. Speed up!"); } } } Or if you don't want to use inner class, you can use anonymous class instead: public static void main(String[] args) { Car myCar = new Car(60, 40, 50); SpeedListener listener = new MySpeedListener(); myCar.addSpeedListener(listener); // Add more listeners if you want // Anonymous inner class myCar.addSpeedListener(new SpeedListener() { @Override public void speedExceeded(SpeedEvent e) { // Code } @Override public void speedGoneBelow(SpeedEvent e) { // Code } }); myCar.speedUp(50); // fires SpeedEvent myCar.speedUp(50); // fires SpeedEvent myCar.slowDown(70); myCar.slowDown(70); // fires SpeedEvent } Result output will be: Alert! You have exceeded 40 MPH! Alert! You have exceeded 90 MPH! Uhm... you are driving 10 MPH. Speed.
https://www.codeproject.com/articles/677591/defining-custom-source-event-listener-in-java
CC-MAIN-2016-50
en
refinedweb
The Functional Programming Elevator Pitch What is functional programming about? This tutorial will try to tell you why it's worth your time to learn the functional style. We're focusing on Haskell, the most popular pure functional language, to see where these ideas take us in the extreme. You will see that code you write in Haskell is highly modular and reusable - because the language makes it easy to write higher-order functions that have very particular jobs and are able to move through a program to the right spots the same way that data does. Haskell allows you to create data-types that capture some of the meaning of your algorithms, so that the compiler can check your reasoning, not just your use of punctuation. This eliminates a huge class of frustrating bugs. And Haskell's pure functional discipline allows the Haskell compiler to understand much more about your program than is the case in other languages - the result is aggressive compiler optimization, a much better concurrency model, great code-testing tools, and the eradication of SegFaults, NullPtrExceptions, and the like. Our goal in this post isn't to teach you Haskell (although you will pick up some basics). We just want to demo Haskell to you, in the hope that you will be convinced to give Haskell a good try. Please don't be frightened by the unusual syntax - it will all become clear. First-class functions: Build components with interlocking precision FP is about treating functions like 'first-class citizens' in the language. In most languages, functions take values as arguments and return a value as a result. Functional languages allow you to treat functions themselves as values. A function can take other functions as arguments, and return a function as a result. The syntax for calling a function is also different. e = max(a,b) would be written as e = max a b in Haskell. import Data.Char import Data.List -- Helper functions split a sentence into words and punctuation isWordPart l = notElem l " .,?!-\"' " toParts s = groupBy (\x y -> isWordPart x && isWordPart y) s modSentence :: (String -> Bool) -> (String -> String) -> String -> String modSentence pred mod s = concat . map (\w -> if pred w then mod w else w) . toParts $ s -- show -- Two different tests for whether a word 'w' is in the list isName w = elem w ["Erika","Mark","Katharine"] isPlace w = elem w ["lab", "zoo"] -- Two functions that return either the capitalized -- or abbreviated form of the input word uppercaseWord w = map toUpper w abbreviatedWord w = [ toUpper (head w), '.' ] mySentence = "Are Erika and Mark in the lab? Katharine is at the zoo." main = print $ modSentence isName uppercaseWord mySentence -- /show Two of the arguments to the function modSentence are themselves functions. You can swap them out for other functions that fit in that context ( isPlace and abbreviatedWord). This example focuses on functions that work on strings. But the method applies to functions over all data types, and you will see later that data types in in Haskell play a bigger role than they do in most other languages. Detail: how is this different from function pointers in c, for example? Click "Show more" to find out. The first uses you will find for higher-order functions (functions that take other functions as arguments or return other functions) are the same ones you would think of for function pointers in languages that support them: for example a sorting algorithm that can be called with different ordering functions. Function pointers are an very powerful feature. Functional languages take this feature and make it pervasive in the language. You can see this already in the syntax for Haskell function calls. For example, foldl is a very general higher-order function that takes three arguments: (A) a function that combines two elements to make a third (B) An initial element and (C) list of elements [e1, ..., eN] foldl f i ls returns the function f 'folded' over the initial element and every element in the list like so: f( ... f( f( a e1) e2 ) ... en) -- show a = foldl (+) 0 [5 .. 10] b = foldl (++) "These " ["are ", "some ", "words!"] -- /show main = do putStrLn $ "a: " ++ show a putStrLn $ "b: " ++ show b What would happen if we left off the last argument? It's reasonable to think that this is an error, just as it would be an to try to compute y = min(a,); But in fact, it is very common to leave out arguments in Haskell. The return value of foldl (+) 0 (the above function call, minus its last argument) is a function that stores (+) and 0, and runs when finally given its third argument, a list of elements. -- 'Partial application' of foldl to two of its three arguments myFa = foldl (+) 0 -- myFa is now a function on one argument main = print (myFa [5 .. 10]) It turns out that myFa is a pretty useful function! Try changing the function's name to listSum. Then change the name again to listProduct and see if you can change its definition so that it returns the product of the elements in a list, instead of their sum. Haskell syntax makes it as easy as possible to build up new functions out of old ones and to pass them around in your code. This allows you to move up and down the ladder of abstraction very efficiently. Strong static typing: "If it compiles, it works." You will be very happy with how often your programs work as expected the first time they pass compilation. The reason for this is that Algebraic Data Types make it easy to design types that can only take values that make sense in your program, and Pattern Matching helps you pass these well-formed values between functions. Algebraic Data Types An ADT enumerates the values that a data type may take, with some of these values coming with extra data tacked on. Double colon lines are optional 'type annotations' that convey a value's type to the reader of the program and to the compiler. -- show -- The definition of a Palatability Type data Palatability = Delicious -- Palatability values may be Delicious | NotDelicious -- OR NotDelicious -- The definition of type Taste data Taste = Sweet -- Values of type Taste may be Sweet | Salty -- OR Salty | Umami -- OR ... ... | Bitter Palatability -- Bitter and Sour values must specify | Sour Palatability -- whether they are good or bad -- Example value of type Taste cookieTaste :: Taste -- Here is a type annotation cookieTaste = Sweet brusselsSproutsTaste = Bitter NotDelicious harpoonIpaTaste = Bitter Delicious someTestTaste = Sweet Delicious -- An accident that the compiler will catch -- Sweet values don't come with Palatability -- data anotherTestTaste = Bitter -- /show main = print "OK" Try to make the above code compile by finding and fixing the malformed values. Programming Haskell in the real world is like this - a back-and-forth between you and the compiler until your use of types lines up with the definition you created for them. Pattern Matching Where Algebraic Data Types give structure to values, Pattern Matching allows functions to take advantage of that structure. A function with an ADT parameter may 'match' each possible value in turn, and bind any accompanying data for use within the function. In this example, a sensor reading can only be extracted from a SensorReading value tagged as 'ValidReading'. If a function tried to extract a value from an invalid sample, that would be a type error, and the program would simply fail to compile. If a programmer forgot to check for the possibility of error by handling the HardwareError case, the compiler would issue a warning. We are using types to enforce invariants in the code. The ability to move a program's semantics up into the type system, where they can be checked by the compiler, provides a powerful bug-deterrent. --show data SensorReading = ValidReading Float | HardwareError updateHeading :: SensorReading -> Float -> Float updateHeading (ValidReading r) oldHeading = oldHeading + r updateHeading HardwareError oldHeading = oldHeading -- /show main = print "OK" Try as you might - it is impossible to accidentally read a value from a SensorReading carrying the HardwareError tag. The compiler won't let you. Using C structs, or objects Python, the programmer would have remember to check validity flags by hand in every function using SensorReadings; or values would need to be accessed through special helper functions. Forgetting to exercise such discipline in other languages can result in errors that are silent all the way up until the project is deployed / the analysis is done and the paper is published / the flight-control system is up in the air. Using ADT's and pattern matching, more errors are caught during compilation and fixed before they can wreak havoc. Pattern matching also provides a very expressive way to manipulate data structures. Here we define a binary tree type, which may either be Empty, or a Branch with two sub-trees and a node value. The function insert breaks down Branch values into named parts that get used in the function. See the full code for more pattern matching examples. -- show data CharTree = Empty | Branch CharTree Char CharTree deriving (Show) -- depth takes a tree and returns its depth depth :: CharTree -> Integer depth Empty = 0 -- Handle the empty case depth (Branch l el r) = 1 + max (depth l) (depth r) -- Handle the branch case -- insert takes a tree and a character c, returning a new tree with c inserted insert :: CharTree -> Char -> CharTree insert Empty el = Branch Empty el Empty -- Handle the empty case insert (Branch l c r) el -- Recursively handle Branch case... | el <= c = Branch (insert l el) c r -- when element <= center node | el > c = Branch l c (insert r el) -- when element > center node -- /show isElement :: CharTree -> Char -> Bool isElement Empty el = False isElement (Branch l c r) el | el == c = True | el < c = isElement l el | el > c = isElement r el -- See the note about function pointers above to learn about foldl -- show -- Insert all the characters from this string into the tree in turn myTree = foldl insert Empty "Hello World!" myTestChar = 'o' -- /show reportTest :: Char -> String reportTest testChar | isElement myTree testChar = "We found your element - " ++ [testChar] | otherwise = "Could not find " ++ [testChar] main = do putStrLn $ "Depth: " ++ show (depth myTree) putStrLn $ reportTest myTestChar putStrLn $ "The tree: " ++ show myTree Note: Pattern matching is a big help during code refactoring, as the compiler tells you the location of every function that needs to be changed to reflect a change that you make in a data type. This feature isn't turned on yet on the School of Haskell website, but it works just fine in your project. Pattern Matching guides you through code refactorings Data structures often change as a program grows. Haskell makes the process of synchronizing the rest of the program far less painful than it is in languages with weaker type systems. Add Tornado as a flight condition and recompile. The compiler makes sure that every function handles all possible input values, and will tell you which functions need to be modified to accommodate your refactoring. (In the case of a tornado, set the wingDirection to Up and thrust to Afterburners) data Direction = WLeft | WRight | WUp | WDown deriving (Show) data ThrustStrength = EnginesOff | Thrust Integer | Afterburners deriving (Show) data FlightCondition = Calm | LightWind Direction -- Given weather conditions and our current wing direction, -- what should our new wing direction be? wingResponse :: FlightCondition -> Direction -> Direction wingResponse Calm d = d -- In calm conditions stay the course wingResponse (LightWind WLeft) _ = WRight -- Bank against light wind wingResponse (LightWind WRight) _ = WLeft -- Bank against light wind wingResponse (LightWind _) d = d -- In up/down wind, stay the course thrustResponse Calm t = t thrustResponse (LightWind _) (Thrust t) = Thrust (t+10) -- Increase power against wind -- /show main = print $ wingResponse (LightWind WUp) WLeft Generic programming from the ground up In functional programming you call it 'polymorphism', and its use predates the templates and generics that you see in c++, Java, etc. Haskell's support for polymorphic data types and polymorphic functions is excellent in terms of syntactic simplicity, expressiveness, and speed. Here is an example of a polymorphic tree. You can see that the syntax is so natural that there was never a need to make trees non-generic in the first place. import Data.Map hiding (insert, foldl) -- show -- A tree with nodes that can take any type data MyTree a = MyEmptyTree | Branch (MyTree a) a (MyTree a) deriving ) -- A MyTree of Strings myTree :: MyTree String myTree = foldl insert MyEmptyTree ["Ada","Brent","Conel","Doaitse","Ertugrul"] -- A MyTree of (Map from String to Integer), -- just to demonstrate that we can make trees of ANY orderable type myTree2 :: MyTree (Map String Integer) myTree2 = foldl insert MyEmptyTree [fromList [("c++", 6),("ocaml", 9),("haskell",10),("python",7)] ,fromList [("LeBron", 250),("Rajon", 186),("Kyrie",191)] ,fromList [("Big Mac",550),("Whopper",670),("Baconator",970)] ] -- /show main = do {print myTree; print myTree2} Functional Purity and Immutable Data: A different way of thinking about programming Functions in FP are modeled after mathematical functions: instead of performing a sequence of operations, they express strict mathematical relationships between arguments and a return value. Imperative programs are sequences of commands that each have an effect on memory or on the computer's output. Functional programs are collections of statements that are true at all times. Purity and immutability are what makes functional programming seem so alien. They are also what gives functional programming its power. When using imperative languages (c, c++, Java, Python, C#, etc etc), the programmer has to keep a mental timeline of their program's execution. Immutability in FP frees functional programmers from having to think about sequences of changes - they focus instead on the validity of the mathematical relationships between types they design. Here are a few surprising examples of this. Surprise #1: Variables never change their value -- show a = 5 -- ERROR! To quote Miran Lipovača, "If you say that a is 5, a = a + 1 -- you can't say it's something else later because you just -- said it was 5. What are you, some kind of liar?" -- /show main = print "OK" What we do instead depends on our reason for wanting to increment the value a in the first place. a = 5 -- Simply refer to the incremented value b = (a + 1) -- Define a function that refers to a incremented incr a = a + 1 b = incr a Surprise #2: No for loops -- There is no such thing in Haskell, -- because the values of i, a, and b are changing a = [1, 2, 3, 4, 5] b = 1.5 for (i = 0; i < 5; i++) a(i) = 2 * a(i) end for (i = 0; i < 100, i++) b = b^2 - 1 end Wherever a for loop is used in imperative code, functional languages offer combinators that more precisely capture the nature of that particular loop. Two short examples: -- show a = [1,2,3,4,5] -- map maps a function (here, the number-doubling function) -- over all elements in a list r1 = map (* 2) a -- iterate recursively applies a function to its argument -- to any depth. Take and last pull out the value you're -- interested in from that list r2 = last . take 100 $ iterate (\x -> x^2 - 1) 1.5 -- /show main = do putStrLn $ "r1: " ++ show r1 putStrLn $ "r2: " ++ show r2 It's natural to wonder, Without being able to change variables, how can you possibly get any work done? The general answer is, You quickly get used to working with recursion, and you get used to the combinators that abstract over recursion. Believe it or not, coding in this style is a lot of fun, and if you learn Haskell, you will probably find yourself re-writing these combinators for your own use in the other languages you work with. Surprise #3: Functions other than main can't do any I/O Functions that do I/O aren't functions in the mathematical sense, because they do things instead of expressing relationships. A function that does input could return different values each time it was run, and a function that does output can't be treated as an algebraic entity that can be combined with others and safely moved around, as terms often are in actual algebra. printAndDouble :: Integer -> Integer printAndDouble n = print n; -- Not allowed! We give up this sort of (n * 2) -- thing to pay for Haskell's other features If functions can't do I/O, how does a functional program interact with the world, read data, communicate over the internet? They use "monads". A Monad safely mediates communication between the real world and pure functions Data in the world (on a disk, on the internet, from the keyboard) may only enter and leave a purely functional program through a dedicated channel called the IO monad. In general, monads offer specific patterns that simulate sequential code execution, in much the same way that map and iterate simulate particular patterns found in for loops, without actually mutating data. Haskell's type system enforces the rule that only values that are 'in the IO monad' may interact with the real world. A functional program is structured so that the majority of its functions are pure, communicating with the real world using only a few functions in the IO monad. import System.IO import Control.Monad shiftLetter :: Integer -> Char -> Char shiftLetter n c | n == 0 = c | n > 0 = shiftLetter (n - 1) (succ c) | n < 0 = shiftLetter (n + 1) (pred c) -- simulating sequential code in the IO monad main :: IO () main = do hSetBuffering stdout NoBuffering hSetBuffering stdin NoBuffering putStr "How many lines do you want to encode: " nLines <- readLn putStr "How deep do you want each line to be encoded: " depth <- readLn sequence_ . replicate nLines $ do putStr "Enter a string: " entry <- getLine putStrLn $ "Encoded: " ++ (map (shiftLetter depth) entry) ++ "\n" That is not to say that occurrences of putStrLn and getLine can only exist directly within the main function. On the contrary, putStrLn is a function that can be passed to and from other higher-order functions, integrated into lists and data structures, and glued to other IO-performing functions using monad combinators. As you learn Haskell, you will discover how monads can help you raise the expressiveness and abstraction level of sequential code. A common metaphor for monads is 'a programmable semicolon', because they allow you to flexibly define what it means to do actions in a sequence. You may ask, "What exactly is a Monad?" A very rough answer is: any data type for which you can explain how to program the semicolon. Too abstract? It will all become clear as you see them more and use them in your own code. Haskell can be Pragmatic Haskell code runs really fast Although Haskell is a very high-level language, it's now a top contender in code speed shootouts. Haskell compiles to LLVM and to native code on Linux, OS X, and Windows. Unoptimized code compiled with GHC (the most commonly used of Haskell's several compilers) generally runs within 1/5 the speed of unoptimized c programs, and quite a bit faster than Python. Heavily optimized Haskell code is generally within half the speed of heavily optimized c; in some cases faster. Haskell has pretty strong library offerings Hackage is an online database with many high-quality, community-written libraries covering lots of areas (linear algebra, crypto, web frameworks, etc. etc. etc.). The School of Haskell website is powered by Yesod, a powerful Haskell web framework. There are Haskell bindings to OpenGL, GUI libraries, databases, CUDA, and on. Libraries for concurrency and parallelism are particularly strong. Haskell has incredible Unit Testing Haskell's QuickCheck library is an absolutely wonderful unit-testing framework. You use QuickCheck to test algorithms by defining a collection of properties of that algorithm that should always be true. QuickCheck uses the function types to intelligently generate randomized data sets (as many as you like - the default is 100) that get into far more corner-cases than any unit-test writer would ever care to write by hand. import Data.Map hiding (insert, foldl) import Test.QuickCheck -- A tree with nodes that can take any type -- show data MyTree a = MyEmptyTree | Branch (MyTree a) a (MyTree a) deriving (Show) -- /show -- depth recursively descends the tree to find its maximum depth depth :: MyTree a -> Integer depth MyEmptyTree = 0 depth (Branch l _ r) = 1 + max (depth l) (depth r) -- nElem recursively counts elements nElem :: MyTree a -> Integer nElem MyEmptyTree = 0 nElem (Branch l _ r ) = 1 + (nElem l) + (nElem r) --) -- /show isElem :: Ord a => a -> MyTree a -> Bool isElem _ MyEmptyTree = False isElem el (Branch l c r) | el == c = True | el < c = isElem el l | el > c = isElem el r -- show -- QuickCheck will generate random inputs to test these properties prop_insIsElem :: (MyTree Double) -> Double -> Bool prop_insIsElem tree el = isElem el (insert tree el) prop_prevIsElem :: (MyTree Double) -> Double -> Double -> Bool prop_prevIsElem tree el el' = isElem el (insert (insert tree el) el') prop_insChangeDepth :: (MyTree Double) -> Double -> Bool prop_insChangeDepth tree el = (depth (insert tree el)) - depth(tree) <= 1 prop_depthLowerBound :: (MyTree Double) -> Bool prop_depthLowerBound tree = (depth tree) >= floor (logBase 2 $ fromIntegral(nElem tree)) -- /show instance (Arbitrary a, Ord a) => Arbitrary (MyTree a) where arbitrary = do elems <- listOf arbitrary return (foldl insert MyEmptyTree elems) main = do let args = stdArgs { maxSuccess = 10, maxSize = 10, chatty = True } putStrLn "Checking insertion properties." quickCheckWith args prop_insIsElem quickCheckWith args prop_prevIsElem putStrLn "Checking depth properties." quickCheckWith args prop_insChangeDepth quickCheckWith args prop_depthLowerBound Haskell has personality Haskell is backed by a lot of amazing people - both great engineers and great teachers. It has a cool history and a formal background that might inspire you to learn some abstract math. The #haskell IRC channel and haskell-cafe mailing list are famously friendly and educational places to hang out. Seasoned Haskellers flock to new language features and take a peculiar interest in teaching about them. Haskell Weekly News is consistently filled with links to blog posts explaining interesting corners of the language and reddit stories about new experiments and abstractions. You can challenge yourself for a very long time before you can reach to all corners of the Haskell world; and as a research language, it is always growing. You can do it! Where to go next. It's rumored that Haskell is hard. It's probably more accurate to say that Haskell makes easy things a little tricky; while bring things that would otherwise be extremely difficult within reach. It's important to remember that no one was born knowing Haskell. We all came to it out of curiosity and a sense of adventure. You will have to study in order to get Haskell, but that's half of the fun. If you are ready to start learning the language, take a look at Learn You a Haskell for Great Good and Real World Haskell. Check out the many tutorials here at School of Haskell. Find other resources at haskell.org. And download the Haskell Platform. Have fun! ** Feel free to help out or suggest improvements for this post on github.
https://www.schoolofhaskell.com/user/imalsogreg/functional-programming-elevator-pitch
CC-MAIN-2016-50
en
refinedweb
>>.'" Congratulations:Congratulations to Judge Alsup (Score:3, Insightful) Long term speaking, its a win for Oracle. It's really only a matter of time before it would have bit them in the butt. There developers use APIs as well.:Decimated (Score:5, Insightful) The original 'reduce by one tenth' decimate is archaic. Modern usage means kill/weaken a significant portion of the group/thing being decimated. And you know this, too. Re:Good to Know :Congratulations to Judge Alsup like.":Decimated definition will also be listed. More likely ahead of the one you've given, illustrating the most common usage of the word. Yes... languages evolve. Words change meaning. Live with it. Re:Seriously? (Score:2, Insightful) Why even bother to have two returns? int max(int a, int b) { return (a > b ? a : b); }:Congratulations to Judge Alsup (Score:5, Insightful) The judge knows more about programming than Oracle does!:Good to Know (Score:5, Insightful). Re:Stallman was right! (Score:4, Insightful)
https://tech.slashdot.org/story/12/05/31/237208/judge-rules-apis-can-not-be-copyrighted/insightful-comments
CC-MAIN-2016-50
en
refinedweb
This chapter briefly describes how to configure logging and view the server log. It contains the following sections: Setting Custom Log Levels Administration Console online help.. This section explains how to configure custom logging levels for applications that make use of the java.util.logging package and access the Application Server's logging sub system. The java.util.logging package provides a hierarchical name space in which logger instances can be created. Whether a particular logging record is output to an Application Server instance's log file depends on the log level of the Log Record and the log level specified. The Application Serverlogger settings configuration provides over twenty logging modules that allow fine grained control over the Application Server's own internal logging. There is also an option to create additional custom Log Modules by specifying a module name and the logging level that the module should use. The important point here is that the logger is a static name and no inheritance is provided. Therefore, if a custom logger is configured with the name com.someorg.app and an application attempts to look up the logger com.someorg.app.submodule, then it will not be provided with a logger that inherits the settings from com.someorg.app. Instead, com.someorg.app.submodule will have a default logger that is set to log at the INFO level or higher. If an application needs to use logger inheritance, this can be configured by editing the logging.properties file of the Java runtime that is being used to run the Application Server. For example, adding the following entry to the logging.properties file, would result in com.someorg.app.submodule inheriting the same FINE level when it is created: com.someorg.app.level = FINE For more details about the Java logging API, refer to the Java documentation at, as well as the other java.util.logging classes. The Application Server provides a logger for each of its modules. The following table lists the names of the modules and the namespace for each logger in alphabetical order, as they appear on the Log Levels page of the Administration Console. The last three modules in the table do not appear on the Log Levels page.Table 15–1 Application Server Logger Namespaces
http://docs.oracle.com/cd/E19528-01/819-4733/abluj/index.html
CC-MAIN-2016-50
en
refinedweb
I am running the rpcrouter servlet under weblogic 6.1. My classpath has xerces.jar first (v 1.4.0), followed by soap.jar (v 2.2), followed by weblogic.jar. When I run the ServiceManagerClient utility to deploy my services, I get the following error: SOAP-ENV: Client Unable to resolve namespace URI for 'ns2' Does anyone know why this would be happening? I just upgraded from weblogic 5.1 and did not have this problem with that version. Thanks, Ed
http://mail-archives.apache.org/mod_mbox/xml-soap-dev/200106.mbox/%3C4F6281061B2CD411B82200508BC8256C3C1038@maple.interactiveportal.com%3E
CC-MAIN-2016-50
en
refinedweb
User talk:80n Contents - 1 Osmarender - Neat Stuff! - 2 Streetmap for mobile phones - 3 Wiki namespace - 4 NaviGPS bicycle mount - 5 Tehran map - 6 Getting Text Right - 7 UTF-8 Support broken - 8 Turning circle in Osmarender - 9 Abstention on Wreck Proposal - 10 Woking Mapping Party invite - 11 I can probably help with Osmarender text labels - 12 Connecting Wikipedia and OSM via OSMXAPI - 13 OSMhack - 14 XAPI: gz-compressed output - 15 XAPI - 16 XAPI - bounding box with relations does not work correctly - 17 XAPI bugs - 18 XAPI Server - 19 XAPI status ? - 20 Xapi output precision - 21 XAPI setup Howto Osmarender - Neat Stuff! Osmarender is definitely Neat Stuff. I think it's great that your renderings look good enough to decorate Wikipedia articles. That kind of thing will surely attract more energy and enthusiasm from Wikipedia contributors to the OpenStreetMap project. From a quick read of your description it looks like an elegant way to implement this kind of thing as well. I shall have take that XSLT for a spin myself some time. -- Harry Wood 23:46, 22 Mar 2006 (UTC) - Harry, I'm glad you like it :) XSL is powerful stuff but can be tricky to get to grips with. The latest version of Osmarender is driven by a simple rules file so you don't have to know any XSL at all to use it. 80n 23:49, 22 Mar 2006 (UTC) Streetmap for mobile phones I've written a small Java-program for mobile phones and it is available on the MMVLWiki now. The software doesn't store meta-information, but it allows you to set a target marker, so that one knows the direction, even if the target location is not visible on the screen. It would be desirable of course, if the map would also show the street names. -- Wedesoft 16:57 BST 02.04.2006. - Jan, this looks really cool. If I can figure out how to do java on my phone, I'll download it and try it out. Street names will come, right now not many roads have been named... 80n 16:24, 2 Apr 2006 (UTC) - I've released a new J2ME Weybridge streetmap using the new multiresolution data and the street names. It only goes up to zoom-level 15 (otherwise the package would be around 10 MBytes). Let me know if someone wants a larger map. -- Wedesoft Sat Mar 24 22:34:55 GMT 2007 Wiki namespace What's the plan with occupying wiki page names such as Press and Media and Accommodation just for that weekend event in Rutland, England? You can easily keep those items within the single page. And if they really need pages of their own, make them subpages or give them unique names pertaining to this event or place. --LA2 00:48, 25 Aug 2006 (BST) - Someone created links to some non-existent pages, I just followed the links and added some detail without any thought or planning. I agree that this is not well organised and I'll see about changing it. 80n 08:28, 25 Aug 2006 (BST) - Perhaps not ideal but then I didnt see any harm in having pages with these headings as they can be used for more than one event and have content deleted once done. A lot easier than trying to jam it all onto one page. I'm easy though. If someone has a better layout approach go for it :-) Blackadder 08:45, 25 Aug 2006 (BST). - I have renamed them as sub-pages beneath the project. This makes me think that we need to start thinking about the difference between a project page and a "production" page. Compare: WikiProject Rutland England with Isle of Wight and Walton on Thames. I think the Isle of Wight page should be the finished product whereas WikiProject Isle of Wight should be the planning and progress tracking page (and Isle of Wight workshop 2006 should probably be a sub-page or merged into the WikiProject page). But then, should there be a production page for everywhere that gets mapped? I suppose they would be showcases or galleries of different renderings of places of interest. At the moment, the main map server is so slow that this is the only reasonable way that members of the public can see anything presentable or useful. 80n 11:58, 25 Aug 2006 (BST) Hi. Could you please let me know how you broke the bicycle mount for your NaviGPS, and jus how flimsy it is? I'm thinking of getting one for geotagging photos and am wondering whether to bother with the bike mount. Also I take it from the review that your experience of the NaviGPS been good overall? Abarnes 22:57, 4 September 2006 (BST) - My experience overall has been excellent. It broke because first I did a lot of cycling over very rough terran and the screws that attach the bracket to the GPS worked loose. I tried to overtighten the screws to fix this, resulting in me cracking the bracket around the screw hole. A dollop of glue has fixed the problem and its been fine ever since. 80n 09:26, 5 September 2006 (BST) Tehran map Nice work on Image:Tehran-university-persian.png. It seems that there a few rendering problems of the labels, possibly related to inkscape. I will look into it. BTW, it seems that while osmarender4 renders universities just fine, tiles@Home doesn't. Roozbeh 12:27, 2 March 2007 (UTC) Getting Text Right Hi. I'm interested in the Getting Text Right project from Things_To_Do. How is the current status of this project? I believe that the text should not only be abbreviated, but also moved to fit. Do you have any ideas of "where" this should be done? I'm not sure if postprocessing the SVG file is the best place to do it, but I can't think of another place where it could be done. You can email me in the same user login at gmail.com.--Gfonsecabr 17:32, 2 December 2007 (UTC) UTF-8 Support broken UTF-8 support is broken in osmxapi. See: Is this opensource? Maybe I can fix it. - Where is the osmosis error description? Half of the world has encoding problems. Only ascii and iso-8859-1 works. 99% of the characters are not supported at the moment. People start to name chinese cities with latin chars - that looks realy strange for asian people. I like the osmxapi and hope it comes back soon. Great work! Turning circle in Osmarender I've started using T@H, and thus Osmarender for rendering. So far everything's good, but there's one thing that would be a nice rendered feature. I've been tagging plenty of nodes with [[turning_circle]], but Osmarender doesn't recognize it yet. I tried editing osm-map-features-z17 myself with the following rule (it was exactly the same as h-u-o, except that I doubled the line stroke size) <rule e="way" k="highway" v="unclassified|residential|minor"> //existing rule ++<rule e="node" k="highway" v="turning_circle"> ++ <line class='highway-core highway-unclassified-core-turning-circle' /> ++</rule> This didn't work as intended; instead it made the whole road (not just node) larger and gray (#777777). Can you help me with the proper syntax / rule? Thanks. Alexrudd 02:59, 21 February 2008 (UTC) - Take a look at how mini-roundabouts are done. Turning circles should be very similar. 80n 10:55, 22 February 2008 (UTC) Abstention on Wreck Proposal I have an interesting situation with the wreck proposal. It has 8 votes approving and 1 abstention (from you). To be approved, it requires 15 votes total with a majority or 6 unanimous approving votes. Arguably, your abstention will keep the voting open until I get more votes because it is not unanimous. If you do not have any concerns with the proposal, could you move your abstention to being a comment? That would save me time. Otherwise I will carry on collecting votes. Regards --TimSC 09:21, 17 March 2008 (UTC) Woking Mapping Party invite Hi Etienne, Big thanks for the invite! I'd love to be there, unfortunately I'm leaving that weekend for a three month cycle tour of Norway with the intention of mapping the scenic bits above the Arctic Circle for OSM. Appreciate you are really busy but if you know anyone from the project who's been working on Norway or Sweden I'd be very grateful for an introduction via email to them. No worries if you can't. I'm kicking myself for booking this trip on a weekend when you are less that five miles away from here :-( JerryW 03:20, 17 May 2008 (BST) I can probably help with Osmarender text labels I read on Things_To_Do#Task:_Getting_Text_Right That there's a need to know how big rendered text will be. I could write a script that would generate statistics on the sizes of characters and strings is various fonts, and then work out an algorithm that could accurately guess the size of strings once rendered in inkscape. I'd probably happily go that far, but I'm not interested at this point in working directly with the Osmarender source at this time. Is this immediately useful to you? If so I have a few questions: 1) what fonts are used? (Probably best if at some point I have the actual SVG/CSS code that Osmarender sends to inkscape.) 2) What character set should I use? This could come later I suppose, I'll probably use a subset of ascii for testing more quickly anyway. 3) I'm hoping that the final output of my scripts will be a file containing a function like get_rendered_size(text, font) which you can include and call from Osmarender. What language should it be in? Hope to hear from you soon, JasonWoof 14:54, 22 May 2008 (UTC) Connecting Wikipedia and OSM via OSMXAPI Hello, I sent you an email but perhaps the mail was blocking by spam-filter or so. At german Wikipedia I'm working on projects like Wikipedia-World: So the geocoding of point objects is really successfully at Wikipedia. But we have a problem with longer objects like streets, rivers, railway and so on. Thats the point where we want to link to single OSM objects. I think both projects can pofit from this plan and it's better than to collecting the same datas in wikipedia again. User:Dschwen and I created the: which creates a query to OSMXAPI, put the answer into an OSM-to-KML XSLT-processor, zip it to KMZ, cache it, and put it into GoogleMaps or Google Earth. The script is not in a final version, but it's running for paths. As result we can see an object like river Havel clearly in the Browser: So it could be a good completion to: But it can be also goog to find gaps in the osm-datas. So now my question: It seems that we have a problem with the connection limitation per user of the OSMXAPI. We want to make many but relative small queries. So could you raise the limit for Toolserver of the wikimedia were we work: I believe the IP is 91.198.174.194 for our server Hemlock. The next big problem is that it seems that we we can't query objects with a whitespace in there name. So I try to get "Prager Straße" in Dresden but don't get it. An important point for wikipedia is that we should try to get a long-time support for the parameters we would write in many articles. This shouldn't change too often. A other way which could also reduce the traffic volume by factor 10 would be to move our scripts to our server on informationfreeway.org . But then we would like to get an account to caring this scripts in future. The source code you can find here: I hope you can help us. --Kolossos 07:17, 29 May 2008 (UTC) OSMhack Hello, I want to go online with my script. Therefore I write a little bit documentation: Query-to-map There, I also describe the additional OSMhack script which I want to integrate in Template:Place I hope your API/your server is ready for this load. Is it? To avoid this load-peak I wouldn't write to mailing list or so. In the moment the performance seems really good. At the toolserver we get the last days new hardware so the load there seems to be no problem in the moment. --Kolossos 07:09, 27 June 2008 (UTC) XAPI: gz-compressed output Hey George. Can you tell me, whether gz-compression for the XAPI service is supposed to work? I have got a ruby script called OSM-Wolf (available via Bitbucke/Mercurial) running here, which fetches data from the XAPI service for analyzing. Although the author plausibly assured me the script would support gz-compressed XAPI data, the XAPI server (hypercube) seems to send uncompressed data, even if gz-compression is explicitly requested. Best Regards, --Claas A. 20:07, 2 August 2009 (UTC) XAPI Hello George, I found out that you are the "Maintainer of XAPI" :-) The only thing about the OSM XAPI Server I found out, is "Currently serving data as at 0.5 cut-off. 0.6 service will start shortly." In the "platform status". Do you know, when this service will be available again? osmxapi.hypercube.telascience.org seems the last server and today I don't get a connetion. Thx Softeis 14:18, 5 November 2009 (UTC) XAPI - bounding box with relations does not work correctly With the following request I get data from the hole world. please try:[bbox=9.5,52,9.6,52.1][type=restriction] --Langläufer 22:51, 11 December 2009 (UTC) Now it is compleetly wrong. I mostly get none of the requested objects. () In the sample I request the [bbox=9.5,52,9.6,52.1]. The result is much smaller than before (75KB instead of 2.8MB), but mostly the longitudes are far from being between 9.5 and 9.6 and restrictions are not so large. e.g. lat='51.4277078' lon='-0.0115522' lat='56.2396279' lon='42.0909317' lat='52.0284373' lon='11.2171290' --Langläufer 11:38, 13 December 2009 (UTC) - The algorithm I was using was a bit crude. Draw a box that encloses the relation and then see if it overlaps with the requested bbox. It works fine for short ways, but doesn't work so well for relations which tend to span large areas. I've now implemented a better algorithm which looks at each node to see if any are in the bbox. This should work better for you. Let me know. 80n 19:18, 14 December 2009 (UTC) I tryed something new at. Here I also want to draw relations and ways with the piste:type=nordic tag. The result of the request: .../api/0.6/*[piste:type=nordic][bbox=...] did not contain all refered ways of the relations. Maybe the same error? Now I use two request, one for ways, and one for relations - it works fine - but with only one request I could save transfer volume. --Langläufer 10:07, 12 January 2010 (UTC) XAPI bugs Sorry that I misused the Platform status for my bug report. I hope here I'm right. I used this command: or. Both give me a file with only nodes which are used in different admin_level relations. After ca. 150 MB the download interrupts with the error message "<error>Query limit of 1000000 elements reached</error>". I expect to get a file with only relations (admin_level=2) and their members. I hope it is possible to fix this behaviour and if you need more information you can write me an email over my OSM accout, too. Thanks, Daswaldhorn 11:30, 5 December 2009 (UTC) XAPI Server Hi, As the Xapi Server's are not that realiable, I would like to setup an instence myself. Unfortunately, the source code isn not available anymore under the given links. Is there any chance to make them public again? Is there a howto: on setting up an XAPI server instance? best - They should be much more reliable now. Let me know if you have problems. 80n 07:25, 27 December 2009 (UTC) - Hi, thanks a lot. It would be great if you can provide me with the source code and a short description of how to get it run. best - How can I contact you? Where can I get XAPI sources? link on xapi.openstreetmap.org appears to be broken. thanks in advance. --Komяpa 18:45, 6 April 2010 (UTC) See platform status --Lulu-Ann 21:07, 6 April 2010 (UTC) XAPI status ? I noticed that isn't responding since ~2 days and I have the "impression" (sorry, can't be more specific) that more and more lags behind the OSM database. Is this just a subjective impression or could there be something wrong with the data replication from the OSM database? --Gubaer 22:32, 5 February 2010 (UTC) - Hypercube is down due to someone deleting the database. It'll be back in a few hours. Generally XAPI is up-to-date within 2 minutes of the main database. You can check the age of the data by looking at the xapi:planetDate attribute in the first line of the response. But I've just noticed that this only shows the date, it ought to show the time as well. 80n 13:50, 6 February 2010 (UTC) - In the past the time was there written, my last complete date was "xapi:planetDate='200911201903'". So I have a little hope, that this could be fixed in the future again. Who can I ask for this issue? Daswaldhorn 16:32, 13 February 2010 (UTC) Xapi output precision Hi, dunno if you monitor the Xapi:Talk page, if not fyi: - - XAPI setup Howto Hi, I'd like to setup a mirror for the XAPI. Is there a detailed howto for this ? If not, I'd be willing to write one. Thanks Vdb 08:57, 7 September 2010 (BST)
http://wiki.openstreetmap.org/wiki/User_talk:80n
CC-MAIN-2016-50
en
refinedweb
Automating Visual Studio 2008 Visual Studio Features we can’t live without Developing code with Visual Studio is a breeze compared to other Integrated Development Environments (IDE’s) due to the many features it provides: Efficient Navigation: Visual Studio’s Code Editor provides efficient means to navigate around large files (using collapsible code blocks), large projects (using Bookmarks) that contain large numbers of classes and/or properties (using Object Explorer) as well as solutions that contain numerous projects (using Solution Explorer). Syntax Coloring: Coloring the Code Editor’s text improves code comprehension, increasing productivity. It’s difficult to explain how useful this is – but it’s immediately apparent once you have to do without it when going back to a more primitive code editor. Background Compilation: In addition to syntax coloring, Visual Studio provides immediate syntax warnings and errors (as green and red wavy underlines) by compiling newly entered code in the background. Code Completion: Many other IDE’s provide some form of code completion – but none as complete and intuitive as IntelliSense. Not only does Visual Studio’s code completion system work for all the built in languages (C++, C#, VB.NET, and next year -- in 2010 -- F# and PHP as well) but also XML, CSS, XAML, and any other number of languages for which extensions for Visual Studio are available. Code Creation: It was named Visual Studio, rather than Code Studio, or Text Studio, because in addition to the traditional text based way of entering code, it also offers a Visual way of developing the code and metadata necessary to describe UI’s, schemas, classes and mappings. It does this by providing intuitive graphical designers for developing UI forms and controls for WinForm, WPF, Web, and Silverlight applications, making the process trivially easy by supporting drag and drop placement of standard and custom controls, separation of design layout code and logic code, with automatic event wiring between the two following the event driven programming model, support for databinding, as well as custom popup property editors and design tasks. In addition to UI designers, Visual Studio provider database schema designers (to graphically create and design database schemas as well as queries), a class designer (to graphically develop or document classes and their interactions), and a mapping designer (to graphically design the mapping between database schemas and the code entities that encapsulate the data). Code Commenting: Anybody who has had to pick up and continue the work of others – or had to go back to their own code after six months -- knows the importance of documentation. Visual Studio’s use of XML based comment tags was a fantastic addition to our arsenal. Code Reuse: Although not a great fan of code snippets (saved templates for repetitive code) because I believe that the increased effort required to create and maintain them introduces code latency (bugs found and fixed in production code are not often reapplied to the original templates as well), Visual Studio offers a built in code snippet creation and management solution. Code Refactoring: Beginning with VS2005, several refactoring tools are built directly into the IDE, which are accessible from the Code Editor’s context menu, or the main menu’s Refactor menu (which is only visible when the Code Editor has focus). The refactoring tools offered are sparse, but helpful: o Extract Method: Defines a new method, based on a selection of code statements. o Encapsulate Field: Turns a public field into a private field, wrapping it with a public property. o Extract Interface: Defines a new interface based on the current class’ public properties and methods. o Reorder Parameters: Provides a way to reorder member parameters, as well as the arguments of all references to it throughout a project. o Remove Parameters: Removes a member’s parameter, and the argument from of all references to it throughout a project. o Rename: This allows you to rename a code token (method, field, etc.), and all references to it throughout a project. o Promote Local Variable to Parameter: Moves a local variable to the parameter set of the defining method. Visual Studio Automation In addition to the above mentioned features, Visual Studio’s IDE is also completely automatable, in much the same way as any Microsoft Office product. Automation is achieved by using the DTE (design-time environment) object model to manage the following: o Window objects (used to close, rearrange, or otherwise manipulate open windows) o Document objects (used to edit text) o Solution and project objects (used to manage the underlying files and project collection) o Code-manipulation objects (used to analyze your project’s code constructs) o Tool-window objects (used to configure the IDE’s interface) o Debugging objects (used for tasks such as creating breakpoints and halting execution) o Event objects (used to react to IDE events) o Automation can be achieved with full blown Extensions for Visual Studio, or in a more ad-hoc way, using Macros. Extensions The exposure of the DTE means that in addition to Visual Studio’s built in tools and features (such as Code Snippets and Refactoring tools), one can download and install a whole selection of extension tools. Their quality range from the truly mediocre (and worse) to the very useful. Note: In many ways, this is Microsoft’s distribution strategy: they provide a fantastic extensible shell, with enough features to be better than their competitors, but still leave as much space for 3rd parties to develop extensions to provide additional value. This can sometimes be a little infuriating, as one would sometimes like to see them add features we think of as essential, but also understandable, from their business point of view. Two non-free extensions that are constantly highly rated by end users are Resharper (R#) v4.1 and CodeRush. Resharper takes refactoring to a completely higher level, and CodeRush provides a more seamless means of creating, managing, and using code templates. Note: Although not an extension, if you are satisfied with the code snippet solution provided in Visual Studio 2008, and not interested in using CodeRush, you may be interested in Snippet Editor. Another extension that is very highly appreciated -- and happens to be free -- is Roland Weigelt's GhostDoc, which improves the Code Commenting capabilities of Visual Studio, by automatically documenting your code with text heuristically developed from the name and type of the Property itself, producing output similar to the following: #region Properties private Uri _ImagePath; /// /// Gets or sets the image path. /// /// The image path. public Uri ImagePath { get { return _ImagePath; } set { _ImagePath = value; } } #endregion Even if GhostDoc can easily be abused to make absolutely useless documentation (garbage in / garbage out) – it can save an enormous amount of time under certain circumstances. Note: At the very least it ultimately leads to developers choosing clearer names for their properties and methods – if only to nudge GhostDoc in the right direction, which adds clarity to code in general. Macros Finally, even though there are many free and not so free extensions available for Visual Studio, the results they produce may not be to your liking. Sometimes the task you wish to achieve is specific to your work environment, and therefore you never will find an extension that will do it. You could develop your own extension to fulfill your needs, but the development time required to create a full blown extension is not trivial. In many cases the creation of a custom macro is a more appropriate solution. A Macro to decorate Control Properties with ComponentModel Attributes An example of an automatable task that would not be worth the trouble of developing a complete extension, but could appropriately be addressed by a macro is the decoration of control properties with ComponentModel attributes. For example, in the above code example, the ImagePath property, if part of a Control, needs to be decorated with at least one or two attributes: o System.ComponentModel.CategoryAttribute o System.ComponentModel.DescriptionAttribute and potentially, with: o System.ComponentModel.BrowsableAttribute (only required if the public property should be hidden from the Properties Window, since its default value is true anyway) In addition to the above attributes, the following should also be considered (they will depend to some degree on whether the control is for intended for use in WinForms or WebForms): o System.ComponentModel.EditorAttribute o System.ComponentModel.DesignerSerializationVisibilityAttribute o System.Web.UI.PersistenceModeAttribute Note:To keep things simple, we won’t be addressing these last 3 attributes in this demonstration. The final result could look something like: /// /// Gets or sets the image path. /// /// The image path. [ Category("Appearance"), Description("Gets or sets the image path."), DefaultValue(null), ] public Uri ImagePath { get; set; } Leaving this to be done by hand is error prone at best, and the kind of task that an automation macro is perfect for. Let’s create one. Launching the Macro IDE The process of creating a macro for Visual Studio, will be familiar to anyone who has created macros for Microsoft’s Office products (Excel, Word, Outlook, etc.). The Visual Studio’s Macro Editor is launched either by pressing Alt-F11, or selecting Tools/Macros/Macro IDE… from Visual Studio’s menu: For this example, we’re start by creating a new Module file to contain our public subroutine, and call it XSS_AttributeManagement: Macros are simply Public Parameter less Methods within Modules For your macro to be later callable from a Visual Studio button or Shortcut key, it has to be defined as Public and a Sub that takes no arguments, so we’ll create in our newly created Module, a Sub called AutoAttributeControlProperty, and as long as its parameter less, and public, it will immediately become visible in Visual Studio’s Macro explorer: The Goal of the Macro Breaking down the functionality we need to achieve, we need a macro that does basically the following: o Figures out what Document is currently being shown in the Code Explorer (ie, the DTE.ActiveDocument) o Figure out the CodeElement the cursor is over o See if the CodeElement can be typed as a CodeProperty (rather than a CodeVariable or CodeFunction, or white space within a CodeClass) before continuing. o Figuring out where the start of the CodeProperty is, and create an EditPoint from it. o Insert new Attributes at the new EditPoint. The following annotated code does this: Imports System Imports EnvDTE Imports EnvDTE80 Imports EnvDTE90 Imports System.Diagnostics 'Add ref to System.Xml Namespace 'after having added System.XML.dll 'as a Reference to 'MyMacros': Imports System.Xml Public Module XSS_AttributeManagement 'Macro to add Attributes to properties for IDE integration. Sub AutoAttributeControlProperties() Dim fileCodeModel As FileCodeModel = _ DTE.ActiveDocument.ProjectItem.FileCodeModel ' Get the current selection, within the ' current document: Dim selection As TextSelection = _ DTE.ActiveDocument.Selection ' From the selection, get the active point ' which is the current cursor location: Dim point As TextPoint = selection.ActivePoint ' Try to get the code element -- of type Property -- ' at the current location: Dim codeElement As CodeElement Try codeElement = _ fileCodeModel.CodeElementFromPoint( _ point, vsCMElement.vsCMElementProperty) Catch ex As Exception 'CodeElementFromPoint is a bit rude, and 'throws an exception if it doesn't work... codeElement = Nothing End Try 'If it's null...then we're not on a Property: If (codeElement Is Nothing) Then MsgBox( _ String.Format( _ "Place cursor on a Property before running this macro ('{0}').", _ System.Reflection.MethodBase.GetCurrentMethod().Name), _ MsgBoxStyle.Exclamation) Return End If 'Good... 'The element we are on is a Property, 'so let's type it for easier subsequent use: Dim codeProperty As CodeProperty = codeElement 'From the property, extract its name for later use: Dim propName As String = codeProperty.Name 'For comparison purposes only, let's lowercase it: Dim propNameLowered = propName.ToLower() 'Right... 'Let's start trying to fill in the bits... 'If all goes well, we want a Category and Description 'for the property: Dim category As String = String.Empty Dim description As String = String.Empty 'Note to self... 'Is there a Switch/Case in VB.NET ? If (propNameLowered.StartsWith("border")) Then category = "Appearance" ElseIf (propNameLowered.StartsWith("background")) Then category = "Appearance" ElseIf (propNameLowered.EndsWith("width")) Then category = "Appearance" ElseIf (propNameLowered.Contains("color")) Then category = "Appearance" ElseIf (propNameLowered.EndsWith("height")) Then category = "Appearance" ElseIf (propNameLowered.Contains("image")) Then category = "Appearance" ElseIf (propNameLowered.EndsWith("icon")) Then category = "Appearance" ElseIf (propNameLowered.StartsWith("show")) Then category = "Appearance" ElseIf (propNameLowered.Contains("style")) Then category = "Appearance" ElseIf (propNameLowered.StartsWith("allow")) Then category = "Behavior" ElseIf (propNameLowered.StartsWith("use")) Then category = "Behavior" ElseIf (propNameLowered.EndsWith("data")) Then category = "Data" End If 'Right... 'should have a piece of text suitable for a CategoryAttribute 'But don't yet have a suitable piece of text for a decent 'DescriptionAttribute... 'One place I would look for one is try to reuse the description 'that was already put on the property by the programmer... 'Did the Property have a comment already associated to it? If (codeProperty.DocComment <> String.Empty) Then 'Yes...the Property has an associated comment... 'so, theoretically, it should 'be an Xml document that can be parsed... 'So load it up! Dim xmlDoc As System.Xml.XmlDocument = _ New System.Xml.XmlDocument() xmlDoc.LoadXml(codeProperty.DocComment) 'And look for a node called summary: Dim xmlNode As System.Xml.XmlNode = _ xmlDoc.SelectSingleNode("//summary") If Not (xmlNode Is Nothing) Then 'If there was a node, then we want the inner text 'we don't want any para tags or other stuff 'in the DescriptionAttribute: description = xmlNode.InnerText 'And we don't want newlines that will cause havoc 'with the built in Xml documentation: description = _ description.Replace(System.Environment.NewLine, "") End If End If 'Where are we? 'We have a Category and we have a Documentation '(although both could be string.Empty...) 'Let's inject it in... Dim indentLevel As Integer = 2 'We want to insert attributes just above 'the code description, but under any 'code comments that may or may not 'already be there... Dim insertPoint As EditPoint = _ codeProperty.StartPoint.CreateEditPoint() 'insertPoint.LineUp(1) insertPoint.Insert("[") insertPoint.Insert(System.Environment.NewLine) insertPoint.Indent(Nothing, indentLevel) insertPoint.Insert("#if (!CE) && (!PocketPC) && (!pocketPC) && (!WindowsCE)") insertPoint.Insert(System.Environment.NewLine) insertPoint.Indent(Nothing, indentLevel) insertPoint.Insert("// Attributes not available in Compact NET") insertPoint.Insert(" (cf: DesignTimeAttributes.xmta)") insertPoint.Insert(System.Environment.NewLine) insertPoint.Indent(Nothing, indentLevel) insertPoint.Insert("System.ComponentModel.Browsable(true),") insertPoint.Insert(System.Environment.NewLine) insertPoint.Indent(Nothing, indentLevel) insertPoint.Insert(String.Format("System.ComponentModel.Category(""{0}""),", category)) insertPoint.Insert(System.Environment.NewLine) insertPoint.Indent(Nothing, indentLevel) insertPoint.Insert(String.Format("System.ComponentModel.Description(""{0}""),", description)) insertPoint.Insert(System.Environment.NewLine) insertPoint.Indent(Nothing, indentLevel) insertPoint.Insert("#endif") insertPoint.Insert(System.Environment.NewLine) insertPoint.Indent(Nothing, indentLevel) insertPoint.Insert("System.ComponentModel.DefaultValue(null), //TODO:Set this.") insertPoint.Insert(System.Environment.NewLine) insertPoint.Indent(Nothing, indentLevel) insertPoint.Insert("]") insertPoint.Insert(System.Environment.NewLine) insertPoint.Indent(Nothing, indentLevel) End Sub End Module You’ll notice that the code references the System.Xml namespace to extract the property’s description – but the System.XML.dll assembly is not automatically referenced by the Macro environment, so you first will have to add the reference to your project before it will be useable: Taking your Macro out for a first spin Once that has been accomplished, your module should compile, and you’re ready to take it for a test run. Return to your User Control’s definition in Visual Studio proper, click anywhere within the Property we intend to decorate, and invoke the Macro by clicking the macro’s name in the Macro Explorer: If all works out as intended, it analyses the code under the cursor, and fills in the missing attributes. #region Properties /// /// Gets or sets the image path. /// /// The image path. [ #if (!CE) && (!PocketPC) && (!pocketPC) && (!WindowsCE) // Attributes not available in Compact NET (cf: DesignTimeAttributes.xmta) System.ComponentModel.Browsable(true), System.ComponentModel.Category("Appearance"), System.ComponentModel.Description("Gets or sets the image path."), #endif System.ComponentModel.DefaultValue(null), //TODO:Set this. ] public Uri ImagePath { get; set; } #endregion Making it easier to invoke the Macro: creating a custom Toolbar Button When you have debugged and tweaked the macro to your satisfaction, it’s probable you will wish to have it available for use in the future. Although the Macro Explorer is perfectly suitable, you may prefer a faster way to invoke the macro, such as creating a button for it. o select from the menu Tools, then Customize. o Pick the Command Tab from the Customize dialog that comes up. o Select Macros from the Categories list, and the macro you just created from the Commands list: o Drag and drop the macro on to any menu bar: o Right-click the new button you’ve just created, and use the context menu to change the button’s name and icon: o Close the Customize dialog, to see the results: Click your new button while the cursor is within a Property in the Code Editor…presto! Better Yet – Hook it up as a Keyboard Shortcut Buttons are fun…but not as practical as keyboard shortcuts. To hook your Macro up to a keyboard shortcut, do the following: o Select Tools, Options o In the Options dialog that pops up, expand the Environment, and select Keyboard o In the text box labelled Show commands containing, start typing the word ‘macro’, then the name of your macro, the macro becomes highlighted in the list below. Select it. o Update the value within the Use new shortcut in dropdown to state Text Editor. o Within the Press shortcut keys press a key combination. I chose Ctrl-Shift-Alt-D, as it was very similar to GhostDoc’s Ctrl-Shift-D: o Hit Ok, and give it a spin. Where to go from here The above Sub macro was intentionally kept simple in order to introduce the concept of macro development. One can add more complex logic to add the other attributes mentioned, namely System.ComponentModel.EditorAttribute, System.ComponentModel.DesignerSerializationVisibilityAttribute and System.Web.UI.PersistenceModeAttribute. There are many macro samples online to help you get where ever you wish to go, including some examples at my website you may find useful getting started. Links: Visual Studio 2008 Visual Studio Features: o o Refactoring C# Code Using Visual Studio 2005 Visual Studio 3rd Party Language Packages: o PHP for Visual Studio: Phalanger, PHP for .NET: Introduction for .NET developers Visual Studio Extensions: o Visual Studio Gallery of Extensions for Visual Studio o Visual Studio Add-Ins Every Developer Should Download Now Other Editors: o Emonics o Development Resources: o My blog’s Visual Studio Macro posts o Automation and Extensibility Reference o EnvDTE Namespace Attributes for properties in Control development: o System.ComponentModel.CategoryAttribute o System.ComponentModel.DescriptionAttribute o System.ComponentModel.BrowsableAttribute o System.ComponentModel.EditorAttribute o System.ComponentModel.DesignerSerializationVisibilityAttribute o System.Web.UI.PersistenceModeAttribute Windows Azure Table Storage - Getting Started - Part 2 Comment by kjward, on 29-Sep-2009 09:47 thanks lots for all the great info. i'm sure the answer to my question is in there somewhere, but please bear with me while i expose my general ignorance... we would like to create an app with a list of general classes (data access, etc.) that have been stored in our source control library that a developer can then choose to include in a new project baseline. once their selection is complete, the app then generates the visual studio project code and opens visual studio with the baseline project ready to edit. how to do this? thanks in advance Comment by hockmanli, on 2-Apr-2009 20:30 Good posting! more professional web templates at itemplatez.com... its a easy download.
http://www.geekzone.co.nz/vs2008/6324
CC-MAIN-2016-50
en
refinedweb
The advent of XML has highlighted the importance of namespaces in the data domain. Namespaces also play an important role in C++, specifically in compartmentalizing code. They’re easy to use and they provide numerous benefits such as readability and improved modularity. C++ provides excellent support for handling exceptions. Namespace code improves the structure of C++ programs, whereas exception handling improves the associated dynamic behavior. Both are useful additions to your C++ arsenal. Companion Code As usual in my articles, the code can be downloaded (see the end of this article for the link). The code consists of two files that illustrate the namespace text: - EventHandler.cpp - EventHandler.h It also includes one file that illustrates C++ exception handling: - SomeExceptions.cpp If you’re using Visual C++, feel free to create two Win32 Console applications, drop the first two files into the first workspace and build. Then do the same for the second file (SomeExceptions.cpp). The code supplied should compile and run.
http://www.informit.com/articles/article.aspx?p=445732&amp;seqNum=6
CC-MAIN-2016-50
en
refinedweb
To read all comments associated with this story, please click here. IE8 does, of course, not support SVG. And actually I don't think that SVG support will arrive in the foreseeable future. SVG in combination with JavaScript and other AJAX stuff can -- at least to a certain degree -- replace Flash and Silverlight. MS has no interest in supporting anything that's a threat to Silverlight. ie8 does, of course support SVG . but it's (microsoft style) not supporting xhtml as it should. for some reason they did not make a mac version so i have trouble testing it. this can make my live a lot easier if i have to make graphs and stuff. but it should be possible to embed svg in a cross browser and standard way (but i'm scared it "just does not work"). for some info about namespaces in xhtml/html/xml hybrid bastard pages :... Yet, SVG is - besides of advanced CSS etc. - one such good open web standard that MS is now supposedly more willing to support in its browsers? Or not? Just cheap empty marketing talk? The lack of SVG support is one, maybe small, but very good example of the reasons why I don't usually like or use Microsoft software practically at all anymore. Their main business goal does not seem to be serving people or following fair play rules like open standards, but instead, only making as much money as possible and hindering competitors. The whole OOXML vs. ODF file format debate is another example of the same, already all too well-known MS business behavior. But maybe, just maybe, there are now some signs of MS finally learning its lessons and becoming more of a cooperative team player instead of a greedy monopolist only? At least they talk about the importance of open standards more nowadays. Let's hope that it is not just cheap talk aimed at fooling people. However, I'm afraid it will take some time and effort from Microsoft before people will learn to trust its willingness to support and follow open standards. And if Microsoft won't learn the importance of open standards, they will just continue making enemies and losing customers, maybe slowly but gradually anyway. Edited 2008-03-06 00:10 UTC XHTML 1.1 Strict? There's only the XHTML 1.1, and no, IE8 doesn't support the application/xhtml+xml mime type. Internet Explorer is free for download and it can be freely ran on any system capable of running WINE, so cut the crap. Not with "my" IE8. I am not able to scroll the page neither with arrow keys nor mouse wheel. It should be allowed to do that. You have IE7 running in WINE successfully? Do post instructions on how you managed that, since the IEs4Linux crew have never really gotten past IE6 (they've managed to create an IE6 hybrid that uses some of IE7's rendering engine, but that's it). Member since: 2007-04-29 does it support xhtml 1.1 strict? that way i can finally serve properly formatted xml, and embed svg. which means i can draw lines (finally! after 20 years i can draw lines in my webpage)
http://www.osnews.com/thread?303552
CC-MAIN-2016-50
en
refinedweb
Feedback Getting Started Discussions Site operation discussions Recent Posts (new topic) Departments Courses Research Papers Design Docs Quotations Genealogical Diagrams Archives Programming languages are ultimately interfaces for people to interact with the world of computational possibilities. In that backdrop, I've recently been interested in the influence of cognitive models on programming language design and would like to hear the thoughts of the tU community on the topic. I think the recent rise of DSLs warrants more research in this area. Past work has been largely populated by debates on completeness, compiling vs. interpreting, efficiency, dynamic vs. static, typed vs. untyped, parallel vs. sequential, distribution, designer idiosyncrasies and, not to mention, elegant vs. ugly. While I don't deny the importance of the debates, the only one that comes anywhere near the territory of cognitive models is this notion of "elegance", but that's (as far as I know) never dealt with formally. I see a few areas of work here - (Maybe others strike you.) For an example of (1), Drescher's schema building mechanism seems a very interesting angle to building interfaces to the world of computation. (see Made-up Minds) .. and it does look like people are trying to apply that approach to special areas. Production rules (grammars) are another category. Of course, one can't forget Prolog. AppleScript and Hypercard are simple examples of (2), 'cos they were intended to be usable by non-computer science folk and yet are powerful enough scripting languages. Elements of the language such as "tell ...", and "it" exploit our ability to refer to "at hand" and relative entities without much effort. The Logo world and, particularly, Scratch must be mentioned too. I myself got interested in this topic after I worked on adding "the" and "it" to my own scheme toy and found some cognitive modeling results such as working memory size useful in determining usage boundaries. (3) is interesting as part of the design iteration loop. Thots? --- Link summary (extracted from posts) - While programming may be a mental exercise, it is rarely an exercise completed by individual programmers on any non-trivial project. More important, I suspect, is designing to integrate the efforts of, say, ten-thousand programmers working on hundreds of projects that integrate in ad-hoc manners. When dealing with large groups of programmers and large codebases, it shouldn't be an error to model individual programmers as having narrowly scoped goals, very limited foresight and knowledge of the codebase, and a severe distaste for reading or modifying existing code except as necessary to tweak it for their own work. What language features would be useful in that hugely scaled environment? That's the question I think we should be answering. Because, whether we or our languages admit to it or not, that's already the environment in which most of us are working. I'm laboring on my own answer to the idea of a language scalable to support tens of thousands of concurrent programmers with interdependent projects, and I won't go too much into it here (don't want to sound like a brochure), but I think language and development environment design would do well to learn from existing large-scale projects like Wikipedia. While programming may be a mental exercise, it is rarely an exercise completed by individual programmers on any non-trivial project This is most definitely false. Programming is often performed in small groups of one or a few people, e.g., they are writing entire compilers, kernels, games, and so on mostly by themselves. A lot of non-trivial programs and libraries out there were written by one or a few people. If you want evidence, just look at most successful OOS projects, before they were popular, one or two people were behind them. In this case, programming is the mental exercise that dominates, and process is to the side (though still important). What you are talking about is software engineering, which tries to scale programming to larger projects and larger groups of people. In this case, process dominates, and programming becomes less important. Of course, when less effort is devoted to programming, less gets done, but what else can you do on large projects? I would argue that MOST of the programming performed is done by very small groups (often just one person), and therefore, studying programming in isolation of larger-scale software engineering is useful. Of course, software engineering is also important and useful to study. But they are really different things... A lot of non-trivial programs and libraries out there were written by one or a few people.. When these libraries, operating systems, and standards are upgraded, it influences the projects that utilize them. When you're looking at programming in the large, it is almost never performed by individuals. Projects are interdependent even when individual programmers have narrowly scoped goals. Programming is often performed in small groups of one or a few people, e.g., they are writing entire compilers, kernels, games, and so on mostly by themselves. If you want evidence, just look at most successful OOS projects, before they were popular, one or two people were behind them. I agree that successful projects often begin their existence written by individuals. But you won't find many successful kernels or games that never have more than a few people laboring on all the different pieces of them. What you are talking about is software engineering, which tries to scale programming to larger projects and larger groups of people. Well... not really. I don't really believe software engineering is about scale so much as it is about achieving predictable results (in terms of completion, quality, budget, etc.). That concerns of scale get involved with software engineering at all is just a side-effect of realities surrounding all (or nearly all) non-trivial programming projects.. Sometimes the little things matter, such as: Is source code named in hierarchical niches, or is it in a flat namespace? How confident can you feel in someone else's work and can you easily fix or upgrade it if necessary? How are the risks of malicious code injection controlled (security, etc)? Can you trust the optimizer, or will you need to do xyzzy yourself to do it right? If someone else has done what you need, can it be slightly tweaked to make it do what you need with minimal invasive change to the code? How 'fragile' are the language components under refactoring, feature addition, and forward compatibility? The design decisions we make when designing a language can be targeted towards individuals or towards groups, and it is certainly possible to achieve a balance of the two. Heavy support for extending the language with EBNF macros, for example, is something that favors individuals in the small scale, but groups could take advantage of it by standardizing and agreeing upon a few DSLs for, say, describing interactive scenegraphs for an object browser (e.g. something akin to Inform language). I would argue that MOST of the programming performed is done by very small groups (often just one person), and therefore, studying programming in isolation of larger-scale software engineering is useful. I would argue that MOST of programming involves combining stuff written by other people and, therefore, studying programming with the idea of mashups and project composition firmly in mind is even more useful. I'm not talking about disabling the individual programmer. The sort of social engineering decisions above would effectively allow individual programmers to be highly productive in spite of having narrowly scoped goals, limited foresight and knowledge of the codebase, and a distaste for learning more. Indeed, such individuals may 'feel' they completed useful projects on their own... but that would require the (erroneous) perspective that the libraries, systems, standards, and protocols they utilized were given to them by mother nature. When these libraries, operating systems, and standards are upgraded, it influences the projects that utilize them. When you're looking at programming in the large, it is almost never performed by individuals. Projects are interdependent even when individual programmers have narrowly scoped goals. Most programming tasks aren't so glamorous. You produce something that someone will never reuse, just for some task at hand. When you look at programming in terms of population and tasks, I'm sure that more solitary programming dominates. Even when you look at most programs out there, they were written initially by one or two people: Linux, Emacs, OmniGraffle, compilers for Ruby, Python, even Javac was written by a small team. There are very few projects out there with more than 10 programmers (say Windows) and when projects beyond a couple of programmers grow process dominates, and actually most of the people involved in the project aren't contributing much code. I agree that successful projects often begin their existence written by individuals. But you won't find many successful kernels or games that never have more than a few people laboring on all the different pieces of them. Yes, plenty. World of Goo comes to mind as a game, OmniGraffle comes to mind as an office-style application, kernels are a bit more difficult if you include drivers, but if you are just talking about the core kernel, I don't think the team of direct contributors to Linux is very big. Anyways, go through the applications that you use daily and I'm sure you'll find lots of examples.. Fair enough. My counter-argument is that you might miss because the problems with most large-scale projects don't occur at the programming level, rather they suffer from problems in project management, design requirements, communication, integration, and so on...I don't see how you can shove those things in a programming language, but if you could, that would be great. I would argue that MOST of programming involves combining stuff written by other people and, therefore, studying programming with the idea of mashups and project composition firmly in mind is even more useful. Ah, we all agree reuse and components are good. But the problem you are talking about is much harder: how do you get multiple programmers to work closely together on a project. Its easy enough to produce a library when the line of communication, coordination, collaboration between the producer and user is almost nil: the library better be well designed or the user will just skip it. There isn't much of a social process there. Anyways, I see what you are saying and wish you luck in looking at this. I'm just arguing against your premises, half the battle in research is just figuring out what the problem really is. Most programming tasks aren't so glamorous. You produce something that someone will never reuse, just for some task at hand. Agreed. But it is exactly in those situations where 'you' become the person reusing other things in order to accomplish the task at hand. Integrating components written by other people is a non-trivial, and ultimately social, task. By no means was World of Goo developed by just three people... not when you include the components, documentation, service, integration, etc. associated with use of Simple DirectMedial Layer, Open Dynamics Engine, XML and bug trackers and version control, font generation, etc. problems with most large-scale projects don't occur at the programming level, rather they suffer from problems in project management, design requirements, communication, integration, and so on... Well, I wouldn't say that problems with large-scale projects don't occur at the programming level. And I'd certainly consider 'integration' to be a programming-level problem. But I readily agree that solving just programming level problems would be far from sufficient to solve all problems. For technology-based support of software engineering, I think the better approach is to focus on enhancing features of the integrated development environment (including source version control, bug-tracking, feature-requests/user-stories, configuration management, quality assurance and testing, etc.). The enhancements or properties needed often don't make sense in the language proper, excepting that it'd be nice if all the IDE features were available in libraries. In the broader sense, more than just project management could be managed by such IDEs. One could support an economic model with technology as well... e.g. allowing people to pool or auction rewards and incentives towards achieving new features or greater optimizations and integration of open source technologies, and offering clouds or grids in which one can run active services (e.g. video game servers) at competitive rates. But the problem you are talking about is much harder: how do you get multiple programmers to work closely together on a project. That's not the problem I'm concerned with. Small cliques of programmers will form naturally without any special language or IDE design efforts. Besides, if I wanted to 'force' people to 'work closely' with one another, I'd actually have a reverse policy... similar to how locking files in a version control system can 'force' you to go meet people you'd otherwise rarely go see, simply because they forgot to unlock a file. Rather, the important question is a slight (but significant) variation: how do I support integration and enhancement in a system where individuals and small cliques are mostly acting on their own individual interests? I.e. what I want is cooperation as an emergent behavior, not as a required one. I do have answers (maybe even correct ones) to this question, both for integrating and enhancing active services (which supports closed source services) and supporting cross-project integration, enhancement, and refactoring work in an open source arena. While I don't assume malice is common (I couldn't find answers for a largely byzantine environment) I also have answers for many relevant security, resource management, and privacy concerns. half the battle in research is just figuring out what the problem really is Indeed. I've been working on it for seven years now. I can't say with certainty I have the problem right, but I can say I've given it a lot of thought, observation, hypothesis and testing. I'm reasonably confident I have the problem right, and that the problem is not in any significant sense cognitive models in the small scale of individual programmers. "How well does the language support integrating stuff other people write?" is, I believe, a far more relevant question than "How well does the language support the individual mind?" Frameworks and libraries won't take you very far if it becomes a combinatorial hassle to put more than two of them together (e.g. due to mutex management, diverse memory management, safety issues, inability to tweak them a bit them prior to integration, etc.). what I want is cooperation as an emergent behavior It sounds like you're talking about cooperation in the positive sense - that people accommodate each other, work with each other, etc. It might be worth expanding that to *all* co-operation to include disagreements, fist fights, whatever. (a) "How well does the language support integrating stuff other people write?" is, I believe, a far more relevant question than (b) "How well does the language support the individual mind?" I've not thought much on this topic, but my gut feel is that the two aspects you mention are intertwined - in ways that we probably don't understand quite well yet. For example, it seems to make sense to ask the question "how does the language support *me* integrating stuff that I developed in the past?". Is that (a) or (b)? Design-wise, it probably has the same issues as (a), but cognition-wise it seems more like (b). It sounds like you're talking about cooperation in the positive sense - that people accommodate each other, work with each other, etc. It might be worth expanding that to *all* co-operation to include disagreements, fist fights, whatever. I would suggest that, as much as possible, the design of the language should be aimed towards positive cooperation and avoiding conflict. But I do agree that support for working in spite of disagreements and resolving disagreements should be included in the language and IDE design. the two aspects you mention are intertwined Likely, yes. But the degree to which two questions are intertwined or correlated is also the degree to which answering one question provides also an answer to the other. Thus, the more intertwined, the less any potential penalty for ignoring the other question.. If you integrate a library written by other people, your project has now been written by multiple people. If your project uses open standards and protocols designed by many people, your project has now been written by multiple people. and if I might add - you're probably using a programming language designed by other people :) And that language is also among those things that will change over time. To spoil an old joke, there's a great difference between "standing on the shoulders of giants" and "stepping on each others' toes". It seems that your discussion of "social modeling and engineering" is blurring the distinction between three modes of programming that I experience in different ways: 1) consumer - a "lone wolf" programmer who is (by definition) using languages, libraries, etc. created by others, but who is treating them as artifacts of nature simply to be used as they are, while she/he pursues a private agenda. The interaction pattern is read-only. The self-description might be, "If I find something useful, I'll use it; otherwise, I'll write it myself." 2) producer - a programmer (lone or otherwise) who is creating a language, library, etc. that will / may be used by others, but with an agenda not driven by their requirements/feedback. The interaction pattern is write-only. The self-description might be, "I'm writing this for my purposes. If someone else finds it useful, they're welcome to it; otherwise, they should look elsewhere." 3) collaborator - a programmer who is working concurrently and interactively with other programmers on a project shaped by their varying goals and contributions. The interaction pattern is iterative read/write. The self-description might be, "We're working together -- at least for the moment -- on something that won't happen without the accumulated contributions. We'll have to negotiate and compromise as we go." I'm not trying to propose a taxonomy, but looking to emphasize the difference between no-man-is-an-island generalities and "real" social and cooperative development, as a matter of the individual's orientation and attitude toward others. To switch metaphors, I may prepare my solitary breakfast (with the implicit recognition that other people existed, such as the farmer who grew the oats), but that's a radically different process from collectively preparing a family holiday dinner in real-time with a kitchen full of others. (And that, in turn, is different than operating a communal kitchen for a festival event with a population of thousands.) It would seem to me that there's an entire range of issues (packaging, portioning, utensil design, room layout, etc.) that will be different among these extremes. Are you suggesting that there are language design issues that benefit all of the cases of the "lone wolf" programmer, the small in-house development team, and the loosely-coupled open source project? language design issues that benefit all of the cases of the "lone wolf" programmer, the small in-house development team, and the loosely-coupled open source project? Well, every language design issue affects people driven by the various incentives you name, and given the range of language design decisions one should be extremely surprised if there weren't some language design decisions (and especially combinations thereof) that benefit all three. But we shouldn't restrict ourselves to design decisions that benefit all classes of work; it is more that we should find alternatives to combinations of design decisions for which there is justifiable reason to believe it might significantly hinder those operating under any particular incentive. Anyhow, consider a slight variation in the user stories: 1) consumer - a "lone wolf" programmer who is integrating frameworks, libraries, macro DSLs, etc. into a product (potentially intended to be a one-off product), and who is likely bumping into and resolving issues to make the integration possible. Sometimes, the lone wolf will discover it easier to fix issues in their source rather than work around them in his own (especially if said lone wolf bumps into the problems in more than one project), and so the lone wolf would often like the ability to commit framework/library tweaks (even for one-off products). For long term products, issues of maintenance further motivate the ability to push changes. This lone wolf also might have no particular concerns about sharing his/her product (i.e. it doesn't contain anything private) especially if it means (a) ability to work on it directly in a public wiki-style repository from any javascript-enabled browser from any network-enabled computer and if (b) there is a culture such that when other producers update the stuff they wrote (e.g. refactoring names, API, generalizing something, etc.) they'll fix the lone wolf's public code too. Pure self-interest motivates refactoring/tweaking/sharing. Language design decisions to aide the lone wolf includes especially support (a) for dependency injection, such that the lone wolf can modify a library to change an internal component based on an external provision, then set the default for said component to whatever the original value happened to be, (b) for injecting default parameters, such that the lone wolf can add a new parameter to a function without needing to modify other people's code. 2) producer - a programmer creates a library or framework for his own project, not particularly concerned about how others use, not particularly interested in reading or understanding or learning about existing projects that solve the same goal. Pattern is write-only, reinvention rather than reuse. We can presume the producer often doesn't care whether other people modify the product so long as said modifications don't cause runtime bloat, inefficiencies, or break anything in his own product. Producers concerned about forwards compatibility of their product among users benefit if they can make changes to, say, the API of their project then simply find all users in the public repository and push changes to them. This would be similar in concept to making a change to GNU Math and pushing the API update through to every user of GNU Math on all of SourceForge. Other people than the producer may also go through and fix such things (wiki-gnoming supported by unit tests?). Others will tweak for integration, modify, refactor away duplicated efforts (e.g. "yet another implementation of ackermann function" (as a highly contrived example)), etc. If concerned about toes being stepped upon, the producer may choose to keep a private version of the project in a private repository that inherits (in distributed version control fashion) from a public repository. With DVC this allows cherry-picking of changes made to his/her work, and allows pushing of updates he/she makes. If feeling lazy, the producer may simply allow others to pull from his private repository and cherry-pick/push changes to the public one themselves. Maintenance of a private repository is the common case today. The life of a producer is, thus, no worse than it is today, and is possibly much better... especially so if the library/framework/etc. utilizes other libraries, or if the producer isn't the sort of masochistic genius programmer who has and applies the foresight such that the resulting product is 'perfect'. As a producer, I would be more willing to have my code modified or references to external code added if the language provides whole-program and partial-evaluation optimizations such that tweaks like adding parameters to functions doesn't bloat the code for fixed parameters, and referencing other pages doesn't introduce need for more separately compiled components. (The benefits associated with separate compilation can readily be achieved by other vectors.) I'm also less likely to have problems if such changes can be guaranteed to not introduce concurrency (deadlock, safety) and security concerns. 3) collaborator - actively communicating programmers aiming to achieve a product that resolves cross-cutting concerns. Example products would be a robot command+control protocol, or a graphics library, or a common scene graph system for a virtual world interconnect. We can presume collaborators include consumers who each have motivations pushing for certain changes or additions, along with producers who actually know enough to really make the changes but lack the resources to try all of them. Distributed version control and public repositories help a great deal for collaborators that are willing to participate in it. All collaborators are aided if it is easy to branch the whole system, try a change, integrate it, and test the integration. and commit it back if it produces the desired effect and all the unit and integration tests pass. Consumer collaborators thus have an easier time operating as producers of the system, and producer collaborators have greater access to a testbed. If the public repository can include a class of 'tests' that are run every time dependencies change and alert the appropriate individuals of the failure, so much the better. Language design decisions that aid the collaborators include first-class support for service configurations, such that they can be abstracted, fired up and tested, instantiated with references too the real world to get real services running, etc. in addition to support for confinement so that such tests don't interact with the 'real' world, and support for overriding aspects of processes such that, for example, the state inside a process/actor can be replaced with a reference to a database allowing systematic observations and unit/integration testing. ------------- Of course, all this doesn't touch on other classes of users such as those with continuously running services, and those who wish to upgrade clients and services at runtime without touching the code with which said clients or services are written. There are language design decisions that help these guys a lot, too, including support for automatic distribution of code (so clients can put agents near services and vice versa), security, recognition of relative levels of distrust, and support for arbitrary distributed transactions (which allows ad-hoc composition of protocols and services without introducing race conditions). Here's where there is a lot of room for a curious researcher, in my opinion. You guys have a good back and forth about 'individuals', but wouldn't it be interesting to do a survey to see what sorts of social programming models prevail? You could look at how much code is written by how many people, and in groups, what the distribution is like. You could look at the influence (or rather, correlation) of languages on those numbers. On language scalability: I think the true measure of a language's "scalability" in most cases is the range it can handle, not the absolute high end. In other words, a language might be well suited to 10,000 person projects, but if it's no good for whipping up something quick, it has a limited range. I'd rather use a language that is good for a quick hack, and still performs ok at the huge project, even if not quite as well as the one that only works for huge projects. Good idea, but the caveat I would add is that looking at what large groups use is naturally going to select against obscure languages and models, even if they are perfectly well suited to large projects involving many developers. Similarly, PHP is a very popular language for large group projects, but I doubt it really has any real technical advantages in that setting... I was referring more to technical aspects rather than 'social' aspects such as how popular a language already is. Something like Java seems to be popular for large projects, and reasonably good at it, in that it enforces some barriers. It's not so good at smaller, nimbler things, and often requires more boilerplate just to get started doing something (at least that has been my experience). Something like Ruby, on the other hand, might be great for smaller projects, and still work ok for bigger ones, although of course if people start doing lots of "monkey patching" type activities, that could quickly sink a larger project in a hurry. When "whipping up something quick" or performing a "quick hack", is it not usually the case that you are, in actuality, composing and enhancing services built already by other people? If so, does that not require scalability of the language, such that hooking together services and frameworks and such to produce new ones can be performed without gotchas or considerable effort? If it takes a great deal of care or knowledge to combine services and frameworks safely, without deadlock, without error, and without sacrificing performance, then the language is not upwards scalable because composition of services will need to be "shallow" to be successful. Except in the case where one is building a new project using only the 'core' language features, "Quick hack" projects benefit primarily from the high-end form of scalability. And, while I do believe the ability to write up a quick functions and vocabulary from scratch is useful, I will happily sacrifice it in favor of shaping the language such that the path of least resistance encourages everyone to code in a manner that automatically supports safe, efficient, flexible, and comprehensible project composition. That said, I don't tolerate boiler-plate. The need for boiler-plate code greatly resists scalability. Definitely an interesting angle on the problem, though I wouldn't rule out the significance of individual programming effort as a "rare" thing. For one thing, I do a *lot* of "individual programming" :) .. but even there, my style and preferences have been recursively honed by my exposure to the work of the community. So, programmer development as a consequence of programmer-community interaction is probably the way to look at it. Vygotsky's social development theory is relevant I think. Saying you built your house all on your own doesn't give proper credit to those who paved the roads, built your truck, trimmed your lumber, and gave you some power tools. But, also relevantly, it doesn't give the necessary credit to how your needs, and those of others like you, shaped the very industry that gives you lumber and power tools. People who do "individual programming" are very, very rare. People who think they do "individual programming" are very, very common. What we call "individual" is usually a social dynamic of "implicit emergent cooperation". Distinguishing between these should be useful when making design decisions for IDEs and languages, especially with regards to sharing access to projects. Forgive my ignorance, but it seems this model relegates "individual programming" not to rarity but full extinction. Is there an example of someone, real or imaginary, who has constructed a program without the assistance of any human, that is, someone who is an "individual programmer" under the given model? Is there an example of someone, real or imaginary, who has constructed a program without the assistance of any human I'd include among such people those that built their own circuit boards. But one could also recognize degrees to which a project revolves around individual programmers in a non-binary sense. I would, to a lesser degree, include those that write assembler, write original drivers, certainly the dude who wrote Synthesis OS. To an even lesser degree, I would include those who use a 'pure' higher level language and compiler in a manner suitable for writing an OS from scratch (no libraries beyond the standard, no integration with an existing OS). This would also include many various 'trivial' projects of the sort you might see when learning Haskell, Scheme, or C in school, where use of libraries is often forbidden because you're learning how to write your own linked lists. this model relegates "individual programming" not to rarity but full extinction. The problem isn't with the definition, but with the fact that "individual programming" really is near full extinction, and has been moving steadily closer to it. If it survives today, it's in embedded systems and sometimes due to bad lessons taught to beginning students in programming about their role as programmers (sometimes they come out of school feeling they really need to write their own linked lists...). We should recognize this, call it a good thing, not elevate the status of the individual programmer based on illusion or egotistical desire. Right now, despite the fact that individually developed programs are rare for anything more than school exercises, (1) programmers are still treated by IDEs as gatekeepers, masters of a domain for individual projects, (2) language projects aren't designed to integrate naturally, makefiles are problematic, name conflicts abound if just tossing code together due to the heavy use of hierarchy, code is often copied from project to project. Our tools should treat the programmer and the code as they are: elements in an interactive and reactive society. This actually empowers 'individual' programmers by giving them more power to utilize, update, refactor, and document code written initially for projects other than their own. There is no cost in terms of IP when code needs to be protected... even within a company, one could inherit code from an OS arena using distributed version control then separately add and manage private code, making it easy for different project teams working on different projects to share and integrate (and even support pushing some enhancements back to the OS arena). Even without sharing one's own project, access to other projects, of recognition that code is a social effort that is mostly about integrating existing libraries, plus the language features necessary to make it practical (i.e. integration w/o makefiles and complicated build routines, ability to write libraries usable and optimized without boilerplate code, support for automatic and continuous unit testing, ability to be alerted of updates to certain pages, etc.) would offer considerable advantage. You can do "individual programming" by just thinking. Maybe you can use paper/pencil/whiteboard as the next step... ok maybe you use a word processor. Hmmm... why not an IDE then. .... but you're surely "running" your programs in your head, so you don't really need a computer, do you .... It is definitely individual cognition. Nevertheless, it has been shaped by interactions with the society as well. That's why I think both points of view have to be considered. It is not possible to isolate them or say one is more important than the other. Each is impossible without the other. People who do "individual programming" are very, very rare. People who think they do "individual programming" are very, very common. No way to verify such a statistic, but your statement surely implies the phenomenal success all programming languages to date have achieved w.r.t. collaboration :) What it explains today is DLL hell, the headaches we go through with frameworks and associated boilerplate code, serious challenges to share structured data between processes, deadlock concerns, the difficulty we experience when integrating shared libraries, etc. Because, with few exceptions, what we have today is minimal support for collaboration hacked in atop languages and IDEs and Operating Systems each designed with the idea of the individual programmer in mind. If you've not already come across it then the work of Chris Barker may be quite interesting to you. He has published several papers on the links between natural languages and formal languages. In particular he gave a keynote at POPL this year called Wild Control operators that seems to touch on points (2) and (3). I can't really offer you much detail to describe it, as it is somewhat outside of my area, but if you treat the semantics of natural languages as implementations of cognitive models then his work would be right in the middle of the area that you are asking about. There are several posters on Lambda who know his work far better than I do who could maybe give an explanation... Thanks. I didn't know about Chris Barker's work. I skimmed the wild control operators popl abstract and it seemed to be about probing the formalism of natural language. I was expecting the other direction - picking operators in natural language and bringing them over to formal languages ... which, however, begs the question, do end-user programming languages *have* to be formal? do end-user programming languages *have* to be formal? Well, something needs to interpret them, right? But perhaps one could focus on 'best guesses' and 'heuristic interpretations' and varying interpretations based on runtime observations (e.g. "kick the dog" would need to interpret "the dog" and "kick" appropriately based on whether the avatar is sitting, standing, where a dog is, whether you're more likely referencing the one in a woman's arms or the one on the ground (even if the one in the woman's arms is closer, etc.) I suspect an 'informal' language could be good for AI, describing scene-graphs or skits for 3D characters, scripted behaviors. However, even Inform language has a formal interpretation. Still, it's worth looking up. Thanks for the link to Inform. I understand what you mean. In fact, I'm finding it difficult to declare something as informal as long as it has a computer implementation :) You hit a nerve on the "varying interpretations based on runtime observations" point. We (as people) do that all the time, yet very few programming languages (if any) exploit that. I like Drescher's schema building approach for that reason. The axiom of formality for langauges: The degree to which a language is formal, is the degree to which the language and its semantics/results/side-effects/abstract-interpretation is: Put differently, IF you want your language's semantics to be well-defined and simple (from a reductionist perspective), THEN you will want your language to have an interpretation that can be formally treated. The difference between a programming langauge and a formal language: Programming languages have intrinsic resource-usage "semantics" (i.e. memory and time resources, and possibly other kinds of resources), while formal language EITHER don't, OR they run on machines that are abstract or do not have physical limitations. Isn't the definition of formality something like - "whose specification does not depend on anything outside the system."? That definition doesn't even make sense to me. I assume you know what symbol grounding is. Isn't it the role of all languages to ground some aspects of some external phenomena within a system of symbolic expressions which represent/denote those phenomena? No langauge is useful unless it is grounded, or applied (i.e. it is interpreted w.r.t a mapping from expressions to external phenomena). Once we ground a language, then it can be useful. From my experience, formality means (or implies?) that a language conforms, strictly, to a set of hard-and-fast rules. Of course, this definition is tentative. In retrospect, I have a hunch I've misunderstood what you mean. If this is correct, what do you mean by "outside the system"? that a language conforms, strictly, to a set of hard-and-fast rules That's in the spirit of what I wanted to say. A formal system can be specified purely mechanically, without any mention of "use" or "connection to the real world" or anything of that kind ... with almost an attitude of "if you find a use for it, that's your problem". That should resonate with how some mathematicians approach number theory, for instance. More formally (;-P) - a formal system is a set of postulates and a set of procedures or "rules" to derive "truths" from postulates and/or other "truths" of the formal system. You are forbidden from invoking any "rule" *outside* of the formal system in order to infer "truths". When applied to languages, I understand non-formality to mean that I can write expressions in the language that depend on the interpreter (the thing that connects the language to the world) for truth or falsehood - i.e. the set of postulates and rules I'm willing to write down for the symbols of the language are intentionally insufficient to express the kind of "truths" I'm interested in. Program-level integration These things all take the philosophy that the best way to make things reusable is by encapsulating their implementation behind a process wall, and by providing a standard system under which the processes can all be fully orchestrated. Another option: I'm surprised nobody's caught on to "Documentation--by-Contract"! Have you ever noticed how there is a rigid structure to typical API documentations (at least, the way Microsoft documents their APIs) that could be formalized?! Documentation specifies the semantics of: valid parameters, return values, invocation side-effects, etc. I'm sure there's something to be gained from this idea. Formal specifications are rigid, less ambiguous, and best of all, much briefer (not to mention their amenability to automated formal analysis). SOA, COM/DCOM, Unix Pipes and such only encapsulate part of the implementation. In particular, data representation, protocol, and contract cannot be encapsulated. To compose these systems easily requires a great deal of support for integrating at the communications boundaries, which often involves sharing source. As designs go, the SOA and dataflow approaches seem pretty solid... they still need a few enhancements to support security, secrecy, disruption tolerance, failover redundancy, graceful degradation, level of detail, ad-hoc cross-service transactions for service mashups, etc. Support for automatic distribution would allow parts of a composed service to automatically float over to wherever they need to be to optimize latencies and bandwidth. Just a few piddling trifles. ^_^ For source-layer and cross-process optimization, there are additional issues. By transporting messages in shared memory, one can avoid two payments for data translation. Given shared source (or high enough level bytecode), one can potentially inline the necessary aspects of the intermediate processes and services... and automatically duplicate the relevant bits when automatically distributing a multi-part service configuration. Due to the possible optimizations, users in SOA and Unix pipe systems, and to a lesser degree COM/DCOM systems, programmers are reasonably torn between reproducing efforts for efficiency and doing the simple thing. It is better if this pressure is avoided. Wherever possible, instead of encapsulating source and using separate compilation, share the source and support whole-system optimizations. This can be achieved with first-class dataflow pipes or process-calculi/actors-model based service configurations but not (in general) Unix pipes or SOA. Compile-time safety for data representation (plus possibly protocol and, stretching a bit, contract) would be icing on the multi-service cake. I'm surprised nobody's caught on to "Documentation--by-Contract"! Have you ever noticed how there is a rigid structure to typical API documentations (at least, the way Microsoft documents their APIs) that could be formalized?! More than a few people have noticed. I'm a believer in the idea that "anything that goes in a comment should really be automatically verified". However, in practice, there are limits to what we can check or test. In any case, I'd suggest having languages support a formal system of annotations and an extensible syntax to include said annotations. Then allow users to process the AST including these annotations using tools they build. The same system can be used to offer suggestions to the optimizer. You might be interested in Microsoft's CodeContracts being released for .NET 4.0. Thanks for the suggestion. Microsoft uses preconditions, postconditions, and invariants to aide with runtime debugging and manages partial static verification. Such an approach seems a good one for verifying certain classes of comments. Type and Category theory only takes you so far. You need to "skin" it. The future is tools and interactive systems. There's a gal at Sun that has talked about this (sorry can't find her writings). Would that be Cristina Cifuentes by any chance? It sounds like a quote from a talk on Parfait. I'm a psychology student at the University of Washington and the way in which programming languages influence the problem space in the mind of the programmer is one of my main interest areas (I'm also a professional programmer). I recently completed a paper exploring the idea a bit that folks may find interesting (or not :P). If nothing else, I believe it demonstrates a possible approach to the study of this matter on the scale of the individual (doesn't tie to the social issues people have surfaced here). It's at the undergrad level so there was no lit. review etc. I had to hash it out from conception to completion in less than a month, but it's a start. Semantic Organization of Programming Language Concepts To add my two cents to the individual versus social debate, a social context is in fact a collection of individual minds so in my opinion we would need to start there. However, there are emergent properties that arise from the interaction of the individuals, which are aspects not present at the individual scale. Those emergent properties need to be identified and addressed as well. Perhaps these are two reasonably separable topics. Setting aside all the usual critiques one might make (insufficient sample size, etc.) about your study, I want to zero in on the central premise: that the clustering of words elicited correlates to the semantic model of the domain. Rather than assume this, would it not be simpler to assume that different styles of programming have different communities with different associated jargons? For example, the jargon use of the word "prime" in your paper is idiosyncratic to the literature of psychology. I would predict that someone who was a psychologist and a mathematician who was "primed" with either psychology or math tasks would show different associations with the word "prime". I wouldn't expect this to tell me anything about how they actually think about either discipline. The concern regarding the assumption that the clustering of terms is related to how one would actually apply those concepts in a problem solving situation is certainly warranted. The key inference made with models like this is that related terms mentioned in response to a cue term are likely to be the ones the participants would also consider when working an actual problem, reasoning, drawing analogies, etc. This hits the classic problem of lab vs. nature and whether or not the findings are still valid outside the lab. However, there's a fair amount of literature on these kinds models with respect to novice vs expert comparisons and cognitive changes due to disease, etc. Setting aside all the usual critiques one might make (insufficient sample size, etc.) about your study, I want to zero in on the central premise: that the clustering of words elicited correlates to the semantic model of the domain. [Whoops, should've read the article first..] Got it right though: this is an assumption, mostly proven, which is used by most linguists. this is an assumption, mostly proven, which is used by most linguists. I can only humbly suggest that either you are over-simplifying or that you lack any familiarity with the field of linguistics as a whole. This technique is widely accepted as a basis for psychology and pycholinguistic (a specific sub-discipline of linguistics) experiments. However, to say that it is "mostly proven" is very arguable. I think it tends more to be a case of lack of better techniques for dealing with a tricky area. ... that I am laughing my head of, right now? Given what I know, his definition of a domain/semantic model is as good as any; and surely good enough for investigation. Do you believe the point you raised is that relevant? But, ok, if you are the expert, feel free to discuss. I liked what he did. You should really ask someone like Oleg to draw a domain model ;-). This site needs greater participation from people interested in more than programming languages themselves. I've been trying to get one of my best friends, a geologist and an avid R programmer, to consider participating occassionally, as I think a lot of people here would be interested to hear what he has to say. :-) I'm a bit skeptical of your methodology as well, but I think Marc's carping is somewhat unfair. My bigger complaint would be "just three programmers?!" Here is an anecdotal, though possibly useful insight into functional programmer psychology: I have to admit my first impression of pkhuong's code was the exact same as Qrczak's and Winheim's. And Winheim even had the benefit of seeing somebody else make the same mistake... so I suspect the code tricked a lot of functional programmers reading it, not just us three. I think that the monad do-notation might be a useful case study. I have an intuition to match a for-loop (in C, say) or recursion or pattern-matching or any number of constructs. But I am struggling to get one to handle do-notation. The problem is that each monad reduces to its own very special form. Consider this expression, which is close to something I came across on the web: do x := t u return (g x) -- (1) Define the ST type: type ST a = a -> (a,Int) Define return and bind for that type. Now, take (1), desugarise, replace bind and return, reduce and I get: \r -> (g (value (t r)), state (u (state (t r)))) --(2) where state and value extract the state from a pair and the value from a pair by state = snd and value = fst. Do the same process in the list monad, starting with a modified version because I prefer to call lists l and m: do x := l m return (g x) -- (1) and I get this: concat ( fmap (\x -> (concat ( fmap (\_ -> [g x]) m ) )) l) --(3) OK, so I see that there is an analogy here between 'list shape' and 'state transformer', and in each case the thing in question is being used to apply a basic function, g. There are still differences: the state transformer doesn't treat state separately from the value, the state interacts with generating the value. In the list shape, the shape introduced by m doesn't have the same interaction with the value. I suppose that that sort of interaction might be seen if the shape monad related to tensors,where there would be interaction between off-diagonal elements. But back to the notation: wherever I see an expression in do-notation or bind-notation, I can only, so far, understand it if I have first taken an example such as (1) and converted it into the equivalent of (2) or (3). This feels similar to interpreting obfuscated code, where the initial statement of the code cannot be understood until some transformations have been applied. do-notation and bind-notation emphasise sequence but there is much more to a monad that that. Monads, ignoring the IO monad as an exceptional case, are no less functional than any other part of haskell. In evidence of that, let me just observe that wherever there is a monad, there is also some function of the form extract :: Monad m => m a -> a. Part of the problem here is that monads abstract over a class of exactly such intuitions. So in some sense you'll never get "an intuition" for monads. That's the whole point! However, you can develop an intuition for exactly which intuitions are monadic. ;) In evidence of that, let me just observe that wherever there is a monad, there is also some function of the form extract :: Monad m => m a -> a In evidence of that, let me just observe that wherever there is a monad, there is also some function of the form extract :: Monad m => m a -> a Incidentally, this is not true. For a counter-example, consider the list monad mentioned in the parent. Also, the 'ST a' presented should I assume be Int -> (a, Int). Yeah, you need a generalisation to account for things like error monads where you genuinely might not have an answer. Not to mention IO-wrapper monads and the like. You can kinda cover the IO-wrapper case by saying the IO monad is the 'host' language rather than Haskell, though. ... on both counts. Matt and Phillipa also make good points here. Saying something is a monad really isn't saying much at all, so you usually can't get much of an intuition for what a bit of code does just by virtue of the fact it has a monadic interface. As for how to understand monads, it takes work, and time. My suggestion is simply to get familiar with *lots* of different examples of monads, preferably some advanced examples as well. I've been considering trying my hand at an advanced monad tutorial, that includes some of the most interesting (and in-depth) examples of monadic abstractions I'm aware of. Monads are useful as a standard kind of construct, that enables you to express a surprising variety of things. And since they can express almost anything, it's hardly surprising that they offer very little insight into anything in particular. The notation really does show you what's going on in general, though in a rather abstract way. Bind corresponds to a single-variable ML-style let, and do notation to a sequence of them ending in a result. What you don't have is a fixed evaluation order for it. In fact you don't even have guaranteed determinism for it - the list monad effectively gives this little language a non-deterministic interpretation. To simplify: monads are about binding. This includes binding and using other computations. See Wadler's comprehending monads for an alternative notation over monads based on list comprehensions so your examples would look something like [g x | x <- t, u] In Scala your examples look like for { x <- t _ <- u } yield g(x) And in C#'s LINQ from x in t from temp in u select g(x) Honestly I think these notations do a better job than "do" for guiding intuition on some kinds of monad but do significantly worse for most, especially for those that aren't particularly collection like. Thanks to everyone for these points. I hope to get a firmer grip on monads over time. I did raise the idea more on the basis that a cognitive model of 'do' might be a useful thing to have, rather than seeking advice on my own deficiencies, useful though that has been. Some of the responses suggest that 'do' might not be a good candidate for the question of such a model. Anyhow, thanks for the discussion.
http://lambda-the-ultimate.org/node/3229
CC-MAIN-2022-40
en
refinedweb
nanonext for Cross-language Data Exchange Want to share your content on R-bloggers? click here if you have a blog, or here if you don't. {nanonext} is an R package available on CRAN which provides bindings to the C library NNG (Nanomsg Next Gen), a successor to ZeroMQ. Designed for performance and reliability, the NNG library is written in C and {nanonext} is a lightweight wrapper depending on no other packages. It provides a fast and reliable data interface between different programming languages where NNG has a binding, including C, C++, Java, Python, Go, Rust etc. The following example demonstrates the exchange of numerical data between R and Python (NumPy), two of the most commonly-used languages for data science and machine learning. Using a messaging interface provides a clean and robust approach that is light on resources and offers limited and identifiable points of failure. This is especially relevant when processing real-time data, as an example. This approach can also serve as an interface / pipe between different processes written in the same or different languages, running on the same computer or distributed across networks, and is an enabler of modular software design as espoused by the Unix philosophy. Create socket in Python using the NNG binding ‘pynng’: import numpy as np import pynng socket = pynng.Pair0(listen="ipc:///tmp/nanonext") Create nano object in R using {nanonext}, then send a vector of ‘doubles’, specifying mode as ‘raw’: library(nanonext) n <- nano("pair", dial = "ipc:///tmp/nanonext") n$send(c(1.1, 2.2, 3.3, 4.4, 5.5), mode = "raw") #> [1] 9a 99 99 99 99 99 f1 3f 9a 99 99 99 99 99 01 40 66 66 66 66 66 66 0a 40 9a #> [26] 99 99 99 99 99 11 40 00 00 00 00 00 00 16 40 Receive in Python as a NumPy array of ‘floats’, and send back to R: raw = socket.recv() array = np.frombuffer(raw) print(array) #> [1.1 2.2 3.3 4.4 5.5] msg = array.tobytes() socket.send(msg) Receive in R, specifying the receive mode as ‘double’: n$recv(mode = "double") #> $raw #> [1] 9a 99 99 99 99 99 f1 3f 9a 99 99 99 99 99 01 40 66 66 66 66 66 66 0a 40 9a #> [26] 99 99 99 99 99 11 40 00 00 00 00 00 00 16 40 #> #> $data #> [1] 1.1 2.2 3.3 4.4 5.5 Links nanonext on CRAN: Package website: NNG website: NNG.
https://www.r-bloggers.com/2022/02/nanonext-for-cross-language-data-exchange/
CC-MAIN-2022-40
en
refinedweb
django_content_type_app_label_key Constraint on Heroku Django comes with some awesome CLI tools. Manage.py is a beast of magic and lore. And it loves the fantastical kingdom of Heroku, where is romps with merry measure twixt the ether. But, when I've tried to go through a dumpdata of a previous site, syncdb on a migration to Heroku, and loaddata for moving the data, I've run into a snag on django_content_type_app_label_key more than once. Here are some resolutions. The Error Stack Specifically, when I do a sync of the database: heroku run python manage.py syncdb It works like a charm. And then a loading of the data: heroku run python manage.py loaddata data.json It runs for a bit then spews this small hiccup: Running python manage.py loaddata data.json attached to terminal... up, run.2 Problem installing fixture 'data.json': Traceback (most recent call last): File "/app/lib/python2.7/site-packages/django/core/management/commands/loaddata.py", line 174, in handle obj.save(using=using) # ...more stack trace... File "/app/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 44, in execute return self.cursor.execute(query, args) IntegrityError: duplicate key value violates unique constraint "django_content_type_app_label_key" Lovely. It turns out that syncdb, in addition to running the DDLs for your table creation also populates the django_content_type table. And then when you loaddata it tries to repopulate the table, violating the unique constraint on the content type name. Make the Magic Live Again There are a couple ways around this: Dump Something Specific When you dumpdata, only dump specific apps instead of the whole project. For example: python manage.py dumpdata myApp Django 1.3 Exclude If you're on Django 1.3 or above, you get a nice new option with dumpdata to exclude certain apps. So you could run: python manage.py dumpdata --exclude contenttypes Try in Vain to Reset Another one I tried (but didn't work) was: heroku run python aprilandjake/manage.py reset contenttypes Sql Truncate Or, if you're still trying to dumpdata on your whole project, you could syncdb on Heroku and then truncate the data out of django_content_type like this: heroku run python aprilandjake/manage.py dbshell And then truncate (inside the dbshell): truncate django_content_type cascade; Problem for me is that didn't work either. I am on the super cheap in Heroku, so I get this lovely denial: Running python manage.py dbshell attached to terminal... up, run.5 Error: You appear not to have the 'psql' program installed or on your path. (It's not available in a shared database): heroku pg:psql ! Cannot ingress to a shared database Delete via Admin UI And finally, if you want to get rid of the data via the admin UI, set it up to appear as editable. In an admin.py in your project, try something like this: from django.contrib.contenttypes.models import ContentType class ContentTypeAdmin(admin.ModelAdmin): list_display = ['name', 'app_label'] fieldsets = ( ('', { 'classes': ('',), 'fields': ('name', 'app_label') }), ) admin.site.register(ContentType, ContentTypeAdmin) Now you should be able to loaddata and feel the Django wind in your hair and the Heroku grass beneath your feet again.
https://jaketrent.com/post/django_content_type_app_label_key-constraint-herok/
CC-MAIN-2022-40
en
refinedweb
Author: xiaoyu WeChat official account: Python Data Science Purpose: This article introduces you to a primary project of data analysis. The purpose is to understand how to use Python for simple data analysis through the project. Data source: Bloggers collect official account data of Beijing secondary housing through crawler collection (full back of the public network). Preliminary study on data First, import the scientific computing packages numpy,pandas, visual matplotlib,seaborn, and machine learning package sklearn. import pandas as pd import numpy as np import seaborn as sns import matplotlib as mpl import matplotlib.pyplot as plt from IPython.display import display plt.style.use("fivethirtyeight") sns.set_style({'font.sans-serif':['simhei','Arial']}) %matplotlib inline # Check Python version from sys import version_info if version_info.major != 3: raise Exception('Please use Python 3 To complete this project') Copy code Then import the data and make preliminary observations. These observations include understanding the missing values, outliers and approximate descriptive statistics of data characteristics. # Import chain home second-hand housing data lianjia_df = pd.read_csv('lianjia.csv') display(lianjia_df.head(n=2)) Copy code It is preliminarily observed that there are 11 characteristic variables. Price is our target variable here, and then we continue to observe it in depth. # Check for missing values lianjia_df.info() Copy code It is found that there are 23677 pieces of data in the data set, of which the Elevator feature has obvious missing values. lianjia_df.describe() Copy code The above results give some statistical values of eigenvalues, including mean, standard deviation, median, minimum, maximum, 25% quantile and 75% quantile. These statistical results are simple and direct, which is very useful for the initial understanding of the quality of a feature. For example, we observed that the maximum value of the Size feature is 1019 square meters and the minimum value is 2 square meters. Then we need to think about whether this exists in practice. If it does not exist, it is meaningless, then this data is an abnormal value, which will seriously affect the performance of the model. Of course, this is only a preliminary observation. Later, we will use data visualization to clearly show and confirm our guess. # Add new features and average house price df = lianjia_df.copy() df['PerPrice'] = lianjia_df['Price']/lianjia_df['Size'] # Reposition columns columns = ['Region', 'District', 'Garden', 'Layout', 'Floor', 'Year', 'Size', 'Elevator', 'Direction', 'Renovation', 'PerPrice', 'Price'] df = pd.DataFrame(df, columns = columns) # Revisit the dataset display(df.head(n=2)) Copy code We found that the Id feature has no practical significance, so we removed it. Since the house unit price is easy to analyze and can be obtained simply by using the total price / area, a new feature PerPrice (only for analysis, not for prediction) is added. In addition, the order of features has been adjusted to look more comfortable. Data visualization analysis Region feature analysis For regional characteristics, we can analyze the comparison of house prices and quantities in different regions. # Compare the number of second-hand houses and the house price per square meter by regional grouping of second-hand houses df_house_count = df.groupby('Region')['Price'].count().sort_values(ascending=False).to_frame().reset_index() df_house_mean = df.groupby('Region')['PerPrice'].mean().sort_values(ascending=False).to_frame().reset_index() f, [ax1,ax2,ax3] = plt.subplots(3,1,figsize=(20,15)) sns.barplot(x='Region', y='PerPrice', palette="Blues_d", data=df_house_mean, ax=ax1) ax1.set_title('Comparison of unit price per square meter of second-hand houses in Beijing',fontsize=15) ax1.set_xlabel('region') ax1.set_ylabel('Unit price per square meter') sns.barplot(x='Region', y='Price', palette="Greens_d", data=df_house_count, ax=ax2) ax2.set_title('Comparison of the number of second-hand houses in Beijing',fontsize=15) ax2.set_xlabel('region') ax2.set_ylabel('quantity') sns.boxplot(x='Region', y='Price', data=df, ax=ax3) ax3.set_title('Total price of second-hand houses in Beijing',fontsize=15) ax3.set_xlabel('region') ax3.set_ylabel('Total house price') plt.show() Copy code The network perspective function of pandas is used to sort groups by groups. The visualization of regional features is directly completed by seaborn. The palette palette parameter is used for color. The lighter the color gradient, the less the description, and vice versa. It can be observed that: - Average price of second-hand houses: the house price in Xicheng District is the most expensive, and the average price is about 110000 / Ping, because Xicheng is within the second ring road and is the gathering place of popular school district houses. The second is about 100000 / ping in Dongcheng, then about 85000 / ping in Haidian, and the others are lower than 80000 / Ping. - Number of second-hand houses: statistically speaking, there are hot areas in the second-hand house market. Haidian District and Chaoyang District have the largest number of second-hand houses, almost close to 3000 sets. After all, there is a large demand in the region. Then there is Fengtai District, which is under transformation and construction in recent years and has the potential to catch up with and surpass. - Total price of second-hand houses: according to the box chart, the median total price of houses in all major regions is less than 10 million, and the discrete value of total price of houses is high, with the highest value of 60 million in Xicheng, indicating that the characteristics of house price are not ideal Zhengtai distribution. Size feature analysis f, [ax1,ax2] = plt.subplots(1, 2, figsize=(15, 5)) # Distribution of building time sns.distplot(df['Size'], bins=20, ax=ax1, color='r') sns.kdeplot(df['Size'], shade=True, ax=ax1) # Relationship between building time and selling price sns.regplot(x='Size', y='Price', data=df, ax=ax2) plt.show() Copy code - Size distribution: use distplot and kdeplot to draw a histogram to observe the distribution of size characteristics, which belongs to the distribution of long tail type, which shows that there are many second-hand houses with large area and beyond the normal range. - Relationship between Size and Price: the scatter diagram between Size and Price is drawn through regplot. It is found that the Size feature basically has a linear relationship with Price, which is in line with basic common sense. The larger the area, the higher the Price. However, there are two groups of obvious anomalies: 1. The area is less than 10 square meters, but the Price exceeds 100 million; 2. The area of a point exceeds 1000 square meters and the Price is very low. You need to check the situation. df.loc[df['Size']< 10] Copy code After checking, it is found that this group of data is a villa. The reason for the abnormality is that the villa structure is special (no orientation and no elevator), and the field definition is different from that of second-hand commercial housing, resulting in the dislocation of crawler crawling data. Because the second-hand house of villa type is not within our consideration, it is removed and the relationship between Size distribution and Price is observed again. df.loc[df['Size']>1000] Copy code After observation, this abnormal point is not an ordinary civil second-hand house, but probably a commercial house. Therefore, there is 1 room and 0 hall with such a large area of more than 1000 square meters, so we choose to remove it here. df = df[(df['Layout']!='stacked townhouse')&(df['Size']<1000)] Copy code No obvious outliers are found after re visualization. Layout feature analysis f, ax1= plt.subplots(figsize=(20,20)) sns.countplot(y='Layout', data=df, ax=ax1) ax1.set_title('House type',fontsize=15) ax1.set_xlabel('quantity') ax1.set_ylabel('House type') plt.show() Copy code This feature is really unknown. There are strange structures such as 9 rooms, 3 rooms, 4 rooms and 0 rooms. Among them, two bedrooms and one living room account for the vast majority, followed by three bedrooms and one living room, two bedrooms and two living rooms. However, after careful observation, there are many irregular names under the feature classification, such as 2 bedrooms and 1 living room, 2 bedrooms and 1 bathroom, and villas. There is no unified name. Such features can not be used as the data input of machine learning model, and feature engineering needs to be used for corresponding processing. Innovation feature analysis df['Renovation'].value_counts() Copy code Hardcover 11345 Paperback 8497 Others 3239 Blank 576 North south 20 Name: Renovation, dtype: int64 It is found that there are north and South in the Renovation decoration features, which belongs to the orientation type. It may be that some information positions are empty during the reptile process, resulting in the "Direction" orientation feature appearing here, so it needs to be removed or replaced. # Remove the error data "north and South", because some information positions are empty during the crawling process, resulting in the presence of the "Direction" feature, which needs to be cleared or replaced df['Renovation'] = df.loc[(df['Renovation'] != 'north and south'), 'Renovation'] # Frame setting f, [ax1,ax2,ax3] = plt.subplots(1, 3, figsize=(20, 5)) sns.countplot(df['Renovation'], ax=ax1) sns.barplot(x='Renovation', y='Price', data=df, ax=ax2) sns.boxplot(x='Renovation', y='Price', data=df, ax=ax3) plt.show() Copy code It is observed that the number of second-hand houses with fine decoration is the largest, followed by simple decoration, which is also common in our weekdays. For the price, the blank type is the highest, followed by fine decoration. Elevator feature analysis When exploring the data, we found that the Elevator feature has a large number of missing values, which is very unfavorable to us. First, let's see how many missing values there are: misn = len(df.loc[(df['Elevator'].isnull()), 'Elevator']) print('Elevator The number of missing values is:'+ str(misn)) Copy code Number of missing values for Elevator: 8237 What about so many missing values? This needs to be considered according to the actual situation. The commonly used methods include average / median filling method, direct removal, or modeling and prediction according to other features. Here we consider the filling method, but whether there is an elevator is not a numerical value, and there is no mean and median. How to fill it? Here is an idea for you: judge whether there is an elevator according to the Floor. Generally, there are elevators on floors greater than 6, while there are generally no elevators on floors less than or equal to 6. With this standard, the rest is simple. # Due to individual types of errors, such as simple and hardcover, the eigenvalues are misaligned, so they need to be removed df['Elevator'] = df.loc[(df['Elevator'] == 'There is an elevator')|(df['Elevator'] == 'No elevator'), 'Elevator'] # Fill in the missing value of Elevator df.loc[(df['Floor']>6)&(df['Elevator'].isnull()), 'Elevator'] = 'There is an elevator' df.loc[(df['Floor']<=6)&(df['Elevator'].isnull()), 'Elevator'] = 'No elevator' f, [ax1,ax2] = plt.subplots(1, 2, figsize=(20, 10)) sns.countplot(df['Elevator'], ax=ax1) ax1.set_title('Comparison of elevator quantity',fontsize=15) ax1.set_xlabel('Is there an elevator') ax1.set_ylabel('quantity') sns.barplot(x='Elevator', y='Price', data=df, ax=ax2) ax2.set_title('Comparison of house price with and without elevator',fontsize=15) ax2.set_xlabel('Is there an elevator') ax2.set_ylabel('Total price') plt.show() Copy code It is observed that the number of second-hand houses with elevators is in the majority. After all, the high-rise land utilization rate is relatively high, which is suitable for the needs of the huge population in Beijing, and the high-rise buildings need elevators. Accordingly, the house price of second-hand houses with elevators is higher, because the early decoration fee and later maintenance fee of elevators are included (but this price comparison is only an average concept. For example, the price of a 6-storey luxury community without elevators is certainly higher). Year feature analysis grid = sns.FacetGrid(df, row='Elevator', col='Renovation', palette='seismic',size=4) grid.map(plt.scatter, 'Year', 'Price') grid.add_legend() Copy code Under the classification conditions of Renovation and Elevator, the Year features are analyzed by FaceGrid. The observation results are as follows: - The whole second-hand housing price trend increases with time; - The house prices of second-hand houses built after 2000 have increased significantly compared with those before 2000; - Before 1980, there was almost no data on second-hand houses with elevators, indicating that there was no large-scale installation of elevators before 1980; - Before 1980, among the second-hand houses without elevators, the simple decoration second-hand houses accounted for the vast majority, while the hard decoration was very few; Floor feature analysis f, ax1= plt.subplots(figsize=(20,5)) sns.countplot(x='Floor', data=df, ax=ax1) ax1.set_title('House type',fontsize=15) ax1.set_xlabel('quantity') ax1.set_ylabel('House type') plt.show() Copy code It can be seen that the number of second-hand houses with six floors is the largest, but the individual floor characteristics are meaningless, because the total number of floors of houses in each community is different. We need to know the relative significance of floors. In addition, there is also a very important connection between the floor and culture. For example, Chinese culture is seven up and eight down. The seventh floor may be popular and the house price is expensive. Generally, there will not be four floors or 18 floors. Of course, under normal circumstances, the middle floor is more popular and the price is high. The popularity of the ground floor and the top floor is low and the price is relatively low. Therefore, the floor is a very complex feature, which has a great impact on the house price. summary This sharing aims to let you know how to do a simple data analysis with Python. It is undoubtedly a good exercise for friends who have just come into contact with data analysis. However, there are still many problems to be solved in this analysis, such as: - Solve the problem of data source accuracy obtained by crawler; - Need to crawl or find more good selling features; - More feature engineering work needs to be done, such as data cleaning, feature selection and screening; - Use statistical model to establish regression model for price prediction; More content will be introduced and shared slowly. Please look forward to it. Author: Python Data Science Link:
https://programmer.group/the-best-practical-project-for-getting-started-with-python-data-analysis.html
CC-MAIN-2022-40
en
refinedweb
Arduino: LCD with HD44780 Display Driver IC I guess everyone who starts with Arduino will connect a LCD one day. There are several tutorials how to connect a LCD online. This page describes the usage of the "Noiasca Liquid Crystal". The LCD Parallel Interface in 4 bit Mode The "Noiasca Liquid Crystal" library is just one of many out there, supporting the 4 bit mode of a LCD directly connected to an Arduino. The class for the parallel interface comes more from curiosity: I have already written a library for I2C expanders, needed a SPI expander and had a "LCD keypad shield" from the early Arduino days laying around. Therefore I decided to combine all my LCD libraries into one and to implement the parallel LCD also. The Character Set of the HD44780U A00 ROM The "Noiasca Liquid Crystal" library does just a little bit more than other libraries: a character mapping from UTF-8 to the existing characters in the Hitachi HD44780U A00 ROM. As example some special characters in the second row of the display: (Degree, Division, middle Point, n with tilde, Pound, Yen, Tilde, Sqareroot, Proportional to, Infinity, Left Arrow, Right Arrow, Backslash) You can read more about this character mapping in the Introduction. For the beginning you should know that you don't need to enter the octal ROM addresses of the special characters manually and these outptus can be done by a simple: lcd.print("°÷·ñ£¥~√∝∞←→\\"); The Hardware Driver for the LCD The library offers a basic class for displays connected directly to GPIOs on the Arduino. It uses the 4bit mode only. Nevertheless two additional pins are needed for RS and EN. If you don't need the backlight pin, set it to 255. The necessary #include and the constructor are: #include <NoiascaLiquidCrystal.h> #include <NoiascaHW/lcd_4bit.h> LiquidCrystal_4bit lcd(rs, en, d4, d5, d6, d7, bl, cols, rows); Your own Character Converter If you need a different converter for the characters you can hand over a callback function as optional last parameter. Obviously - you also have to implement the callback function to do all the magic. See the general example how to write your own character converter. German Umlauts For my German readers: the default constructor enables the support for the small letters ä ö ü and ß. The large German umlauts Ä Ö Ü will be converted to their counterpart A O U. If you want to try other variants, you can use following constructors: LiquidCrystal_4bit lcd(rs, en, d4, d5, d6, d7, bl, cols, rows); // Ä gets A //LiquidCrystal_4bit lcd(rs, en, d4, d5, d6, d7, bl, cols, rows, convert_ae); // Ä becomes Ae //LiquidCrystal_4bit lcd(rs, en, d4, d5, d6, d7, bl, cols, rows, convert_small); // Ä becomes ä //LiquidCrystal_4bit lcd(rs, en, d4, d5, d6, d7, bl, cols, rows, convert_special); // Ä becomes Ä Summary If you need an easy support of the given character set of a HD44870 display, take the "Noiasca Liquid Crystal" library in consideration. Links - Download the Noiasca Liquid Crystal Version 2.1.1 (2022-07-29) - Start page of Noiasca Liquid Crystal Library - How to install a library - This page in German - Datasheet for the HD44870 (LCD IC) - LCD Display on Aliexpress (* Affiliate - Advertisement) - LCD Display on Amazon (* Affiliate - Advertisement) (*) Disclosure: Some of the links above are affiliate links, meaning, at no additional cost to you I will earn a (little) comission if you click through and make a purchase. I only recommend products I own myself and I'm convinced they are useful for other makers. Protokoll First upload: 2020-09-02 | Version: 2022-07-29
https://werner.rothschopf.net/202009_arduino_liquid_crystal_parallel_en.htm
CC-MAIN-2022-40
en
refinedweb
Quick tip: Custom Material-UI styles with Typescript I recently started to use Typescript with Material-UI. I wanted to add color instances to the palette for the AppBar and Hero background. No problem, just edit /src/gatsby-theme-material-ui-top-layout/theme.js like this. import { createMuiTheme } from '@material-ui/core' const theme = createMuiTheme({ palette: { background: { appbar: '#0c052e', }, }, }) export default theme Dang, now I have errors in my file. Alright, we can fix this. Let's first find out where the issue is by hovering over the error. So TypeBackground is the culprit, let's find it in node_modules. Opening @material-ui/core/styles/createPalette you will find it. import { Color, PaletteType } from '..'; ... export interface TypeBackground { default: string; paper: string; } ... export default function createPalette(palette: PaletteOptions): Palette; Now create a new file where we can extend the type /src/types/createPalette.d.ts. import * as createPalette from "@material-ui/core/styles/createPalette" declare module "@material-ui/core/styles/createPalette" { export interface TypeBackground { default: string paper: string appbar: string } } Great, now we need to import this file somewhere. I have no idea what the best practice is, but I would like all the types to get imported into one file. I'm going to do it in index.tsx until someone tells me why I shouldn't. import TypeBackground from '../types/createPalette' VSCode is happy again!
https://jameskolean.tech/post/2020-07-30-quick-tip-materialui-typescript/
CC-MAIN-2022-40
en
refinedweb
For the one shot upload usecase I tend to lean towards /v3/<plugin>/<content_type>/upload/ and for content_types that require special treatment we can define separate endpoints. If talking about modulemd or modulemd_defaults, it could be /v3/rpm/modules/upload/. -------- Regards, Ina Panova Senior Software Engineer| Pulp| Red Hat Inc. "Do not go where the path may lead, go instead where there is no path and leave a trail." On Wed, Jul 31, 2019 at 1:04 PM Tatiana Tereshchenko <ttereshc at redhat.com> wrote: > If the goal is to make endpoints unified across all actions, then I think > we can only do > POST /pulp/api/v3//plugin/action/ types=[] > > Having plugin/content_type/upload would be nice, however I'm not sure if > it covers enough use cases. > E.g. For pulp_rpm, it makes sense for packages or advisories to have a > dedicated endpoint each, however it doesn't make much sense for modulemd or > modulemd_defaults, because usually they are in the same file and uploaded > in bulk (maybe a separate endpoint is needed for this case). > > For the copy case, it's common to copy more than one type, I think, so > probably 'plugin/copy/ types=[]' makes more sense. > > It would be great to here from more people and other plugins. > > > > On Mon, Jul 29, 2019 at 5:46 PM Pavel Picka <ppicka at redhat.com> wrote: > >> +1 for discuss this to keep some standard as I have already opened PRs >> for rpm modulemd[-defaults]. >> I like idea of /upload in the end. >> But also think it can work without as it will be differ by POST/GET >> methods. >> >> On Mon, Jul 29, 2019 at 4:49 PM Dana Walker <dawalker at redhat.com> wrote: >> >>> Just to provide an added data point, I'll be merging the one-shot PR for >>> pulp_python soon and it currently uses /api/v3/python/upload/ >>> >>> I wanted to keep it simple as well, and so would be happy to change it >>> for consistency based on whatever we decide. >>> >>> --Dana >>> >>> Dana Walker >>> >>> She / Her / Hers >>> >>> Software Engineer, Pulp Project >>> >>> Red Hat <> >>> >>> dawalker at redhat.com >>> <> >>> >>> >>> >>> On Mon, Jul 29, 2019 at 10:42 AM Ina Panova <ipanova at redhat.com> wrote: >>> >>>> Hi all, >>>> As of today, plugins have the freedom to define whichever endpoints >>>> they want ( to some extent). >>>> This leads to the question - shall we namespace one-shot upload and >>>> copy endpoints for some consistency? >>>> >>>> POST /api/v3/content/rpm/packages/upload/ >>>> POST /api/v3/content/rpm/packages/copy/ >>>> >>>> or >>>> >>>> POST /api/v3/content/rpm/upload/ type =package >>>> POST /api/v3/content/rpm/copy/ type = [package, modulemd] >>>> >>>> I wanted to bring this up, before it diverges a lot. For the record, I >>>> have checked only RPM plugin, I am not aware of the state of the other >>>> plugins. >>>> Right now we have an active endpoint for one-shot upload of rpm package: >>>> POST /api/v3/content/rpm/upload/ >>>> >>>> And there is PR for one-shot upload of modulemd-defaults: >>>> POST /api/v3/content/rpm/modulemd-defaults/ >>>> >>>> For rpm copy we have POST /api/v3/content/rpm/copy/ types=[] >>>> >>>> We are starting some work on docker recursive copy, so it would be >>>> helpful to reach some agreement before going further that path. >>>> >>>> Thank >>> >>> >> >> >> -- >> Pavel Picka >> Red Hat >> _______________________________________________ >> Pulp-dev mailing list >> Pulp-dev at redhat.com >> >> > _______________________________________________ > Pulp-dev mailing list > Pulp-dev at redhat.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: <>
https://listman.redhat.com/archives/pulp-dev/2019-August/003349.html
CC-MAIN-2022-40
en
refinedweb
Hello guix! This is a second try, hopefully this time without silly errors. The packaging of sagemath is for now structured as follows: - `sagemath-just-build' builds everything using the minimal set of dependecies to avoid rebuilding where possible. As runtime dependencies are missing, this is useless on its own. - `sagemath-with-dependencies' adds all remaining dependencies as propagated-inputs. This should work, if we set a few additional environment variables. - `sagemath-tests' simply runs the tests. If I interpret the output correctly, these are the tests that are still failing: > sage/calculus/calculus.py # 6 doctests failed > sage/categories/primer.py # 1 doctest failed > sage/doctest/control.py # 3 doctests failed > sage/doctest/sources.py # 1 doctest failed > sage/doctest/test.py # 1 doctest failed > sage/env.py # 4 doctests failed > sage/doctest/forker.py # 1 doctest failed > sage/functions/exp_integral.py # 1 doctest failed > sage/interfaces/gap_workspace.py # 2 doctests failed > sage/interfaces/maxima_abstract.py # 2 doctests failed > sage/interfaces/maxima_lib.py # 2 doctests failed > sage/lfunctions/sympow.py # 10 doctests failed > sage/libs/eclib/interface.py # 7 doctests failed > sage/misc/package_dir.py # 1 doctest failed > sage/modular/abvar/abvar.py # 1 doctest failed > sage/modular/hecke/submodule.py # 1 doctest failed > sage/plot/plot3d/tachyon.py # 2 doctests failed > sage/repl/display/jsmol_iframe.py # 11 doctests failed > sage/repl/display/formatter.py # 4 doctests failed > sage/repl/ipython_kernel/install.py # 1 doctest failed > sage/repl/ipython_tests.py # 2 doctests failed > sage/repl/rich_output/backend_ipython.py # 1 doctest failed > sage/repl/rich_output/output_graphics.py # 25 doctests failed > sage/repl/rich_output/output_graphics3d.py # 34 doctests failed > sage/repl/rich_output/output_video.py # 14 doctests failed > sage/repl/rich_output/backend_doctest.py # 17 doctests failed > sage/schemes/elliptic_curves/ell_rational_field.py # 14 doctests failed > sage/symbolic/integration/integral.py # 1 doctest failed > sage/symbolic/relation.py # 1 doctest failed > sage/tests/gap_packages.py # 1 doctest failed > sage/tests/cmdline.py # 14 doctests failed Some of these seem harmless, others seem more concerning (including segmentation faults), some due to still missing dependencies. Some Notes: - We need to set some environment variables, so that sage can find all dependencies (see `sagemath-tests'). I suppose we could wrap the `sage' command, but this would of course add them as dependencies. - For the `sagemath-data-*' packages, I couldn't find explicit Licenses. In Sage's COPYING.txt it just says "None (database)" for license. - How should we handle test failures? Given errors such as the ones below, it seems delusional to expect every test to succeed. > Failed example: > solve_ineq_fourier([x+y<9,x-y>4],[y,x]) > Expected: > [[y < min(x - 4, -x + 9)]] > Got: > [[y < min(-x + 9, x - 4)]] > Failed example: > FDS.basename > Expected: > 'sage.rings.integer' > Got: > '/gnu/store/.../lib/python3.9/site-packages/sage/rings/integer.pyx' - `python-cython' is build-time dependency but may be needed at runtime. This could cause issues when cross-compiling, right? I think its path is only in the wrappers in bin/*. vicvbcun (29): gnu: Remove ecl-16. gnu: edge-addition-planarity-suite: Update to 3.0.2.0. gnu: gap: Update to 4.11.1. gnu: cliquer: Update to 1.22. gnu: lcalc: Update to 2.0.5. gnu: ntl: Update to 11.5.1. gnu: eclib: Update to 20220621. gnu: lrcalc: Update to 2.1. gnu: maxima: Update to 5.46.0. gnu: python-sympy: Update to 1.10.1. gnu: cddlib: Update to 0.94m. gnu: Add python-memory-allocator. gnu: Add python-pplpy. gnu: Add primecount. gnu: Add python-primecountpy. gnu: Add python-lrcalc. gnu: Add palp. gnu: Add gfan. gnu: Add flintqs. gnu: Add tachyon. gnu: Add sagemath-data-conway-polynomials. gnu: Add sagemath-data-elliptic-curves. gnu: Add sagemath-data-combinatorial-designs. gnu: Add sagemath-data-graphs. gnu: Add sagemath-data-poytopes-db. gnu: Add pari-galdata. gnu: Add sagemath-just-build. gnu: Add sagemath-with-dependencies. gnu: Add sagemath-tests. gnu/local.mk | 9 +- gnu/packages/algebra.scm | 203 +++-- gnu/packages/graph.scm | 4 +- gnu/packages/maths.scm | 31 +- .../ecl-16-format-directive-limit.patch | 83 -- .../ecl-16-ignore-stderr-write-error.patch | 17 - gnu/packages/patches/ecl-16-libffi.patch | 16 - .../patches/lcalc-default-parameters-1.patch | 26 - .../patches/lcalc-default-parameters-2.patch | 58 -- gnu/packages/patches/lcalc-lcommon-h.patch | 13 - .../patches/lcalc-using-namespace-std.patch | 43 - gnu/packages/patches/lrcalc-includes.patch | 92 --- gnu/packages/patches/tachyon-make-arch.patch | 13 + gnu/packages/python-xyz.scm | 4 +- gnu/packages/sagemath.scm | 780 ++++++++++++++++-- 15 files changed, 833 insertions(+), 559 deletions(-) delete mode 100644 gnu/packages/patches/ecl-16-format-directive-limit.patch delete mode 100644 gnu/packages/patches/ecl-16-ignore-stderr-write-error.patch delete mode 100644 gnu/packages/patches/ecl-16-libffi.patch delete mode 100644 gnu/packages/patches/lcalc-default-parameters-1.patch delete mode 100644 gnu/packages/patches/lcalc-default-parameters-2.patch delete mode 100644 gnu/packages/patches/lcalc-lcommon-h.patch delete mode 100644 gnu/packages/patches/lcalc-using-namespace-std.patch delete mode 100644 gnu/packages/patches/lrcalc-includes.patch create mode 100644 gnu/packages/patches/tachyon-make-arch.patch base-commit: f6904c0b19c2fcca41bbf1400c738bd833fec9a8 -- 2.37.0
https://lists.gnu.org/archive/html/guix-patches/2022-08/msg00629.html
CC-MAIN-2022-40
en
refinedweb
itya Puri1,080 Points How will the constructor return the class? At 1:00 he says that the constructor returns the class "PezDispenser".How will the constructor return the class ? How can it return a class? Isn't it the part of the class? And how is constructor a method?? 4 Answers tobiaskrause9,159 Points Here we have an constructor. The constructor is, like you can see in the video, used to create in instance of a class. This is useful because you can give the fields of the class initial values. public class Test { private String testString; public Test(String valueForTestString) { testString = valueForTestString; } } Test test = new Test("Some Test String"); // so now we give the constructor test the parameter "Some test String" // as you can see above the testString will get this value // we created an instance of the class Test with the name "test" with initial values for the private field testString As you can see here test is a instance of the class Test because the constructor returns the same type of class tobiaskrause9,159 Points Seems like you still don't understand what the usage of constructors is Maybe this will help you: Constructors are some kind of weird species. Maybe it is wrong to call them methods. Some people call it one because we call the operator new followed by the call of a construction method. BY THE WAY not all methods give you a return value. Every void-method like you main method wont return anything. Method method = new Method("Something"); Wrong...Class nameofInstance = new Class("Something"); We just create an instance of a class and give the constructor values for the init-values. There is a reason why constnructors have the same name like the class. Aditya Puri1,080 Points still don't understand why the constructor has the same name as the class.... tobiaskrause9,159 Points Because there is not any syntax or keyword for a constructor...it is just the desing/concept of constructors in serveral oop langauges. This SOF thread might help you yadin michaeliCourses Plus Student 5,423 Points Hello Aditya maby i will try to help here. when you create a new object you type PezDispenser dispenser = new PezDispenser () in the parentheses you type the Constructors that you wrote in PezDispenser class the Constructors is this public PezDispenser(String characterName) { mCharacterName = characterName; } when you type this line of code you can give in PezDispenser dispenser = new PezDispenser () a new character name that you like between the two parentheses beacuse you initialize mCharacterName field and give the Constructors a String parameters hope it help :) Aditya Puri1,080 Points Aditya Puri1,080 Points but still...how is the constructor a method? It doesn't behave like one...it makes a new class...it doesn't return anything or do any calculation. And I don't think you can do this in your code- Method method = new Method("Something");
https://teamtreehouse.com/community/how-will-the-constructor-return-the-class
CC-MAIN-2022-40
en
refinedweb
State in Python State is a behavioral design pattern that allows an object to change the behavior when its internal state changes. The pattern extracts state-related behaviors into separate state classes and forces original object to delegate the work to an instance of these classes, instead of acting on its own. Usage of the pattern in Python Complexity: Popularity: Usage examples: The State pattern is commonly used in Python consists of? - What roles do these classes play? - In what way the elements of the pattern are related? main.py: Conceptual Example from __future__ import annotations from abc import ABC, abstractmethod class Context(ABC): """ The Context defines the interface of interest to clients. It also maintains a reference to an instance of a State subclass, which represents the current state of the Context. """ _state = None """ A reference to the current state of the Context. """ def __init__(self, state: State) -> None: self.transition_to(state) def transition_to(self, state: State): """ The Context allows changing the State object at runtime. """ print(f"Context: Transition to {type(state).__name__}") self._state = state self._state.context = self """ The Context delegates part of its behavior to the current State object. """ def request1(self): self._state.handle1() def request2(self): self._state.handle2() class State(ABC): """ The base State class declares methods that all Concrete State should implement and also provides a backreference to the Context object, associated with the State. This backreference can be used by States to transition the Context to another State. """ @property def context(self) -> Context: return self._context @context.setter def context(self, context: Context) -> None: self._context = context @abstractmethod def handle1(self) -> None: pass @abstractmethod def handle2(self) -> None: pass """ Concrete States implement various behaviors, associated with a state of the Context. """ class ConcreteStateA(State): def handle1(self) -> None: print("ConcreteStateA handles request1.") print("ConcreteStateA wants to change the state of the context.") self.context.transition_to(ConcreteStateB()) def handle2(self) -> None: print("ConcreteStateA handles request2.") class ConcreteStateB(State): def handle1(self) -> None: print("ConcreteStateB handles request1.") def handle2(self) -> None: print("ConcreteStateB handles request2.") print("ConcreteStateB wants to change the state of the context.") self.context.transition_to(ConcreteStateA()) if __name__ == "__main__": # The client code. context = Context(ConcreteStateA())
https://refactoring.guru/design-patterns/state/python/example
CC-MAIN-2019-30
en
refinedweb
Reply Product Image depending on scan Product Image depending on scan I have a page with scan funktion. I can get name, number, product and so on from the database when a scan the product with barcode scanner. I have a database with a collection named products. In the products collection a have Name, product_category and code. I also have product_image where i have uplod images. Ok to the problem. I want to scan a code on a product. It working so a get the right code, name och product_category in a page in the app. Now a want to present the right image when a scan the product. Depending on the scan of a code the right product should show with the right image...i have a JaveScript that get the image from the colection, that works but can a get the rught image depending on the code scan? - Serhii Kulibaba (Employee) June 08, 2018 10:14Hello, Please run the query service with the scanned code to read related images: - Can you give me a sample. Not so god at coding. This is the search code... Before sent var q = '{"ean_code" : "'+value+'"}'; console.log("Query value is" + q); return q; And the Success added as a image a want the product_image to display for the right product - view 5 more comments - Trying this, dont work? return ("..."); - what can i do? - Serhii Kulibaba (Employee) June 13, 2018 14:21Please use the code below here: return "" + Apperyio.storage.image_token.get(); - Is it possible to extent 1 week more. Just to test this function before a dicide to buy? developer@planet4us.com - Not working? return "..." + Apperyio.storage.image_token.get(); - Serhii Kulibaba (Employee) June 15, 2018 14:18Please replace return "" + Apperyio.storage.image_token.get(); with: return "" + Apperyio.storage.image_token.get(); If it doesn't work, please share (...) your app with support@appery.io and provide us with the following information: 1) App name 2) Test credentials if login functionality is implemented in your app 3) Detailed steps to reproduce the issue - But i have my images in the collections...I get the fileName right from image_token. See image from iPhone test. But the - App name Planet4us3, se under page Scan_user - OK, a get it to work. - Serhii Kulibaba (Employee) June 18, 2018 13:49Thank you for the update! Glad it works now!
https://getsatisfaction.com/apperyio/topics/product-image-depending-on-scan?page=1
CC-MAIN-2019-30
en
refinedweb
In this project, you’ll create an Insect Robot that walks forward on four legs. Using ultrasound, the robot can see in the dark, just like a bat. When it detects an obstacle, it takes a few steps back, turns, and continues forward. The robot can walk over small obstructions, and it looks more human than its wheeled relatives. Before starting this project, you should know what ultrasonic sensors are,and make sure that the “Hello World” Blink code is working properly with your Arduino. You’ll learn about some new things, including servo motors (motors that can be manipulated to rotate to a specific angular position), in this chapter. We will build a body for the insect by gluing two servos together and shaping legs for it from a wire clothes hanger. Arduino will turn the two servos one at a time, which moves each pair of metal legs like real legs so the insect can crawl forward. We will also make holders for the battery and Arduino so our insect can behave autonomously. The insect will need eyes to react to its environment. We will connect an ultrasonic sensor to the insect’s head to enable the robot to precisely measure its distance from objects in front of it. Finally, we will teach the insect to react to an obstacle by backing up and turning. The distance of the obstruction will trigger a series of commands that make the robot back up several steps, turn, and move forward several more steps. Then our Insect Robot will be able to move and avoid obstructions on its own. After you have spent some time observing your new pet’s movements, you can add new sensors to the robot or teach it new tricks. Step 1: Tools and Parts 1. 9V battery clip (EL14: 34M2183; SFE: PRT-00091). 2. Two small metal rods. You could salvage these from other devices, such as an old typewriter. If you have metal snips and a small amount of sheet metal, you could also cut them yourself (but be sure to use a metal file or metal sandpaper to smooth the edges, which will be extremely sharp). 3. Heat-shrink tubing (14cm) for the feet (EL14: 90N7288). Hot glue works well, too. 4. 28cm and 25cm pieces from a wire clothes hanger. 5. Two pairs of pliers. 6. Wire strippers (EL14: 61M0803; SFE: TOL-08696). 9. Two large servo motors (SFE: ROB-09064;: 900- 00005). 11. 9V battery. 12. Servo extension cable (SFE: ROB-08738;: 805- 00002). 13. Red, black, and yellow (or some other color) jumper wire (SHED: MKEL1; EL14: 10R0134; SFE: PRT-00124). 15. Thin metal wire/metal hanger. 17. PING ))) ultrasonic sensor (SHED: MKPX5;) 18. Arduino Uno (SHED: MKSP4; EL14: 13T9285; SFE: DEV-09950). The older Duemilanove works just as well. Step 2: Servo Motors Servo motors come in different sizes and prices and are based on different technologies. In this context, we are talking about hobby servos, the kind used in remote-control cars, for example. Servo motors have a servo controller that directs the position of the motor wherever we want.The motor itself is a DC (direct current) motor with gears. Servo motors usually rotate rather slowly and with a relatively strong torque. You can buy hobby servo motors with either limited rotation or continuous rotation. Limited rotation models work for most purposes, and you can control their movement quite precisely by degrees of rotation. In continuous rotation servos, you can control only speed and direction. Step 3: Wiring Up the Circuit Connect the servo to the Arduino by attaching the servo’s black wire to any of the Arduino GND pins. The red wire indicates positive voltage; connect it to the Arduino +5V pin. The white or yellow data wire controls the servo; you will connect it to one of the digital pins. For this project, connect the data wire to the first available digital pin: digital pin 2 (D2). Figure 4-4 shows the connection, and Figure 4-5 shows the schematic. Step 4: Using the Servo Library in Arduino Libraries are collections of subroutines or classes that let us extend the basic functionality of a platform or language such as Arduino. There are many different libraries that help us interpret data or use specific hardware in much simpler and cleaner ways. You can explore the libraries available for Arduino at. As these libraries are meant to extend our code only when needed, we must declare each library in any sketch where one will be used. We do this with a single line of code. Here’s how to include the library for controlling servo motors: #include Now we can reference methods and objects from within that library at any time in our sketch. We will be using the Servo library to interface with our motors in this chapter. The Servo library comes with a standard installation of Arduino and can support up to 12 motors on most Arduino boards and 48 motors on the Arduino Mega. For each servo motor we are using, we must create an instance of the Servo object with Servo myServo;. In the setup() function, we must associate this instance of Servo to a specific pin, the same pin to which the data wire of our motor is attached, using the command myServo.attach(2);. Now talking to our motor is easy. There are several functions for communicating with it, including read(), write(), detach(), and more, all of which you can explore in the library reference at. For this chapter, when talking to our motors, we will use only the write() function, which requires a single argument: degree of rotation. Step 5: Centering the Servo This example shows how we use the Servo library to connect a single servo motor and rotate it toward the absolute center point. 1. Import the library. 2. Create an instance of Servo and name it myServo. 3. Attach myServo to pin 2. 4. Tell the servo to rotate 90 degrees. Because we want the motor to move to one position and stay there, we can include all our code in the setup() function. The loop() function must be declared for Arduino to compile, but because we don’t need to do anything in the loop, it can remain empty. When we write() to a servo, we set it to a specific position. Limited rotation servo motors can turn from 0 to 180 degrees, so setting ours to 90 turns it exactly half of its maximum rotation. The servo is now perfectly centered, and it will remain that way until given further instruction. <br><p>// servoCenter.pde - Center servo #include 1 Servo myServo; 2 void setup() { myServo.attach(2); 3 myServo.write(90); 4 } void loop() { delay(100); }</p> Step 6: Moving the Servo Let’s write a small program that will rotate the servo first to the center, then to the maximum angle, back to center, and then to the minimum angle. These are the only differences between this and the previous servo code: 1. The variable delayTime variable (set to 1,000 milliseconds, or 1 second) determines how much time to wait between each rotation. 2. Set the servo to a new angle of rotation, and then wait for a specified duration of time. 3. The duration, named delayTime, must also take into account how quickly the motor can turn. In this case, we must wait a minimum of 375 milliseconds before sending the motor a new command. It takes this much time for the motor to rotate 90 degrees. Play with the value of this variable. You will notice that any value less than 375ms is not enough time for the motor to reach its destination, so it will begin to malfunction. 4. Similarly, you can rotate the servo to other positions simply by changing the values written to myServo. Any value between 0 and 180 will function properly, because this is our range of rotation. In this example, these values are hardcoded, meaning they are written explicitly on each line. In the future, we’ll store these values in variables for more complicated and efficient applications. the maximum angle, back to center, and then to the minimum angle.<br>// moveServo.pde - Move servo to center, maximum angle // and to minumum angle #include Servo myServo; int delayTime = 1000; 1 void setup() { myServo.attach(2); } void loop() { myServo.write(90); 2 delay(delayTime); 3 myServo.write(180); 2 delay(delayTime); myServo.write(90); delay(delayTime); myServo.write(0); delay(delayTime); } Step 7: Constructing the Frame(optional) Making the Legs Cut two pieces from a wire clothes hanger: 28cm for the rear legs and 25cm for the front legs, as shown in.Bend the legs with pliers.It’s important to make the legs long enough and to make sure that the feet point backward, which lets them act as hooks and enables the robot to climb over obstacles. At this stage, don’t worry too much about the shape of the legs. You can adjust them later (and will likely need to). The legs will have a better grip if you cover them with heat-shrink tubing, as shown in. Heat-shrink tubing is rubber tubing that will shrink in diameter by 50% when heated, for example, with a heat gun or hair dryer. Cut two 7cm pieces of the tubing and shrink them to fit around the back legs. Next, attach the legs to the servos. The servos come with one or more plastic attachments that will connect to the servo axis. Attach the legs by pulling metal wires through the servo holes, and secure each leg by tightening the metal wire, as shown in. Cut any excess wire to keep it from hindering the motor’s movement. Finally, add hot glue to the underside to stabilize the legs,but do not fill the center screw hole with glue. Assembling the Frame The frame of the walker consists of two connected servo motors. Before gluing them together, you need to remove a small plastic extension (meant for mounting the servos) from both of the servo motors. Remove the extension next to the servo arm from the front-facing servo, and remove the opposite part from the rear-facing servo. You can do this easily with a utility knife, as shown in. It’s a good idea to smooth out the cutting seam with a small file to make sure that the glued joints do not become uneven and weak. Spread the hot glue evenly on the rear servo, and immediately press the servos together, holding them steady for a while to give the glue time to set. The servos are connected so that the frontfacing servo arm points forward and the rear-facing arm points down. The top sides of the motors should be placed evenly, to make it easier to attach them to the Arduino. If you make a mistake gluing the parts together, it’s easy to separate them without too much force. (Hot-gluing is not necessarily ideal for building sturdy devices, but it is a quick and easy way to attach almost anything, and it works well with simple prototypes.) Making the Holder for the Arduino We will use two metal strips to build a holder on top of the robot that will make it easy to attach and detach the Arduino. Cut two 10cm metal pieces. Bend the sides of each strip so that the space in the middle is equal to the width of the Arduino and glue both strips to the servos. The Arduino Duemilanove (and later models, such as the Uno) used in this project is 5.2cm wide. If the metal you’re using is flexible enough, it is helpful to bend the corner inward slightly. This way, you can snap the Arduino in place sturdily and, when you are finished, remove it painlessly for other uses. Attaching a Battery We’ll use Velcro tape to make an attachment system in the rear of the robot for the 9V battery. Cut a 16cm strip from the Velcro tape and attach the ends together. Make holes for two screws in the middle of the tape. Attach the Velcro tape with screws to the servo’s extension part in the rear of the robot. Assembly Now you can place the Arduino board on top of the servos. Attach the legs to the servos, but don’t screw them in tightly yet. We’ll connect the servos to each other and to the Arduino with jumper wires. First, connect the servos’ black (GND) wires using a black jumper wire from one servo to the other, and then use another black jumper wire to connect one of the servos to an Arduino GND pin. You might have to use a bit of force to insert two wires into one of the servo connection headers. Next, connect the red power wires in a similar manner, first from one servo to the other and then to the Arduino’s 5V power pin. Use white (the actual color may vary) jumper wires to control each servo, and connect a yellow jumper wire from the rear servo to Arduino pin 2 and from the front servo to Arduino pin 3. Earlier , in the “Centering the Servo” section, you ran a sketch to center a single servo. If you run this code again to center the servo, you will be able to attach the leg in the correct position. But there are now two servos, so let’s alter the previous centering code to turn both servos toward the center: The only difference between this and the earlier centering code is the addition of two servo objects named frontServo and rearServo: 1. Define an instance of the Servo object for the rear servo. 2. Within setup, attach the rearServo to pin 3. 3. Send pulses to both of the motors, making them turn toward the center. //twoServosCenter.pde - Center two servos #include Servo frontServo; Servo rearServo; 1 void setup() { frontServo.attach(2); rearServo.attach(3); 2 frontServo.write(90); 3 rearServo.write(90); } void loop() { delay(100); } Step 8: Screwing the Legs in Place Now that the servos are centered, you can screw the legs into place. It’s also a good idea to attach the battery now, because the additional weight will affect the way the robot walks. However, it is not necessary to connect the battery wires to the Arduino; you can take power straight from the USB cable while you are programming and testing the device. Step 9: Programming the Walk Walking Forward If you power up the Arduino running the code that swings only one servo (see the earlier “Moving the Servo” section), it will start rocking on either its front or rear legs. Walking forward will require coordination between both front and rear legs. When the servos move at the same tempo, but in opposite directions, the robot starts to walk. Here is some code that will make the robot walk forward: Let’s have a look at the code: 1. This is the center position for the servos. Ninety degrees is precisely half of 180 possible degrees of rotation. 2. Maximum position the right front leg will rise to. 3. Maximum position the left front leg will rise to. 4. Maximum position the right rear leg will bend to. 5. Maximum position the left rear leg will bend to. 6. The moveForward function turns the servos first to opposite directions. The variables defined in the preceding lines set how far each of the servos will rotate. Before we turn in another direction, we will tell the servos to rotate toward a predefined center point for a short span of time. This ensures that the servos don’t start drifting out of sync. We return to the center point at the end of every step to make the walk more elegant and efficient. 7. Call the moveForward function repeatedly within the loop, which will make our robot move one step forward. The subsequent delay controls how long the robot waits before taking its next step. Removing the delay is the equivalent of having the robot run as fast as it can. // walkerForward.pde - Two servo walker. Forward.<br>// (c) Kimmo Karvinen & Tero Karvinen <a href="" rel="nofollow"> </a> // updated - Joe Saavedra, 2010 #include Servo frontServo; Servo rearServo; int centerPos = 90; 1 int frontRightUp = 72; 2 int frontLeftUp = 108; 3 int backRightForward = 75; 4 int backLeftForward = 105;5 void moveForward() 6 { frontServo.write(frontRightUp); <p>rearServo.write(backLeftForward);<br setup() { frontServo.attach(2); rearServo.attach(3); } void loop() { moveForward(); 7 delay(150); //time between each step taken, speed of walk }</p> Step 10: Walking Backward When walking forward works, walking backward is easy. This time, the servos move at the same pace and in the same direction: Let’s see what has changed from the previous code: 1. The moveBackward() function is similar to moveForward(), but this time the right front leg will rise up when the right rear leg moves forward, and the left front leg rises when the left rear leg moves forward. 2. Now moveBackward() is called in the loop() function. // walkerBackward.pde - Two servo walker. Backward.<br> #include <br>Servo frontServo; Servo rearServo; int centerPos = 90; int frontRightUp = 72; int frontLeftUp = 108; int backRightForward = 75; int backLeftForward = 105; void moveBackward() 1 { frontServo.write(frontRightUp); rearServo.write(backRightForward); delay(125); frontServo.write(centerPos); rearServo.write(centerPos); delay(65); frontServo.write(frontLeftUp); rearServo.write(backLeftForward); delay(125); frontServo.write(centerPos); rearServo.write(centerPos); delay(65); } void setup() { frontServo.attach(2); rearServo.attach(3); } void loop() { moveBackward(); 2 delay(150); //time between each step taken, speed of walk } Step 11: Turning Backward Moving the robot forward and backward is not enough if we want it to avoid obstacles. The preferred outcome is for the robot to detect the obstacle, turn in another direction, and continue to walk. Naturally, it could just back up and turn after that, but the turn would be more efficient if the robot first backs up to the right and then turns to the left The robot can turn to the right as it walks backward if you alter the center point of the servo and the threshold levels a bit toward the right side. This will also change the balance of the robot, which can easily lead to one of its front legs rising higher than the other. You can solve this problem by adding a bit of movement to the lowered leg, raising it into the air: The new moveBackRight() function is similar to the moveBack() function in the previous example. 1. The movement of the rear servo is reduced by 6 degrees, which will move its center point to the right. As we noted earlier, the changed rear servo position will likely change the balance of the entire robot. To account for this, we add 9 degrees to the frontLeftUp value. If any of your own robot’s legs stay in the air or drag, you can increase or decrease these values as needed. // walkerTurnBackward.pde - Two servo walker. Turn backward. #include Servo frontServo; Servo rearServo; int centerPos = 90; int frontRightUp = 72; int frontLeftUp = 108; int backRightForward = 75; int backLeftForward = 105; void moveBackRight() 1 { setup() { frontServo.attach(2); rearServo.attach(3); } void loop() { moveBackRight(); delay(150); //time between each step taken, speed of walk } Step 12: Turning Forward A turn forward resembles otherwise normal forward walking, but now the center points of both servos are changed. The movement of one front leg must also be adjusted to keep it from rising too high. 1. Create a new variable that will define the center point of the servos during a turn (the center turn position). Notice we are 9 degrees away from the motor’s halfway point. 2. Calculate the maximum position to which the right front leg will rise by deducting 18 degrees from the center turn position. 3. Calculate the position of the left front leg in the same way as the right front leg, but instead of adding 18 degrees to the center turn position, add 36 (to balance the front legs). 4. Calculate the position for the right rear leg by deducting 15 degrees from the center turn position. 5. Add the same distance to the left rear leg. 6. The MoveTurnLeft() function is similar to all the other walks. This time, the variables just mentioned are used for turning, and both servos are centered to a different position than when walking forward. 7. Repeat turning forward in the loop and the delay to control the speed of our walk. // walkerTurnForward.pde - Two servo walker. Turn forward. #include Servo frontServo; Servo rearServo; int centerTurnPos = 81; 1 int frontTurnRightUp = 63; 2 int frontTurnLeftUp = 117; 3 int backTurnRightForward = 66; 4 int backTurnLeftForward = 96;5 void moveTurnLeft() 6 {); } void loop() { moveTurnLeft(); 7 delay(150); //time between each step taken, speed of walk } Step 13: Avoiding Obstacles Using Ultrasound Attaching the Ultrasonic Sensor Use the servo extension cable to attach the sensor. \Using hot glue, we connected the other end of the extension cable to the front of the robot with the connector part pointing down (if you’d rather have it pointing up, that will work, too). If the connector doesn’t snap in easily, you can glue the wires in place. The pin assignments are marked on top of the pins. After the glue has dried, you can snap the ultrasonic sensor into place on the connector. Connect the other end of the servo extension cable to the Arduino. The red goes to the Arduino 5V port, black to the GND port, and white to the digital pin 4. Since the Arduino does not have many free pins left, you can connect the black and red wires in the same holes as the servo cables. Step 14: The Final Circuit Step 15: Code See the file below. Because we went through the code for various turning techniques earlier, we will cover only the combination of the sketches here: 1. Declare global variables in the beginning, gathering all global variables here from previous sketches. At the same time, check that there are no conflicts with the names. 2. These functions (microsecondsToInches, microsecondsToCentimeters, and distanceCm) come from the distance sensor code. 3. Pull in functions relating to walking such as moveForward() from earlier sketches in this chapter. 4. Add all the lines from the setup() function within the new program’s setup() function. The setup code for the pin modes is identical to the previous example. 5. There is new code in the main program’s loop() function. This is the central program logic. 6. Measure a distance toward an object in front of us with the distanceCm() function. 7. Sometimes, the PING))) sensor might return an incorrect reading of 0.00. This is not uncommon for sensors of this type; however, we must compensate for these false readings with a simple filter. This if statement allows only readings above 1cm to pass through. 8. If the measured distance (distanceFront) is longer than 1cm but shorter than the declared startAvoidanceDistance, an obstacle is detected and must be avoided. 9. Avoid the obstacle by backing up and turning to the right for nine steps. The delay for walkSpeed is keeping the rhythm of our steps consistent. Then, take 11 steps forward and turn simultaneously to the left. bk. If there is no obstacle within the 20cm range, take a step forward with moveForward(). Now the insect can walk forward. It will also avoid obstacles without touching them. Participated in the Arduino All The Things! Contest 4 Discussions 3 years ago Please vote if you liked this project. 3 years ago it is a very niceproject. 3 years ago is there a video somewhere as well? that would be great! Reply 3 years ago i will try to put a video as soon as possible.
https://www.instructables.com/id/Arduino-Smart-Distance-Controlled-Insect/
CC-MAIN-2019-30
en
refinedweb
One reason a developer would use a technology like MEF is to, as the name implies, make an application extensible through a process called discovery. Discovery is simply a method for locating classes, types, or other resources in an assembly. MEF uses the Export tag to flag items for discovery, and the composition process then aggregates those items and provides them to the entities requesting them via the Import tag. Download the source code for this example It occurred to me when working with PRISM and MEF (see the recap of my short series here) that some of this can be done through traditional means and I might be abusing the overhead of a framework if all I’m doing is something simple like marrying a view to a region. Challenges with PRISM include both determining how views make it into regions and how to avoid magic strings and have strict type compliance when dealing with regions. This post will address one possible solution using custom attributes and some fluent interfaces. Fluent Interfaces Fluent interfaces simply refers to the practice of using the built-in support for an object-oriented language to create more “human-readable” code. It’s really a topic in and of itself, but I felt it made sense to simplify some of the steps in this post and introduce some higher level concepts and examples along the way. There are several ways to provide fluent interfaces. One that is built-in to the C# language is simply using the type initializer feature. Instead of this: public class MyClass { public string Foo { get; set; } public string Bar { get; set; } public MyClass() { } public MyClass(string foo, string bar) { Foo = foo; Bar = bar; } } Which results in code like this: ... MyClass myClass = new MyClass(string1, string2); ... Question: which string is foo, and which string is bar, based on the above snippet? Would you consider this to be more readable and “self-documenting”? ... MyClass myClass = new MyClass { Foo = string1, Bar = string2 }; ... It works for me! So let’s do something simple in our PRISM project. If you’ve worked with PRISM, then you’ll know the pattern of creating a Shell and then assinging it to the root visual in a Bootstrapper. The typical code looks like this in your Bootstrapper class: protected override DependencyObject CreateShell() { Shell shell = new Shell(); Application.Current.RootVisual = shell; return shell; } That’s nice, but wouldn’t it also be nice if you could do something simple and readable, like this? Keep in mind we’re not cutting down on generated code (and in fact, sometimes fluent interfaces may increase the amount of generated code, which is a consideration to keep in mind), but we’re focused on the maintainability and readability of the source code. protected override DependencyObject CreateShell() { return Container.Resolve<Shell>().AsRootVisual(); } In one line of code I’m asking the container to provide me with the shell (I do this as a common practice as opposed to creating a new instance so that any dependencies I may have in the shell will be resolved), then return it “as root visual.” I think that is pretty readable, but how do we get there? The answer in this case is using extension methods. In my “common” project I created a static class called Fluent which contains my fluent interfaces (this is for the example only and would not scale in production … you will want to segregate your interfaces into separate classes related to the modules they act upon). In this static class, I create the following extension method: public static UserControl AsRootVisual(this UserControl control) { Application.Current.RootVisual = control; return control; } An extension method does a few things. By using the keyword this on the parameter, it tells the compiler this method will extend the type. The semantics in the code look you are calling something on the UserControl, but the compiler is really taking the user control, then calling the method on the static class and passing the instance in. It is common for the extension methods to return the same instance so they can be chained. In this case, we simply assign the control to the root visual, then return it so it can be used elsewhere. We’re really doing the same thing we did before, but adding a second method call, in order to make the code that much more readable. One important concern to have and address with fluent interfaces is the potential for “hidden magic.” What I mean by this is the extension methods aren’t available on the base class and only appear when you include a reference to the class with the extensions. This may make them less discoverable based on how you manage your code. It also means you will look at methods that aren’t part of the known interface. It’s not difficult to determine where the method comes from. Intellisense will flag extension methods as extensions, and you can always right click and “go to definition” to see where the method was declared. Auto-discoverable Views I have two main goals with this project: the first is to be able to tag views so they are automatically discovered and placed into a region, and the second is to type the region so I’m not using magic strings all over the place. My ideal solution would allow me to add a view to a project, tag it with a region, and run it, and have it magically appear in that region. Possible? Of course! Typing the Regions I am going to type the regions to avoid magic strings. Because the main shell defines the regions, I’m fine with the strings there … that is sort of the “overall definition”, but then I want to make sure elsewhere in the code I can’t accidentally refer to a region that doesn’t exist. My first step is to create a common project that all other modules can reference, and then add an enumeration for the regions. The enumeration for this examle is simple: namespace ViewDiscovery.Common { public enum Regions { TopLeft, TopRight, BottomLeft, BottomRight } } Enumerations are nice because I can call ToString() and turn it into the string value of the enumeration itself. I decided to adopt the convention “Region.” when tagging it in the shell, so my shell looks like this: <UserControl x: <Grid x: <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="Auto"/> </Grid.ColumnDefinitions> <TextBlock Grid. <ItemsControl Grid. <ItemsControl Grid. <ItemsControl Grid. <ItemsControl Grid. </Grid> </UserControl> This is just a simple 2×2 grid with a region per cell. I used the ItemsControl so each cell can host multiple views. We’re off to a good start! Now let’s figure out how to tag our views. The Custom Attribute Custom attributes are powerful and easy to implement. I want to be able to tag a view as a region using my enumeration, so I define this custom attribute: namespace ViewDiscovery.Common { [AttributeUsage(AttributeTargets.Class,AllowMultiple=false)] public class RegionAttribute : System.Attribute { const string REGIONTEMPLATE = "Region.{0}"; public readonly string Region; public RegionAttribute(Regions region) { Region = region.ToString().FormattedWith(REGIONTEMPLATE); } } } Notice that I don’t allow multiple attributes and that this attribute is only valid when placed on a class. Attributes can take both positional parameters (defined in the constructor) and named parameters (defined as properties). In this case, I only have one value so I chose to make it positional. When the region enumeration is passed in, I cast it to a string and then format it with the prefix, so that Regions.TopLeft becomes the string Region.TopLeft. Notice I snuck in another fluent interface, the FormattedWith. To me, that’s a sight prettier than “string.Format” if I’m only dealing with a single parameter. The extension to make this happen looks like this: public static string FormattedWith(this string src, string template) { return string.Format(template, src); } Now that we have a tag, we can create a new module and get it wired in. I created a new project as a Silverlight Class Library (sorry, this example doesn’t do any fancy dynamic module loading), built a folder for views, and tossed in a view. The view simply contains a grid with some text: <Grid x: <Grid.RowDefinitions> <RowDefinition/> <RowDefinition/> </Grid.RowDefinitions> <TextBlock Text="I am in ModuleOne." Grid. <TextBlock Text="I want to be at the top left." Grid. </Grid> Tagging the view was simple. I went into the code-behind, added a using statement to reference the common project where my custom attribute is defined and then tagged the view with the attribute. Here’s the code-behind with the tag: namespace ViewDiscovery.ModuleOne.Views { [Region(Regions.TopLeft)] public partial class View : UserControl { public View() { InitializeComponent(); } } } So now it’s clear where we want the view to go. Now how do we get it there? Discovering the Views The pattern in PRISM for injecting a module is for the module to have an initialization class that implements IModule and then adds the views in the module to the region. We want to do this through discovery. To facilitate this, I created a base abstract class for any module that wants auto-discovered views. The class looks like this: public abstract class ViewModuleBase : IModule { protected IRegionManager _regionManager; public ViewModuleBase(IRegionManager regionManager) { _regionManager = regionManager; } #region IModule Members public virtual void Initialize() { IEnumerable<Type> views = GetType().Assembly.GetTypes().Where(t => t.HasRegionAttribute()); foreach (Type view in views) { RegionAttribute regionAttr = view.GetRegionAttribute(); _regionManager.RegisterViewWithRegion(regionAttr.Region, view); } } #endregion } The code should be very readable. We enforce that the region manager must be passed in by creating a constructor that takes it and stores it. We implement Initialize as virtual so it can be overridden when needed. First, we get the assembly the module lives in, then grab a collection of types that have our custom attribute. Yes, our fluent interface makes this obvious because we can do type.HasRegionAttribute(). The extension method looks like this: public static bool HasRegionAttribute(this Type t) { return t.GetCustomAttributes(true).Where(a => a is RegionAttribute).Count() > 0; } This takes the type, grabs the collection of custom attributes (using inheritance in case we’re dealing with a derived type) and returns true if the count of our attribute, the RegionAttribute, is greater than zero. Next, we iterate those types and get the region attribute, again with a nice, friendly interface ( GetRegionAttribute) that looks like this: public static RegionAttribute GetRegionAttribute(this Type t) { return (RegionAttribute)t.GetCustomAttributes(true).Where(a => a is RegionAttribute).SingleOrDefault(); } Now we have exactly what we need to place the view into the region: the region it belongs to, and the type. So, we register the view with the region and we’re good to go! In my module, I add a class for the module initializer called ModuleInit. I’m only using the auto-discovery so there is nothing more than an implementation of the base class that passes the region manager down: namespace ViewDiscovery.ModuleOne { public class ModuleInit : ViewModuleBase { public ModuleInit(IRegionManager regionManager) : base(regionManager) { } } } Now we go back to the main project and wire in the module catalog. I’m not using dynamic modules so I just reference my modules from the main project and register them by type: protected override IModuleCatalog GetModuleCatalog() { return new ModuleCatalog() .WithModule(typeof(ModuleOne.ModuleInit).AssemblyQualifiedName.AsModuleWithName("Module One")); } OK, so I had some fun here as well. I wanted to extend the module catalog to allow chaining WithModule for adding multiple modules, and be able to take a type name as a string, then make it a module with a name. Working backwards, we turn a string into a named ModuleInfo class like this: public static ModuleInfo AsModuleWithName(this string strType, string moduleName) { return new ModuleInfo(moduleName, strType); } Next, we extend the catalog to allow chaining on new modules like this: public static ModuleCatalog WithModule(this ModuleCatalog catalog, ModuleInfo module) { catalog.AddModule(module); return catalog; } Notice this simply adds the module then returns the original catalog. At this point, we can run the project and see that the view appears in the upper left. I then added a second view with a rectangle: <UserControl x: <Grid x: <Rectangle Width="100" Height="100" Fill="Red" Stroke="Black"/> </Grid> </UserControl> … and tagged it: namespace ViewDiscovery.ModuleOne.Views { [Region(Regions.TopRight)] public partial class Rectangle : UserControl { public Rectangle() { InitializeComponent(); } } } An finally compiled and re-ran it. The rectangle shows up in the upper right, as expected: Next, I added a second module with several views … including a few registered to the same cell. Adding the new module to the catalog was easy with the extension for chaining modules: protected override IModuleCatalog GetModuleCatalog() { return new ModuleCatalog() .WithModule(typeof(ModuleOne.ModuleInit).AssemblyQualifiedName.AsModuleWithName("Module One")) .WithModule(typeof(ModuleTwo.ModuleInit).AssemblyQualifiedName.AsModuleWithName("Module Two")); } Compiling and running this gives me the final result: And now we’ve successfully created auto-discoverable views that we can strongly type to a region and feel confident will end up being rendered where they belong. Download the source code for this example
https://www.wintellect.com/auto-discoverable-views-using-fluent-prism-in-silverlight/
CC-MAIN-2021-43
en
refinedweb
Android integration of multiple icon providers such as FontAwesome, Entypo, Typicons,... Iconify offers you a huge collection of vector icons to choose from, and an intuitive way to add and customize them in your Android app. It has been introduced in this blog post which is a good place to get started. Pick any number of modules and declare them in your Application. dependencies { compile 'com.joanzapata.iconify:android-iconify-fontawesome:2.2.2' // (v4.5) compile 'com.joanzapata.iconify:android-iconify-entypo:2.2.2' // (v3,2015) compile 'com.joanzapata.iconify:android-iconify-typicons:2.2.2' // (v2.0.7) compile 'com.joanzapata.iconify:android-iconify-material:2.2.2' // (v2.0.0) compile 'com.joanzapata.iconify:android-iconify-material-community:2.2.2' // (v1.4.57) compile 'com.joanzapata.iconify:android-iconify-meteocons:2.2.2' // (latest) compile 'com.joanzapata.iconify:android-iconify-weathericons:2.2.2' // (v2.0) compile 'com.joanzapata.iconify:android-iconify-simplelineicons:2.2.2' // (v1.0.0) compile 'com.joanzapata.iconify:android-iconify-ionicons:2.2.2' // (v2.0.1) } public class MyApplication extends Application { @Override public void onCreate() { super.onCreate();()); } } If you need to put an icon on a TextViewor a Button, use the { }syntax. The icons act exactly like the text, so you can apply shadow, size and color on them! {fa-code 12px}, {fa-code 12dp}, {fa-code 12sp}, {fa-code @dimen/my_text_size}, and also {fa-code 120%}. {fa-code #RRGGBB}, {fa-code #AARRGGBB}, or {fa-code @color/my_color}. {fa-cog spin}. Drawable If you need an icon in an ImageViewor in your ActionBarmenu item, then you should use IconDrawable. Again, icons are infinitely scalable and will never get fuzzy! // Set an icon in the ActionBar menu.findItem(R.id.share).setIcon( new IconDrawable(this, FontAwesomeIcons.fa_share) .colorRes(R.color.ab_icon) .actionBarSize()); In case you can't find the icon you want, you can extend the available icon directly from your app. All you need to do is to implement IconFontDescriptorwith a .ttffile in your assets and provide the mapping between keys and special characters, then give it to Iconify.with(). You can use the FontAwesomeModule as an example. There are no constraints on the icon keys, but I strongly suggest you use a unique prefix like my-or anything, to avoid conflicts with other modules. FYI, if there is a conflict, the first module declared with Iconify.with()has priority. The only dependency you need if you want to use a custom icon is Iconify core. compile 'com.joanzapata.iconify:android-iconify:2.2 uses FontAwesome font by Dave Gandy, licensed under OFL 1.1, which is compatible with this library's license.
https://xscode.com/JoanZapata/android-iconify
CC-MAIN-2021-43
en
refinedweb
CodePlexProject Hosting for Open Source Software Hi, Is there a way to restrict access to blogengine only to users login? Basically I wanted to prevent people from viewing blog entries unless they are logged in? I tried disabling the publish checkbox but this then prevents posts from being archived? Thanks in advance, joe BlogEngine does not provide this functionality. However, it is trivial to implement. Simply add an extension that replaces the post content with an empty string (or message) if the user is not authenticated. If you want to do this for certain posts, just embed a token in posts that you want to conceal, and have your extension search for the token. Here's a simple example which does the same for comments. You'd need to hook the Post.Serving event instead. #region using using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Data; using System.Text.RegularExpressions; using BlogEngine.Core; using BlogEngine.Core.Web.Controls; using System.Xml.Linq; using System.Text; using System.Collections.Specialized; #endregion /// <summary> /// Summary description for CommentVeiler /// </summary> [Extension("Hides comments from any non-members", "1.0", "My Name")] public class CommentVeiler { static protected ExtensionSettings _settings = null; public CommentVeiler() { ExtensionSettings settings = new ExtensionSettings("CommentVeiler"); settings.IsScalar = true; settings.AddParameter("HiddenCommentText", "Hidden Comment Text", 100, true); settings.AddValue("HiddenCommentText", "This comment is visible to members only."); _settings = ExtensionManager.InitSettings("CommentVeiler", settings); } /// <summary> /// Called when a comment is served. If the current context indicates that the /// user is not authorized/logged into the site, the comment will be hidden. /// </summary> /// <param name="sender"></param> /// <param name="e"></param> void Comment_Serving(object sender, ServingEventArgs e) { if (false == System.Threading.Thread.CurrentPrincipal.Identity.IsAuthenticated) { e.Body = _settings.GetSingleValue("HiddenCommentText"); } } } Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
https://blogengine.codeplex.com/discussions/82856
CC-MAIN-2016-50
en
refinedweb
Reloading strongly typed Options on file changes in ASP.NET Core RC2 In the previous version of ASP.NET, configuration was typically stored in the <AppSettings> section of web.config. Touching the web.config file would cause the application to restart with the new settings. Generally speaking this worked well enough, but triggering a full application reload every time you want to tweak a setting can sometimes create a lot of friction during development. ASP.NET Core has a new configuration system that is designed to aggregate settings from multiple sources, and expose them via strongly typed classes using the Options pattern. You can load your configuration from environment variables, user secrets, in memory collections json file types, or even your own custom providers. When loading from files, you may have noticed the reloadOnChange parameter in some of the file provider extension method overloads. You'd be right in thinking that does exactly what it sounds - it reloads the configuration file if it changes. However, it probably won't work as you expect without some additional effort. In this article I'll describe the process I went through trying to reload Options when appsettings.json changes. Note that the final solution is currently only applicable for RC2 - it has been removed from the RTM release, but will be back post-1.0.0. Trying to reload settings To demonstrate the default behaviour, I've created a simple ASP.NET Core WebApi project using Visual Studio. To this I have added a MyValues class: public class MyValues { public string DefaultValue { get; set; } } This is a simple class that will be bound to the configuration data, and injected using the options pattern into consuming classes. I bind the DefaultValues property by adding a Configure call in Startup.ConfigureServices : void ConfigureServices(IServiceCollection services) { // Configure our options values services.Configure<MyValues>(Configuration.GetSection("MyValues")); services.AddMvc(); } } I have included the configuration building step so you can see that appsettings.json is configured with reloadOnChange: true. Our MyValues class needs a default value, so I added the required configuration to appsettings.json: { "Logging": { "IncludeScopes": false, "LogLevel": { "Default": "Debug", "System": "Information", "Microsoft": "Information" } }, "MyValues": { "DefaultValue" : "first" } } Finally, the default ValuesController is updated to have an IOptions<MyValues> instance injected in to the constructor, and the Get action just prints out the DefaultValue. [Route("api/[controller]")] public class ValuesController : Controller { private readonly MyValues _myValues; public ValuesController(IOptions<MyValues> values) { _myValues = values.Value; } // GET api/values [HttpGet] public string Get() { return _myValues.DefaultValue; } } Debugging our application using F5, and navigating to, gives us the following output: Perfect, so we know our values are being loaded and bound correctly. So what happens if we change appsettings.json? While still debugging, I updated the appsettings.json as below, and hit refresh in the browser… { "Logging": { "IncludeScopes": false, "LogLevel": { "Default": "Debug", "System": "Information", "Microsoft": "Information" } }, "MyValues": { "DefaultValue": "I'm new!" } } Hmmm… That's the same as before… I guess it doesn't work. Overview of configuration providers Before we dig in to why this didn't work, and how to update it to give our expected behaviour, I'd like to take a step back to cover the basics of how the configuration providers work. After creating a ConfigurationBuilder in our Startup class constructor, we can add a number of sources to it. These can be file-based providers, user secrets, environment variables or a wide variety of other sources. Once all your sources are added, a call to Build will cause each source's provider to load their configuration settings internally, and returns a new ConfigurationRoot. This ConfigurationRoot contains a list of providers with the values loaded, and functions for retrieving particular settings. The settings themselves are stored internally by each provider in an IDictionary<string, string>. Considering the first appsettings.json in this post, once loaded the JsonConfigurationProvider would contain a dictionary similar to the following: new Dictionary<string, string> { {"Logging:IncludeScopes": "Debug"}, {"Logging:LogLevel:Default": "Debug"}, {"Logging:LogLevel:System": "Information"} {"Logging:LogLevel:Microsoft": "Information"}, {"MyValues:DefaultValue": first} } When retrieving a setting from the ConfigurationRoot, the list of sources is inspected in reverse to see if it has a value for the string key provided; if it does, it returns the value, otherwise the search continues up the stack of providers until it is found, or all providers have been searched. Overview of model binding Now we understand how the configuration values are built, let's take a quick look at how our IOptions<> instances get created. There are a number of gotchas to be aware of when model binding (I discuss some in a previous post), but essentially it allows you to bind the flat string dictionary that IConfigurationRoot receives to simple POCO classes that can be injected. When you setup one of your classes (e.g. MyValues above) to be used as an IOptions<> class, and you bind it to a configuration section, a number of things happen. First of all, the binding occurs. This takes the ConfigurationRoot we were supplied previously, and interrogates it for settings which map to properties on the model. So, again considering the MyValues class, the binder first creates an instance of the class. It then uses reflection to loop over each of the properties in the class (in this case it only finds DefaultValue) and tries to populate it. Once all the properties that can be bound are set, the instantiated MyValues object is cached and returned. Secondly, it configures the IoC dependency injection container to inject the IOptions<MyValues> class whenever it is required. Exploring the reload problem Lets recap. We have an appsettings.json file which is used to provide settings for an IOptions<MyValues> class which we are injecting into our ValuesController. The JSON file is configured with reloadOnChange: true. When we run the app, we can see the values load correctly initially, but if we edit appsettings.json then our injected IOptions<MyValues> object does not change. Let's try and get to the bottom of this... The reloadOnChange: true parameter We need to establish at which point the reload is failing, so we'll start at the bottom of the stack and see if the configuration provider is noticing the file change. We can test this by updating our ConfigureServices call to inject the IConfigurationRoot directly into our ValuesController, so we can directly access the values. This is generally discouraged in favour of the strongly typed configuration available through the IOptions<> pattern, but it lets us bypass the model binding for now. First we add the configuration to our IoC container: public class Startup { public void ConfigureServices(IServiceCollection services) { // inject the configuration directly services.AddSingleton(Configuration); // Configure our options values services.Configure<MyValues>(Configuration.GetSection("MyValues")); services.AddMvc(); } } And we update our ValuesController to receive and display the MyValues section of the IConfigurationRoot. [Route("api/[controller]")] public class ValuesController : Controller { private readonly IConfigurationRoot _config; public ValuesController(IConfigurationRoot config) { _config = config; } // GET api/values [HttpGet] public IEnumerable<KeyValuePair<string,string>> Get() { return _config.GetValue<string>("MyValues:DefaultValue"); } } Performing the same operation as before - debugging, then changing appsettings.json to our new values - gives: Excellent, we can see the new value is returned! This demonstrates that the appsettings.json file is being reloaded when it changes, and that it is being propagated to the IConfigurationRoot. Enabling trackConfigChanges Given we know that the underlying IConfigurationRoot is reloading as required, there must be an issue with the binding configuration of IOptions<>. We bound the configuration to our MyValues class using services.Configure<MyValues>(Configuration.GetSection("MyValues"));, however there is another extension method available to us: services.Configure<MyValues>(Configuration.GetSection("MyValues"), trackConfigChanges: true); This extension has the property trackConfigChanges, which looks to be exactly what we're after! Unfortunately, updating our Startup.Configure() method to use this overload doesn't appear to have any effect - our injected IOptions<> still isn't updated when the underlying config file changes. Using IOptionsMonitor Clearly we're missing something. Diving in to the aspnet/Options library on GitHub we can see that as well as IOptions<> there is also an IOptionsMonitor<> interface. Note, a word of warning here - the rest of this post is applicable to RC2, but has since been removed from RTM. It will be back post-1.0.0. using System; namespace Microsoft.Extensions.Options { public interface IOptionsMonitor<out TOptions> { TOptions CurrentValue { get; } IDisposable OnChange(Action<TOptions> listener); } } You can inject this class in much the same way as you do IOptions<MyValues> - we can retrieve our setting value from the CurrentValue property. We can test our appsettings.json modification routine again by injecting into our ValuesController: private readonly MyValues _myValues; public ValuesController(IOptionsMonitor<MyValues> values) { _myValues = values.CurrentValue; } Unfortunately, we have the exact same behaviour as before, no reloading for us yet: Which, finally, brings us to… The Solution So again, this solution comes with the caveat that it only works in RC2, but it will most likely be back in a similar way post 1.0.0. The key to getting reloads to propagate is to register a listener using the OnChange function of an OptionsMonitor<>. Doing so will retrieve a change token from the IConfigurationRoot and register the listener against it. You can see the exact details here. Whenever a change occurs, the OptionsMonitor<> will reload the IOptionsValue using the original configuration method, and then invoke the listener. So to finally get reloading of our configuration-bound IOptionsMonitor<MyValues>, we can do something like this: public void Configure(IApplicationBuilder app, IHostingEnvironment env,(); } In our configure method we inject an instance of IOptionsMonitor<MyValues> (this is automatically registered as a singleton in the services.Configure<MyValues> method). We can then add a listener using OnChange - we can do anything here, a noop function is fine. In this case we create a logger that writes out the full configuration. We are already injecting IOptionsMonitor<MyValues> into our ValuesController so we can give one last test by running with F5, viewing the output, then modifying our appsettings.json and checking again: Success! Summary In this post I discussed how to get changes to configuration files to be automatically detected and propagated to the rest of the application via the Options pattern. It is simple to detect configuration file changes if you inject the IConfigurationRoot object into your classes. However, this is not the recommended approach to configuration - a strongly typed approach is considered better practice. In order to use both strongly typed configuration and have the ability to respond to changes we need to use the IOptionsMonitor<> implementations in Microsoft.Extensions.Options. We must register a callback using the OnChange method and then inject IOptionsMonitor<> in our classes. With this setup, the CurrentValue property will always represent the latest configuration values. As stated earlier, this setup works currently in the RC2 version of ASP.NET Core, but has been subsequently postponed till a post 1.0.0 release.
http://andrewlock.net/reloading-strongly-typed-options-when-appsettings-change-in-asp-net-core-rc2/
CC-MAIN-2016-50
en
refinedweb
* This is the base class for all cache implementations provided in the64 * org.apache.oro.util package. To derive a subclass from GenericCache65 * only the ... methods66 * need be overridden.67 * Although 4 subclasses of GenericCache are provided with this68 * package, users may not derive subclasses from this class.69 * Rather, users should create their own implmentations of the70 * {@link Cache} interface.71 72 @author <a HREF="dfs@savarese.org">Daniel F. Savarese</a>73 @version $Id: GenericCache.java,v 1.1.1.1 2000/07/23 23:08:54 jon Exp $74 75 * @see Cache76 * @see CacheLRU77 * @see CacheFIFO78 * @see CacheFIFO279 * @see CacheRandom80 */81 public abstract class GenericCache implements Cache, java.io.Serializable {82 /**83 * The default capacity to be used by the GenericCache subclasses84 * provided with this package. Its value is 20.85 */86 public static final int DEFAULT_CAPACITY = 20;87 88 int _numEntries;89 GenericCacheEntry[] _cache;90 Hashtable _table;91 92 /**93 * The primary constructor for GenericCache. It has default94 * access so it will only be used within the package. It initializes95 * _table to a Hashtable of capacity equal to the capacity argument,96 * _cache to an array of size equal to the capacity argument, and97 * _numEntries to 0.98 * <p>99 * @param capacity The maximum capacity of the cache.100 */101 GenericCache(int capacity) {102 _numEntries = 0;103 _table = new Hashtable(capacity);104 _cache = new GenericCacheEntry[capacity];105 106 while(--capacity >= 0)107 _cache[capacity] = new GenericCacheEntry(capacity);108 }109 110 public abstract void addElement(Object key, Object value);111 112 public synchronized Object getElement(Object key) { 113 Object obj;114 115 obj = _table.get(key);116 117 if(obj != null)118 return ((GenericCacheEntry)obj)._value;119 120 return null;121 }122 123 public final Enumeration keys() { return _table.keys(); }124 125 /**126 * Returns the number of elements in the cache, not to be confused with127 * the {@link #capacity()} which returns the number128 * of elements that can be held in the cache at one time.129 * <p>130 * @return The current size of the cache (i.e., the number of elements131 * currently cached).132 */133 public final int size() { return _numEntries; }134 135 /**136 * Returns the maximum number of elements that can be cached at one time.137 * <p>138 * @return The maximum number of elements that can be cached at one time.139 */140 public final int capacity() { return _cache.length; }141 142 public final boolean isFull() { return (_numEntries >= _cache.length); }143 }144 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/apache/oro/util/GenericCache.java.htm
CC-MAIN-2016-50
en
refinedweb
#include <playerclient.h> Inherits ClientProxy. List of all members. PlannerProxy [virtual] Set the goal pose (gx, gy, ga) Get the list of waypoints. Writes the result into the proxy rather than returning it to the caller. Enable/disable the robot's motion. Set state to 1 to enable, 0 to disable. All proxies must provide this method. It is used internally to parse new data when it is received. Reimplemented from ClientProxy.).
http://playerstage.sourceforge.net/doc/Player-1.6.5/player-html/classPlannerProxy.php
CC-MAIN-2016-50
en
refinedweb
Now dispatch stats update is lock free. But reset of these stats stilltakes blkg->stats_lock and is dependent on that. As stats are per cpu,we should be able to just reset the stats on each cpu without any locks.(Atleast for 64bit arch).On 32bit arch there is a small race where 64bit updates are not atomic.The result of this race can be that in the presence of other writers,one might not get 0 value after reset of a stat and might see somethingintermediateOne can write more complicated code to cover this race like sending IPIto other cpus to reset stats and for offline cpus, reset these directly.Right not I am not taking that path because reset_update is more of adebug feature and it can happen only on 32bit arch and possibility ofit happening is small. Will fix it if it becomes a real problem. Forthe time being going for code simplicity.Signed-off-by: Vivek Goyal <vgoyal@redhat.com>--- block/blk-cgroup.c | 28 ++++++++++++++++++++++++++++ 1 files changed, 28 insertions(+), 0 deletions(-)diff --git a/block/blk-cgroup.c b/block/blk-cgroup.cindex 3622518..e41cc6f 100644--- a/block/blk-cgroup.c+++ b/block/blk-cgroup.c@@ -537,6 +537,30 @@ struct blkio_group *blkiocg_lookup_group(struct blkio_cgroup *blkcg, void *key) } EXPORT_SYMBOL_GPL(blkiocg_lookup_group); +static void blkio_reset_stats_cpu(struct blkio_group *blkg)+{+ struct blkio_group_stats_cpu *stats_cpu;+ int i, j, k;+ /*+ * Note: On 64 bit arch this should not be an issue. This has the+ * possibility of returning some inconsistent value on 32bit arch+ * as 64bit update on 32bit is non atomic. Taking care of this+ * corner case makes code very complicated, like sending IPIs to+ * cpus, taking care of stats of offline cpus etc.+ *+ * reset stats is anyway more of a debug feature and this sounds a+ * corner case. So I am not complicating the code yet until and+ * unless this becomes a real issue.+ */+ for_each_possible_cpu(i) {+ stats_cpu = per_cpu_ptr(blkg->stats_cpu, i);+ stats_cpu->sectors = 0;+ for(j = 0; j < BLKIO_STAT_CPU_NR; j++)+ for (k = 0; k < BLKIO_STAT_TOTAL; k++)+ stats_cpu->stat_arr_cpu[j][k] = 0;+ }+}+ static int blkiocg_reset_stats(struct cgroup *cgroup, struct cftype *cftype, u64 val) {@@ -581,7 +605,11 @@ blkiocg_reset_stats(struct cgroup *cgroup, struct cftype *cftype, u64 val) } #endif spin_unlock(&blkg->stats_lock);++ /* Reset Per cpu stats which don't take blkg->stats_lock */+ blkio_reset_stats_cpu(blkg); }+ spin_unlock_irq(&blkcg->lock); return 0; }-- 1.7.1
http://lkml.org/lkml/2011/5/18/368
CC-MAIN-2014-15
en
refinedweb
DataGridView.DataSource Property Gets or sets the data source that the DataGridView is displaying data for. Assembly: System.Windows.Forms (in System.Windows.Forms.dll) Property ValueType: System.Object The<T> class. The IBindingListView interface, such as the BindingSource class. For specific examples, see the Example section and the task table at the end of this section. null rather than using the default value of DBNull.Value, which is appropriate for database data. For more information, see Displaying Data in the Windows Forms DataGridView Control. The following table provides direct links to common tasks related to the DataSource property. The following code example demonstrates how to initialize a simple data-bound DataGridView. It also demonstrates how to set the DataSource property. using System; using System.Data; using System.Data.SqlClient; using System.Windows.Forms; using System.Drawing; public class Form1 : System.Windows.Forms.Form { private DataGridView dataGridView1 = new DataGridView(); private BindingSource bindingSource1 = new BindingSource(); public Form1() { dataGridView1.Dock = DockStyle.Fill; this.Controls.Add(dataGridView1); InitializeDataGridView(); } table; } [STAThreadAttribute()] public static void Main() {.
http://msdn.microsoft.com/en-us/library/system.windows.forms.datagridview.datasource(v=vs.90).aspx
CC-MAIN-2014-15
en
refinedweb
Hey, I was working on this when Garretts e-mail came in. It's killing me I should be working on my paying job, but I can't get this stuff out of my head. I've been subverted:-) I wanted to hold off submitting but I wanted you guys to see what I'm working on. back and forth I went in my head. So now I'm crazy and it's all your fault:-) The namespace problem he mentioned should have never been missed. Sorry. Or should I say scary. cmpilato@collab.net wrote: >Sander Striker <striker@apache.org> writes: > > > >>Can we please just create a branch and let gat work on >>that. Development is a lot easier to follow when it takes place in >>the repos. Furthermore I think this is cool enough to attract other >>developers to help out. >> >> > >Ben, Karl, and I had dinner with Glenn the other night, and I think we >all agree that a branch would be a Good Thing, but there are a couple >of reasons why we (at least, Glenn, Ben and I -- Karl had to jet >early) decided not to go this route up front: > This was my understanding as well. I'm not saying I don't want a branch. I'm saying I haven't earned a branch *yet*:-) > 1. We don't generally give commit access -- for any reason -- to > someone who has not, as Glenn says himself, "passed the litmus > test". Once we've seen some of his work, we can better > consider the branch option. > what he said. > 2. Glenn admitted that his initial runs at this work were not > being done in a way that minimized code churn on a per-patch > basis. In other words, and as you can see for yourself, his > first patch is a friggin' bomb-blast in the ol' FS construction > area! :-) > You were kinda warned:-) And anyone who helps me get this pig in, I owe huge >Glenn could benefit from a lot of initial review of > his patch, as well as some suggestions on how to, in the > future, make his changes more incrementally. > >Glenn, I don't *think* I'm speaking out of line with this -- if I am, > > >I'll accept public beratement as you see fit to so deliver. > Beratement; not gonna doit:-) The last thing in the world I want to do is mess up Subversion. I plan to earn a seat at the table. >The rest of you: I think Glenn has some good ideas about tackling what >is likely to be the biggest change to Subversion since self-hosting, >and I'd be thrilled if everyone who "gotz tha skeelz" could take the >time to review his work, understand and brainstorm about his master >plan, and then help him with the implementation. > > To be honest, the SQL page is only the tip of my evil master plan. Unfortunately, my writing skills are not so great. OK so they suck. But I will write and re-write until my point is made well. Once I get this other crap I'm working on finished. uggghh. That said, it's important that my work not mess with the primary goals. Period. End of story. I would feel horrible if it did. But I know the commiters won't let that happen. Which is totally cool from where I'm sitting. In the basement by the way. Some of my views: DAV and Apache were awesome choices. The auto-commit stuff will bring users in droves. No doubt in my mind. But the thing that turns me on the most is the FS/repos. I really want to roll up my sleeves and become one with it :-) It's a nice balance of complexity and simplicity. I friggin love it. The permanent store is the holy grail in my mind. I can't begin to list the number of things I think it applies to. I'm sure you guys see it. You built the thing. When you guys were looking for a logo. I wanted to submit a sponge btw. Lastly, let me leave you with this thought. Many people on this list have talked about their favorite ra layer. Mine would have to be ra_federated. Imagine a world where repos' play nice together. Like so. Hi, I'm gats repo. What's your name? Do we have any common friends. OH cool ......... Behind door number 1 is my java tape library stuff, enjoy. Behind door number two is .... I have have about 10GB of free space if you need a mirror. Please submit your mirroring requests as follows ..... I know you guys have sat around thinking about this crazy kind of stuff; right? OK now I'm going to go work. Email is down and I'm going to go close xchat. gat --------------------------------------------------------------------- To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org For additional commands, e-mail: dev-help@subversion.tigris.org Received on Fri Nov 22 04:07:54 2002 This is an archived mail posted to the Subversion Dev mailing list.
http://svn.haxx.se/dev/archive-2002-11/1458.shtml
CC-MAIN-2014-15
en
refinedweb
#include <math.h> double log10(double x); float log10f(float x); long double log10l(long double x); Link with -lm. Feature Test Macro Requirements for glibc (see feature_test_macros(7)): log10f(), log10l(): For special cases, including where x is 0, 1, negative, infinity, or NaN, see log(3). For a discussion of the errors that can occur for these functions, see log(3).
http://www.makelinux.net/man/3/L/log10f
CC-MAIN-2014-15
en
refinedweb
Help with objectifying code (constructor method) for simple compound interest quesion java nubee Greenhorn Joined: Mar 09, 2006 Posts: 4 posted Mar 09, 2006 14:30:00 0 Howdy Partners. First time here (so be gentle). Newbee to Java (but getting there). Have class assign to calculate compound interest. 1st bit of assign ok with so far. i.e. 3 command line args (amount,rate,time) are used to output a total (integer). 2nd bit of assign i'm stuck: ". . .rewrite the class as account.. using a constructor of the form: public Account(int a, float r) { ... } and a method of the form public double getBalance(int time) { ... } that returns the balance the queried Account object would have after the elapsed time/years. This returns a double and is supposed to leave the balance in the queried Account object unchanged. I have started with this public class Account { int amount; float rate; public Account(int a, float r){ amount = a; float = r; } } public double getBalance(int time) { // do the compound interest formula return theResult } Thats as far as i can get with my little brain. I dont know where i sould put public void main( String ..... (or even if it needs one). Any help is much appreciated. :-) Tom Sullivan Ranch Hand Joined: Dec 20, 2005 Posts: 72 posted Mar 09, 2006 14:50:00 0 In your constructor, change it to be: this.amount = a; Do the same for all values you pass in where you have declared the local vars. You don't have to have a main in this class. You could do: public class InterestCalc { private int var1; private int var2; private int var3; public InterestCalc(int var1, int var2, int var3) { this.var1 = var1; this.var2 = var2; this.var3 = var3; } public int doCalc() { int result; //obviously, you want to change the calc to your needs... return result = (var1 + var2)/var3 } } Now you can use another class to instantiate this one for testing . public class TestInterestCalc { public static void main (String [] agrs) { InterestCalc ic = new InterestCalc(1, 1, 1); int result = ic.doCalc(); System.out.println(result); } } Good luck. Layne Lund Ranch Hand Joined: Dec 06, 2001 Posts: 3061 posted Mar 09, 2006 14:55:00 0 I would start by seeing if the code you gave compiles. If not, where are the errors? Can you see how to fix them? Once you get that much to compile, then figure out where main() should go. You could put it in the Account class, if you wish. However, it is very common to have a separate class just with the main() method. The final thing you need to figure out is what to do in the getBalance() method. Do you know how to calculate compound interested by hand? What is the formula for this? How do you translate that formula into Java? Also, computing some examples by hand will help you verify that your program is correct. I suggest you do these examples before you even write any more code. Let me know what you figure out from here. And feel free to come back with more questions. Layne Java API Documentation The Java Tutorial Tom Sullivan Ranch Hand Joined: Dec 20, 2005 Posts: 72 posted Mar 09, 2006 14:59:00 0 One more thing. If you have to use a command line arg, you won't be able to use the example I gave as it sits. But, you can configure the system to take the command line args in either class by incorporating the main thread, taking in the args and then saying new InterestCalc(args[0], args[1], args[2]); in main. Of course this is after you parse the string to the type you want as you would already be doing if your first version works as you expect with a main. [ March 09, 2006: Message edited by: Tom Sullivan ] java nubee Greenhorn Joined: Mar 09, 2006 Posts: 4 posted Mar 09, 2006 17:38:00 0 Tom, From the assignment, i get the impression that its only wanting 1 class and not 2 :-( . The teacher is going to test by doing ' java Account 100 100 1' Also, he has given us a list of deliverables (ref below). AccountApplication and AccountApplet form part of the 'teachers' code which shows up in a html/gui face, the 3 input variables and output. So, i'm still stuck on how to go to the next bit of code. Arggg ..... help -C ------------------------------------------------------------------- -rw-rw-r-- 1 comp285 comp285 474 Mar 3 16:37 AccountApplet.class -rw------- 1 comp285 comp285 192 Jan 10 10:45 AccountApplet.html -rw------- 1 comp285 comp285 399 Mar 3 16:39 AccountApplet.java -rw-rw-r-- 1 comp285 comp285 521 Mar 3 16:37 AccountApplication.class -rw-rw-r-- 1 comp285 comp285 372 Mar 3 15:54 AccountApplication.java -rw-rw-r-- 1 comp285 comp285 1272 Mar 3 16:37 Account.class -rw------- 1 comp285 comp285 2363 Jan 10 10:45 Account.java -rw-rw-r-- 1 comp285 comp285 1679 Mar 3 16:37 AccountWidget$1.class -rw-rw-r-- 1 comp285 comp285 1765 Mar 3 16:37 AccountWidget.class -rw------- 1 comp285 comp285 2602 Mar 3 15:26 AccountWidget.java -rw-rw-r-- 1 comp285 comp285 1493 Mar 3 16:37 AdvancedAccount.class -rw------- 1 comp285 comp285 2793 Jan 10 10:45 AdvancedAccount.java -rw-rw-r-- 1 comp285 comp285 515 Mar 3 16:37 CenteredFrame$1.class -rw-rw-r-- 1 comp285 comp285 842 Mar 3 16:37 CenteredFrame.class -rwxrwxrwx 1 comp285 comp285 38 Mar 3 14:29 CenteredFrame.java -rw-rw-r-- 1 comp285 comp285 1115 Mar 3 16:37 Compound.class -rw------- 1 comp285 comp285 1479 Mar 3 14:38 Compound.java drwxrwxr-x 5 comp285 comp285 4096 Mar 3 16:37 doc -rw-rw-r-- 1 comp285 comp285 378 Mar 3 16:37 Makefile I agree. Here's the link: subject: Help with objectifying code (constructor method) for simple compound interest quesion Similar Threads SubClass Blues! Having trouble understanding an error code i keep getting. Please help! Very Confused!! Program like ATM where person enters amount in dollars and cents but program uses int for monies Help on testprogram and subclass please All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/402701/java/java/objectifying-code-constructor-method-simple
CC-MAIN-2014-15
en
refinedweb
Inheri-tance, actually.. you had a typo. ;-) By my standards, there's exactly one reason to use inheritance: to define the shared interface and default behavior for a family of similar classes. The code you gave doesn't quite do that. The default interface for Vehicle is 'pretty much anything'. You've tried to roll your own way of making the interfaces stable, but it doesn't work as well as it could. There's no guarantee Bike and Car will share the same data members (and, by extension, the same access methods), for instance. You've forced the interfaces to be the same by defining the '%valid' hash redundantly, but you'd be better off sinking that information down to the Vehicle class. In fact, I'd suggest getting rid of AUTOLOAD entirely, defining the methods you want explicitly in Vehicle, then scrapping the '%ro' hash and overriding the subclass methods to get the behavior you want: package Vehicle; sub new { my ($type, $data) = @_; my $O = bless $type->_defaults(), $type; # [1] for $k (keys %$O) { $O->{$k} = $data->{$k} if defined ($data->{$k}); # [2] } $O->_sanity_check(); # [3] return $O; } =item new (hash-ref: data) : Vehicle-ref [1] We start from a prototype data structure known to be good. [2] Now we override any values defined by the arguments. We only override values that are already in the prototype, though. We don't want to add anomalous data members by just copying everything straight over. [3] Now we run the new values through a hygiene filter to make sure everything's still good. =cut sub _defaults { return ({ 'wheels' => 0, 'doors' => 0, 'color' => 'none', 'passengers' => 0, }); } =item _defaults (nil) : hash-ref This method takes no input, and returns a pre-filled hash of valid attributes for a given vehicle type. Technically, this is a lobotomized version of the Factory Method design pattern. =cut sub _sanity_check { my $O = shift; if ($O->{'wheels'}) { print STDERR "I ran into a problem.. " . "a generic vehicle shouldn't have " . $O->{'wheels'} . ' ' . "wheels.\n" ; } if ($O->{'doors'}) { print STDERR "I ran into a problem.. " . "a generic vehicle shouldn't have " . $O->{'doors'} . ' ' . "doors.\n" ; } if ('none' ne $O->{'color'}) { print STDERR "I ran into a problem.. " . "a generic vehicle shouldn't be colored " . $O->{'color'} . ".\n" ; } if ($O->{'passengers'}) { print STDERR "I ran into a problem.. " . "a generic vehicle doesn't carry " . $O->{'passengers'} . ' ' . "passengers.\n" ; } return; } =item _sanity_check (nil) : nil This method doesn't take any input or return any value as output, but it does print any errors it sees to STDERR. In a real program, we'd use some kind of trace to see when and where the error occured. =cut sub _access { my ($O, $item, $value) = @_; if (defined ($value)) { $O->{$item} = $value; $O->_sanity_check(); } return ($O->{$item}); } =item _access (item, value) : value This is a generic back-end for the accessor functions. It takes the attribute name and an optional value as input, and returns the item's value as output. I've thrown in a sanity check every time an item's value is changed, just for the sake of paranoia. =cut sub Wheels { return ($_[0]->_access ('wheels', $_[1])); } sub Doors { return ($_[0]->_access ('doors', $_[1])); } sub Color { return ($_[0]->_access ('color', $_[1])); } sub Passengers { return ($_[0]->_access ('passengers', $_[1])); } =item accessor methods These are trivial methods that handle get-and-set operations for the attributes. The fact that _access() does an automatic sanity check after setting any new value means we don't have to put sanity checks in each of these methods.. though we probably would do individual sanity checks in a real application. This is one of those cases where 'lazy' means 'doing lots of work now so we won't have to do even more work later'. =cut package Bike; @ISA = qw( Vehicle ); =item Class Bike This class will override _defaults(), _sanity_check(), and possibly some of the access methods if we want to make 'wheels' always equal 2, for instance: sub Wheels { if ((defined $_[1]) && (2 != $_[1])) { print "You can't do that. I won't let you. So there.\n"; } } =cut package Car; @ISA = qw( Vehicle ); =item Class Car Again, this class will override _defaults(), _sanity_check(), and any access methods we want to harden against changes. =cut [download] In reply to Re: OO Inheritence by mstone in thread OO Inheritence by Limbic~Region A foolish day Just another day Internet cleaning day The real first day of Spring The real first day of Autumn Wait a second, ... is this poll a joke? Results (427 votes), past polls
http://www.perlmonks.org/?parent=356482;node_id=3333
CC-MAIN-2014-15
en
refinedweb
Name | Synopsis | Description | Attributes | See Also #include <unistd.h> long gethostid(void); The gethostid() function returns the 32-bit identifier for the current host. If the hardware capability exists, this identifier is taken from platform-dependent stable storage; otherwise it is a randomly generated number. It is not guaranteed to be unique. If the calling thread's process is executing within a non-global zone that emulates a host identifier, then the zone's emulated 32-bit host identifier is returned. See attributes(5) for descriptions of the following attributes: hostid(1), sysinfo(2), attributes(5), standards(5), zones(5) Name | Synopsis | Description | Attributes | See Also
http://docs.oracle.com/cd/E19082-01/819-2243/6n4i09929/index.html
CC-MAIN-2014-15
en
refinedweb
On Sat, 27 Nov 2010, Mathieu Desnoyers wrote:> *.>> Then we might want to directly target the implementation with> this_cpu_add_return/this_cpu_sub_return (you implement these in patch 03), which> would not need to disable preemption on the fast path. I think we already> discussed this in the past. The reason eludes me at the moment, but I remember> discussing that changing the increment/decrement delta to the nearest powers of> two would let us deal with overflow cleanly. But it's probably too early in the> morning for me to wrap my head around the issue at the moment.We still would need to disable preemption because multiple this_cpu_opsare needed. The only way to avoid preemption would be if the modificationsto the differential and the adding to the per zone counters would beper cpu atomic.
https://lkml.org/lkml/2010/11/29/282
CC-MAIN-2014-15
en
refinedweb
Development of Harmattan apps requires a device at the moment, largely because the simulator for Harmattan builds is so damn slow. What I'd like to try is building my harmattan apps for the desktop (windows or linux) instead. I know that this will mean a lot of things probably won't work properly, but it would certainly allow me to get a feel for the app framework. Can anyone provide any guidance on how to enable this for my desktop builds against the latest SDK (ideally on windows) import QtQuick 1.1 import com.meego 1.0
http://developer.nokia.com/community/discussion/showthread.php/227002-Qt-Quick-1-2-and-com-meego-1-0-for-desktop-builds?p=852961&mode=threaded
CC-MAIN-2014-15
en
refinedweb
There's only one way to answer this correctly. "Hmmm... I'm not sure, turn around..." Approach from behind, wrap firmly in arms, bend head back, and apply spine-tingling-toe-curling-leave-her-breathless kiss, then <edited for the sake of maintaining a semblance of innocence>. ;) 1) Never expect a man to lower the toilet seat. He puts it up, you put it down, life will always be like this. 2) Be high maintanence and proud of it. If you don't ask to be looked after, you won't be. And then when you suddenly turn around and do something extra nice in return (s)he'll be grateful for days. 3) Never ask if you look fat unless you're prepared to hear "yes". Of course a boyfriend who cooks and cleans makes life easy too :) If you follow those directions to the letter, there is precious little time left to complicate your life...
http://www.webmasterworld.com/forum9/3758-2-30.htm
CC-MAIN-2014-15
en
refinedweb
Illegal Prime Number Unzips to DeCSS CmdrTaco posted about 13 years ago | from the allright-thats-pretty-friggin-clever dept. . Isn't that whole DeCSS thing getting kind of old? (1) Anonymous Coward | about 13 years ago | (#355825) Re:In other news.. (1) Anonymous Coward | about 13 years ago | (#355826) The best reason why 1 is not considered prime is so that the Fundamental Theorem of Arithmetic can be stated elegantly. The Fundamental Theorem of Arithmetic is that every natural number has a unique prime factorization. If 1 were prime that factorization would not be unique. But if you're not into hardcore math then you can call 1 whatever the heck you want since everybody will still know what you talking about. 'Gotta love that math! (1) Anonymous Coward | about 13 years ago | (#355827) Re:Hmmm... (Hey moderators) (1) Anonymous Coward | about 13 years ago | (#355828) Like evrything, moderation should be done in moderation. Re:Hmmm... (2) Anonymous Coward | about 13 years ago | (#355830) Re:Hmmm... (2) Anonymous Coward | about 13 years ago | (#355831) Re:Isn't that whole DeCSS thing getting kind of ol (2) Anonymous Coward | about 13 years ago | (#355832). Useful math? (1) Indomitus (578) | about 13 years ago | (#355834) </sarcasm> Re:Hmmm... (1) MassacrE (763) | about 13 years ago | (#355840) Hmm.. (5) Chacham (981) | about 13 years ago | (#355841) --- ticks = jiffies; while (ticks == jiffies); ticks = jiffies; Other uses of primes (2) acb (2797) | about 13 years ago | (#355846) Re:Isn't that whole DeCSS thing getting kind of ol (2) dattaway (3088) | about 13 years ago | (#355851) Tomorrow's Headlines Today (5) Just Some Guy (3352) | about 13 years ago | (#355852) RIAA Petitions Congress To Ban Number Theory Mathematicians Declared "Enemy of Intellectual Property (and the American Way)" Rambus Patents Prime Numbers Any guesses about which one you'll see first? :) Re:Hmm.. (1) heretic (5829) | about 13 years ago | (#355858) Re:In other news.. (2) ocie (6659) | about 13 years ago | (#355862) Re:Numbers and hyperlinks (1) dwlemon (11672) | about 13 years ago | (#355873) Re:In other news.. (1) HeghmoH (13204) | about 13 years ago | (#355874) Of course, does it really matter? Re:In other news.. (1) HeghmoH (13204) | about 13 years ago | (#355875) Re:Hmmm... (2) HeghmoH (13204) | about 13 years ago | (#355876) So, I guess what I'm saying is, yeah, that's probably what happened, but so what? Easy--infinite number of primes (4) crow (16139) | about 13 years ago | (#355881):Hmm.. (1) ibis (16191) | about 13 years ago | (#355882) Re:Isn't that whole DeCSS thing getting kind of ol (1) Teun (17872) | about 13 years ago | (#355886) Re:Shorter code (1) wavelet (17885) | about 13 years ago | (#355887) Another illegal prime, efdtt.c (5) wavelet (17885) | about 13 years ago | (#355888) Incredibly Cool... (1) augustz (18082) | about 13 years ago | (#355889) Re:Easy--infinite number of primes (3) platypus (18156) | about 13 years ago | (#355892):Hmmm... (1) mTor (18585) | about 13 years ago | (#355895) And now for something completely different... Let me digress a bit... I checked your homepage and I agree 100% with what you said right here: It's amazing how now things changed. Check this link: I think that this passage from 1984 sums it all up the best: Hmmm... (5) mTor (18585) | about 13 years ago | (#355896):numbers and itellectual property (3) mindstrm (20013) | about 13 years ago | (#355898) (5) dutky (20510) | about 13 years ago | (#355899) Re:Hmmm... (2) YoJ (20860) | about 13 years ago | (#355900) numbers and itellectual property (5) Saint Nobody (21391) | about 13 years ago | (#355901). Hmm.. (2) abelsson (21706) | about 13 years ago | (#355904):In other news.. (prime) (1) proffi (21949) | about 13 years ago | (#355905) 1 is NOT prime! (see e.g. "Kleine Enzyklopädie Mathematik", p. 24, Verlag Harri Deutsch, Frankfurt, 1984) Apart from that little question, I still have to verify the claim deposited on that Don't conform! IkKampfProfessor75 Hee hee hee. (1) ChrisGoodwin (24375) | about 13 years ago | (#355908) -- Re:In other news.. (1) suraklin (28841) | about 13 years ago | (#355910) In other news.. (1) PovRayMan (31900) | about 13 years ago | (#355915) - Age - Math Class - Slashdot UIDs - Programming Judge Sipowitz was then sued by CmdrTaco because of his slashdot uid. CmdrTaco is also suing for emotional damages... (I asked a bunch of people on IRC if 1 was a prime number. I got lot of yes and no responses. Then again turning to IRC for help? I must be crazy.) ---------- Windows 2000 encoded to a single number! (5) alex@thehouse (43299) | about 13 years ago | (#355920)... (2) cyberdonny (46462) | about 13 years ago | (#355921)... (2) cyberdonny (46462) | about 13 years ago | (#355922) Trailing zeroes (1) alehmann (50545) | about 13 years ago | (#355926) Number and the GPL (1) alehmann (50545) | about 13 years ago | (#355927) ...Which brings up an interesting question... can a _number_ be GPL'd? What about patented? This scheme allows basically any computer program to be represented as a number, and if you want a prime all you have to do is append trailing garbage (ignored by gzip) until the number is prime. Re:What about binaries... (1) alehmann (50545) | about 13 years ago | (#355928) A-HA (3) mr100percent (57156) | about 13 years ago | (#355930) BTW, doesn't the MPAA's address have the number 666 in it? Or am I thinking of another corp.? --Never trust a tech who tattoes his IP to his arm, especially if its DHCP. You can reduce this further. (5) TrevorB (57780) | about 13 years ago | (#355934) However determining this number would be (ludicrously) computionally expensive. Another quest for distributed.net? Why work on the CSS code, why not the keys themselves? That would be more interesting. One: prime or composite? (1) Cantara (68186) | about 13 years ago | (#355940) +-1 is a unit. Re:Isn't that whole DeCSS thing getting kind of ol (4) dbrutus (71639) | about 13 years ago | (#355944) Reminds me of the Crystal Rod Encyclopedia (5) Speare (84249) | about 13 years ago | (#355954):Easy--infinite number of primes (1) cheese_wallet (88279) | about 13 years ago | (#355958) I don't know if you could always find such a prime number, but I have a gut feeling that such a number exists for each case. Oh grow up! (2) donutello (88309) | about 13 years ago | (#355959) What about binaries... (2) jmv (93421) | about 13 years ago | (#355960) Re:Hmmm... (2) OmegaDan (101255) | about 13 years ago | (#355966) I can't wait for the t-shirt :) Re:Sans Tables? (1) BradleyUffner (103496) | about 13 years ago | (#355973) =\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\ Pre-Slashdot Effect? (1) rograndom (112079) | about 13 years ago | (#355978) prime directive (2) jafuser (112236) | about 13 years ago | (#355979) -- EFF Member #11254 Re:In other news.. (2) (void*) (113680) | about 13 years ago | (#355982) Where 1 is a prime or not is largely a matter of contention. But you certainly cannot call it a composite number, for then all other prime numbers would have to be composite as well. Re:One: prime or composite? (2) (void*) (113680) | about 13 years ago | (#355983) LOL. (1) kormoc (122955) | about 13 years ago | (#355986) Re:Hmm.. (2) hrieke (126185) | about 13 years ago | (#355989) Numbers and hyperlinks (1) pallex (126468) | about 13 years ago | (#355990) Perhaps we'll find out soon whether numbers CAN be illegal - not just very very long ones, such as a cd (or religious secret), but short or natural ones. Re:numbers (1) pallex (126468) | about 13 years ago | (#355991) So a database of prime numbers would have to exclude certain numbers? What year do you think it will be when the last country with internet access on earth bows to the wishes of an American judge and orders a such a database to be taken off line? I dont think it will happen this year, anyway? Primster? (2) pallex (126468) | about 13 years ago | (#355992)? Patent Office closing early today. (1) crashnbur (127738) | about 13 years ago | (#355994) *low growl* Higher Text book prices (2) Adler (131568) | about 13 years ago | (#355995) I'm gonna go start work on "Texter" an internet text book trading programme. Re:In other news.. (1) jedwards (135260) | about 13 years ago | (#355998) Re:Incredibly Cool... (2) jedwards (135260) | about 13 years ago | (#355999) Re:Hmmm... (1) coolgeek (140561) | about 13 years ago | (#356003) Re:Isn't that whole DeCSS thing getting kind of ol (2) Salsaman (141471) | about 13 years ago | (#356004) This prime number _IS_ deCSS. The MPAA will either have to ban this prime number, ban gzip, or ban anyone from telling people that the number is deCSS. Either way, I don't see this getting through the courts, even in the US... Re:1984 online? (2) elegant7x (142766) | about 13 years ago | (#356008) Rate me on Picture-rate.com [picture-rate.com] length == precision (2) elegant7x (142766) | about 13 years ago | (#356009) Rate me on Picture-rate.com [picture-rate.com] numbers (1) gunner800 (142959) | about 13 years ago | (#356010) Just how persuasive is that? I don't know. Windows is copyrighted; the bigass number is not. I suspect the final outcome will be some judge saying "too bad" and declaring this number illegal without actually explaining himself. My mom is not a Karma whore! CSS not an honest attempt at encryption (2) Dram (149119) | about 13 years ago | (#356015) Re:Hmm.. (1) d_pirolo (150996) | about 13 years ago | (#356016) Shorter code (1) Pxtl (151020) | about 13 years ago | (#356017) Re:Hmm.. (2) chipuni (156625) | about 13 years ago | (#356026) One advantage (1) eric434 (161022) | about 13 years ago | (#356029) DeCSS old, but an illegal number is certainly inte (2) TeknoHog (164938) | about 13 years ago | (#356035) This also raises the interesting question whether you could take any pattern in nature, filter it through some (legal) algorithm and get DeCSS. You could always (in principle) hack such a filter that produced the DeCSS code out of any pattern you happen to choose. Because there number of such patterns is infinite, there would be an infinite number of filters (including all filters already written). But since they cannot outlaw nature (I hope), all filters would become illegal. However, the above scenario is so absurd that the only conclusion is: you just can't outlaw DeCSS!-) -- Not just DeCSS! (4) Dyolf Knip (165446) | about 13 years ago | (#356036):Isn't that whole DeCSS thing getting kind of ol (1) dadragon (177695) | about 13 years ago | (#356043) Segfault wallpaper (2) Alien54 (180860) | about 13 years ago | (#356048) one of those pretty random number things, and then get is distributed on the free downloads sites as a windows theme.... share the wealth. Re:48565...2944 (1) Ratcrow (181400) | about 13 years ago | (#356049) Re:Hmm.. (1) Ratcrow (181400) | about 13 years ago | (#356050) Re:Hmm.. (1) bn557 (183935) | about 13 years ago | (#356065) Excellent (2) ZanshinWedge (193324) | about 13 years ago | (#356071) Ya know, this battle of wits between the DVD CCA / MPAA and the hackers of the world is not going particularly well for the corporate interests. every code is just a number... (1) ponxx (193567) | about 13 years ago | (#356072) The one reason this is interesting though is that it highlights an important question about code (and speach in general), does someone "create" it, or does one just "find" it. If I write a programme and combile it I could say I just researched for a while to come up with the hexadecimal number that executes to run a word processor... Maybe, I can claim prior art on all code by just writing down a mathematical representation for all natural numbers (e.g. the commonly used N) + an algorithm for converting it into code (such as the change to hexadec. and gunzip it, or just rename to .exe and execute it). I have in effect written down all possible computer programmes, just because someone else "found" one of them as well does not mean I don't hold my rights to it :), and just because i haven't tested every single one of them, does not mean they don't exist... It might be worth trying to get a US patent on all code that can be obtained from a single number :) (i.e. all code) Irony (2) OverCode@work (196386) | about 13 years ago | (#356078) -John Portable (1) perlyking (198166) | about 13 years ago | (#356079) -- Write an encryption program with this (1) guinsu (198732) | about 13 years ago | (#356080) Sans Tables? (3) Mr. Polite (218181) | about 13 years ago | (#356088) Woohoo (3) kosipov (218202) | about 13 years ago | (#356089) Enough with the Java and Perl script... (1) AFCArchvile (221494) | about 13 years ago | (#356091) Re:Enough with the Java and Perl script... (1) AFCArchvile (221494) | about 13 years ago | (#356092) Re:Hmm.. (1) ThymePuns (222253) | about 13 years ago | (#356093) Great.... (1) codewolf (239827) | about 13 years ago | (#356102) Re:Shorter code (1) samrolken (246301) | about 13 years ago | (#356104) I would like to see an illegal prime number. Re:Hmm.. (2) Anoriymous Coward (257749) | about 13 years ago | (#356106) -- #include "stdio.h" Someone has to say it (3) Joey7F (307495) | about 13 years ago | (#356115) Re:48565...2944 -- NOT!! (1) xkenny13 (309849) | about 13 years ago | (#356117) or what if... (2) screwballicus (313964) | about 13 years ago | (#356122) - Someone writes a java app capable of searching Pi for a number series identical to the ASCII values of the text they wish to tranfer. - Upon finding this series, the location of it in Pi is transferred in a format something like "12137-12193" meaning "the message begins at place 12137 and ends at place 12193" - Bingo. Your recipient has the message and all you transferred was two completely unrelated numbers. Then again, maybe Pi is illegal. 48565...2944 (1) KingFOOL (314876) | about 13 years ago | (#356123)
http://beta.slashdot.org/story/17124
CC-MAIN-2014-15
en
refinedweb
ASPxDashboardExporter Class Performs server-side export of a dashboard/dashboard item displayed in the Web Forms Dashboard. Namespace: DevExpress.DashboardWeb Assembly: DevExpress.Dashboard.v22.1.Web.WebForms.dll Declaration public class ASPxDashboardExporter : WebDashboardExporter Public Class ASPxDashboardExporter Inherits WebDashboardExporter Remarks The ASPxDashboardExporter class allows you to implement server export for the Web Forms Dashboard Control. It uses information stored in a dashboard state to determine the currently selected filters, drill-down levels and parameter values. The state overrides default values specified in dashboard items and parameters. An empty state for the ASPxDashboardExporter means that an end-user has cleared all filters and parameters, and canceled all drill-down operations. The ASPxDashboardExporter inherits members of the WebDashboardExporter class. Note You should specify a dashboard state as the export method’s parameter so that the exported document retains all dashboard adjustments, whether they are default or performed by an end-user. You can also specify export options for the resulting document. See Manage Exporting Capabilities to learn more. To initialize ASPxDashboardExporter for the specified ASPxDashboard control, pass the control’s instance to the ASPxDashboardExporter constructor. using DevExpress.DashboardWeb; //... ASPxDashboardExporter exporter = new ASPxDashboardExporter(ASPxDashboard1); Use the following classes to perform server-side export for other platforms: - ASP.NET MVC: WebDashboardExporter - ASP.NET Core: AspNetCoreDashboardExporter Related GitHub Examples The following code snippets (auto-collected from DevExpress Examples) contain references to the ASPxDashboardExporter class. Note The algorithm used to collect these code examples remains a work in progress. Accordingly, the links and snippets below may produce inaccurate results. If you encounter an issue with code examples below, please use the feedback form on this page to report the issue.
https://docs.devexpress.com/Dashboard/DevExpress.DashboardWeb.ASPxDashboardExporter
CC-MAIN-2022-33
en
refinedweb
written by Eric J. Ma on 2019-03-22 | tags: python hacks tips and tricks data science productivity coding If you’ve done Python programming for a while, I think it pays off to know some little tricks that can improve the readability of your code and decrease the amount of repetition that goes on. One such tool is functools.partial. It took me a few years after my first introduction to partial before I finally understood why it was such a powerful tool. Essentially, what partial does is it wraps a function and sets a keyword argument to a constant. That’s it. What do we mean? Here’s a minimal example. Let’s say we have a function f, not written by me, but provided by someone else. def f(a, b): result = # do something with a and b. return result In my code, let’s say that I know that the value that b takes on in my app is always the tuple (1, 'A'). I now have a few options. The most obvious is assign the tuple (1, 'A') to a variable, and pass that in on every function call: b = (1, 'A') result1 = f(a=1, b=b) # do some stuff. result2 = f(a=15, b=b) # do more stuff. # ad nauseum N = # set value of N resultN = f(a=N, b=b) The other way I could do it is use functools.partial and just set the keyword argument b to equal to the tuple directly. from functools import partial f_ = partial(f, b=(1, 'A')) Now, I can repeat the code above, but now only worrying about the keyword argument a: result1 = f_(a=1) # do some stuff. result2 = f_(a=15) # do more stuff. # ad nauseum N = # set value of N resultN = f_(a=N) And there you go, that’s basically how functools.partial works in a nutshell. Now, where have I used this in real life? The most common place I have used it is in Flask. I have built Flask apps where I need to dynamically keep my Bokeh version synced up between the Python and JS libraries that get called. To ensure that my HTML templates have a consistent Bokeh version, I use the following pattern: from bokeh import __version__ as bkversion from flask import render_template, Flask from functools import partial render_template = partial(render_template, bkversion=bkversion) # Flask app boilerplate app = Flask(__name__) @app.route('/') def home(): return render_template('index.html.j2') Now, because I always have bkversion pre-specified in render_template, I never have to repeat it over every render_template function call.
https://ericmjl.github.io/blog/2019/3/22/functools-partial/
CC-MAIN-2022-33
en
refinedweb
Li Chen's Blog Porting a C# Windows application to Linux I own a Windows application. To expand our customer base, we need to create a Linux edition. In anticipating the demand, we previously decided to place the majority of logics in a few .net standard libraries and this is a big paid-off. However, there are still a few things we need to do so that the same code would work on both Windows and Linux. - Path separator is different between Windows and Linux. Windows uses “\” as separator while Linux uses “/” as separator. The solution is to always use Path.Combine to concatenate paths. Similarly, use Path.GetDirectoryName and Path.GetFileName to split the paths. - Linux file system is case sensitive. The solution is to be very consistent with path names and always use constants when a path is used in multiple places. - In text files, Windows uses \r\n to end lines while Linux uses \r. The solution is to use TextReader.ReadLine and TextWriter.WriteLine. TextReader.ReadLine reads Windows text files correctly on Linux and vice versa. If we have to face line-ending characters explicitly, use Environment.NewLine. - Different locations for program files and program data. Windows by defaults store programs in “c:\Program Files” folder and store program data in “c:\ProgramData”. The exact location can be determined from the %ProgramFile% and %ProgramData% environment variables. Linux, in contrast, has a different convention and one often install programs under /opt and write program data under /var. For complete reference, see:. This is an area we have to branch the code and detect operating system using RuntimeInformation.IsOSPlatform. - Lack of registry in Linux. The solution is to just use configuration files. - Windows has services while Linux has daemon. The solution is to create a Windows Service application on Windows and create a console application on Linux. RedHat has a good article on creating Linux daemon in C#:. For addition information on Systemd, also see:. - Packaging and distribution. Windows application are usually packaged as msi or Chocolatey package. Linux applications are usually packaged as rpm. This will be the subject of another blog post. Building .net core on an unsupported Linux platform Introduction I need to a product that I own from Windows to Amazon Linux. However, Amazon Linux is not a supported platform for running .net core by Microsoft. Although there is a Amazon Linux 2 image with .net core 2.1 preinstalled and it is possible to install the CentOS version of .net core on Amazon Linux 1, I went on a journey to build and test .net core on Amazon Linux to have confidence that my product will not hit a wall. .net core require LLVM 3.9 to build. However, we can only get LLVM 3.6.3 from the yum repository. So we have to build LLVM 3.9.LLVM 3.9 requires Cmake 3.11 or later, but we can only get Cmake 2.8.12 from the yum repository. So we have to start from building CMake. Building CMake The procedure to build CMake can be found in. Here is what I did: sudo yum groupinstall "Development Tools" Sudo yum install swig python27-devel libedit-devel version=3.11 build=1 mkdir ~/temp cd ~/temp wget tar -xzvf cmake-$version.$build.tar.gz cd cmake-$version.$build/ ./bootstrap make -j4 sudo make install Building CLang and LVVM With CMake installed, we can build LLVM. My procedure of building Clang and LLVM is similar to the procedure in. Please also refer to for additional information. cd $HOME git clone cd $HOME/llvm git checkout release_39 cd $HOME/llvm/tools git clone git clone cd $HOME/llvm/tools/clang git checkout release_39 cd $HOME/llvm/tools/lldb git checkout release_39 Before we start building, we need to patch LLVM source code for Amazon Linux triplet.Otherwise LLVM cannot find the c++ compiler on Amazon Linux. To patch, find file ./tools/clang/lib/Driver/ToolChains.cpp, find an array that looks like: "x86_64-linux-gnu", "x86_64-unknown-linux-gnu", "x86_64-pc-linux-gnu", "x86_64-redhat-linux6E", "x86_64-redhat-linux", "x86_64-suse-linux", "x86_64-manbo-linux-gnu", "x86_64-linux-gnu", "x86_64-slackware-linux", "x86_64-linux-android", "x86_64-unknown-linux" Append "x86_64-amazon-linux" to the last line. Similar, append "i686-amazon-linux" to "i686-montavista-linux", "i686-linux-android", "i586-linux-gnu" Now we can build: mkdir -p $HOME/build/release cd $HOME/build/release cmake -DCMAKE_BUILD_TYPE=release $HOME/llvm make –j4 sudo make install Building CoreCLR and CoreFx With Clang/LLVM 3.9 installed, we can now build CoreCLR and CoreFx. We need to install the prerequisites first: sudo yum install lttng-ust-devel libunwind-devel gettext libicu-devel libcurl-devel openssl-devel krb5-devel libuuid-devel libcxx sudo yum install redhat-lsb-core cppcheck sloccount mkdir ~/git git clone git clone Go to each directory and check out a version, for eample: git checkout tags/v2.0.7 Now just follow to the build. ./clean.sh -all ./build.sh -RuntimeOS=linux ./build-tests.sh Also look at: and Conclusions With the steps above, I was able to build and test .net core on Amazon Linux 1 and 2. Note that .net core requires GLIBC_2.14 to run. To find the version of GLIBC on your version of Amazon Linux, run: strings /lib64/libc.so.6 | grep GLIBC If you don’t see 2.14 on the list, .net core will not run. try “sudo yum update” to see if you can update to a later version of GLIBC. Additionally, since many newer programming languages were build on LLVM, this exercise also allow us to build other languages that require newer version of LLVM than the version in the yum repository. Configure Open Live Writer to weblogs.asp.net I have not blogged for a while. When I opened my Open Live Writer, I got error with. I searched the web. Most blogs were still referencing the xmlrpc url which no longer exists. Fortunately, Fixing is easy. Just Add Account and select “Other services” On the next screen, enter the url of the blog (without xmlrpc). Open Live Writer and Orchard are smart enough to figure out the rest. This is certainly an improvement over the earlier versions. If you are curious on how Open Live Writer figured out the post API endpoint, which view the source of your web page and you will see the following lines in the header: <link href="" rel="wlwmanifest" type="application/wlwmanifest+xml" /> <link href="" rel="EditURI" title="RSD" type="application/rsd+xml" /> Top k algorithm revisited 3 years ago, I implemented top K operator in LINQ. I was asked recently why I chose Min Heap since there are faster algorithms. To recap, we try to select top k element from a sequence of n elements. A min-heap has the following property: - find-min takes O(1) time. - extract-min takes O(ln k) time where k is the size of the heap. - insert takes O(ln k) time. For each number in the sequence, I first compare the number to find-min. If the number is smaller, the number is tossed away. If the number is bigger, we do a extract-min followed by an insert. So in the worst scenario, the algorithm runs with time complexity of O(n ln k) and the space complexity of O(k). If we use max-heap instead, we can heapify n elements in O(n) time. Then we do k extract-max so we have the total time complexity of O(n + k ln n) and a space complexity of O(n). We could also use Quick Select. It is very similar to Quick Sort that we randomly select a pivot and move it to the right position. Unlike Quick Sort, we can discard the left side of the pivot whenever we have greater then k elements on the right side. This algorithm converge fairly quickly and we have the average time complexity of O(n) and space complexity of O(n). In average case, the space requirement by Quick Select is less than the max heap approach. So both max-heap and quick select are likely faster than the min-heap approach. Why do I used min-heap then? The reason is that the min-heap approach uses minimum amount of memory and I assume that I will work with large dataset so . Also, if we work with a stream, the min-heap provides a running top k. Ever wonder on which platform Amazon AWS Lambda in C# is running? In last December, AWS announced C# support for AWS Lambda using .NET Core 1.0 runtime. Ever wonder on which platform is it running? I am curious too and I did not see it in any official documentation. So I decided to write a small AWS Lambda function to detect the platform: using System; using System.Collections.Generic; using System.Linq; using System.Threading.Tasks; using System.Runtime.InteropServices; using Amazon.Lambda.Core; using Amazon.Lambda.Serialization; // Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class. [assembly: LambdaSerializerAttribute(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))] namespace SysInfoLambda { public class Function { /// <summary> /// A simple function that takes a string and does a ToUpper /// </summary> /// <param name="input"></param> /// <param name="context"></param> /// <returns></returns> public RuntimeInfo FunctionHandler(ILambdaContext context) { return new RuntimeInfo() { FrameworkDescription = RuntimeInformation.FrameworkDescription, OSArchitecture = RuntimeInformation.OSArchitecture, ProcessArchitecture = RuntimeInformation.ProcessArchitecture, OSDescription = RuntimeInformation.OSDescription, OSPlatform = RuntimeInformation.IsOSPlatform(OSPlatform.Linux) ? OS.Linux : RuntimeInformation.IsOSPlatform(OSPlatform.OSX) ? OS.OSX : OS.Windows }; } } public class RuntimeInfo { public string FrameworkDescription { get; set; } public Architecture OSArchitecture { get; set; } public string OSDescription { get; set; } public Architecture ProcessArchitecture { get; set; } public OS OSPlatform { get; set; } } public enum OS { Linux, OSX, Windows } } The result? The AWS C# Lambda runs in 64 bit Linux. The extract OS description is: Linux 4.4.35-33.55.amzn1.x86_64 #1 SMP Tue Dec 6 20:30:04 UTC 2016.. First look at the Visual Studio Tools for Apache Cordova CTP 3.1 The company that I worked for had an old cross-platform mobile app developed by an outside contractor using PhoneGap 1.0. When I was asked to look at the app a few months ago, I had great difficulty collecting large number of moving pieces: PhoneGap, Android SDK and emulator. When I saw Visual Studio Tools for Apache Cordova (I will call it VSTAC in the remaining of this post), I decide to give it a try since it attempts to install a large number of third-party software for me. The journey is not exactly easy, but it is certainly far easier than collecting all the pieces myself with the excellent installation document from MSDN. The result is more than pleasant. Here are some of my findings: 1) After the installation, I could not even get a hello world app to work. It turns out that I had an older version of Nods.js. VSTAC skipped node.js installation. After I uninstall the old node.js and reinstall with the one linked from the VSTAC installation page, I was able to get hello world to work. 2) I was surprise to see the Ripple emulator which I was not aware of previously. The Ripple emulator is very fast and VSTAC provides excellent debugging experience. 3) I had to clear my Apache Cordova cache a few times. This and some other useful items are documented in FAQ. Also visit known issues. 4) The application connects to an old soap web services developed with WCF. It does not support CORS. So I had to use Ripple proxy to connect to it but I kept getting 400 error. Fortunately, I was able to hack Ripple proxy to make it work. 5) I then tried to run the app in Google Android emulator. VSTAC supports this scenario as well. I had to uninstall and reinstall some Android SDK components following this and this directions. Then I had to run AVD Manager to create and start a device. Then I had to update my display driver to make sure I have compatible OpenGL ES driver installed. After that, the Google emulator ran beautifully. It was not as fast as Ripple but is acceptable. So at the end, I want to give a big thank you to the Microsoft VSTAC team. I know this is not easy but the excellent document got me through. It certainly saved me lots of time. Missing methods in LINQ: MaxWithIndex and MinWithIndex The LINQ library has Max methods and Min methods. However, sometimes we are interested in the index location in the IEnumerable<T> rather than the actual values. Hence the MaxWithIndex and MinWithIndex methods. These methods return a Tuple. The first item of the Tuple is the maximum or minimum value just the the Max and Min methods. The second item of the Tuple is the index location. As usually, you might get my LINQ extension from NuGet: PM>Install-Package SkyLinq.Linq Usage examples in the unit test. ASP Classic Compiler is now available in NuGet I know this is very, very late, but I hope it is better than never. To make it easy to experiment with ASP Classic Compiler, I made the .net 4.x binaries available in NuGet. So it is now extremely easy to try it: - From the package console of any .NET 4.x web project, run “Install-Package Dlrsoft.Asp”. - To switch from AspClassic to Asp Classic Compiler in the project, add the following section to the system.webServer handlers section: <system.webServer> <handlers> <remove name="ASPClassic"/> <add name="ASPClassic" verb="*" path="*.asp" type="Dlrsoft.Asp.AspHandler, Dlrsoft.Asp"/> </handlers> </system.webServer>Comment out the section to switch back. - Add a test page StringBuilder.asp: <% imports system dim s = new system.text.stringbuilder() dim i s = s + "<table>" for i = 1 to 12 s = s + "<tr>" s = s + "<td>" + i + "</td>" s = s + "<td>" + MonthName(i) + "</td>" s = s + "</tr>" next s = s + "</table>" response.Write(s) %>This code uses the .net extension so it will only work with Asp Classic Compiler. Happy experimenting! SkyLinq binaries are available on NuGet After much hesitate, I finally published my SkyLinq binaries on NuGet. My main hesitation was that this is my playground so I am changing things at will. The main reason to publish is that I want to use these works myself so I need an easy way to get the latest binaries into my projects. NuGet is the easiest way to distribute and get updates, including my own projects. There are 3 packages: - SkyLinq.Linq is a portal library that contains some LINQ extensions. - SkyLinq.Composition contains my duck-typing implementation. It is similar to Impromptu-Interface but it is much simpler and it uses il.emit instead of LINQ Expressions to generate code. It also contains a LINQ query rewriting example. - LINQPadHost is a simple hosting and executing environment for LINQPad queries. Live demo at.
https://weblogs.asp.net/lichen
CC-MAIN-2022-33
en
refinedweb
In this brief tutorial we will take a look at the different aspects of end-of-day equities data. We will develop an understanding of what the Open, High, Low and Close (OHLC) prices mean, as well as discuss the traded Volume. We will look at how a typical Adjusted Close price is calculated and the effects that stock splits, dividends and rights offerings have on our data and why they are important. If you would like to follow along with any of the code in this tutorial we will be using data for AAPL from October 30th 2010 to October 30th 2020 inclusive. This data is freely available from vendors such as Yahoo Finance. The environment utilised for this article consists of Python 3.8, Pandas 1.3 and Matplotlib 3.4. When working with financial data in Python the most appropriate way to evaluate and analyse the data is to use the Pandas library. For more information on using Pandas check out our Quantcademy course on Introduction to Financial Data Analysis with Pandas. We will begin by creating a DataFrame, which we have named aapl, and ensuring that the date column has been converted correctly to a Pandas datetime object. How you achieve this will depend largely on how your data is stored. We are using the Pandas function read_csv(). import pandas as pd aapl = pd.read_csv('/PATH/TO/YOUR/CSV') aapl['Date'] = pd.to_datetime(aapl['Date']) Note: Make sure to adjust the /PATH/TO/YOUR/CSV to point to the downloaded Apple daily stock data CSV file. Once you have created a DataFrame containing your stock data you can display the first few rows using the command aapl.head(). You should see the following output. In [1]: aapl.head() Out[1]: Date Open High Low Close Volume Adj Close 0 2010-11-01 302.22 305.60 302.20 304.18 15138900 9.386487 1 2010-11-02 307.00 310.19 307.00 309.36 15497500 9.546333 2 2010-11-03 311.37 312.88 308.53 312.80 18155300 9.652486 3 2010-11-04 315.45 320.18 315.03 318.27 22946000 9.821281 4 2010-11-05 317.99 319.57 316.75 317.13 12901900 9.786103 What is Open, High, Low, Close? When analysing the above stock data we can see that there are five different price columns and one volume column. The open price refers to the price at which the stock started trading each day. The close price is the last buy-sell order that was executed for the stock. It is the price agreed upon after a full day of trading. Depending on the liquidity of the stock this may have occured in the final few seconds of trading or much earlier in the day. The closing price of a stock is considered to be the standard measure of value for a specific day. As demonstrated in the above stock data the opening price is not necessarily the same as the closing price from the previous day. Opening price deviations are due to the effects of after-hours trading. Corporate announcements such as earnings events, man-made or natural disasters that occur after market close can all have an impact on the price of an equity. These factors encourage investors to engage in after-hours trading. After-hours trading begins at 4pm and ends at 8pm. Stock volume is generally lower than in normal trading hours. As a result the spread between the bid and offer is often much wider. This means that it is much easier for the price of an equity to be pushed higher or lower, requiring fewer shares than in normal hours to make an impact on the price of the asset. Stop loss orders are often used in these circumstances so that trades are only executed at prices previously determined by traders. However, due to the lower volumes of available stock they are more likely to be unfilled. As these incomplete orders are executed at prices that differ from the previous day's close they cause a disparity which affects the next day's opening price. High refers to the highest price at which a stock was traded during the time period specified. Likewise, Low refers to lowest traded price. How is Volume Recorded? Volume is a measure of the number of shares traded in a particular time period. Each time shares are bought and sold the trade volume is recorded. If the same shares are bought and sold multiple times, the volume of shares in each transaction is recorded. Volume doesn't reflect the number of available shares, it is a measure of share turnover, a count of all transactions that occur on every share. Every market exchange tracks their trade volume and provides estimates at an intraday frequency. An accurate report of the total daily volume is provided the following day. Volume is often used as a technical analysis indicator. Trade volume is considered a measure of market strength. A rising stock price with increasing volume indicates a healthy market. The higher the volume of trades the more liquidity the stock has and the higher the volume of trades during a price change, the more significant that price change becomes. Understanding the Adjusted Close The final column in the daily data above is the adjusted close price. This is the most useful column for analysing historic price changes. It incorporates changes in the price of an equity resulting from dividend payouts, stock splits and from new shares being issued. In order to deliver a coherent picture of stock returns the historic close price must be adjusted for all corporate actions. In practice this is handled for you by most data providers but it is necessary to understand how these corporate actions affect close prices and how the adjustments are made. The Effect of Stock Splits A stock split is a multiplication or division of a company's share content, without a change in its market capitalisation. If a stock splits 2:1 then a company has doubled the number of shares by halving each share's value. This is an example of a forward split. If an investor owned 20 shares prior to the split they would own 40 shares subsequently, but the shares would equate to the same dollar value. A company may consider a forward split as a way to reduce their share price making it more affordable for smaller investors to trade their shares. The announcement of a split draws media attention to a company and so a forward split is usually followed by a price rise. After the split there is often an increase in volume traded as many new smaller investors take the opportunity to purchase shares. Existing investors may seek to rebalance their positions, allowing capital previously tied up in smaller amounts of the high value shares to be reallocated to different investments. They may also take the oppotunity to double down on their existing positions by purchasing more shares at the new lower price. All of these factors increase liquidity allowing the shares to be traded more easily and quickly. In contrast to the forward split the reverse split can have an adverse effect on share price. In a reverse split existing shares owned by investors are replaced with a smaller number of shares with no effect on market capitalisation. For example a 1-for-2 reverse split replaces every two shares owned by an investor with a single share. If an investor owned 20 shares in a company before the split they would have 10 at the same value after the split. This type of split usually occurs for a negative reason and has the effect of decreasing the share price after the announcement. If this is true then why would a company carry out a reverse split? Most major stock exchanges require a single share to trade above $1 to maintain the listing. If a company's shares have been trading poorly, the board may consider a reverse split to avoid the stock becoming a penny stock and being delisted. Stock split announcements will provide information on the split ratio, the announcement date, the record date and the effective date. The ratio determines whether a split is forward or reverse. For example a 3-for-1 split is a forward split as the first number is larger than the second. The announcement date is when the company publically announces the plans for the split. The record date is the date on which shareholders need to own the stock to be eligible for the split however, this right is transferable for shares bought and sold before the effective date. The effective date is the date on which the shares actually appear in an investors account. Apple has undergone five stock splits since their Initial Public Offering (IPO) in December 1980; 4-for-1 on August 28th 2020, 7-for-1 on June 9th 2014 and 2-for-1 splits on Februrary 28th 2005, June 21st 2000 and June 16th 1987. Let's have a look at the effect of a stock split on the close and adjusted close by plotting them both for the period encompassing the August 2020 4-for-1 split. data = aapl.loc[(aapl['Date']>= '2020-07-10') & (aapl['Date']<= '2020-10-01')] data.plot(x='Date', y=['Close', 'Adj Close'], kind='line') As you can see from the graph there is a large difference between the closing price and the adjusted close until the 31st August when there is a huge drop in the closing price. As previously discussed this anomaly in the close price represents an increase in the number of shares, not a crash in the value of the stock. If we take a closer look at the two prices around August 28th 2020 you can see that the close is approximately four times the size of the adjusted close, inline with the split ratio. To look at these prices more closely we will first create a date range mask and then use the Pandas method loc to filter the DataFrame for these dates. split_start = '2020-08-27' split_end = '2020-09-01' split_mask = (aapl['Date'] >= split_start) & (aapl['Date'] <= split_end) split_data = aapl.loc[split_mask] In [2]: split_data Out[2]: Date Open High Low Close Volume Adj Close 2472 2020-08-27 508.57 509.94 495.33 500.04 38888096 125.0100 2473 2020-08-28 504.05 505.77 498.31 499.23 46907479 124.8075 2474 2020-08-31 127.58 131.00 126.00 129.04 223505733 129.0400 2475 2020-09-01 132.76 134.80 130.53 134.18 152470142 134.1800 Calculating Stock Split Adjustments All stock prices prior to the effective date of the split are multiplied by a factor which is calculated from the split ratio. In an N-for-M split this factor is calculated by $\frac{M}{N}$. For example in a 2-for-1 split the factor would be $\frac{1}{2} = 0.5$. The volume of stock is also adjusted in a similar manner but the calculation of the factor is reversed. For a 2-for-1 stock split the volume adjustment factor is $\frac{2}{1}=2$. All stock volumes prior to the effective date would be multiplied by 2. These methods apply to both forward and reverse splits. How does this work with stocks that have multiple splits historically? In order to visualise this let's take a look at the full range of our data for Apple. aapl.plot(x='Date', y=['Close', 'Adj Close'], kind='line') You can see two sharp drops in the close price on August 28th 2020 (4-for-1 stock split) and June 9th 2014 (7-for-1 stock split). In order to correctly adjust the price for both splits we simply multiply the two calculated factors together before multiplying all close prices prior to the final split date. In this example from June 9th 2014 to August 28th 2020 the close price would be multipled by $\frac{1}{4} \times \frac{1}{7} = \frac{1}{28}$ or $0.25 \times 0.143 = 0.0357$. As Apple has been through five stock splits this calculation is incomplete, there are three additional stock split factors needed to accurately calculate the adjusted close for this time period, as well as a series of dividends. The Effect of Dividends Dividends are a share of profits that a company pays to its shareholders. When a company consistently generates more profit than it can put to good use by reinvestment it may choose to reward its shareholders by starting to pay dividends. There are three main types of dividend; a regular dividend, paid consistently over time, most often on a quarterly basis. A special dividend, a one-off payment to shareholders due to the accumulation of cash that has no immediate use. A variable dividend, these are sometimes paid by companies that produce commodities in addition to regular dividends. They occur at consistent intervals but vary in amount. The dividend yield is the annual dividend per share divided by the share price. A stock with a share price of $10 that pays a dividend of $0.60 per share has a yield of 6%. The dividend yield is used by investors to quickly estimate how much they could earn by investing in a stock. A yield of 6% would give a dividend income of $6 for every $100 invested. The amount a company pays out as a dividend is voted on and approved by the company board. The declaration date is the date on which the dividend is publically declared by the company. This announcement includes a record date which is the date on which an investor must own shares in order to receive the dividend. The ex-dividend date is the date on which new shareholders are not eligible for the dividend. The payment date is the date on which the income is received by the shareholders. On the announcement date of a dividend payment the share price of the company will often rise by roughly the amount of the dividend. When the company pays out the dividend the amount of cash available to the company decreases. This is realised as a drop in the share price at the open of the payment date. With a highly liquid stock such as Apple, where the amount of the dividend is a small fraction of the share price such fluctuations are well within the standard deviation of daily price moves, as a result their effect on prices is often masked. Let's take a closer look at the prices for the Apple dividend that was announced on April 30th 2020 with a record date of May 11th, an ex-dividend date of May 12th and a payment date of May 14th. div_start = '2020-04-23' div_end = '2020-05-19' div_mask = (aapl['Date'] >= div_start) & (aapl['Date'] <= div_end) div_data = aapl.loc[div_mask] In [3]:div_data Out[3]: Date Open High Low Close Volume Adj Close 2384 2020-04-23 275.87 281.75 274.87 275.03 31203582 68.449893 2385 2020-04-24 277.20 283.01 277.00 282.97 31627183 70.426012 2386 2020-04-27 281.80 284.54 279.95 283.17 29271893 70.475788 2387 2020-04-28 285.08 285.83 278.20 278.58 28001187 69.333422 2388 2020-04-29 284.73 289.67 283.89 287.73 34320204 71.610688 2389 2020-04-30 289.96 294.53 288.35 293.80 45765968 73.121399 2390 2020-05-01 286.25 299.00 285.85 289.07 60154175 71.944189 2391 2020-05-04 289.17 293.69 286.32 293.16 33391986 72.962115 2392 2020-05-05 295.06 301.00 294.46 297.56 36937795 74.057194 2393 2020-05-06 300.46 303.24 298.87 300.63 35583438 74.821260 2394 2020-05-07 303.22 305.17 301.97 303.74 28803764 75.595282 2395 2020-05-08 305.64 310.35 304.29 310.13 33511985 77.389718 2396 2020-05-11 308.10 317.05 307.24 315.01 36486561 78.607471 2397 2020-05-12 317.83 319.69 310.91 311.41 40575263 77.709128 2398 2020-05-13 312.15 315.95 303.21 307.65 50155639 76.770860 2399 2020-05-14 304.51 309.79 301.53 309.54 39732269 77.242489 2400 2020-05-15 300.35 307.90 300.21 307.71 41587094 76.785832 2401 2020-05-18 313.17 316.50 310.32 314.96 33843125 78.594994 2402 2020-05-19 315.03 318.52 313.01 313.14 25432385 78.140832 If we look at the price fluctuations in the open and close price of AAPL for this period we can see that there is a rise in close price on April 30th, however these price moves are somewhat dwarfed by the normal daily fluctuations. To visualise this we will create a chart showing the rolling standard deviation across the period. In order to create this chart we will need to add Matplotlib to our list of Python library imports. Here we will also make use of a library called mdates to make the date range on our charts more readable. You can find more information on using mdates here. import matplotlib.pyplot as plt import mdates # Calculate the rolling mean (simple moving average) # and rolling standard deviation of the open and # close prices. price_close = pd.Series(div_data['Close'].values, index=div_data['Date']) close_ma = price_close.rolling(2).mean() close_mstd = price_close.rolling(2).std() price_open = pd.Series(div_data['Open'].values, index=div_data['Date']) open_ma = price_open.rolling(2).mean() open_mstd = price_open.rolling(2).std() fig, axs = plt.subplots(1, 2, figsize=(12, 4), sharey=True) for nn, ax in enumerate(axs): prices = [price_close, price_open] colors = ["cornflowerblue", "forestgreen"] means = [close_ma, open_ma] stds = [close_mstd, open_mstd] locator = mdates.AutoDateLocator(minticks=3, maxticks=20) formatter = mdates.ConciseDateFormatter(locator) ax.xaxis.set_major_locator(locator) ax.xaxis.set_major_formatter(formatter) ax.set_xlabel("Date") ax.plot(prices[nn].index, prices[nn], colors[nn]) ax.fill_between(stds[nn].index, means[nn] - 2 * stds[nn], means[nn] + 2 * stds[nn], color="lightgrey", alpha=0.2) axs[0].set_ylabel("Close Price") axs[1].set_ylabel("Open Price") fig.suptitle("Standard Deviation of Open and Close price for AAPL April 23rd to May 19th 2020") As you can see in the chart above there is a rise in the close price on April 30th. This is then followed by a drop the next day, however the close price then continues to rise and remains higher for the rest of the period. On the payment date of May 14th there is also a drop in the open price. As can be seen in the chart above the open price drops from May 12th through to May 15th. However, when considered with respect to price moves across the monthly period it can be seen that this price drop is once again not significant. It is well within the standard deviation. Calculating Dividend Adjustments In a similar manner to stock splits a dividend adjustment factor must be calculated and applied to all close prices prior to the ex-dividend date. For dividends the adjustment factor is calculated by subtracting the dividend from the close price the day before the ex-dividend date, and then dividing by that close price to get the adjustment factor in percentage terms. Let's look at an example; A company announces a dividend of $2. The stock closes at $40 the day before the ex-dividend date. Without any adjustment the stock would open at $38 and there would be a gap of $2. To calculate the adjustment factor the dividend is subtracted from the close price ($40 - $2 = $38). This value is then divided by the close price ($38/$40 = 0.95). The historic close prices prior to the ex-dividend date are then multiplied by the adjustment factor so that they stay rationally aligned. Rights Offerings A rights offering is an offer to buy new shares created by a company. It is made to current shareholders and entitles them to purchase new shares at a reduced value to that presented to the public. This is often carried out by a company to raise additional capital. The creation of new shares lowers the value of the existing shares as it increases supply and each share now represents a smaller amount of the net profit. The group of rights offered to shareholders are known as subscription warrants and are in effect a type of option. The shareholder has the right but not the obligation to purchase the shares within a specific period, which is usually no more than 30 days. Until the date at which the shares must be purchased these rights are transferable and may be sold. Their value compensates the shareholder for the dilutative effect of creating the new shares. The rights issue ratio describes the number of additional shares a shareholder may purchase and is based upon the number of shares they currently own. So if the rights issue ratio is 1:2 a shareholder may purchase one additional share for every two shares they hold. The ratio is expressed as rights:owned. The adjusted close price takes into account the dilutive effect of the rights offering. The close price is adjusted in a simialr way to that of dividends and stock splits, by multiplying with an adjustment factor. In the case of a rights offering the adjustment factor is calculated from the rights issue ratio. Let's continue with our above example. A company announces a rights offering to shareholders with a rights issue ratio of 1:2. The adjustment factor is (1+2)/2 or 3/2 = 1.5. In this case all prior closing prices of the stock will be multiplied by 1.5. As you have seen the adjusted close price alters the close price based on the effects of corporate actions. It makes a stock's performance easier to evaluate as well as allowing an investor to compare the performance of multiple assets. Since it does not suffer from price drops due to stock splits it is more appropriate for backtesting any trading strategy rules. Ticker Symbol Changes Stock tickers are created when a company begins to trade publicly on an exchange. They identify the stock at the exchange level. Note that the ticker is not to be confused with the International Securities Identification Number (ISIN) which is a unique identifier for a security. For example, IBM is traded on 25 different stock exchanges, it has 25 different tickers but only one ISIN. By convention the symbol or ticker is often an abbrieviation of the company name, although this is not a requirement. Stock tickers were first created by Standard & Poor (S&P) and are used across the world on every major exchange. The number of characters in a ticker is variable and dependant on the exchange. Stocks traded on the New York Stock Exchange (NYSE) have tickers made up of 3 characters while stocks traded on the NASDAQ have 4 characters. Ticker symbol changes can result from a company name change, a merger or acquistion, or a delisting or bankruptcy event. When ticker symbol changes result from a merger, acquisition or name change the ticker symbols will update automatically within your brokerage account and in any data you may be using. The change will become visible within two to three days and all historical data will be updated accordingly. Symbol changes due to a name change are regarded as price neutral. There is no adjustment made to historic share price or volume. A company may choose to change its name simply as a rebranding exercise to update its image. Examples include when AOL became Time Warner and when Totalizer became International Business Machines (IBM). Mergers or acquisitions are often regarded as a positive change. The ticker of the acquired company generally becomes the ticker of the acquirer. Existing stock in the acquiree may be exchanged for stock of an equal value in the acquiring company or you may recieve the cash equivalent of your shares. When a merger occurs an adjustment factor will need to be applied to historic stock prices. These function in a similar way to stock splits but do not require any adjusment to volume. A split ratio is calculated and used to multiply all historic prices. Another reason for changes to symbols is delisting or bankruptcy filing. This is regarded in a negative light. If the value of stock in a company begins to fall the company may eventually be delisted from the exchange. Each exchange has its own rules regarding the delisting process but generally when a stock becomes a penny stock or trades below $1 per share the company can no longer be traded publicly. The ticker will be suffixed by a .pk, .ob or a .otcbb. The pk indicates that the stock is now available on the 'pink sheets', whereas the ob or otcbb means that the stock is now traded on the over-the-counter bulletin board. The NASDAQ exchange has a special case of ticker symbol changes in listings up to Jan 2016. They added a Q to the ticker for bankruptcy and an E for delinquent Securities and Exchange Commission (SEC) filings. This practice has now been discontinued but may show in historic data. Next Steps In subsequent articles we will consider various equities data providers and determine how straightforward it is to obtain and analyse data from these vendors.
https://quantstart.com/articles/understanding-equities-data/
CC-MAIN-2022-33
en
refinedweb
"Escape sequence is a character which escapes normal sequence of output (evaluation)". Escape sequences are very common in a language. Escape sequence precedes with a \ (backslash). For example, \n stands for new line; even though it looks two characters, Java treats them as one character with ASCII code of 10. Escape sequence, being a single character, should be put within single quotes. Observe the following simple code on Escape Sequences . public class Demo { public static void main(String args[]) { int x = '\t'; // enclosed within single quotes System.out.println(x); // prints 9 int y = '\n'; System.out.println(y); // prints 10 } } You can try for other escape characters also what their ASCII codes. Java also comes Escape Sequences and the list if given here under. Following code gives some Escape Sequences usage. public class Demo { public static void main(String args[]) { System.out.println("abc\ndef"); // abc and def are given in two lines System.out.println("ab\bcd"); // prints acd System.out.println("ab \bcd"); // prints abcd, the space before \b is gone System.out.println("abcdefghij"); // prints ab cd, gives 3 spaces between ab and cd System.out.println("ab\tcd"); // prints ab cd, gives 3 spaces between ab and cd System.out.println("a\"bc\"d"); // prints a"bc"d System.out.println("a\'bc\'d"); // prints a'bc'd } } 1 thought on “Java Escape Sequences” Nice Tutorial
https://way2java.com/java-general/java-escape-sequences/
CC-MAIN-2022-33
en
refinedweb
Python Build Simplified. A simple setuptools automation, designed mainly for publishing multiple packages from single repository Project description Python Build Simplified Lighter alternative to PBR. Created to support easily multiple namespace packages built from single repository. Ultra simple, allows to have almost empty setup.py file, while keeping all information in setup.json Features: - External dependencies in requirements-external.txt - Internal dependencies (packages from same repository) in requirements-subpackages.txt - All arguments that goes to setup() are placed as dictionary in setup.jsonfile, there is no magic there - README.md and README.rst (in order) are loaded automatically as long description - Uses SCM plugin (setuptools_scm) by default - Optional Pipfile.lock from Pipenv support - Command ./setup.py --quiet freeze_dependenciesto print dependencies in requirements.txt format - Command ./setup.py install_dependenciesto install dependencies using pip Getting started project.toml Needs to be configured, so the pip and pipenv would know the dependencies to run setup.py file [build-system] requires = ["setuptools>=45", "wheel", "setuptools_scm[toml]>=6.0", "riotkit.pbs>=1.0"] setup.py The usage of PBS is really simple, import and unpack a dictionary. Optionally override values as you wish - there is no magic, everything is passed explicitly, so you can print it or pass to setup() #!/usr/bin/env python3 from setuptools import setup from riotkit.pbs import get_setup_attributes # advanced usage: override any attribute attributes = get_setup_attributes(git_root_dir='../../') attributes['long_description'] += "\nBuilt using Riotkit Python Build Simplified" setup( **attributes ) setup.json Again, there is no any magic. Every key there is an attribute that should go to setup() from setuptools. Please look at setuptools documentation for list of available attributes, you can find it there: { "name": "rkd.process", "author": "RiotKit non-profit organization", "author_email": "riotkit@riseup.net", "description": "rkd.process provides easy process interaction and output capturing/redirecting, wraps subprocess from Python's standard library.", "url": "", "license": "Apache-2", "classifiers": [ "Development Status :: 5 - Production/Stable", "Environment :: Console", "Intended Audience :: Developers", "Intended Audience :: System Administrators", "Intended Audience :: Information Technology", "License :: OSI Approved :: Apache Software License", "Operating System :: POSIX", "Programming Language :: Python :: 3 :: Only" ], "keywords": ["rkd", "riotkit", "anarchism", "output capturing", "output", "subprocess"] } requirements-external.txt It's a regular requirements.txt replacement, with all versions there. some-package>=1.0 MANIFEST.in Points out which files should be included in a distribution package. recursive-exclude tests * recursive-exclude example * include requirements-external.txt include requirements-subpackages.txt include setup.json Additional work to do in multiple-package repository Multiple-package repositories are used to keep versioning in synchronization for multiple packages. Some of packages could be dependent on each other, but possible to install standalone. See real use case: requirements-subpackages.txt A dynamic version of requirements.txt, where a simple templating mechanism is available to allow creating dependencies to packages that are released together with current package from same repository. rkd.process >= {{ current_version }}, < {{ next_minor_version }} Available variables: - current_version: Example 1.3.1.2 - next_minor_version: Example 1.4 - next_major_version: Example 2.0 Pipenv support Pipenv support could be optionally enabled. In this case all standard dependencies from Pipfile.lock would be frozen and added to install_requires automatically. setup.py #!/usr/bin/env python3 from setuptools import setup from riotkit.pbs import get_setup_attributes setup( **get_setup_attributes(pipenv=True) ) Debugging PBS is not performing any magic inside, so there is a possibility to just print the attributes that would be used in setup() setup.py #!/usr/bin/env python3 from setuptools import setup from riotkit.pbs import get_setup_attributes import pprint attributes = get_setup_attributes(git_root_dir='../../') pp = pprint.PrettyPrinter(indent=4) pp.pprint(attributes) # setup( # **attributes # ) Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/riotkit.pbs/1.0rc2.dev9/
CC-MAIN-2022-33
en
refinedweb
Entity: This week’s alpha release includes a bunch of great improvements in the following areas: - Async language support is now available for queries and updates when running on .NET 4.5. - Custom conventions now provide the ability to override the default conventions that Code First uses for mapping types, properties, etc. to your database. - Multi-tenant migrations allow the same database to be used by multiple contexts with full Code First Migrations support for independently evolving the model backing each context. - Using Enumerable.Contains in a LINQ query is now handled much more efficiently by EF and the SQL Server provider resulting greatly improved performance. - All features of EF6 (except async) are available on both .NET 4 and .NET 4.5. This includes support for enums and spatial types and the performance improvements that were previously only available when using .NET 4.5. - Start-up time for many large models has been dramatically improved thanks to improved view generation performance. Below are some additional details about a few of the improvements above: Async Support .NET 4.5 introduced the Task-Based Asynchronous Pattern that uses the async and await keywords to help make writing asynchronous code easier. EF 6 now supports this pattern. This is great for ASP.NET applications as database calls made through EF can now be processed asynchronously – avoiding any blocking of worker threads. This can increase scalability on the server by allowing more requests to be processed while waiting for the database to respond. The following code shows an MVC controller that is querying a database for a list of location entities: public class HomeController : Controller { LocationContext db = new LocationContext(); public async Task<ActionResult> Index() { var locations = await db.Locations.ToListAsync(); return View(locations); } } Notice above the call to the new ToListAsync method with the await keyword. When the web server reaches this code it initiates the database request, but rather than blocking while waiting for the results to come back, the thread that is processing the request returns to the thread pool, allowing ASP.NET to process another incoming request with the same thread. In other words, a thread is only consumed when there is actual processing work to do, allowing the web server to handle more concurrent requests with the same resources. A more detailed walkthrough covering async in EF is available with additional information and examples. Also a walkthrough is available showing how to use async in an ASP.NET MVC application. Custom Conventions When working with EF Code First, the default behavior is to map .NET classes to tables using a set of conventions baked into EF. For example, Code First will detect properties that end with “ID” and configure them automatically as primary keys. However, sometimes you cannot or do not want to follow those conventions and would rather provide your own. For example, maybe your primary key properties all end in “Key” instead of “Id”. Custom conventions allow the default conventions to be overridden or new conventions to be added so that Code First can map by convention using whatever rules make sense for your project. The following code demonstrates using custom conventions to set the precision of all decimals to 5. As with other Code First configuration, this code is placed in the OnModelCreating method which is overridden on your derived DbContext class: protected override void OnModelCreating(DbModelBuilder modelBuilder) { modelBuilder.Properties<decimal>() .Configure(x => x.HasPrecision(5)); } But what if there are a couple of places where a decimal property should have a different precision? Just as with all the existing Code First conventions, this new convention can be overridden for a particular property simply by explicitly configuring that property using either the fluent API or a data annotation. A more detailed description of custom code first conventions is available here. Community Involvement I blogged a while ago about EF being released under an open source license. Since then a number of community members have made contributions and these are included in EF6 alpha 2. Two examples of community contributions are: - AlirezaHaghshenas contributed a change that increases the startup performance of EF for larger models by improving the performance of view generation. The change means that it is less often necessary to use of pre-generated views. - UnaiZorrilla contributed the first community feature to EF: the ability to load all Code First configuration classes in an assembly with a single method call like the following: protected override void OnModelCreating(DbModelBuilder modelBuilder) { modelBuilder.Configurations .AddFromAssembly(typeof(LocationContext).Assembly); } This code will find and load all the classes that inherit from EntityTypeConfiguration<T> or ComplexTypeConfiguration<T> in the assembly where LocationContext is defined. This reduces the amount of coupling between the context and Code First configuration classes, and is also a very convenient shortcut for large models. Other upcoming features coming in EF 6 Lots of information about the development of EF6 can be found on the EF CodePlex site, including a roadmap showing the other features that are planned for EF6. One of of the nice upcoming features is connection resiliency, which will automate the process of retying database operations on transient failures common in cloud environments and with databases such as the Windows Azure SQL Database. Another often requested feature that will be included in EF6 is the ability to map stored procedures to query and update operations on entities when using Code First. Summary EF6 is the first open source release of Entity Framework being developed in CodePlex. The alpha 2 preview release of EF6 is now available on NuGet, and contains some really great features for you to try. The EF team are always looking for feedback from developers - especially on the new features such as custom Code First conventions and async support. To provide feedback you can post a comment on the EF6 alpha 2 announcement post, start a discussion or file a bug on the CodePlex site. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
https://weblogs.asp.net/scottgu/entity-framework-6-alpha2-now-available
CC-MAIN-2022-33
en
refinedweb
Search... FAQs Subscribe Pie FAQs Recent topics Flagged topics Hot topics Best topics Search... Search Coderanch Advance search Google search Register / Login Nevin kumar Ranch Hand 93 23 Threads 0 Cows since Mar 15, 2008 (93/100) Number Threads Started (23 (93/10) Number Threads Started (23/10) Number Likes Received (0/3) Number Likes Granted (0/3) Set bumper stickers in profile (0/1) Set signature in profile Set a watch on a thread Save thread as a bookmark Create a post with an image (0/1) Recent posts by Nevin kumar Is there a way to construct empty xml elements while marshalling a jaxb object Dear Ranchers, while marshalling If we don't initialize a few jaxb object properties it will not show those properties in the marshalled String(StringWriter ) but in my case I need to show all the elements irrespective of whether there are initialized or not.If it is a String there is no problem I'm initializing to empty ,but the problem is when I had datatypes like int,boolean and date also.In this case if I need to show the elements I need to show the elements with default values such 0,false which is not proper to show like that. example: Imagine employee object has properties like name,age etc Employee emp=new Employee(); emp.setName("john"); marshaller.marshal(emp,sw); what it returns is( ignoring age as I didn't initialize) <Employee> <name>john</name> </Employee> but I'm expecting in either this form <Employee> <name>john</name> <age></age> </Employee> or this form <Employee> <name>john</name> </age> </Employee> If I initialize age(datatype is primitive int ) to default value in java code, it will return like this which doesn't make sense,age as 0. <Employee> <name>john</name> <age>0</age> </Employee> Is there any to to show empty elements when uninitialized.Any help highly appreciated, I'm really struggling with this , show more 11 years ago Web Services when using jaxb,how to pass xml message as string I used jaxb earlier but there whatever input message I pass is an object which is same as my wsdl's input xsd so I don't have any problem but now I had a web service which is expecting String as input message for a particular operation but that input String is of nested xml elements,Considering the fact it's a huge set of nested xml elements I reckon I will use jaxb,but how to pass this object as an xml string. I'm using spring ws 1.5 Typical spring ws client call looks like this. Response1 response1 = (Response1) template.marshalSendAndReceive(url,request1); Any guidance highly appreciated. show more 11 years ago Web Services session validate is not working in internet explorer which version of IE are you using ? show more 11 years ago JSP Is there any api in java to connect to windows using java from unix machine I had used an open source ssh client api in java.I was successfully able to connect to linux box from windows machine,but when I tried connecting to windows from unix box.It says connection refused.I had no idea what is the problem ?I had given hostName, userId and password.Any suggestions highly appreciated. java.net.ConnectException: Connection refused show more 12 years ago I/O and Streams Is there any api in java to connect to windows using java from unix machine Dear Rob, I need to access a particular folder and check how many files are present and get their names and check the file size of each file not zero .some type of file operations. File f=new File("D:/pathToReportsGenerated"); String fileNames[]=f.list();//get the list of fileNames in the folder show more 12 years ago I/O and Streams Is there any api in java to connect to windows using java from unix machine @Ulf, When I say connect I mean should be able to access windows from unix machine using a java program .Sometime back I used a ssh client (SFTP connection) for connecting from one mac book to other but I forgot the api . May you suggest any open source and free java api which will full fill my requirement. @Paul I need to check whether both the systems are connected in windows Network. Guys, Thank you very much for your time. show more 12 years ago I/O and Streams Is there any api in java to connect to windows using java from unix machine Dear Ranchers, From the java application which is running on a unix box, need to connect to windows machine and need to do some file operations. Any help highly appreciated. show more 12 years ago I/O and Streams salary for 3 years experience It depends on the company and how many times you switched your company.I know a few colleagues of mine are paid around 4 and 6 lakh' s.But on an average I can say it's around 5 lakh's. If your target foreign companies you can get more than what indian companies pay.Rule is simple indian companies work only for profit and foreign companies(establish indian branches)primarily for cost cutting and then profit. That's the difference. show more 12 years ago Jobs Discussion Hide Password in Hiberbate Configuration file I reckon you can try jasypt library. show more 12 years ago Object Relational Mapping Which book is best recommended for information on Spring MVC and Spring MVC annotations? Try Spring in Action. show more 12 years ago Bunkhouse Porch split the string Dear Pramod, It throws as exception because ? has special meaning in regular expressions. ? means Zero or one occurrence change this line from arr = str.split("?"); to arr = str.split("\\?"); It Reason: String p = "?"; // regex sees this as the "?" metacharacter String p = "\?"; // the compiler sees this as an illegal Java escape sequence String p = "\\?"; // the compiler is happy, and regex sees a question mark, not a metacharacter show more 12 years ago Java in General polymorphism with var args Dear Shaid, The output should be "its A" and it is. At this statement t.foo("test"); the behaviour of compiler and JVM as below: compile time: At compile time the compiler checks whether the parent class has any method which is compatible with single argument String as it is allows you to compile. runtime : At runtime as the parent class method is not overridden, child object has both methods(from parent as well as from child). So JVM calls child class method only in case when the parent class method is overridden.In this case it is not overridden because foo(String... a) and foo(String a) are not same. show more 12 years ago Programmer Certification (OCPJP) Searching a ResultSet You can try Google's java collections, lambdaj features. you can apply filtering on all sort of collections. show more 12 years ago Java in General how can I make two actions communicate with each other you can use type=chain to navigate from one action to another in struts2 example : <action name="makerDetails" class="abcAction"> <result name="redirect-abc" type="chain" > <param name="actionName">venus</param> <param name="namespace">/wm</param> </result> </action> show more 12 years ago Struts Call a javascript function from Java Dear Amol, You can get a hashing algorithm here if required.As Henry suggested there is no reason to decryt the password,you can store the hashed password in the database directly and compare the same on login.The better way is always hashing and salting together. regards, Naveen show more 12 years ago Java in General
https://www.coderanch.com/u/167428/Nevin-kumar
CC-MAIN-2022-33
en
refinedweb
In the avr-lib manual they present a uart.h file that has functions to talk to the serial port. When I #include this file in my code it cant be found by the compiler. Is this file part of the avr-lib? Quote: 6.38.3.7 uart.h Public interface definition for the RS-232 UART driver, much like in lcd.h except there is now also a character input function available. As the RS-232 input is line-buffered in this example, the macro RX_BUFSIZE determines the size of that buffer. My code: #include #include #include #include C:\WinAVR\examples\stdiodemo or grab them here-... put them in your project directory, then #include "uart.h" Top - Log in or register to post comments The file is not part of avr-libc itself but is included in an example of how to use stdio. Try looking in \winavr\doc\avr-libc\examples\stdiodemo If you want to use it (or something similar) I would not use that version of the file but simply make a copy of it in your local project directory then just #include "uart.h" (you'll also want to copy over uart.c and add it to a list of your build components - you'll probably also want to "borrow" the UART specific bits from stdiodemo.c) Cliff Top - Log in or register to post comments Believe it or not but your own PC can help you locate files on your hard disk... Start a Windows Explorer. Go to the WinAVR installation folder (top of the installation). Click Search. Type "uart.h" in the field for filename. Click Search Now. The file is part of an example project. Copy uart.h and uart.c to your project folder. Add uart.c to your project (assuming you use AVR Studio, if you roll your own makefile you alone knows exactly what is needed). Include the header file where needed guys, This was the problem. I had the impression that uart.h is part of the library itself. Top - Log in or register to post comments
https://www.avrfreaks.net/comment/436305
CC-MAIN-2022-33
en
refinedweb
NAME X509_chain_up_ref, X509_new, X509_free, X509_up_ref - X509 certificate ASN1 allocation functions SYNOPSIS #include <openssl/x509.h> X509 *X509_new(void);.
https://www.openssl.org/docs/man1.1.1/man3/X509_up_ref.html
CC-MAIN-2022-33
en
refinedweb
This content was last updated in March 2022, and represents the status quo as of the time it was written. Google's security policies and systems may change going forward, as we continually improve protection for our customers. Introduction This document provides an overview of how security is designed into Google's technical infrastructure. It is intended for security executives, security architects, and auditors. This document describes the following: - Google's global technical infrastructure, which is designed to provide security through the entire information processing lifecycle at Google. This infrastructure helps provide the following: - Secure deployment of services - Secure storage of data with end-user privacy safeguards - Secure communication between services - Secure and private communication with customers over the internet - Safe operation by Google engineers - How we use this infrastructure to build internet services, including consumer services such as Google Search, Gmail, and Google Photos, and enterprise services such as Google Workspace and Google Cloud. - Our investment in securing our infrastructure and operations. We have many engineers who are dedicated to security and privacy across Google, including many who are recognized industry authorities. - The security products and services that are the result of innovations that we implemented internally to meet our security needs. For example, BeyondCorp is the direct result of our internal implementation of the zero-trust security model. How the security of the infrastructure is designed in progressive layers. These layers include the following: The remaining sections of this document describe the security layers. Secure low-level infrastructure This section describes how we secure the physical premises of our data centers, the hardware in our data centers, and the software stack running on the hardware. Security of physical premises We design and build our own data centers, which incorporate multiple layers of physical security. Access to these data centers is tightly controlled. We use multiple physical security layers to protect our data center floors. We use biometric identification, metal detection, cameras, vehicle barriers, and laser-based intrusion detection systems. For more information, see Data center security. We also host some servers in third-party data centers. In these data centers, we ensure that there are Google-controlled physical security measures on top of the security layers that are provided by the data center operator. For example, we operate biometric identification systems, cameras, and metal detectors that are independent from the security layers that the data center operator provides. Hardware design and provenance A Google data center consists of thousands of servers connected to a local network. We design the server boards and the networking equipment. We vet the component vendors that we work with and choose components with care. We work with vendors to audit and validate the security properties that are provided by the components. We also design custom chips, including a hardware security chip (called Titan), that we deploy on servers, devices, and peripherals. These chips let us identify and authenticate legitimate Google devices at the hardware level and serve as hardware roots of trust. Secure boot stack and machine identity Google servers use various technologies to ensure that they boot the correct software stack. We use cryptographic signatures for low-level components like the baseboard management controller (BMC), BIOS, bootloader, kernel, and base operating system image. These signatures can be validated during each boot or update cycle. The first integrity check for Google servers uses a hardware root of trust. The components are Google-controlled, built, and hardened with integrity attestation. With each new generation of hardware, we strive to continually improve security. For example, depending on the generation of server design, the boot chain's root of trust is in one of the following: - The Titan hardware chip - A lockable firmware chip - A microcontroller running our own security code Each server in the data center has its own unique identity. This identity can be tied to the hardware root of trust and the software with which the machine boots. This identity is used to authenticate API calls to and from low-level management services on the machine. This identity is also used for mutual server authentication and transport encryption. We developed the Application Layer Transport Security (ALTS) system for securing remote procedure call (RPC) communications within our infrastructure. These machine identities can be centrally revoked to respond to a security incident. In addition, their certificates and keys are routinely rotated, and old ones revoked. We developed automated systems to do the following: - Ensure that servers run up-to-date versions of their software stacks (including security patches). - Detect and diagnose hardware and software problems. - Ensure the integrity of the machines and peripherals with verified boot and implicit attestation. - Ensure that only machines running the intended software and firmware can access credentials that allow them to communicate on the production network. - Remove or re-allocate machines from service when they're no longer needed. Secure service deployment Google services are the application binaries that our developers write and run on our infrastructure. Examples of Google services are Gmail servers, Spanner databases, Cloud Storage servers, YouTube video transcoders, and Compute Engine VMs running customer applications. To handle the required scale of the workload, thousands of machines might be running binaries of the same service. A cluster orchestration service, called Borg, controls the services that are running directly on the infrastructure. The infrastructure does not assume any trust between the services that are running on the infrastructure. This trust model is referred to as a zero-trust security model. A zero-trust security model means that no devices or users are trusted by default, whether they are inside or outside of the network. Because the infrastructure is designed to be multi-tenant, data from our customers (consumers, businesses, and even our own data) is distributed across shared infrastructure. This infrastructure is composed of tens of thousands of homogeneous machines. The infrastructure does not segregate customer data onto a single machine or set of machines, except in specific circumstances, such as when you are using Google Cloud to provision VMs on sole-tenant nodes for Compute Engine. Google Cloud and Google Workspace support regulatory requirements around data residency. For more information about data residency and Google Cloud, see Implement data residency and sovereignty requirements. For more information about data residency and Google Workspace, see Data regions: Choose a geographic location for your data. Service identity, integrity, and isolation To enable inter-service communication, applications use cryptographic authentication and authorization. Authentication and authorization provide strong access control at an abstraction level and granularity that administrators and services can understand. Services do not rely on internal network segmentation or firewalling as the primary security mechanism. Ingress and egress filtering at various points in our network helps prevent IP spoofing. This approach also helps us to maximize our network's performance and availability. For Google Cloud, you can add additional security mechanisms such as VPC Service Controls and Cloud Interconnect. Each service that runs on the infrastructure has an associated service account identity. A service is provided with cryptographic credentials that it can use to prove its identity to other services when making or receiving RPCs. These identities are used in security policies. The security policies ensure that clients are communicating with the intended server, and that servers are limiting the methods and data that particular clients can access. We use various isolation and sandboxing techniques to help protect a service from other services running on the same machine. These techniques include Linux user separation, language-based (such as the Sandboxed API) and kernel-based sandboxes, application kernel for containers (such as gVisor), and hardware virtualization. In general, we use more layers of isolation for riskier workloads. Riskier workloads include user-supplied items that require additional processing. For example, riskier workloads include running complex file converters on user-supplied data or running user-supplied code for products like App Engine or Compute Engine. For extra security, sensitive services, such as the cluster orchestration service and some key management services, run exclusively on dedicated machines. In Google Cloud, to provide stronger cryptographic isolation for your workloads and to protect data in use, we support Confidential Computing services for Compute Engine VMs and Google Kubernetes Engine (GKE) nodes. Inter-service access management The owner of a service can use access-management features provided by the infrastructure to specify exactly which other services can communicate with the service. For example, a service can restrict incoming RPCs solely to an allowed list of other services. That service can be configured with the allowed list of the service identities, and the infrastructure automatically enforces this access restriction. Enforcement includes audit logging, justifications, and unilateral access restriction (for engineer requests, for example). Google engineers who need access to services are also issued individual identities. Services can be configured to allow or deny their access based on their identities. All of these identities (machine, service, and employee) are in a global namespace that the infrastructure maintains. To manage these identities, the infrastructure provides a workflow system that includes approval chains, logging, and notification. For example, the security policy can enforce multi-party authorization. This system uses the two-person rule to ensure that an engineer acting alone cannot perform sensitive operations without first getting approval from another, authorized engineer. This system allows secure access-management processes to scale to thousands of services running on the infrastructure. The infrastructure also provides services with the canonical service for user, group, and membership management so that they can implement custom, fine-grained access control where necessary. End-user identities are managed separately, as described in Access management of end-user data in Google Workspace. Encryption of inter-service communication The infrastructure provides confidentiality and integrity for RPC data on the network. All Google Cloud virtual networking traffic is encrypted. All communication between infrastructure services is authenticated and most inter- service communication is encrypted, which adds an additional layer of security to help protect communication even if the network is tapped or a network device is compromised. Exceptions to the encryption requirement for inter-service communication are granted only for traffic that has low latency requirements, and that also doesn't leave a single networking fabric within the multiple layers of physical security in our data center. The infrastructure automatically and efficiently (with help of hardware offload) provides end-to-end encryption for the infrastructure RPC traffic that goes over the network between data centers. Access management of end-user data in Google Workspace A typical Google Workspace service is written to do something for an end user. For example, an end user can store their email on Gmail. The end user's interaction with an application like Gmail might span other services within the infrastructure. For example, Gmail might call a People API to access the end user's address book. The Encryption of inter-service communication section describes how a service (such as Google Contacts) is designed to protect RPC requests from another service (such as Gmail). However, this level of access control is still a broad set of permissions because Gmail is able to request the contacts of any user at any time. When Gmail makes an RPC request to Google Contacts on behalf of an end user, the infrastructure lets Gmail present an end-user permission ticket in the RPC request. This ticket proves that Gmail is making the RPC request on behalf of that particular end user. The ticket enables Google Contacts to implement a safeguard so that it only returns data for the end user named in the ticket. The infrastructure provides a central user identity service that issues these end-user permission tickets. The identity service verifies the end-user login and then issues a user credential, such as a cookie or OAuth token, to the user's device. Every subsequent request from the device to our infrastructure must present that end-user credential. When a service receives an end-user credential, the service passes the credential to the identity service for verification. If the end-user credential is verified, the identity service returns a short-lived end-user permission ticket that can be used for RPCs related to the user's request. In our example, the service that gets the end-user permission ticket is Gmail, which passes the ticket to Google Contacts. From that point on, for any cascading calls, the calling service can send the end-user permission ticket to the callee as a part of the RPC. The following diagram shows how Service A and Service B communicate. The infrastructure provides service identity, automatic mutual authentication, encrypted inter-service communication, and enforcement of the access policies that are defined by the service owner. Each service has a service configuration, which the service owner creates. For encrypted inter-service communication, automatic mutual authentication uses caller and callee identities. Communication is only possible when an access rule configuration permits it. For information about access management in Google Cloud, see IAM overview. Secure data storage This section describes how we implement security for data that is stored on the infrastructure. Encryption at rest Google's infrastructure provides various storage services and distributed file systems (for example, Spanner and Colossus), and a central key management service. Applications at Google access physical storage by using storage infrastructure. We use several layers of encryption to protect data at rest. By default, the storage infrastructure encrypts all user data before the user data is written to physical storage. The infrastructure performs encryption at the application or storage infrastructure layer. Encryption lets the infrastructure isolate itself from potential threats at the lower levels of storage, such as malicious disk firmware. Where applicable, we also enable hardware encryption support in our hard drives and SSDs, and we meticulously track each drive through its lifecycle. Before a decommissioned, encrypted storage device can physically leave our custody, the device is cleaned by using a multi-step process that includes two independent verifications. Devices that do not pass this cleaning process are physically destroyed (that is, shredded) on-premises. In addition to the encryption done by the infrastructure, Google Cloud and Google Workspace provide key management services. For Google Cloud, Cloud KMS is a cloud service that lets customers manage cryptographic keys. For Google Workspace, you can use client-side encryption. For more information, see Client-side encryption and strengthened collaboration in Google Workspace. Deletion of data Deletion of data typically starts with marking specific data as scheduled for deletion rather than actually deleting the data. This approach lets us recover from unintentional deletions, whether they are customer-initiated, are due to a bug, or are the result of an internal process error. After data is marked as scheduled for deletion, it is deleted in accordance with service-specific policies. When an end user deletes their account, the infrastructure notifies the services that are handling the end-user data that the account has been deleted. The services can then schedule the data that is associated with the deleted end-user account for deletion. This feature enables an end user to control their own data. For more information, see Data deletion on Google Cloud. Secure internet communication This section describes how we secure communication between the internet and the services that run on Google infrastructure. As discussed in Hardware design and provenance, the infrastructure consists of many physical machines that are interconnected over the LAN and WAN. The security of inter-service communication is not dependent on the security of the network. However, we isolate our infrastructure from the internet into a private IP address space. We only expose a subset of the machines directly to external internet traffic so that we can implement additional protections such as defenses against denial of service (DoS) attacks. Google Front End service When a service must make itself available on the internet, it can register itself with an infrastructure service called the Google Front End (GFE). The GFE ensures that all TLS connections are terminated with correct certificates and by following best practices such as supporting perfect forward secrecy. The GFE also applies protections against DoS attacks. The GFE then forwards requests for the service by using the RPC security protocol discussed in Access management of end-user data in Google Workspace. In effect, any internal service that must publish itself externally uses the GFE as a smart reverse-proxy frontend. The GFE provides public IP address hosting of its public DNS name, DoS protection, and TLS termination. GFEs run on the infrastructure like any other service and can scale to match incoming request volumes. Customer VMs on Google Cloud do not register with GFE. Instead, they register with the Cloud Front End, which is a special configuration of GFE that uses the Compute Engine networking stack. Cloud Front End lets customer VMs access a Google service directly using their public or private IP address. (Private IP addresses are only available when Private Google Access is enabled.) DoS protection The scale of our infrastructure enables it to absorb many DoS attacks. To further reduce the risk of DoS impact on services, we have multi-tier, multi-layer DoS protections. When our fiber-optic backbone delivers an external connection to one of our data centers, the connection passes through several layers of hardware and software load balancers. These load balancers report information about incoming traffic to a central DoS service running on the infrastructure. When the central DoS service detects a DoS attack, the service can configure the load balancers to drop or throttle traffic associated with the attack. The GFE instances also report information about the requests that they are receiving to the central DoS service, including application-layer information that the load balancers don't have access to. The central DoS service can then configure the GFE instances to drop or throttle attack traffic. User authentication After DoS protection, the next layer of defense for secure communication comes from the central identity service. End users interact with this service through the Google login page. The service asks for a username and password, and it can also challenge users for additional information based on risk factors. Example risk factors include whether the users have logged in from the same device or from a similar location in the past. After authenticating the user, the identity service issues credentials such as cookies and OAuth tokens that can be used for subsequent calls. When users sign in, they can use second factors such as OTPs or phishing-resistant security keys such as the Titan Security Key. The Titan Security Key is a physical token that supports the FIDO Universal 2nd Factor (U2F). We helped develop the U2F open standard with the FIDO Alliance. Most web platforms and browsers have adopted this open authentication standard. Operational security This section describes how we develop infrastructure software, protect our employees' machines and credentials, and defend against threats to the infrastructure from both insiders and external actors. Safe software development Besides the source control protections and two-party review process described earlier, we use libraries that prevent developers from introducing certain classes of security bugs. For example, we have libraries and frameworks that help eliminate XSS vulnerabilities in web apps. We also use automated tools such as fuzzers, static analysis tools, and web security scanners to automatically detect security bugs. As a final check, we use manual security reviews that range from quick triages for less risky features to in-depth design and implementation reviews for the most risky features. The team that conducts these reviews includes experts across web security, cryptography, and operating system security. The reviews can lead to the development of new security library features and new fuzzers that we can use for future products. In addition, we run a Vulnerability Rewards Program that rewards anyone who discovers and informs us of bugs in our infrastructure or applications. For more information about this program, including the rewards that we've given, see Bug hunters key stats. We also invest in finding zero-day exploits and other security issues in the open source software that we use. We run Project Zero, which is a team of Google researchers who are dedicated to researching zero-day vulnerabilities, including Spectre and Meltdown. In addition, we are the largest submitter of CVEs and security bug fixes for the Linux KVM hypervisor. Source code protections Our source code is stored in repositories with built-in source integrity and governance, where both current and past versions of the service can be audited. The infrastructure requires that a service's binaries be built from specific source code, after it is reviewed, checked in, and tested. Binary Authorization for Borg (BAB) is an internal enforcement check that happens when a service is deployed. BAB does the following: - Ensures that the production software and configuration that is deployed at Google is reviewed and authorized, particularly when that code can access user data. - Ensures that code and configuration deployments meet certain minimum standards. - Limits the ability of an insider or adversary to make malicious modifications to source code and also provides a forensic trail from a service back to its source. Keeping employee devices and credentials safe We implement safeguards to help protect our employees' devices and credentials from compromise. To help protect our employees against sophisticated phishing attempts, we have replaced OTP second-factor authentication with the mandatory use of U2F-compatible security keys. We monitor the client devices that our employees use to operate our infrastructure. We ensure that the operating system images for these devices are up to date with security patches and we control the applications that employees can install on their devices. We also have systems that scan user-installed applications, downloads, browser extensions, and web browser content to determine whether they are suitable for corporate devices. Being connected to the corporate LAN is not our primary mechanism for granting access privileges. Instead, we use zero-trust security to help protect employee access to our resources. Access-management controls at the application level expose internal applications to employees only when employees use a managed device and are connecting from expected networks and geographic locations. A client device is trusted based on a certificate that's issued to the individual machine, and based on assertions about its configuration (such as up-to-date software). For more information, see BeyondCorp. Reducing insider risk We limit and actively monitor the activities of employees who have been granted administrative access to the infrastructure. We continually work to eliminate the need for privileged access for particular tasks by using automation that can accomplish the same tasks in a safe and controlled way. For example, we require two-party approvals for some actions and we use limited APIs that allow debugging without exposing sensitive information. Google employee access to end-user information can be logged through low-level infrastructure hooks. Our security team monitors access patterns and investigates unusual events. Threat monitoring The Threat Analysis Group at Google monitors threat actors and the evolution of their tactics and techniques. The goals of this group are to help improve the safety and security of Google products and share this intelligence for the benefit of the online community. For Google Cloud, you can use Google Cloud Threat Intelligence for Chronicle and VirusTotal to monitor and respond to many types of malware. Google Cloud Threat Intelligence for Chronicle is a team of threat researchers who develop threat intelligence for use with Chronicle. VirusTotal is a malware database and visualization solution that you can use to better understand how malware operates within your enterprise. For more information about our threat monitoring activities, see the Threat Horizons report. Intrusion detection We use sophisticated data processing pipelines to. What's next - To learn more about how we protect our infrastructure, read Building secure and reliable systems (O'Reilly book). - Read more about data center security. Learn more about how we protect against DDoS attacks. Read about our zero-trust solution, BeyondCorp.
https://cloud.google.com/docs/security/infrastructure/design?hl=tr
CC-MAIN-2022-33
en
refinedweb
Comment on Tutorial - Sample J2ME code that shows various functionality of RMS. By Henry Comment Added by : shiva Comment Added at : 2015-04-18 09:19:17 Comment on Tutorial : Sample J2ME code that shows various functionality of RMS. By Henry what we have to write in simple comparator class can u explain in briefly Job. Keep up the good work . View Tutorial By: Syed Imran at 2010-11-27 06:54:36 2. I want to get SCJP certification. please anyone c View Tutorial By: Raju Salla at 2012-05-04 08:38:06 3. Real Life example of using Volatile ******* View Tutorial By: Mayank at 2011-01-25 15:44:21 4. The following exception are raised in send sms in View Tutorial By: Vikram Subbiah at 2010-04-12 02:09:36 5. my code is unable to open included files ho View Tutorial By: MANAS at 2013-09-08 05:44:46 6. Good explaination, easy to understand thank View Tutorial By: channaveer at 2010-05-02 05:03:21 7. hi, How to create certificate from https:// View Tutorial By: CN Balu Ramesh at 2011-07-19 08:38:31 8. package alphabets; import java.util View Tutorial By: Nitish at 2013-01-25 04:06:38 9. easy....... View Tutorial By: Suman at 2011-07-20 07:04:28 10. What is SMSC and CSCA number? , in which JAVA file View Tutorial By: sampath at 2009-06-19 07:01:31
https://java-samples.com/showcomment.php?commentid=39906
CC-MAIN-2022-33
en
refinedweb
Tempura is a holistic approach to iOS development, it borrows concepts from Redux (through Katana) and MVVM. ? Installation Tempura is available through CocoaPods. Requirements - iOS 11+ - Xcode 11.0+ - Swift 5.0+ CocoaPods CocoaPods is a dependency manager for Cocoa projects. You can install it with the following command: $ sudo gem install cocoapods To integrate Tempura in your Xcode project using CocoaPods you need to create a Podfile with this content: use_frameworks! source '' platform :ios, '11.0' target 'MyApp' do pod 'Tempura' end Now you just need to run: $ pod install Swift Package Manager Since version 9.0.0, Tempura also supports Swift Package Manager (SPM). ? Why should I use this? Tempura allows you to: - Model your app state - Define the actions that can change it - Create the UI - Enjoy automatic sync between state and UI - Ship, iterate We started using Tempura in a small team inside Bending Spoons. It worked so well for us, that we ended up developing and maintaining more than twenty high quality apps, with more than 10 million active users in the last year using this approach. Crash rates and development time went down, user engagement and quality went up. We are so satisfied that we wanted to share this with the iOS community, hoping that you will be as excited as we are. ❤️ ?? Show me the code Tempura uses Katana to handle the logic of your app. Your app state is defined in a single struct. struct AppState: State { var items: [Todo] = [ Todo(text: "Pet my unicorn"), Todo(text: "Become a doctor.\nChange last name to Acula"), Todo(text: "Hire two private investigators.\nGet them to follow each other"), Todo(text: "Visit mars") ] } You can only manipulate state through State Updaters. struct CompleteItem: StateUpdater { var index: Int func updateState(_ state: inout AppState) { state.items[index].completed = true } } The part of the state needed to render the UI of a screen is selected by a ViewModelWithState. struct ListViewModel: ViewModelWithState { var todos: [Todo] init(state: AppState) { self.todos = state.todos } } The UI of each screen of your app is composed in a ViewControllerModellableView. It exposes callbacks (we call them interactions) to signal that a user action occurred. It renders itself based on the ViewModelWithState. class ListView: UIView, ViewControllerModellableView { // subviews var todoButton: UIButton = UIButton(type: .custom) var list: CollectionView<TodoCell, SimpleSource<TodoCellViewModel>> // interactions var didTapAddItem: ((String) -> ())? var didCompleteItem: ((String) -> ())? // update based on ViewModel func update(oldModel: ListViewModel?) { guard let model = self.model else { return } let todos = model.todos self.list.source = SimpleSource<TodoCellViewModel>(todos) } } Each screen of your app is managed by a ViewController. Out of the box it will automatically listen for state updates and keep the UI in sync. The only other responsibility of a ViewController is to listen for interactions from the UI and dispatch actions to change the state. class ListViewController: ViewController<ListView> { // listen for interactions from the view override func setupInteraction() { self.rootView.didCompleteItem = { [unowned self] index in self.dispatch(CompleteItem(index: index)) } } } Note that the dispatch method of view controllers is a bit different than the one exposed by the Katana store: it accepts a simple Dispatchable and does not return anything. This is done to avoid implementing logic inside the view controller. If your interaction handler needs to do more than one single thing, you should pack all that logic in a side effect and dispatch that. For the rare cases when it's needed to have a bit of logic in a view controller (for example when updating an old app without wanting to completely refactor all the logic) you can use the following methods: open func __unsafeDispatch<T: StateUpdater>(_ dispatchable: T) -> Promise<Void> open func __unsafeDispatch<T: ReturningSideEffect>(_ dispatchable: T) -> Promise<T.ReturningValue> Note however that usage of this methods is HIGHLY discouraged, and they will be removed in a future version. Navigation Real apps are made by more than one screen. If a screen needs to present another screen, its ViewController must conform to the RoutableWithConfiguration protocol. extension ListViewController: RoutableWithConfiguration { var routeIdentifier: RouteElementIdentifier { return "list screen"} var navigationConfiguration: [NavigationRequest: NavigationInstruction] { return [ .show("add item screen"): .presentModally({ [unowned self] _ in let aivc = AddItemViewController(store: self.store) return aivc }) ] } } You can then trigger the presentation using one of the navigation actions from the ViewController. self.dispatch(Show("add item screen")) Learn more about the navigation here ViewController containment You can have ViewControllers inside other ViewControllers, this is useful if you want to reuse portions of UI including the logic. To do that, in the parent ViewController you need to provide a ContainerView that will receive the view of the child ViewController as subview. class ParentView: UIView, ViewControllerModellableView { var titleView = UILabel() var childView = ContainerView() func update(oldModel: ParentViewModel?) { // update only the titleView, the childView is managed by another VC } } Then, in the parent ViewController you just need to add the child ViewController: class ParentViewController: ViewController<ParentView> { let childVC: ChildViewController<ChildView>! override func setup() { self.childVC = ChildViewController(store: self.store) self.add(childVC, in: self.rootView.childView) } } All the automation will work out of the box. You will now have a ChildViewController inside the ParentViewController, the ChildViewController's view will be hosted inside the childView. ? UI Snapshot Testing Tempura has a Snapshot Testing system that can be used to take screenshots of your views in all possible states, with all devices and all supported languages. Usage You need to include the TempuraTesting pod in the test target of your app: target 'MyAppTests' do pod 'TempuraTesting' end Specify where the screenshots will be placed inside your plist : UI_TEST_DIR: $(SOURCE_ROOT)/Demo/UITests In Xcode, create a new UI test case class: File -> New -> File... -> UI Test Case Class Here you can use the test function to take a snapshot of a ViewControllerModellableView with a specific ViewModel. import TempuraTesting class UITests: XCTestCase, ViewTestCase { func testAddItemScreen() { self.uiTest(testCases: [ "addItem01": AddItemViewModel(editingText: "this is a test") ]) } } The identifier will define the name of the snapshot image in the file system. You can also personalize how the view is rendered (for instance you can embed the view in an instance of UITabBar) using the context parameter. Here is an example that embeds the view into a tabbar: import TempuraTesting class UITests: XCTestCase, ViewTestCase { func testAddItemScreen() { var context = UITests.Context<AddItemView>() context.container = .tabBarController self.uiTest(testCases: [ "addItem01": AddItemViewModel(editingText: "this is a test") ], context: context) } } If some important content inside a UIScrollView is not fully visible, you can leverage the scrollViewsToTest(in view: V, identifier: String) method. This will produce an additional snapshot rendering the full content of each returned UIScrollView instance. In this example we use scrollViewsToTest(in view: V, identifier: String) to take an extended snapshot of the mood picker at the bottom of the screen. func scrollViewsToTest(in view: V, identifier: String) -> [String: UIScrollView] { return ["mood_collection_view": view.moodCollectionView] } In case you have to wait for asynchronous operations before rendering the UI and take the screenshot, you can leverage the isViewReady(view:identifier:) method. For instance, here we wait until an hypothetical view that shows an image from a remote URL is ready. When the image is shown (that is, the state is loaded, then the snapshot is taken) import TempuraTesting class UITests: XCTestCase, ViewTestCase { func testAddItemScreen() { self.uiTest(testCases: [ "addItem01": AddItemViewModel(editingText: "this is a test") ]) } func isViewReady(_ view: AddItemView, identifier: String) -> Bool { return view.remoteImage.state == .loaded } } The test will pass as soon as the snapshot is taken. Context You can enable a number of advanced features through the context object that you can pass to the uiTest method: - the containerallows you to define a VC as a container of the view during the UITests. Basic navigationControllerand tabBarControllerare already provided, or you can define your own using the customone - the hooksallows you to perform actions when some lifecycle events happen. Available hooks are viewDidLoad, viewWillAppear, viewDidAppear, viewDidLayoutSubviews, and navigationControllerHasBeenCreated - the screenSizeand orientationproperties allows you to define a custom screen size and orientation to be used during the test - the renderSafeAreaallows you to define whether the safe area should be rendered as semitransparent gray overlay during the test - the keyboardVisibilityallows you to define whether a gray overlay should be rendered as a placeholder for the keyboard Multiple devices By default, tests are run only in the device you have choose from xcode (or your device, or CI system). We can run the snapshotting in all the devices by using a script like the following one: xcodebuild \ -workspace <project>.xcworkspace \ -scheme "<target name>" \ -destination name="iPhone 5s" \ -destination name="iPhone 6 Plus" \ -destination name="iPhone 6" \ -destination name="iPhone X" \ -destination name="iPad Pro (12.9 inch)" \ test Tests will run in parallel on all the devices. If you want to change the behaviour, refer to the xcodebuild documentation If you want to test a specific language in the ui test, you can replace the test command with the -testLanguage <iso code639-1>. The app will be launched in that language and the UITests will be executed with that locale. An example: xcodebuild \ -workspace <project>.xcworkspace \ -scheme "<target name>" \ -destination name="iPhone 5s" \ -destination name="iPhone 6 Plus" \ -destination name="iPhone 6" \ -destination name="iPhone X" \ -destination name="iPad Pro (12.9 inch)" \ -testLanguage it Remote Resources It happens often that the UI needs to show remote content (that is, remote images, remote videos, ...). While executing UITests this could be a problem as: - tests may fail due to network or server issues - system should take care of tracking when remote resources are loaded, put them in the UI and only then take the screenshots To fix this issue, Tempura offers a URLProtocol subclass named LocalFileURLProtocol that tries to load remote files from your local bundle. The idea is to put in your (test) bundle all the resources that are needed to render the UI and LocalFileURLProtocol will try to load them instead of making the network request. Given an url, LocalFileURLProtocol matches the file name using the following rules: - search a file that has the url as a name (e.g.,) - search a file that has the last path component as file name (e.g., image.png) - search a file that has the last path component without extension as file name (e.g., image) if a matching file cannot be retrieved, then the network call is performed. In order to register LocalFileURLProtocol in your application, you have to invoke the following API as soon as possible in your tests lifecycle: URLProtocol.registerClass(LocalFileURLProtocol.self) Note that if you are using Alamofire this won't work. Here you can find a related issue and a link on how to configure Alamofire to deal with URLProtocol classes. UI Testing with ViewController containment ViewTestCase is centred about the use case of testing ViewControllerModellableViews with the automatic injection of ViewModels representing testing conditions for that View. In case you are using ViewController containment (like in our ParentView example above) there is part of the View that will not be updated when injecting the ViewModel, as there is another ViewController responsible for that. In that case you need to scale up and test at the ViewController's level using the ViewControllerTestCase protocol:Models let viewModel = ParentViewModel(title: "A test title") let childVM = ChildViewModel(value: 12) /// define the tests we want to perform let tests: [String: ParentViewModel] = [ "first_test_vc": viewModel ] /// configure the ViewController with ViewModels, also for the children VCs func configure(vc: ParentViewController, for testCase: String, model: ParentViewModel) { vc.viewModel = model vc.childVC.viewModel = childVM } /// execute the UI tests func test() { let context = UITests.VCContext<ParentViewController>(container: .none) self.uiTest(testCases: self.tests, context: context) } } In case you don't have child ViewControllers to configure, it's even easier as you don't need to supply a configure(:::) method:Model let viewModel = ParentViewModel(title: "A test title") /// define the tests we want to perform let tests: [String: ParentViewModel] = [ "first_test_vc": viewModel ] /// execute the UI tests func test() { let context = UITests.VCContext<ParentViewController>(container: .tabbarController) self.uiTest(testCases: self.tests, context: context) } } ? Where to go from here Example application This repository contains a demo of a todo list application done with Tempura. To generate an Xcode project file you can use Tuist. Run tuist generate, open the project and run the Demo target. ? Swift Version Certain versions of Tempura only support certain versions of Swift. Depending on which version of Swift your project is using, you should use specific versions of Tempura. Use this table in order to check which version of Tempura you need. ? Get in touch If you have any questions or feedback we'd love to hear from you at [email protected] ?♀️. ?⚖️ License Tempura is available under the MIT license. ❓ About Tempura is maintained by Bending Spoons. We create our own tech products, used and loved by millions all around the world. Sounds cool? Check us out GitHub
https://iosexample.com/tempura-a-holistic-approach-to-ios-development-inspired-by-redux-and-mvvm/
CC-MAIN-2022-33
en
refinedweb
SYNOPSIS #include <modbus.h> cc files `pkg-config --cflags --libs libmodbus` Ethernet establish a connection with other before sending a message, you must set the slave (receiver) with modbus_set_slave(3). If you’re running a slave, its slave number will be used to filter received messages. The libmodbus implementation of RTU isn’t time based as stated in original Modbus specification, instead all bytes are sent as fast as possible and a response or an indication is considered complete when all expected characters have been received. This implementation offers very fast communication but you must take care to set a response timeout of slaves less than response timeout of master (ortherwise other slaves may ignore master requests when one of the slave is not responding). - Create a Modbus RTU context - - Set the serial mode - Independent) - Common Before using any libmodbus functions, the caller must allocate and initialize a modbus_t context with functions explained above, then the following functions are provided to modify and free a context: - Free libmodbus context - - Set slave ID - - Enable debug mode - - Timeout settings - - Error recovery mode - - Setter/getter of internal socket - - Information about header - - Macros for data manipulation MODBUS_GET_HIGH_BYTE(data), extracts the high byte from a byte MODBUS_GET_LOW_BYTE(data), extracts the low byte from a byte MODBUS_GET_INT64_FROM_INT16(tab_int16, index), builds an int64 from the four first int16 starting at tab_int16[index]] MODBUS_SET_INT32_TO_INT16(tab_int16, index, value), set an int32 value into the two first int16 starting at tab_int16[index] MODBUS_SET_INT64_TO_INT16(tab_int16, index, value), set an int64 value into the four first int16 starting at tab_int16[index] - Handling of bits and bytes - - Set or get float numbers - - - Write data - - Write and read data - - Raw requests - - Reply an exception - Server The server is waiting for request from clients and must answer when it is concerned by the request. The libmodbus offers the following functions to handle requests: - Data mapping - - Receive - - Reply -> RESOURCES Main web site: Report bugs on the issue tracker at. COPYING Free use of this software is granted under the terms of the GNU Lesser General Public License (LGPL v2.1+). For details see the file COPYING.LESSER included with the libmodbus distribution.
https://libmodbus.org/docs/v3.1.7/
CC-MAIN-2022-33
en
refinedweb
How is Thread.SpinWait actually implemented? I’m always drawn into disassembling stuff and learning how something works under the hood. The Thread.SpinWait is something I’m going to explore. Because .NET Core is open source I can attack this from side of both sources as well as pure disassembly. Sources Let’s start simply from sources. Following where the Thread.SpinWait goes, I eventually ended up in internal (yes, internal) class Thread, that derives from RuntimeThread, where the SpinWait method is. This method calls SpinWaitInternal, that is conveniently right above. That’s where the C# code ends and we need to go lower (in this case the “VM”). The implementation is in comsynchronizable.cpp file, using the FCIMPL1 macro (which I think is an abbreviation for ” fastcall function implementation with one argument”). It simply checks what the number of iterations is. If it’s over 100000 the preemptive mode is used to avoid stalling a GC, else the code stays in cooperative mode. In both cases the YieldProcessorNormalized is called passing result from YieldProcessorNormalizationInfo. The YieldProcessorNormalized method calls YieldProcessor number of times based on YieldProcessorNormalizationInfo.yieldsPerNormalizedYield. The YieldProcessor is again a macro, defined in gcenv.base.h (together with MemoryBarrier). Looking at it shows that the implementation differs based on platform. For example on AMD64 using Visual C++ it uses _mm_pause intrinsic. This eventually puts pause instruction into the resulting code. For x86 it simply uses rep nop. The important part of that file is included at the bottom as a reference. Looks like I have the implementation. On platforms where I’m running my code most often, it’s simply pause instruction. Disassembly All the above is nice, but what if I’ve made some mistake? I should be able to see the result in pure disassembly, right? I compiled a simple .NET Framework (non-Core) console application with full optimizations enabled and loaded it into WinDbg. Using the Disassembly and F11 went deeper and deeper into the code. Eventually I ended in this piece of code for 32bit. 7217f56c 8bf1 mov esi,ecx 7217f56e 8975e4 mov dword ptr [ebp-1Ch],esi 7217f571 bf60f51772 mov edi,offset clr!ThreadNative::SpinWait (7217f560) 7217f576 897de0 mov dword ptr [ebp-20h],edi ss:002b:00f3ee58=00000000 7217f579 81fe40420f00 cmp esi,0F4240h 7217f57f 0f8f26492100 jg clr!ThreadNative::SpinWait+0x35 (72393eab) 7217f585 85f6 test esi,esi 7217f587 7e07 jle clr!ThreadNative::SpinWait+0x123 (7217f590) 7217f589 f390 pause 7217f58b 83ee01 sub esi,1 7217f58e 75f9 jne clr!ThreadNative::SpinWait+0x29 (7217f589) The pause instruction is nicely there and the esi register is used to count (down) the iterations. For 64bit, the code obviously still uses pause, but the looping is done slightly differently. 00007ffe`b5a2b556 33db xor ebx,ebx 00007ffe`b5a2b558 81f940420f00 cmp ecx,0F4240h 00007ffe`b5a2b55e 7f0e jg clr!ThreadNative::SpinWait+0x4e (00007ffe`b5a2b56e) 00007ffe`b5a2b560 3bd9 cmp ebx,ecx 00007ffe`b5a2b562 0f8dc5010000 jge clr!ThreadNative::SpinWait+0x20d (00007ffe`b5a2b72d) 00007ffe`b5a2b568 f390 pause 00007ffe`b5a2b56a ffc3 inc ebx 00007ffe`b5a2b56c ebf2 jmp clr!ThreadNative::SpinWait+0x40 (00007ffe`b5a2b560) The ebx ( rbx) register is incremented and compared with ecx ( rcx) where the total number of interations is stored. The decision for cooperative or preemptive mode is visible in both with cmp with 0F4240h value. Summary True, for day-to-day programming in .NET one does not need to know this, heck one does not need Thread.SpinWait at all, and I know it. So what’s the reason for all this? I like such disassembling (pun intended). It keeps my brain occupied and sometimes stretches my abilities, thus I’m learning new stuff. Appendix YieldProcessor macro etc. in gcenv.base.h #if defined(_MSC_VER) #if defined(_ARM_) __forceinline void YieldProcessor() { } extern "C" void __emit(const unsigned __int32 opcode); #pragma intrinsic(__emit) #define MemoryBarrier() { __emit(0xF3BF); __emit(0x8F5F); } #elif defined(_ARM64_) extern "C" void __yield(void); #pragma intrinsic(__yield) __forceinline void YieldProcessor() { __yield();} extern "C" void __dmb(const unsigned __int32 _Type); #pragma intrinsic(__dmb) #define MemoryBarrier() { __dmb(_ARM64_BARRIER_SY); } #elif defined(_AMD64_) extern "C" void _mm_pause ( void ); extern "C" void _mm_mfence ( void ); #pragma intrinsic(_mm_pause) #pragma intrinsic(_mm_mfence) #define YieldProcessor _mm_pause #define MemoryBarrier _mm_mfence #elif defined(_X86_) #define YieldProcessor() __asm { rep nop } #define MemoryBarrier() MemoryBarrierImpl() __forceinline void MemoryBarrierImpl() { int32_t Barrier; __asm { xchg Barrier, eax } } #else // !_ARM_ && !_AMD64_ && !_X86_ #error Unsupported architecture #endif #else // _MSC_VER // Only clang defines __has_builtin, so we first test for a GCC define // before using __has_builtin. #if defined(__i386__) || defined(__x86_64__) #if (__GNUC__ > 4 && __GNUC_MINOR > 7) || __has_builtin(__builtin_ia32_pause) // clang added this intrinsic in 3.8 // gcc added this intrinsic by 4.7.1 #define YieldProcessor __builtin_ia32_pause #endif // __has_builtin(__builtin_ia32_pause) #if defined(__GNUC__) || __has_builtin(__builtin_ia32_mfence) // clang has had this intrinsic since at least 3.0 // gcc has had this intrinsic since forever #define MemoryBarrier __builtin_ia32_mfence #endif // __has_builtin(__builtin_ia32_mfence) // If we don't have intrinsics, we can do some inline asm instead. #ifndef YieldProcessor #define YieldProcessor() asm volatile ("pause") #endif // YieldProcessor #ifndef MemoryBarrier #define MemoryBarrier() asm volatile ("mfence") #endif // MemoryBarrier #endif // defined(__i386__) || defined(__x86_64__) #ifdef __aarch64__ #define YieldProcessor() asm volatile ("yield") #define MemoryBarrier __sync_synchronize #endif // __aarch64__ #ifdef __arm__ #define YieldProcessor() #define MemoryBarrier __sync_synchronize #endif // __arm__ #endif // _MSC_VER
https://www.tabsoverspaces.com/233735-how-is-thread-spinwait-actually-implemented
CC-MAIN-2022-33
en
refinedweb
There are many concepts in machine learning in which a beginner should be perfect. One of these concepts is the training of a machine learning model. So if you’ve never trained a machine learning model before, this article is for you. In this article, I’ll walk you through how to train a machine learning model using Python. Why Do We Need to Train a Machine Learning Model? Most concepts in machine learning revolve around training a machine learning model, but why does a model need to be trained? The answer is that we train a model to find relationships between the independent variables and the dependent variable so that we can predict future values of the dependent variable. So, the only idea behind the training of machine learning models is to find the relationships between the independent variables (x) and the dependent variable (y). So if you are new to machine learning, you must have heard of model training. In the section below, I’ll walk you through how to train your first machine learning model using Python. Train a Machine Learning Model using Python To train your very first machine learning model, you need to have a dataset. Assuming you’re a beginner, I’m not going to walk you through a very complex dataset right now. So here I will be using the classic Iris data which is a very famous dataset among data science newbies. So let’s import the necessary Python libraries and dataset that we need for this task: import numpy as np from sklearn.datasets import load_iris from sklearn.linear_model import LogisticRegression iris = load_iris() Now, I’m going to split this data into two parts (x and y), in x I’m going to store the independent variables we need to predict y, and in y, I’m going to store the target variable (also known as a label): x = iris.data[:,(2,3)] #petal length, petal width y = (iris.target).astype(np.int) Now the next step is to choose a machine learning algorithm. As this problem is classification based, I will simply use the logistic regression algorithm here. So here’s how we train a machine learning model: model = LogisticRegression() model.fit(x, y) We just fit the features x and the target label y to the model by using the model.fit() method provided by the scikit-learn library in Python. Summary So this is how you can easily train machine learning models. The next steps you can take are to start training models on a more complex dataset. Of course, complex data has to be prepared a lot, but once prepared, go through the same process again. You can find over 200 machine learning projects solved and explained from here. Now you can learn to train many machine learning models with different types of problems and datasets. Hope you liked this article on how to train machine learning models. Please feel free to ask your valuable questions in the comments section below.
https://thecleverprogrammer.com/2021/06/09/how-to-train-a-machine-learning-model/
CC-MAIN-2022-33
en
refinedweb
A Python wrapper for Darius Kazemi's Corpora Project Project description A simple Python interface for Darius Kazemi’s Corpora Project, “a collection of static corpora (plural of ‘corpus’) that are potentially useful in the creation of weird internet stuff.” The pycorpora interface makes it easy to use data from the Corpora Project in your program. Here’s an example of how it works: import pycorpora import random # print a random flower name print random.choice(pycorpora.plants.flowers['flowers']) # print a random word coined by Shakespeare print random.choice(pycorpora.words.literature.shakespeare_words['words']) Allison Parrish created the pycorpora interface. The source code for the package is on GitHub. Contributions are welcome! Installation Installation by hand: python setup.py install Installation with pip: pip install pycorpora The package does not include data from the Corpora Project; instead, the data is downloaded when the package is installed (using either of the methods above). By default, the “master” branch of the Corpora Project GitHub repository is used as the source for the data. You can specify an alternative URL to download the data from using the argument --corpora-zip-url on the command line with either of the two methods above: python setup.py install --corpora-zip-url= … or, with pip: pip install pycorpora --install-option="--corpora-zip-url=" (The intention of --corpora-zip-url is to let you install Corpora Project data from a particular branch, commit or fork, so that changes to the bleeding edge of the project don’t break your code.) Usage Getting the data from a particular Corpora Project file is easy. Here’s an example: import pycorpora crayola_data = pycorpora.colors.crayola print crayola_data["colors"][0]["color"] # prints "Almond" The expression pycorpora.colors.crayola returns data deserialized from the JSON file located at data/colors/crayola.json in the Corpora Project (i.e., this file). You can use this syntax even with more deeply nested subdirectories: import pycorpora mr_men_little_miss_data = pycorpora.words.literature.mr_men_little_miss print mr_men_little_miss_data["little_miss"][-1] # prints "Wise" You can use from pycorpora import ... to import a particular Corpora Project category: from pycorpora import governments print governments.nsa_projects["codenames"][0] # prints "ARTIFICE" from pycorpora import humans print humans.occupations["occupations"][0] # prints "accountant" You can also use square bracket indexing instead of attributes for accessing subcategories and individual corpora (just in case the Corpora Project ever adds files with names that aren’t valid Python identifiers): import pycorpora import random fruits = pycorpora.foods["fruits"] print random.choice(fruits["fruits"]) # prints "pomelo" maybe Additionally, pycorpora supports an API similar to that provided by the Corpora Project node package: import pycorpora # get a list of all categories pycorpora.get_categories() # ["animals", "archetypes"...] # get a list of subcategories for a particular category pycorpora.get_categories("words") # ["literature", "word_clues"...] # get a list of all files in a particular category pycorpora.get_files("animals") # ["birds_antarctica", "birds_uk", ...] # get data deserialized from the JSON data in a particular file pycorpora.get_file("animals", "birds_antarctica") # returns dict w/data # get file in a subcategory pycorpora.get_file("words/literature", "shakespeare_words") As an extension of this interface, you can also use the get_categories, get_files and get_file methods on individual categories: import pycorpora # get a list of files in the "archetypes" category pycorpora.archetypes.get_files() # ['artifact', 'character', 'event', ...] # get an individual file from the "archetypes" category pycorpora.archetypes.get_file("character") # returns dictionary w/data # get subcategories of a category pycorpora.words.get_categories() # ['literature', 'word_clues'] Examples Here are a few quick examples of using data from the Corpora Project to do weird and fun stuff. Create a list of whimsically colored flowers: from pycorpora import plants, colors import random random_flowers = random.sample(plants.flowers["flowers"], 10) random_colors = random.sample( [item['color'] for item in colors.crayola["colors"]], 10) for pair in zip(random_colors, random_flowers): print " ".join(pair).title() # outputs (e.g.): # Maroon Bergamot # Blue Bell Zinnia # Pink Flamingo Camellias # Tickle Me Pink Begonia # Burnt Orange Clover # Fuzzy Wuzzy Hibiscus # Outer Space Forget Me Not # Almond Petunia # Pine Green Ladys Slipper # Shadow Jasmine Create random biographies: from pycorpora import humans, geography import random def a_biography(): return "{0} is a(n) {1} who lives in {2}.".format( random.choice(humans.firstNames["firstNames"]), random.choice(humans.occupations["occupations"]), random.choice(geography.us_cities["cities"])["city"]) for i in range(5): print a_biography() # outputs (e.g.): # Jessica is a(n) ceiling tile installer who lives in Grand Forks. # Kayla is a(n) substance abuse social worker who lives in Torrance. # Luis is a(n) hydrologist who lives in Saginaw. # Leah is a(n) heating installer who lives in Danville. # Grant is a(n) building inspector who lives in Vineland. Automated pizza topping-related boasts about your inebriation: from pycorpora import words, foods import random # "I'm so smashed I could eat a pizza with spinach, cheese, *and* hot sauce." print "I'm so {0} I could eat a pizza with {1}, {2}, *and* {3}.".format( random.choice(words.states_of_drunkenness["states_of_drunkenness"]), *random.sample(foods.pizzaToppings["pizzaToppings"], 3)) The possibilities… are endless. License The pycorpora package is MIT licensed (see LICENSE.txt). The data in the Corpora Project is itself in the public domain (CC0). Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/pycorpora/0.1.0/
CC-MAIN-2022-33
en
refinedweb
/* Updating of data structures for redisplay. Copyright (C) 1985, 86, 87, 88, 93, 94, 95, 97, 1998 <signal.h> #include <config "frame.h" #include "window.h" #include "commands.h" #include "disptab.h" #include "indent.h" #include "intervals.h" #include "blockinput.h" #include "process.h" #include "keyboardndef PENDING_OUTPUT_COUNT #define PENDING_OUTPUT_COUNT(FILE) ((FILE)->_ptr - (FILE)->_base) #endif #endif /* not __GNU_LIBRARY__ */ /* Structure to pass dimensions around. Used for character bounding boxes, glyph matrix dimensions and alike. */ struct dim { int width; int height; }; /* Function prototypes. */)); static void swap_glyphs_in_rows P_ ((struct glyph_row *, struct glyph_row *)); static void swap_glyph_pointers P_ ((struct glyph_row *, struct glyph_row *)); static int glyph_row_slice_p P_ ((struct glyph_row *, struct glyph_row *)); void update_window_line P_ ((struct window *, int)); static void update_marginal_area P_ ((struct window *, int, int)); static void update_text_area P_ ((struct window *, int)); static void make_current P_ ((struct glyph_matrix *, struct glyph_matrix *, int)); static void mirror_make_current P_ ((struct window *, int)); void check_window_matrix_pointers P_ ((struct window *));; /* The currently selected frame. In a single-frame version, this variable always holds the address of the_only_frame. */ struct frame . */ #if GLYPH_DEBUG top_line_changed_p = 0; int top_line_p = 0; int left = -1, right = -1; int window_x, window_y, window_width, window_height; /* See if W had a top line that has disappeared now, or vice versa. */ if (w) { top_line_p = WINDOW_WANTS_TOP_LINE_P (w); top_line_changed_p = top_line_p != matrix->top_line_p; } matrix->top_line_p = top && !top || top).
https://emba.gnu.org/emacs/emacs/-/blame/2febf6e0335cb2e6ae5aff51cd1b356e0a8e2629/src/dispnew.c
CC-MAIN-2022-33
en
refinedweb
Write output to a file associated with a file descriptor #include <stdio.h> int dprintf( int filedes, const char* format, ... ); libc Use the -l c option to qcc to link against this library. This library is usually included automatically. The dprintf() function writes output to the file associated with the file descriptor filedes, under control of the format specifier. The number of characters written, or a negative value if an output error occurred (errno is set).
https://www.qnx.com/developers/docs/7.1/com.qnx.doc.neutrino.lib_ref/topic/d/dprintf.html
CC-MAIN-2022-33
en
refinedweb
Kubernetes scheduling usually doesn’t need much help in order to determine which node to run a pod on. However, you may occasionally wish to have a little more control. In this lab, you will be able to practice the process of ensuring a pod runs on a specific node. Learning Objectives Successfully complete this lab by achieving the following learning objectives: - Configure the `auth-gateway` Pod to Only Run on `k8s-worker2` Locate the auth-gatewaypod in the beebox-authnamespace. Modify the pod, using a label and a nodeSelectorconstraint, so it will always be scheduled on k8s-worker2. You will need to delete and re-create the pod in order for these changes to take effect. You can find a YAML descriptor for this pod at /home/cloud_user/auth-gateway.yml. - Configure the `auth-data` Deployment’s Replica Pods to Only Run on `k8s-worker2` You will find the auth-datadeployment in the beebox-authnamespace. Modify the deployment, using a nodeSelectorconstraint, so its replica pods will always run on k8s-worker2. These changes should take effect once you make this change via a rolling deployment. You can find a YAML descriptor for this pod at /home/cloud_user/auth-data.yml.
https://acloudguru.com/hands-on-labs/assigning-a-kubernetes-pod-to-a-specific-node
CC-MAIN-2022-33
en
refinedweb
A library to use postman collection V2 in python. Inspired by Bardia Heydari nejad Project description Postpy2 Postpy2 is a library for Postman that run Postman's collections. Originaly was forked from and updated to Postman collection v2.1 format. If you are using postman, but collection runner is not flexible enough for you and postman codegen is too boring, Postpy2 is here for your continuous integration. Why use Postpy2 instead of postman codegen? - Postman codegen should be applied one by one for each request and it's boring when your API changes, but with Postpy2, you don't need to generate code. Just export collection with Postman and use it with Postpy2. - In code generation, you don't have environment feature anymore and variables are hard coded. Why user Postpy2 instead of Postman collection runner? - With Postpy2, you write your own script. But collection runner just turns all your requests one by one. So with Postpy2, you can design more complex test suites. How to install? Postpy2 is available on PyPI and you can install it using pip: $ pip install Postpy2 How to use? Import Postpy2 from Postpy2.core import Postpy2 Make an instance from Postpy2 and give the address of postman collection file. runner = Postpy2('/path/to/collection/Postman echo.postman_collection') Now you can call your request. Folders' name change to upper camel case and requests' name change to lowercase form. In this example the name of folder is "Request Methods" and it's change to RequestMethods and the name of request was "GET Request" and it's change to get_request. So you should call a function like runner.YourFolderName.you_request_name() response = runner.RequestMethods.get_request() print(response.json()) print(response.status_code) Variable assignment In Postpy2 you can assign values to environment variables in runtime. runner.environments.update({'BASE_URL': ''}) runner.environments.update({'PASSWORD': 'test', 'EMAIL': 'you@email.com'}) AttributeError Since RequestMethods and get_request does not really exists your intelligent IDE cannot help you. So Postpy2 tries to correct your mistakes. If you spell a function or folder wrong it will suggest you the closest name. >>> response = runner.RequestMethods.get_requasts() Traceback (most recent call last): File "test.py", line 11, in <module> response = runner.RequestMethods.get_requasts() File "/usr/local/lib/python3.5/site-packages/Postpy2/core.py", line 73, in __getattr__ 'Did you mean %s' % (item, self.name, similar)) AttributeError: get_requasts request does not exist in RequestMethods folder. Did you mean get_request You can also use help() method to print all available requests. >>> runner.help() Posible requests: runner.AuthOthers.hawk_auth() runner.AuthOthers.basic_auth() runner.AuthOthers.oauth1_0_verify_signature() runner.RequestMethods.get_request() runner.RequestMethods.put_request() runner.RequestMethods.delete_request() runner.RequestMethods.post_request() runner.RequestMethods.patch_request() ... >>> runner.RequestMethods.help() runner.RequestMethods.delete_request() runner.RequestMethods.patch_request() runner.RequestMethods.get_request() runner.RequestMethods.put_request() runner.RequestMethods.post_request() Contribution Feel free to share your ideas or any problems in issues. Contributions are welcomed. Give Postpy2 a star to encourage me to continue its development. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/postpy2/0.0.1/
CC-MAIN-2022-33
en
refinedweb
Hi, I am developing an app that monitors and corrects the user input based on some rules. I am reading the events from keyboard with the keyboard python module. I faced some problem when the user types very fast, as regards some overlays of text. By this I mean that when my app writes the correct input, the user continues writing and may writes before the corrector types the whole word. I found, that I can start a keyboard hook with suppressed output to screen and tried to implements a solution. In the above code I tried recreating the problem and tried giving the general idea. import keyboard from collections import deque string : str = "" counter : int = 0 is_suppressed: bool = False # this indicates if letters are shown in the window or not suppressed_string: str = "" q = deque() # this is used as a buffer, and stores the words that are entered when the # program is correcting def keyboard_module_write_to_screen(is_suppressed, string): for i in range(len(string) + 1): print("--Pressing backspace--") keyboard.press_and_release('backspace') for i, char in enumerate (string): # simulating a calculation final_char_to_be_written = char.upper() print("---WRITING THE CHAR -> {} ---".format(final_char_to_be_written)) keyboard.write(final_char_to_be_written) for i in range(30): keyboard.write('*') keyboard.write(' ') def monitoring(event): global counter, string, is_suppressed, suppressed_string if (event.event_type == keyboard.KEY_DOWN): # and event.name != 'backspace'): print("-String entered : {}".format(event.name)) if (event.name == 'space'): # if space is button a new word is entered if (is_suppressed is True): # if program occupied writing to the screen save the word to the buffer q.appendleft(suppressed_string) suppressed_string = "" elif (is_suppressed is False): # check and write first from the deque, # write the word(s) that were stored in the buffer before writing current # input string # haven't find a way to do the above alongside the others keyboard.unhook_all() keyboard.hook(monitoring, suppress = True) is_suppressed = True keyboard_module_write_to_screen(is_suppressed, string) keyboard.unhook_all() keyboard.hook(monitoring, suppress = False) is_suppressed = False counter = 0 string = "" elif (event.name in "abcdefghijklmnopqrstuvwxyz") : if (is_suppressed is True): suppressed_string = ''.join([suppressed_string, event.name]) print("########## SUPPRESSED_STRING = {} #########".format(suppressed_string)) counter = counter + 1 print("-- COUNTER is : {}".format(counter)) string = ''.join([string, event.name]) elif (event.name == "]"): print(q) elif (event.name == 'backspace'): pass keyboard.hook(monitoring, suppress = False) The main thing I want to achieve is 1)while correcting - writing to the window, read events and save them to a buffer 2)when correcting - writing is done check the buffer, write it's content, but keep reading events 3)if buffer empty and currently not writing something, read events etc. I didn't manage to make it work and produce the desired result. Any advice on how to make it work, would be useful. Thanks in advance for any help.
https://www.daniweb.com/programming/software-development/threads/522274/suppress-input-while-writing-to-window-python
CC-MAIN-2022-33
en
refinedweb
Ads Via DevMavensIS This />. TheIt means impossible. Since no engineer is going to admit something is impossible, they use this word instead. Non-trivial? Is it possible to reverse the flow of time when we click this button? The requests are more realistic, of course, but the stock reply is: Given enough resources, anything is possible.? What does it take to get this into our software? Then you know they are serious and passionate about the idea, and it’s time to start talking. to build a model binder to bind and validate recipes. Let’s start with the naive approach. This code has a number of problems. public class RecipeModelBinder : IModelBinder { public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) { var form = controllerContext.HttpContext.Request.Form; var recipe = new Recipe(); recipe.Name = form["Name"]; // ... and so on for all properties if (String.IsNullOrEmpty(recipe.Name)) { bindingContext.ModelState.AddModelError("Name", "..."); } return recipe; } } Problem The code works directly off the HttpContext.Request.Form. There are a couple difficulties working with the Form collection directly. One problem is how your tests will require more setup to create an HttpContext, Request, and Form objects. The second problem is related to culture. Values you need from the Form collection are culture sensitive because the user will type a date and time value into their browser using a local convention. However, the URL is another place you might need to check when binding values (the query string and routing data in general), and these values are culture invariant. Instead of worrying about all these details, it’s better to use the ValueProvider given to us by the incoming binding context. The value provider is easy to populate in a unit test, and takes care of culture sensitive conversions. We’ll add a GetValue method to help fetch values from the ValueProvider. At runtime the MVC framework populates the provider with values it finds in the request’s form, route, and query string collections. public class RecipeModelBinder : IModelBinder { public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) { var recipe = new Recipe(); recipe.Name = GetValue<string>(bindingContext, "Name"); // ... and so on for all properties if (String.IsNullOrEmpty(recipe.Name)) { bindingContext.ModelState.AddModelError("Name", "..."); } return recipe; } private T GetValue<T>(ModelBindingContext bindingContext, string key) { ValueProviderResult valueResult; bindingContext.ValueProvider.TryGetValue(key, out valueResult); return (T)valueResult.ConvertTo(typeof(T)); } } Problem: If you use any HTML Helpers (like Html.TextBox), you’ll see null reference exceptions when validation errors are present. One of the side-effects of model binding is that binding the model should put model values into ModelState. When an HTML helper sees there is a ModelState error for “Name”, it assumes it will also find the “attempted value” that the user entered. The helper uses attempted values to repopulate inputs and allow the user to fix any errors. The only change is to set the model value inside GetValue. private T GetValue<T>(ModelBindingContext bindingContext, string key) { ValueProviderResult valueResult; bindingContext.ValueProvider.TryGetValue(key, out valueResult); bindingContext.ModelState.SetModelValue(key, valueResult); return (T)valueResult.ConvertTo(typeof(T)); } The model binder we’ve written so far will work with controller actions like the following: [AcceptVerbs(HttpVerbs.Post)] public ActionResult Create(Recipe newRecipe) { // ... return View(newRecipe); } But it won’t work in this scenario: [AcceptVerbs(HttpVerbs.Post)] public ActionResult Create(FormCollection formCollection) { var recipe = RecipeFactory.Create(); TryUpdateModel(recipe); if (ModelState.IsValid) { // save it ... } return View(recipe); } The recipe in the controller action will never see any changes when calling TryUpdateModel, because the model binder is creating and binding data into its own recipe object. Perhaps you never use UpdateModel or TryUpdateModel, but if you expect your model binder to work in any possible situation, you need to make sure the model binder works when someone else creates the model. The only change is to check bindingContext.Model to see if we already have a model. Only when this property is null will we go to the trouble of creating a new model. public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) { var form = controllerContext.HttpContext.Request.Form; var recipe = (Recipe)(bindingContext.Model ?? new Recipe()); recipe.Name = GetValue<string>(bindingContext, "Name"); // ... return recipe; } Congratulations! We just re-implemented the behavior of the DefaultModelBinder - only our model binder is stupid and only works with Recipes. If all we need is validation for a specific type of model, we can derive from the built-in binder and override OnModelUpdated or OnPropertyValidating and provide our custom logic. OnModelUpdated is easy to work with, so what follows is the entire listing for our custom model binder. public class RecipeModelBinder : DefaultModelBinder { protected override void OnModelUpdated(ControllerContext controllerContext, ModelBindingContext bindingContext) { var recipe = bindingContext.Model as Recipe; if (String.IsNullOrEmpty(recipe.Name)) { bindingContext.ModelState.AddModelError("Name", "..."); } } } It turns out we didn’t need any code from those first four iterations, but I hope you find them useful because they demonstrate some common problems I’m seeing in custom model binders. Of course we could take this example even further and eliminate the magic strings, but we’ll leave the work for another day. In software development – every iteration is a learning opportunity! Model? The. Luis and David recently posted about the controls that appear in the ASP.NET MVC Futures 1.0 release. I’ve seen some discussions where people positively erupt at any mention of the word “control” in an MVC setting. These are the people who consider ASP.NET Web Forms as the ultimate source of evil in the universe – a cross between a Sith lord and a velociraptor. The idea of introducing controls, with un-testable event handling code and giant gobs of view state, is a travesty that will corrupt the minds of all developers before devouring their children. They must be stopped. Let’s take the simple scenario of rendering a text input in the browser. If you are using the ASP.NET MVC framework release with no additional libraries, you’ll be looking to use one of the 4 Html helper TextBox methods (excerpted for space below): string TextBox(... string name); string TextBox(... string name, object value); string TextBox(... string name, object value, IDictionary<string, object> htmlAttributes); string TextBox(... string name, object value, object htmlAttributes); A text input is a pretty trivial control, but even these 4 options don’t give you a clean path to the entire universe of things you might want to do when displaying a value in a text input, like specifying a formatting string to use when converting the model value to a string. One solution is to write additional helper methods, or include an additional library with the methods already written, but you are only adding to the number of method overloads a developer has to parse when they just want to display a simple text input. In the end, you’ll still be looking at code similar to the following… <%= Html.TextBox("ReleaseDate", String.Format("{0:d}", Model.ReleaseDate), new { @class = "special" })%> …versus the declarative syntax of the MVC futures TextBox control… <mvc:TextBox Note that the MVC control does not inject view state into the rendered output, but that’s not to say that the MVC controls don’t have some issues. One issue is that they still inherit properties, behaviors, and events from System.Web.UI.Control, and some of those inherited features don’t make sense in an MVC view. I think I'm still happier using Html helpers. Here are some advantages I see to using controls: And the disadvantages: It’s also worth pointing out that there are other solutions, besides controls, to the often clumsy Html helpers. By sticking to view-specific models, conventions, and effective CSS with JavaScript, you can remove many of the concerns that the HtmlHelpers burden themselves with. Using a view engine other than the web forms view engine can also solve some of these issues. What do you think? Will MVC controls be the spawn of Satan, or the blessing of a saint? Just a quick note to let you know that the first three modules of my ASP.NET MVC course are ready for consumption at Pluralsight! On-Demand. Pluralsight is building a fantastic library of online content you can view as a subscriber, or see in person via instructor led training at a classroom or on-site at your place. There is also a heap of free screencasts and previews to wet your appetite. A single subscription can get you access to training for WPF, C#, .NET, ASP.NET, AJAX, WF, WCF, Silverlight, BizTalk, with more on the way. P.S. I’ve heard the LINQ class is awesome, too… In the last post we talked about using entities as the models in an MVC application. This approach works well until an application reaches a certain level of complexity, at which point entities as the M in MVC can become painful. I believe entities exist to serve the business layer. They also have a role in the data access layer, but only because the data access layer has to be acutely aware of entities. The reverse is not true – entities don’t need to know details about the data access layer nor how they are saved in persistent storage. Entities aren’t owned by the UI, or the data access layer. Entities are business objects and are owned by the business layer. An application that supports a complex business needs a fine-tuned layer of business logic. You know an application is growing in complexity when you uncover scenarios like these: These types of requirements have a significant impact on the design and behavior of the business layer and its entities. You want all this complicated business stuff in a layer where it is testable, maintainable, and free from the influence of all the infrastructure that surrounds it – databases, data grids, message queues, communication protocols, and the technology du jour. It’s a business layer built with business objects that encapsulate business state and business behavior. It’s the secret sauce that makes money for your company. It’s a rich model of your domain. So - why wouldn’t we want to use this rich domain model inside of our views? Isn’t it every young view’s dream to grow up and marry an extraordinarily rich, fat model? Views can grow complex just as quickly as business logic can grow complex. Complex views might exhibit some of the following conditions. Perhaps you don’t consider these views as super-complex because you build them all the time, but the types of views we are talking about in the above three bullet points do place requirements and constraints on the model. For example, the UI may require a model that can serialize into a JSON or RSS format. If you share your entities or domain model with the UI layer, you’ll find your business objects have to serve two masters. The requirements from these two masters will pull your business objects in different directions, and they won’t be optimized to fit in either role. Also, ask yourself these questions about the model for your views: Answers: Complex applications often require multiple models. There is the domain model that encapsulates your company’s secret money-making business sauce, and then there are multiple view models that the UI layer consumes. Somewhere in between is logic that maps data between the various models. This isn’t a new idea, and it was at one time known as the Model Model View Controller pattern. You might create a model class for each type of view, like a – MovieReviewViewModel, and perhaps all the UI models derive from a base class. The model classes will hold just the state that their respective views require. These classes don’t need behavior – they are essentially just data transfer objects. In a sense, the model classes also become contracts that explicitly describe what the controller needs to put together for the UI, and the view author sees a model with just the information they need. Of course, building these additional models comes with a price. It’s up to you to decide if the benefits are worth the price: It was: haven’t given a definitive answer, but I hope I’ve given you enough of my opinion to see that the answer depends on the complexity of your application and its long term goals. For forms-over-data applications, passing your entities can be a simple and effective solution. For more complex applications you may need models built specifically for your views to maintain the integrity and maintainability of your business objects.
http://odetocode.com/blogs/scott/
crawl-002
en
refinedweb
For starters, let's consider a relatively straightforward function that takes three integer parameters and returns an arithmetic combination of them. This is nice and simple, especially since it involves no control flow: int mul_add(int x, int y, int z) { return x * y + z; } As a preview, the LLVM IR we’re going to end up generating for this function will look like: define i32 @mul_add(i32 %x, i32 %y, i32 %z) { entry: %tmp = mul i32 %x, %y %tmp2 = add i32 %tmp, %z ret i32 %tmp2 } If you're unsure what the above code says, skim through the LLVM Language Reference Manual and convince yourself that the above LLVM IR is actually equivalent to the original function. Once you’re satisfied with that, let's move on to actually generating it programmatically! Of course, before we can start, we need to #include the appropriate LLVM header files: #include "llvm/Module.h" #include "llvm/Function.h" #include "llvm/PassManager.h" #include "llvm/CallingConv.h" #include "llvm/Analysis/Verifier.h" #include "llvm/Assembly/PrintModulePass.h" #include "llvm/Support/IRBuilder.h" #include "llvm/Support/raw_ostream.h" Now, let's get started on our real program. Here's what our basic main() will look like: using namespace llvm; Module* makeLLVMModule(); int main(int argc, char**argv) { Module* Mod = makeLLVMModule(); verifyModule(*Mod, PrintMessageAction); PassManager PM; PM.add(createPrintModulePass(&outs())); PM.run(*Mod); delete Mod; return 0; } The first segment is pretty simple: it creates an LLVM “module.” In LLVM, a module represents a single unit of code that is to be processed together. A module contains things like global variables, function declarations, and implementations. Here we’ve declared a makeLLVMModule() function to do the real work of creating the module. Don’t worry, we’ll be looking at that one next! The second segment runs the LLVM module verifier on our newly created module. While this probably isn’t really necessary for a simple module like this one, it's always a good idea, especially if you’re generating LLVM IR based on some input. The verifier will print an error message if your LLVM module is malformed in any way. Finally, we instantiate an LLVM PassManager and run the PrintModulePass on our module. LLVM uses an explicit pass infrastructure to manage optimizations and various other things. A PassManager, as should be obvious from its name, manages passes: it is responsible for scheduling them, invoking them, and ensuring the proper disposal after we’re done with them. For this example, we’re just using a trivial pass that prints out our module in textual form. Now onto the interesting part: creating and populating a module. Here's the first chunk of our makeLLVMModule(): Module* makeLLVMModule() { // Module Construction Module* mod = new Module("test"); Exciting, isn’t it!? All we’re doing here is instantiating a module and giving it a name. The name isn’t particularly important unless you’re going to be dealing with multiple modules at once. Constant* c = mod->getOrInsertFunction("mul_add", /*ret type*/ IntegerType::get(32), /*args*/ IntegerType::get(32), IntegerType::get(32), IntegerType::get(32), /*varargs terminated with null*/ NULL); Function* mul_add = cast<Function>(c); mul_add->setCallingConv(CallingConv::C); We construct our Function by calling getOrInsertFunction() on our module, passing in the name, return type, and argument types of the function. In the case of our mul_add function, that means one 32-bit integer for the return value and three 32-bit integers for the arguments. You'll notice that getOrInsertFunction() doesn't actually return a Function*. This is because getOrInsertFunction() will return a cast of the existing function if the function already existed with a different prototype. Since we know that there's not already a mul_add function, we can safely just cast c to a Function*. In addition, we set the calling convention for our new function to be the C calling convention. This isn’t strictly necessary, but it ensures that our new function will interoperate properly with C code, which is a good thing. Function::arg_iterator args = mul_add->arg_begin(); Value* x = args++; x->setName("x"); Value* y = args++; y->setName("y"); Value* z = args++; z->setName("z"); While we’re setting up our function, let's also give names to the parameters. This also isn’t strictly necessary (LLVM will generate names for them if you don’t specify them), but it’ll make looking at our output somewhat more pleasant. To name the parameters, we iterate over the arguments of our function and call setName() on them. We’ll also keep the pointer to x, y, and z around, since we’ll need them when we get around to creating instructions. Great! We have a function now. But what good is a function if it has no body? Before we start working on a body for our new function, we need to recall some details of the LLVM IR. The IR, being an abstract assembly language, represents control flow using jumps (we call them branches), both conditional and unconditional. The straight-line sequences of code between branches are called basic blocks, or just blocks. To create a body for our function, we fill it with blocks: BasicBlock* block = BasicBlock::Create("entry", mul_add); IRBuilder<> builder(block); We create a new basic block, as you might expect, by calling its constructor. All we need to tell it is its name and the function to which it belongs. In addition, we’re creating an IRBuilder object, which is a convenience interface for creating instructions and appending them to the end of a block. Instructions can be created through their constructors as well, but some of their interfaces are quite complicated. Unless you need a lot of control, using IRBuilder will make your life simpler. Value* tmp = builder.CreateBinOp(Instruction::Mul, x, y, "tmp"); Value* tmp2 = builder.CreateBinOp(Instruction::Add, tmp, z, "tmp2"); builder.CreateRet(tmp2); return mod; } The final step in creating our function is to create the instructions that make it up. Our mul_add function is composed of just three instructions: a multiply, an add, and a return. IRBuilder gives us a simple interface for constructing these instructions and appending them to the “entry” block. Each of the calls to IRBuilder returns a Value* that represents the value yielded by the instruction. You’ll also notice that, above, x, y, and z are also Value*'s, so it's clear that instructions operate on Value*'s. And that's it! Now you can compile and run your code, and get a wonderful textual print out of the LLVM IR we saw at the beginning. To compile, use the following command line as a guide: # c++ -g tut1.cpp `llvm-config --cxxflags --ldflags --libs core` -o tut1 # ./tut1 The llvm-config utility is used to obtain the necessary GCC-compatible compiler flags for linking with LLVM. For this example, we only need the 'core' library. We'll use others once we start adding optimizers and the JIT engine.
http://www.llvm.org/docs/tutorial/JITTutorial1.html
crawl-002
en
refinedweb
Windows Tip: Windows Vista and GPT disks Send your Windows question to Mitch today! | See other Windows tips Windows Vista supports two types of disk partitioning: Master Boot Record (MBR) and Globally Unique Identifier Partition Table (GPT). GPT disks offer several advantages over MBR disks including more partitions (128 instead of 4) and larger partition sizes (theoretically up to 18 exabytes or about 18 million terabytes). But before you run out and get a zillion terabyte drive for your Vista workstation so you can store all your YouTube videos, you need to know the following. First, Vista only supports NTFS-formatted disks up to 256 TB in size. While that's a lot, it's still only a tiny fraction of an exabyte. So maybe you won't be able to store all those videos after all. Second, Vista can only boot from a GPT disk if your system uses Extensible Firmware Interface (EFI) instead of BIOS. But you can always have GPT data drives even if your system drive is MBR. But third and most importantly, you must be aware that if the dirty bit somehow gets set on your humungous GPT volume, chkdsk.exe is going to take a darned long time to run. In fact, with your typical 1 terabyte LUN having millions of files, chkdsk will likely take several hours to finish. So if you're dreaming of having dozens of terabytes at your fingertips someday, think again. Would you really want your system to be locked into checking disk integrity for several days while you twiddle your thumbs? Storage capacities for desktop systems have been growing at a tremendous rate in recent years, but chkdsk performance hasn't been able to keep up with this growth. A workaround is to use DFS to create multiple smaller volumes on your storage device and then create a namespace to logically unite these volumes into a single large volume. This doesn't really solve the problem however, but solid state hard drives may eventually shift the balance in this regard. Meanwhile, why not delete those old videos from your hard drive to free up some! Or better still........BackOr better still........Back them all up on seperate slightly smaller ones instead?? I personally have 6TB of space, I use 2TB built in, and the rest is on several external 750GB hard drives.............
http://www.itworld.com/nlswindows070220
crawl-002
en
refinedweb
Is there some OS X linker option one can give while building the extension such that the symbols in baz.c are exposed to foomodule.c while building module foo, but these baz symbols are not exposed outside module foo? Would the "two-level namespace" as opposed to the flat namespace be of use here? Manoj Manoj Plakal wrote: > > Steven, > > Thanks for the patch. I applied it to the > 2.2 tree that I had checked out from CVS > and re-built foo and bar and tried > importing them again. > > eve(1418):test% python > Python 2.2+ (#2, Jan 24 2002, 20:36:33) > [GCC 2.95.2 19991024 (release)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > >>> import foo > var_for_foo = 1 > >>> import bar > Traceback (most recent call last): > File "<stdin>", line 1, in ? > ImportError: dyld: python multiple definitions of symbol _var_for_bar > foo.so definition of _var_for_bar > bar.so definition of _var_for_bar > Failure linking new module > >>> > > var_for_foo and var_for_bar are defined in baz.c which > is included in both modules foo and bar. The modules > were compiled with a simple distutils setup.py file > (which I had included in my earlier post) which > builds the two modules with -flat_namespace. > > The same source code works fine on Red Hat Linux 7.2 > with Python 2.1 and Win98 with Python 2.2b2. > > So my question now is: is this really a bug > in the OS X implementation, or is this how > Python is supposed to work (and the Linux > and Win32 dynamic loading implementations > just happen to allow this without errors)? > > If I am to file a bug at Sourceforge, I'd > like to know exactly what the bug is :) > > Manoj > > > Steven Majewski wrote: > >> >> Below is a patch that adds a call to NSLinkEditError, which will >> give you a more explicit error message about what's wrong. It's >> how I discovered which symbol was the problem with that bug: >> >> >> *** dynload_next.c.0 Fri Jan 18 13:50:06 2002 >> --- dynload_next.c Fri Jan 18 14:08:13 2002 >> *************** >> *** 150,157 **** >> if (errString == NULL) { >> newModule = NSLinkModule(image, pathname, >> >> NSLINKMODULE_OPTION_BINDNOW|NSLINKMODULE_OPTION_RETURN_ON_ERROR); >> ! if (!newModule) >> ! errString = "Failure linking new module"; >> } >> if (errString != NULL) { >> PyErr_SetString(PyExc_ImportError, errString); >> --- 150,164 ---- >> if (errString == NULL) { >> newModule = NSLinkModule(image, pathname, >> >> NSLINKMODULE_OPTION_BINDNOW|NSLINKMODULE_OPTION_RETURN_ON_ERROR); >> ! if (!newModule) { // sdm7g >> ! int errNo; >> ! char *fileName, *moreErrorStr; >> ! NSLinkEditErrors c; >> ! errString = "Failure linking new module"; >> ! NSLinkEditError( &c, &errNo, &fileName, &moreErrorStr ); >> ! errString = strcat( fileName, errString ); >> ! errString = strcat( moreErrorStr, errString ); >> ! } // sdm7g >> } >> if (errString != NULL) { >> PyErr_SetString(PyExc_ImportError, errString);
https://mail.python.org/pipermail/python-list/2002-January/155277.html
CC-MAIN-2020-24
en
refinedweb
Ethan Furman wrote: > Hrm -- and functions/classes/etc would have to refer to each other that > way as well inside the namespace... not sure I'm in love with that... Not sure I hate it, either. ;) Slightly more sophisticated code: <code> class NameSpace(object): def __init__(self, current_globals): self.globals = current_globals self.saved_globals = current_globals.copy() def __enter__(self): return self def __exit__(self, *args): new_items = [] for key, value in self.globals.items(): if (key not in self.saved_globals and value is not self or key in self.saved_globals and value != self.saved_globals[key]): new_items.append((key, value)) for key, value in new_items: setattr(self, key, value) del self.globals[key] self.globals.update(self.saved_globals) if __name__ == '__main__': x = 'inside main!' with NameSpace(globals()) as a: x = 'inside a?' def fn1(): print(a.x) with NameSpace(globals()) as b: x = 'inside b?' def fn1(): print(b.x) def fn2(): print('hello!') b.fn1() y = 'still inside main' a.fn1() b.fn1() print(x) print(y) </code> ~Ethan~
https://mail.python.org/pipermail/python-list/2012-February/619619.html
CC-MAIN-2020-24
en
refinedweb
Image and video analysis tools for experimental sciences Project description xptools Analysis tools for experimental sciences to display images, detect particles (cells, bubbles, crystals), moving fronts and analyze their shape, size and distibution. Installation xptools is on PyPI so you can install it straight from pip with the command pip install xptools Alternatively, you can clone the repository to your local computer and install from source git clone cd xptools pip install . Scripts - analyze_front - This scipt takes a directory containing video files. For each file, it asks the user to select a region of interest and processes the selected area with a minimum threshold to find the largest area. It then plots the height of this area as a function of time. analyze_front --plotly --scale 60 --framerate 30 moviedirectory/ - analyze_bubbles - This script takes a movie (or directory of movies) showing bubbles on a surface (bright on dark). It uses a watershed segmentation algorithm to identify the bubbles and characterize their size. It then plots the bubble density and mean size as a function of time. analyze_bubbles --plotly --scale 60 bubble_movie.avi - analyze_crystals - This script takes a directory containing pictures of droplets containing crystals (under cross polarization). It uses a thresholding algorithm to segment the crystals, count them and measure their size. analyze_crystals --plotly --key funct_key.txt imagedirectory/ - display_image_matrix - Arranges all the images in a directory as a matrix and saves the resulting image display_image_matrix --lines 10 --compress imagedirectory/ Utilities Several utilities are included in the submodule utils including: select_roi:select_rectangle - Prompts the user to make a rectangular selection on the passed image and returns the coordinates of the selection. videotools:open_video - Opens a video file as a list of np.arrays each containing one frame of the video. videotools:determine_threshold - Determines the threshold to use for a video based on the minimum threshold algorithm. videotools:obtain_cropping_boxes - Prompts the user to select the region of interest for each video file. Returns a dataframe containing the coordinates of the selected area for each file. imagetools:open_all_images - Opens all the images in a folder and returns a list of cv2 images. Example usage import pandas as pd from xptools.utils import videotools video_list = ['Film1.avi','Film2.avi'] dict_crop = videotools.obtain_cropping_boxes(video_list) for key, val in dict_crop: stack = videotools.open_video(key) (minRow, minCol, maxRow, maxCol) = val stack = [img[minRow:maxRow,minCol:maxCol] for img in stack] process(stack) Notebooks - SegmentationBroadSpectrum.ipynb - Tests different image segmentation techniques to determine which is most appropriate - SegmentationFocused.ipynb - Implements a specific analysis and plots the resulting size and number distributions for the particles - Watershed_Segmentation.ipynb - Implements Watershed segmentation. Credits Code for display_image_matrix adapted from License This project is licensed under the MIT License - see the LICENSE.md file for details. Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/xptools/
CC-MAIN-2020-24
en
refinedweb
Game engines and graphics are all well and good, but what about actual commercial games, you ask? You're in luck, for Python has slithered its way into many a shop. The language has been used as the primary scripting tongue for quite a few major games, and there are also a handful of game development tools, scriptable via Python, that have also been released. Eve Online is a massive multiplayer online game that won the award for best online game in Game Revolution's The Best of E3 2002 Online Awards, and was also featured shortly after its release at 2003's E3 conference. Created by Iceland's CCP Games and released in 2001, Eve's world is a massive RPG science- fiction environment featuring photo-realistic graphics and a real space-faring feel. What makes Eve special for us is that its game-logic is controlled by Stackless Python. CCP used Stackless on both the client and server side to free its programmers from many of the mundane tasks of model behavior and instead focus on the creative parts of AI. Stackless also allows CCG to easily make changes to the game and game behavior, even while the game is running, which is extremely important for its persistent online world model. Freedom Force , a popular super-hero multiplayer game from Irrational Games, was nominated for handfuls of PC Gamer's annual 2002 awards, and Irrational is currently working on an expansion of the game. Irrational used NDL's NetImmerse game engine and Freedom Force was co-published by Crave Entertainment and Electronic Arts. Many of the game's functions were exported to the Python side, so that Python could set and move objects and control camera movements. The single-player levels were scripted with Python as well, in order to control mission control and cut-scenes. Python was used with custom extensions provided by the Freedom Force engine, and the key to using these extensions is understanding the scripting guides, which you can download from Irrational games at. Freedom Force launches two Python scripts (located in its system folder): startup.py and init.py. Both of these files are used to set the data paths for the game; by adding to the default path , you can change which module ff ( Freedom Force ) loads up at the beginning: import ff ff.DefaultPath = "MyModule;data" Python scripts control the flow of a module or adventure and can be used to script missions, create events that spawn new enemies, check for mission success and failure, trigger speech, and run cut-scenes. Each mission has a single script file (called mission, py) with which it is associated and must be in the same folder as mission.dat (this file is commonly know as a mission script ). There are also level offshoots, called briefings and intermissions , that are loaded in between missions. These are scripted in the same way as missions but use a base.py file and a base.dat file instead. The custom extensions provided by the Freedom Force engine are huge. Everything from AI to Object control to missions to camera movement is completely accessible via the Python scripting interface. Let's take a look at one example, a cut-scene snip from Freedom Force . The Freedom Force camera has a number of methods for using cut-scenes, as illustrated in Table 5.6. Using these methods to start and stop a cut-scene would look like the following: # Define Cutscene MyCutscene = [ ( # Start Cutscene "startCS()", ) # End Cutscene "endCS()", ) Those who have been paying attention will notice that cut-scenes in Freedom Force are Python lists; here is the same code condensed to one line for familiarity : MyCutscene=[(item1,)(item2,)(etc)] Later in the code you call the play() function and viola! The MyCutscene cut-scene would run: play(MyCutscene) Of course, this cut-scene doesn't do much at all, but that's where FF's camera controls come in. The camera is enabled by a Camera_LookAtObject() command and released back to the player with the Camera_Release() command. Camera_LookAtObject() can be set with a number of commands common to the FF camera, as shown in Table 5.7: Given the camera controls in Table 5.7, you can move the camera around the main player or protagonist: MyCutscene = [ ( "startCS()", "Camera_LookAtObject('My_Player',-195,30,384,3,CPM_SCROLLTO,CA_MOVE)", "Camera_LookAtObject('My_Player',-200,20,320,3,CPM_SCROLLTO,CA_MOVE)", ) "endCS()", ) ] Not bad for a quick delve into the Freedom Force APIand we've really just begun. There are actually a number of other camera commands to set wide-screen, introduce camera jitter, snap to objects or markers, fade in and out, and so on. Outside of the camera there are whole suites of functions and methods to set up narration, music, and sound effects, control NPCs and characters , set mission objectives and game flow, and so on and so on. Severance: Blade of Darkness is a fantasy combat game from Codemasters / Rebel Act Studios (which is now defunct ). It is a mature-audience game released in 2001 along with a level editor (called LED) and a set of tools (called RAS) for making levels and mods, which are, of course, based on Python and wholly scriptable. A Blade of Darkness level generally includes: A .bw file which has the map architecture details, compiled from the LED map editor (uncompiled maps are .mp files). .mmp files, which are files with the textures used in and on the map. One or more Blade of Darkness (BOD) files that define the objects and characters that inhabit the mod. A number of Python scripts that initialize and make objects and npcs and so on. A level file (.lvl) that loads things up to the game engine (the .mmp bitmaps and the .bw map file). The LED editor is shown in Figure 5.10 (notice Python on the top toolbar). In the Python scripts, you'll find that objects (weapons, torches, and so on) are usually defined with a objs.py file, players with a pl.py file, configurations with a cfg.py file, the placement of the sun and its position with a sol.py file, and any water coordinates with a agua.py file. Take a look at a sample agua.py file: import Bladex pool1=Bladex.CreateEntity("pool1","Entity Water",72000,39800,-2000) pool1.Reflection=0.9 pool1.Color=90,20,20 pool2=Bladex.CreateEntity("pool2","Entity Water",116000,39800,54000) pool2.Reflection=0.1 pool2.Color=60,10,10 pool3=Bladex.CreateEntity("pool3","Entity Water",116000,39700,46000) pool3.Reflection=-0.5 pool3.Color=0,0,0 First, the necessary Bladex libraries (which hold most of the necessary commands and functions) are imported. CreateEntity is then called on to create three separate pools of water at three separate locations. Once instantiated , each pool is then further defined with the Reflection and Color methods. NOTE A handful of developers from Rebel Act started their own company called Digital Legends Entertainment at shortly after RAS closed its doors. They are currently focused on producing their first game, Nightfall Dragons at . ToonTown , an online cartoon style mulit-player game, is the latest from the Walt Disney Imagineering studio. Players create their own cartoon avatars and explore a rich world where they can meet and interact with other "toons," earn jelly beans to put in the bank, and buy things (like a toon house or items for a toon house). There is even a bit of conflict thrown in, in the form of a "Cog Invasion" that is threatening the city. Disney's ToonTown uses Python in a direct and powerful way. The ToonTown executable actually calls Python on the client when the program is instantiated. Python was also used in development of the game, particularly in the Panda3D rendering engine. Panda3D is powered by Python, DirectX, and the Fmod music and sound effects system. After being used to create Disney's ToonTown it was released to the open source community and is currently under even more extensive development by both the VR Studio and the Entertainment Technology Center at Carnegie Mellon University. ETC is working on making a simple installer for Panda3D (the current installation is somewhat of a bear ahem), creating solid documentation, adding to the basic model import functionality, and creating tools like level and script editors. Note that there are two versions of Panda. One is the original release to the community from Disney, located on Sourceforge and found there at http:// sourceforge .net/projects/panda3d/). The second version is the release from Carnegie Mellon's ETC, and can be found online at. Panda is capable of importing Maya and 3D Studio Max models, as well as the standard .gif, .tiff, and .jpeg image formats. It has a fairly extensive API that is still undergoing documentation. It can also be extended with the SDK, and the engine itself is tweakable, as the code has been released to the community. The two most important lines in any Pythoned Panda script are from ShowBaseGlobal import * and run() The first line imports the necessary Panda files (which takes quite a bit of time) and the second line runs the environment. Running these two lines in a script after installing Panda will create a large, blank, gray window. These two lines are the minimum needed to create a Panda environment. Panda3D is built around the scene-graph, which is a tree-like object hierarchy structure. Individual objects, which are normally 3D models or GUI elements, are called NodePath objects. NodePath objects inherit behavior from their parents, and there are a number of built-in, base, pre-defined NodePath objects in Panda. Panda3D models are either .egg or .bam. EGG files are ASCII format (and therefore readable by humans ), and .bam is their corresponding binary format (for faster load times). You load a 3D object in Panda using its global loader object, like so: My3Dobject = loader.loadModel("3Dobject.egg") All loaded objects in Panda are, by default, hidden from view. To change this, take the loaded object (which is now a NodeObject ) and change its parent to render ; doing so will make the object render onscreen: My3Dobject.reparentTo(render) Once the object is loaded, you can call upon all sorts of fun methods to manipulate it, from setting the x,y, and z coordinates with setX() , setY() , setZ() or setPos(x,y,z) : My3Dobject.setX(4) # Moves the object 4 "feet: on the X coordinate to changing the heading, pitch, and roll with setHPR(heading, pitch, roll) : My3Dobject.setHPR(50, 30, 0) # Changes the model heading by 50 degrees and pitches the model upward 30 degrees to changing the object's scale with setScale() : My3Dobject.setScale(10) # sets the scale uniformly x10 in each direction (x,y, and z) Panda is also capable of handling events (mouse clicks and key presses), has a GUI system for creating UI elements like buttons and dialog boxes (which can be bound to Python functions), and can incorporate sound effects and music.
https://flylib.com/books/en/1.77.1.56/1/
CC-MAIN-2018-47
en
refinedweb
Objective-C. Clone the GitHub repository from the command line. $ git clone To build the starter app: ios-starter/swiftSwift". 123456". On the second screen click Download GoogleService-Info.plist to download a configuration file that contains all the necessary Firebase metadata for your app. Copy that file to your application and add it to the AppQualitySwift target. You can now click the "x" in the upper right corner of the popup to close it -- skipping steps 3 and 4 -- as you will perform those steps here. Start by making sure the Firebase module is imported. import Firebase Use the "configure" method in FirebaseApp inside the application:didFinishLaunchingWithOptions function to configure underlying Firebase services from your .plist file. func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool { FirebaseApp.configure() return true }. let fileName = "\(documentsDirectory)/perfsamplelog.txt" // Start tracing let trace = Performance.startTrace(name: "request_trace") let contents: String DispatchQueue.main.async { self.imageView.image = UIImage(data: data!) } trace?.stop() let contentToWrite = contents + (response?.url?.absoluteString ?? "") Each custom trace can have one or more counters to count performance-related events in your app, and those counters are associated with the traces that create them. Increment a counter with the log file's size. let fileLength = contents.lengthOfBytes(using: .utf8) trace?.incrementCounter(named: "log_file_size", by: fileLength) let target = "" Use another counter for the number of requests. override func viewDidLoad() { . . let task = URLSession.shared.dataTask(with: request) { data, response, error in . . task.resume() trace?.incrementCounter(named: func didPressCrash(_ sender: AnyObject) { print("Crash button pressed!").
https://codelabs.developers.google.com/codelabs/firebase-appquality-swift/index.html?index=..%2F..%2Fio2018
CC-MAIN-2018-47
en
refinedweb
Starting over with NikolaPosted: | More posts about nikola python This site was down for quite some time. I did not want to deploy the old PHP-based site again and therefore looked for alternatives. I never used a static site generator before, but as this site has only a single author (me), I decided to go for it. Python is my favorite programming language, so the site generation tool should also be Python based. I started to look at the Python Wiki blog software list and quickly narrowed my choice down to Nikola: - Nikola looks more lightweight than Django based Hyde - Nikola supports Jinja2 templates which I'm happily using in various projects. - Nikola has its own site with documentation --- it's not perfect, but somebody cared enough to write it :-) I'm using Nikola's developer version directly from github: git clone git://github.com/getnikola/nikola.git cd nikola sudo ./setup.py install sudo apt-get install python-dev python-pip python-lxml python-imaging sudo pip install -r requirements-full.txt Starting a new Nikola site is fairly easy: nikola init myblog cd myblog nikola new_post # create a new post (will ask for title) nikola auto # build and start internal webserver Now I just needed to start my own theme: nikola install_theme base-jinja mkdir themes/mytheme echo 'base-jinja' > themes/mytheme/parent I added a small custom filter in conf.py to "compress" whitespace in generated HTML: from nikola import filters from functools import partial import re WHITESPACE_PATTERN = re.compile('\s+') PRE_BLOCKS = re.compile(r'<pre.*?pre>', re.DOTALL) def repl(m, captures):>> compress_whitespace('a b') 'a b' >>> compress_whitespace('a <pre> \\n </pre> b') 'a <pre> \\n </pre> b' ''' text = raw.decode('utf8') captures = {} text = PRE_BLOCKS.sub(partial(repl, captures=captures), text) text = WHITESPACE_PATTERN.sub(' ', text) for key, val in captures.items(): text = text.replace(key, val) return text.encode('utf8') FILTERS = { ".css": [filters.yui_compressor], ".js": [filters.yui_compressor], ".jpg": [filters.jpegoptim], ".png": [filters.optipng], ".html": [filters.apply_to_file(compress_whitespace)] }
https://srcco.de/posts/starting-over-with-nikola.html
CC-MAIN-2018-47
en
refinedweb
Mechaflash 99 Posted March 12, 2013 (edited) Just trying to get the text in an <a> tag <a id="3734" href="#">Tuesday, March 12, 2013</a> I want to obtain Tuesday, March 12, 2013 from the link. Given my link object array is $aLinks, $aLinks = _IELinkGetCollection($oTarget) for $link in $aLinks if StringInStr($link.href, "") then ConsoleWrite($link.text & @CRLF) endif next at least I thought .text was the correct property but it's not. And my command works as I've tested it with ConsoleWrite($link.href & @CRLF) and $link.id as well Edited March 12, 2013 by Mechaf
https://www.autoitscript.com/forum/topic/149109-ie-link-object-obtain-link-text/
CC-MAIN-2018-47
en
refinedweb
ieee1284_get_deviceid man page ieee1284_get_deviceid — retrieve an IEEE 1284 Device ID Synopsis #include <ieee1284.h> ssize_t ieee1284_get_deviceid(struct parport *port, int daisy, int flags, char *buffer, size_t len); Description: - F1284_FRESH Guarantee a fresh Device ID. A cached or OS-provided ID will not be used. The provided buffer must be at least len bytes long, and will contain the Device ID including the initial two-byte length field and a terminating zero byte on successful return, or as much of the above as will fit into the buffer. Return Value A return value less than zero indicates an error as below. Otherwise, the return value is the number of bytes of buffer that have been filled. A return value equal to the length of the buffer indicates that the Device ID may be longer than the buffer will allow. - E1284_NOID The device did not provide an IEEE 1284 Device ID when interrogated (perhaps by the operating system if F1284_FRESH was not specified). - E1284_NOTIMPL One or more of the supplied flags is not supported in this implementation, or if no flags were supplied then this function is not implemented for this type of port or this type of system. This can also be returned if a daisy chain address is specified but daisy chain Device IDs are not yet supported. - E1284_NOTAVAIL F1284_FRESH was specified and the library is unable to access the port to interrogate the device. - E1284_NOMEM There is not enough memory. - E1284_INIT There was a problem initializing the port. - E1284_INVALIDPORT The port parameter is invalid. Notes. Author Tim Waugh <twaugh@redhat.com> Author. Referenced By ieee1284_claim(3), libieee1284(3).
https://www.mankier.com/3/ieee1284_get_deviceid
CC-MAIN-2018-47
en
refinedweb
Xml rpc net jobs I need ioS developer to develop an apps, with odoo as beckend server. XML-RPC is used as the protocol to comunicate with server. I need android developer to develop an apps, with odoo as beckend server. XML-RPC is used as the protocol to comunicate with server. We are looking for a module for our WHMCS install, that will pull in Zabbix Monitoring information, and assign that data to the individual customer accounts per server/host...WHMCS install, that will pull in Zabbix Monitoring information, and assign that data to the individual customer accounts per server/hostname. Zabbix has an API that uses JSON-RPC ...hardcoded password and execute a HTTPS GET request for a REST API - if found more than one, display a list to the user choose and go - , something like [login to view URL], which will return a JSON payload like { "device": { "id": "company_??????"} ...}. This device id ?????? needs to be parsed and assigned to a string, which after success Hello, We need to install bitcoind and json rpc, so our users can deposit/ withdraw money using BTC. We have a marketplace in which we let users buy and sell products to other members . -Users will deposit money into the admin btc account -once the payment has been verified, the system will automatically will add money into the users website account"}} Hello, We looking an expert on Python / Node.js for working with an RPC Deamon Crypto like bitcoin, litecoin etc. we need to create wallet and display each new wallet on a website page. Simple but you need good knowleadge with python / node.js. Thx you. RPC Holdings is a successful real estate developer. Quarterly reports, online training tools, blog postings, website content, investment documents, etc. We need a dedicated writer who is skilled in business communication with an emphasis on Real Estate and financial investment products. .. I found. [login to view URL]е When opening a local html file i get an error: The RPC server is unavailable. It happens after some calls on internet explorer (version 11). The solution must be in delphi or an explanation what should be done. .. using a connection-oriented I need to post Job in a CV Library using API. I did attached the whole manual. Need to post Job only. Phase one. I am working on a exchange project. My node is setup and address being genrate. I want to a met... Currently we have fully developed this project but we are getting some errors. We are using Nethereum and json-rpc. The node seems to be the source of our problems. Working in Windows environment. ...return Balance steps to reproduce Setup Docker with following attribute PROJECT_NAME="Parity ETH Client" PROJECT_TAG=parity/parity:v2.0.0 CONTAINER_NAME=parity-eth-rpc CONTAINER_HOSTNAME=eth-rpc CONTAINER_VOLUMES="-v ethereum-blockchain:/root/.local/share/[login to view URL]" CONTAINER_ARGS="--light --ws-hosts=all --ws-interface=all --jsonrpc-interface=a... We are looking for a module for our Multibranded WHMCS install, that will pull in Zabbix Monitoring information, and assign that data to the individual customer accounts per ...WHMCS install, that will pull in Zabbix Monitoring information, and assign that data to the individual customer accounts per domain/server. Zabbix has an API that uses JSON-RPC. ...same key is pressed. I need to replace api (i am currently using [login to view URL]) for other api (bitcoin rpc) I want an etherum wallet installation that can be controlled by the RPC through the website for 1. generate new etherum address by user id label 2. check deposit list to the address 3. send balance from address to other address also for tokens (ERC 20) 1. generate new tokens (ERC 20) address by user id label 2. check deposit list to the address 3 Hi I installed eth blockchain. There is all eth command working. but when i am using personal command its not working. Any one can help me to fix it. I would like you to go over my source code and see if you can reduce the number of RPC/Messages per seconds. I have a a small job to install an Ethereum Daemon and connect via json RPC. .. Services snap-in in Microsoft Management Console (MMC). -The DFS Namespace service depends. Dear freelancers, I am looking for someone who can give me a quick help on a g-rpc implementation of microservices. It is about 2 hours projects, (Removed by Freelancer.com Admin), starting right now ..: [login to view URL] ...smart phone (Android, iOS) - Synchronizing account sending and receiving coins to main net directly. - Coins: BTC, BCH, QTUM, DASH 2. Result - API of Android and iOS using Thin Core mentioned above. - Type: Library 3. The format of input and return : Json-Rpc PS: Please carefully see the attached document. Not agency or company, Only individual ...SimpleServiceBinding Endpoint: [login to view URL] SoapAction: [login to view URL] Style: rpc Input: use: encoded namespace: [login to view URL] encodingStyle: [login to view
https://www.freelancer.pk/job-search/xml-rpc-net/
CC-MAIN-2018-47
en
refinedweb
[ ] david e. berry updated SHIRO-160: --------------------------------- Attachment: amf package layout.png picture of the proposed package layout for amf support > Flex integration with Shiro > --------------------------- > > Key: SHIRO-160 > URL: > Project: Shiro > Issue Type: New Feature > Components: Authentication (log-in), Authorization (access control) > Affects Versions: Incubation > Reporter: david e. berry > Attachments: amf package layout.png > > > Commiters, > I have created the following classes that I used to integrate Shiro with Flex AMF. I would like to contribute them to the shiro. Please let me know if there is interest and the procedure for doing so. I have included the class names with a brief description of what they do. They are currently outside of the Shiro code base that I checked out, but I could combine them if interested. > Best Regards, > Dave > /* Authentication and Authorization need to let AMF Ping, Login, Logout messages pass through > without processing. They call FlexMessageHelper to introspect the binary message to see if it is allowed to pass. > If not, normal Authentication, and Authorization takes place. > */ > public class FlexAuthenticationFilter extends AuthenticationFilter; > public class FlexPermissionsAuthorizationFilter extends PermissionsAuthorizationFilter; > public class FlexRolesAuthorizationFilter extends RolesAuthorizationFilter; > /*Helper methods for introspecting the contents of the amf message. It is conceivable that a security handler > might need to introspect the contents of a request. It would be nice if Shiro wrapped the request automatically so that anyone can read the contents without > causing an end of stream error for a filter down the line. > Message helper deserializes the AMF message and checks to see if it is a PING, LOGON, or LOGOUT request. > */ > public class FlexHttpServletRequestWrapper extends HttpServletRequestWrapper; > public class FlexMessageHelper; > /* Custom Flex Login command that calls Subject.login returns a Principal back to Flex. > */ > public class FlexLoginCommand implements LoginCommand; > public class FlexPrincipal implements Principal; -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
http://mail-archives.apache.org/mod_mbox/shiro-dev/201005.mbox/%3C15540655.89971274124402285.JavaMail.jira@thor%3E
CC-MAIN-2018-47
en
refinedweb
Cornice provides helpers to build & document REST-ish Web Services with Pyramid, with decent default behaviors. It takes care of following the HTTP specification in an automated way where possible. We designed and implemented cornice in a really simple way, so it is easy to use and you can get started in a matter of minutes. A full Cornice WGSI application looks like this (this example is taken from the demoapp project): from collections import defaultdict from pyramid.httpexceptions import HTTPForbidden from pyramid.view import view_config from cornice import Service user_info = Service(name='users', path='/{username}/info', description='Get and set user data.') _USERS = defaultdict(dict) @user_info.get() def get_info(request): """Returns the public information about a **user**. If the user does not exists, returns an empty dataset. """ username = request.matchdict['username'] return _USERS[username] @user_info.post() def set_info(request): """Set the public information for a **user**. You have to be that user, and *authenticated*. Returns *True* or *False*. """ username =!
https://cornice.readthedocs.io/en/latest/
CC-MAIN-2018-47
en
refinedweb
In this post, we’ll have a quick look at ASP.NET Web API Self Hosting and Routing Conventions. ASP.NET Web APIs allow you to easily build RESTful applications on top of the .NET framework. ASP.NET Web API is bundled with ASP.NET MVC 4, but you can also self host Web APIs in your custom .NET application (like a console application or a Windows Forms application or in ASP.NET Web Forms applications). Let us quickly build a Pet Store API to query pets using ASP.NET Web API, and we’ll be self hosting the same. Self Hosting Fire up Visual Studio in Admin mode, and create a new Console application. Then, install the Nuget package AspNetWebApi.SelfHost Install-Package AspNetWebApi.SelfHost This will add the required dependencies to your Self host application. class Program { static void Main(string[] args) { //Create a host configuration var selfHostConfiguraiton = new HttpSelfHostConfiguration(""); //Setup the routes selfHostConfiguraiton.Routes.MapHttpRoute( name: "DefaultApiRoute", routeTemplate: "api/{controller}/{id}", defaults:new { controller = "Pet", id = RouteParameter.Optional } ); //Create Server & Wait for new connections using (var server = new HttpSelfHostServer(selfHostConfiguraiton)) { server.OpenAsync().Wait(); Console.WriteLine("Now Hosting at{controller}"); Console.ReadLine(); } } } Adding Controllers Now, you can add your controllers by inheriting them from the ApiController base class, and the requests will be dispatched to the correct controller based on the routing information we added. So, let us have a PetController class, where you expose few pets. //-- Actual controller public class PetController : ApiController { //GET All Pets /api/pet public IEnumerable<Pet> Get() { var rep = new PetRepository(); return rep.GetAllPets(); } //GET One Pet /api/pet/2 public Pet Get(int id) { var rep = new PetRepository(); return rep.GetAllPets().First(p => p.Id == id); } } //-- View Model public class Pet { public string Name { get; set; } public string Type { get; set; } public int Id { get; set; } } //-- Simple repository elsewhere for some mock data public class PetRepository { public IEnumerable<Pet> GetAllPets() { //Ideally get the data from a source return new List<Pet> { new Pet(){Name="Jim", Type="Dog", Id=1}, new Pet(){Name="Meow", Type="Cat",Id=2}, new Pet(){Name="Jam", Type="Dog",Id=3}, new Pet(){Name="Tommy", Type="Dog",Id=4}, new Pet(){Name="Bigpaw", Type="Cat",Id=5} }; } } At this point, if you go to the target URL, you’ll find the XML response. Here is a formatted view. Have a look at the URL. And for accessing one pet, you can provide the Id, which gets mapped to the Get method in controller that accepts Id parameter. Default Routing Conventions ASP.NET Web API will try to match the URL with the mapped HTTP routes. Based on the above example you can see that URLs like /api/pet or /api/pet/2 etc are matching the route template we provided – /api/{controller}/{id} Now, let us see how the Web API resolves the Controller and the correct action. The controller class will be picked by using the value of the {controller} variable in the URL that corresponds to the routing template. To choose the action, the Http Action bill be used. For example, if this is a HTTP GET request, according to the convention, that will be mapped to a method with the name starting with ‘Get’ in the controller. As an exercise, change the first Get method’s name to a different one to GetAll, and change the name of our second Get method to GetById – and notice that above URLs are still correctly getting mapped. This convention works out of the box for the GET, POST, PUT and Delete HTTP Methods . Instead of following the Naming conventions, you can use the attributes HttpGet, HttpPut, HttpPost and HttpDelete. For example, instead of starting the method names with ‘Get’ as in the above example, you could use the HttpGet attribute as below. public class PetController : ApiController { //GET All Pets /api/pet [HttpGet] public IEnumerable<Pet> FindAll() { var rep = new PetRepository(); return rep.GetAllPets(); } //GET One Pet /api/pet/2 [HttpGet] public Pet FindById(int id) { var rep = new PetRepository(); return rep.GetAllPets().First(p => p.Id == id); } } You might be wondering why the default mapping scheme is limited to the GET, POST, PUT and DELETE actions, but this mapping scheme fits well for RESTful services. Because in REST, all URIs should identify a resource and not actions. Note that this is different than the ASP.NET MVC mapping scheme, where you can specify actions directly. How ever, you can enforce MVC style route selection by specifying the action in the route template, like routes.MapHttpRoute( name: "default", routeTemplate: "api/{controller}/{action}/{id}", defaults: new { id = RouteParameter.Optional } ); And this will enable you to choose actions by name in the URL. For example, now you can access the actions in our PetController using the URLs /api/pet/findall and /api/pet/findbyid/2 - Note that we are using the action name in the URL scheme, much like in MVC. This may be useful if you want to build RPC/HTTP APIs. While this is a quick introduction, you may refer more detailed tutorials on these topics at
http://www.amazedsaint.com/2012/05/self-hosting-aspnet-web-api-and.html
CC-MAIN-2018-47
en
refinedweb
Python’s Flask Flask is a small and powerful web framework for Python. It is easy to learn and simple to use, enabling the users to build their web app in less amount of time. Flask is also easy to get started with as a beginner because there is little boilerplate code for getting a simple app up and running. Flask backs extensions that can add application features as if they were implemented in Flask itself. Extensions exist for object-relational mappers, form validation, upload handling, and several common frameworks related tools. Extensions are updated more regularly than the core Flask program. Flask is commonly used with MongoDB which allows it more control over databases and history. INSTALLING FLASK Before getting started, the user need to install Flask. Because systems vary, things can intermittently go wrong during these steps. INSTALL VIRTUALENV Here we will be using virtualenv to install Flask. Virtualenv is a suitable tool that creates isolated Python development environments where the user can do all his/her development work. If the user installs it system-wide, there is the risk of messing up other libraries that the user might have installed already. Instead, use virtualenv to create a sandbox, where the user can install and use the library without affecting the rest of the system. The user can keep using sandbox for ongoing development work, or can simply delete it once the user is finished using it. Either way, the system remains organized and clutter-free. If you see a version number, you are good to go and you can skip to this “Install Flask” section. If the command was not found, use easy_install or pip to install virtualenv. If you are running in Linux or Mac OS X, one of the following should work: $ sudo easy_install virtualenv $ sudo pip install virtualenv If you are running Windows, follow the “Installation Instructions” on this page to get easy_install up and running on your system. INSTALL FLASK After installing virtualenv, the user can create a new isolated development environment, like so: $ virtualenv flaskapp Here, virtualenv creates a folder, flaskapp/, and sets up a clean copy of Python inside for the user to use. It also installs the handy package manager, pip. Enter newly created development environment and activate it so to start working within it. 1 2 $ cd flaskapp $ . bin/activate Now, the user can safely install Flask: $ pip install Flask SETTING UP THE PROJECT STRUCTURE Let’s create a couple of folders and files within flaskapp/ to keep the web app organized. Within flaskapp/, create a folder, app/, to comprise all files. Inside app/, create a folder static/; this is where the user has to put the web app’s images, CSS, and JavaScript files, so create folders for each of those, as demonstrated above. As well, create another folder, templates/, to store the app’s web templates. Create an empty Python file routes.py for the application logic, such as URL routing. And no project is complete without a helpful description, so create a README.md file as well. BUILDING A HOME PAGE While writing a web app with a couple of pages, it quickly becomes bothersome to write the same HTML boilerplate over and over again for each page. Also, if the user needs to add a new element to their application, such as a new CSS file, the user would have to go into every single page and should add. This is time consuming and error prone. Wouldn’t be nice if, instead of repeatedly writing the same HTML boilerplate, the user can define their page layout just once, and then use that layout to make new pages with their own content. APP/TEMPLATES/HOME.HTML 1 {% extends “layout.html” %} 2 {% block content %} 3 <div class=”jumbo”> 4 <h2>Welcome to the Flask app<h2> 5 <h3>This is the home page for the Flask app<h3> 6 </div> 7 {% endblock %} BUILDING AN ABOUT PAGE In the above section, we have seen the creation of a web template home.html. Now, let’s repeat that process again to create an about page for our web app. APP/TEMPLATES/ABOUT.HTML {% extends “layout.html” %} {% block content %} <h2>About</h2> <p>This is an About page for the Intro to Flask article. Don’t I look good? Oh stop, you’re making me blush.</p> {% endblock %} In order to visit this page in the browser, we need to map a URL to it. Open up routes.py and add another mapping: from flask import Flask, render_template app = Flask(__name__) @app.route(‘/’) def home(): return render_template(‘home.html’) @app.route(‘/about’) def about(): return render_template(‘about.html’) if __name__ == ‘__main__': app.run(debug=True)
http://www.pro-tekconsulting.com/blog/
CC-MAIN-2018-47
en
refinedweb
This is the first post in the blog post series called The Three Laws of a Symptom Fix. The series talks about the consequences of fixing a symptom and not the cause of a bug. The overview of the whole series can be found here. The Temptation [...] He promptly loaded the function into an editor and said, "Oh, that function can't take a NULL pointer." Then, as I stood there watching, he fixed the bug by inserting a "quick escape" if the pointer was NULL:if (pb == NULL) return (FALSE); I pointed out that if the function shouldn't be getting a NULL pointer, the bug was in the caller, not in the function, to which he replied, "I know the code; this will fix it." And it did. But to me the solution felt as if we'd fixed a symptom of the bug and not the cause of it [...] Steve Maguire, Writing Solid Code, p. 176 A temptation to quick-fix a bug can sometimes be really hard to resist. Especially when a good and reliable bug fix is just five keystrokes away and the pressure to deliver new product features is extremely high. Try to put yourself in the following situation. This code: someQueryable.Split(3); crashes with a nasty exception saying that “There is already an open DataReader associated with this Command which must be closed first.”, while this code: someQueryable.ToArray().Split(3); works perfectly fine and you know that it will always work fine. The difference between the two lines, with a little help of IntelliSense of course, is in exactly five keystrokes. Dot, “t”, “o”, “a”, enter - and voilà! Exception is gone forever! Let’s get back to real work. We have features to deliver! The Three Laws In this concrete case, I was both the programmer from the Steve’s quote and the Steve himself, embodied in a single person. I knew the code and I knew that ToArray() will fix it. But I also “felt as if we’d fixed a symptom of the bug and not the cause of it”. Not spending time to dig deep enough and find the real cause of a bug has its price. Accepting a five-stroke bug fix and moving on usually ends up in exchanging a short-term gain - bug is fixed, let’s move on with the features - with a long-term pain. That long term pain caused by a symptom fix usually comes later on in a form of what I call The Three Laws of a Symptom Fix. I formulated the laws based solely on my experience with symptom fixes. Over a course of a decade and a half I testified the truth behind the laws on countless examples of a symptom fix. I’m sure most of you out there experienced the same. No mater how good a symptom fix is, the fact that the cause is not fixed will always result in at least one of the laws coming into force, if not all of them. And so here they are, The Three Laws of a Symptom Fix: - A symptom fix will be unintentionally removed and the bug will reappear. - A symptom fix will mutate and spread. - The one-to-rule-them-all symptom fix will appear, being more dangerous than any of the individual symptom fixes. Let’s explain The Three Laws shortly. The First Law: A symptom fix will be unintentionally removed and the bug will reappear. Out of my experience, they are too often not properly commented at all. No wonder that this is the case. The “proper” comment would sound like: “Don’t remove this or everything will crash!” This kind of a comment would imply that the fix is actually a symptom fix, but the person who apply it usually do not consider it as such, and therefore omits the warning comment. Sooner or later some other programmer will start changing the same code and ask the obvious question “Why is this line of code here? It doesn’t look that we need it.” I was that other programmer several times. Once the fix is unintentionally removed it’s merely a matter of luck how fast the bug will reappear again. If you are lucky, your automated tests will fail or your application will crash immediately. If you are not, the users of your software will get the honor to tell you that, well… your best intentions to clean-up the code have actually removed a symptom fix and reintroduced the bug. The Second Law: A symptom fix will mutate and spread Since the symptom is fixed and not the cause, the probability is high that the bug will appear in other places that use the buggy code. This often results in other programmers writing fixes for the same bug all over again. Let me quote Steve again: Other times, I’ve tracked a bug to it source and then thought, “Wait, this can’t be right; if it is, this function over here would be broken too, and it’s not.” I’m sure you can guess why this other function worked. It worked because somebody had used a local fix for a more general bug. Steve Maguire, Writing Solid Code, p. 176 Writing symptom fixes for the common bug all over again is what I call spreading a symptom fix. Those spread fixes will of course not always look the same. Depending on the nature of the underlying bug, they could come in various forms and could significantly differ from each other. That’s why I call them mutations of a symptom fix. The Third Law: The one-to-rule-them-all symptom fix will appear, being more dangerous than any of the individual symptom fixes Ah, if I would get a penny for every global try-catch I saw, that tried to swallow “that situation that shouldn’t happened”… I’m sure you know what I’m talking about. A one-to-rule-them-all symptom fix is an antipode of the root-cause fix. What makes it dangerous is the fact that it has the same end-effect as the root-cause fix. It makes all appearances of the bug to disappear. The difference is that the root-cause fix fixes the problem, and the one-to-rule-them-all symptom fix efficiently hides it. This hiding could lead to more problems. Also, depending on the approach used in implementing the one-to-rule-them-all, its unintentional removal could lead to a disaster. Examples, please! I hope this theoretical overview of The Three Laws makes sense to you. Still, there is nothing like a good concrete example :-) It happened by chance that my five-stroke symptom fix of the Split<T>() extension method can serve as a perfect example to demonstrate all three laws. My next post shortly explains the bug in the Split<T>() method and its symptom fix. I use this symptom fix afterwards in three separate posts to demonstrate each of the laws in detail: - The Three Laws of a Symptom Fix - Removal and Reappearance - The Three Laws of a Symptom Fix - Mutation and Spreading - The Three Laws of a Symptom Fix - One to Rule Them All (still to be written) All together these posts form a blog series on the topic of symptom fixes and their consequences. I hope that this series will motivate you to always dig as deep as needed to find the cause of a bug before eventually fixing any of its symptoms. Wow, you've made it this far :-) The chances are you might like my next blog post as well ;-) Should I let you know when it's out?
http://thehumbleprogrammer.com/the-three-laws-of-a-symptom-fix/
CC-MAIN-2018-47
en
refinedweb
I'm making a game and the game works around a class I call Cyber_State. Every state of the game is a subclass of Cyber_State, for example the title screen you see when you start the game and the actual game. Every state has a member that is a pointer to a Cyber_State. When a state is created this pointer points to itself, if you want to change states you build the next state and make next_state point to it. For example if you load a game from the main menu you make a new game(subclass of Cyber_State) based on a file and point next_state to it. The main loop then switches the state it works with at the end of the current iteration Cyber_State.h #include <Ogre.h> #include <OIS/OIS.h> #include <CEGUI/CEGUI.h> #include <OgreCEGUIRenderer.h> #include "Cyber_Util.h" #include "ogreconsole.h" class Cyber_State { protected: Cyber_Kernel *kernel; Cyber_State *transition; public: Cyber_State(Cyber_Kernel *kernel); virtual bool frameStarted(const Ogre::FrameEvent& evt); virtual bool frameEnded(const Ogre::FrameEvent& evt); virtual bool keyPressed(const OIS::KeyEvent &arg); virtual bool keyReleased(const OIS::KeyEvent &arg); virtual bool mouseMoved(const OIS::MouseEvent &arg); virtual bool mousePressed(const OIS::MouseEvent &arg, OIS::MouseButtonID id); virtual bool mouseReleased(const OIS::MouseEvent &arg, OIS::MouseButtonID id); Cyber_State *next_state(); }; #endif Cyber_State.cpp #include "Cyber_State.h" Cyber_State::Cyber_State(Cyber_Kernel *kernel){this->kernel = kernel; transition = this;} bool Cyber_State::frameStarted(const Ogre::FrameEvent& evt){return true;} bool Cyber_State::frameEnded(const Ogre::FrameEvent& evt){return true;} bool Cyber_State::keyPressed(const OIS::KeyEvent &arg){return true;} bool Cyber_State::keyReleased(const OIS::KeyEvent &arg){return true;} bool Cyber_State::mouseMoved(const OIS::MouseEvent &arg){return true;} bool Cyber_State::mousePressed(const OIS::MouseEvent &arg, OIS::MouseButtonID id){return true;} bool Cyber_State::mouseReleased(const OIS::MouseEvent &arg, OIS::MouseButtonID id){return true;} Cyber_State *Cyber_State::next_state(){return transition;} For an example in my game I run this function at the end of each frame bool Cyber_Listener::frameEnded(const Ogre::FrameEvent& evt) { state->frameEnded(evt); state = state->next_state(); if(state == NULL) { running = false; } return running; } My concern is this, what is the life span of a state returned by new_state? If I should delete the old state or it's memory gets freed automatically will the state returned by next_state be destroyed too? Or since Cyber_State only stores a pointer to to the next state will the next state itself be unharmed and accessible while I have a reference to it? While I'm at it does anyone have any links to help me understand memory in c++?
https://www.daniweb.com/programming/software-development/threads/183920/question-about-pointers
CC-MAIN-2018-30
en
refinedweb
The Prestige Exception in thread "main" java.io.NotSerializableException: serialization.Main$Tee at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1075) at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:291) at java.util.ArrayList.writeObject(ArrayList.java:5WriteObject(ObjectStreamClass.java:890) at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1333).writeObject(ObjectOutputStream.java:291) at serialization.Main.main(Main.java:47)After exploring numerous options, I've settled on a simple ObjectOutputStream hack which keeps track of the path to the errant object with very little overhead. Now you can wrap serialization exceptions and add more information: DebuggingObjectOutputStream out = new DebuggingObjectOutputStream(...); try { out.writeObject(...); } catch (Exception e) { throw new RuntimeException( "Serialization error. Path to bad object: " + out.getStack(), e); }The new exception message is much more helpful: Exception in thread "main" java.lang.RuntimeException: Serialization error. Path to bad object: [serialization.Main$Foo@94948a, serialization.Main$Bar@a401c2, [serialization.Main$Tee@ff5ea7], serialization.Main$Tee@ff5ea7] at serialization.Main.main(Main.java:55) Caused by: java.io.NotSerializableException: serialization.Main$TeeWe can now see that Foo references Bar which references a List which contains Tee. In our application, we actually log the type of each object, too, in case the toString() output isn't helpful. As a few key ObjectOutputStream members are final, I ended up having to access the private depth field and use it in conjunction with replaceObject(): import java.io.ObjectOutputStream; import java.io.OutputStream; import java.io.IOException; import java.lang.reflect.Field; import java.util.List; import java.util.ArrayList; public class DebuggingObjectOutputStream extends ObjectOutputStream { private static final Field DEPTH_FIELD; static { try { DEPTH_FIELD = ObjectOutputStream.class .getDeclaredField("depth"); DEPTH_FIELD.setAccessible(true); } catch (NoSuchFieldException e) { throw new AssertionError(e); } } final List<Object> stack = new ArrayList<Object>(); /** * Indicates whether or not OOS has tried to * write an IOException (presumably as the * result of a serialization error) to the * stream. */ boolean broken = false; public DebuggingObjectOutputStream( OutputStream out) throws IOException { super(out); enableReplaceObject(true); } /** * Abuse {@code replaceObject()} as a hook to * maintain our stack. */ protected Object replaceObject(Object o) { // ObjectOutputStream writes serialization // exceptions to the stream. Ignore // everything after that so we don't lose // the path to a non-serializable object. So // long as the user doesn't write an // IOException as the root object, we're OK. int currentDepth = currentDepth(); if (o instanceof IOException && currentDepth == 0) { broken = true; } if (!broken) { truncate(currentDepth); stack.add(o); } return o; } private void truncate(int depth) { while (stack.size() > depth) { pop(); } } private Object pop() { return stack.remove(stack.size() - 1); } /** * Returns a 0-based depth within the object * graph of the current object being * serialized. */ private int currentDepth() { try { Integer oneBased = ((Integer) DEPTH_FIELD.get(this)); return oneBased - 1; } catch (IllegalAccessException e) { throw new AssertionError(e); } } /** * Returns the path to the last object * serialized. If an exception occurred, this * should be the path to the non-serializable * object. */ public List<Object> getStack() { return stack; } }I'll see what I can do about building a more secure version of this into ObjectOutputStream in Java 7.. { Object => String } closure = { String s => new Object(); } The answer is in the comments. So far as closures go, that's a relatively simple example. Then again, the non-closure version isn't exactly simple either.
http://blog.crazybob.org/2007/02/
CC-MAIN-2018-30
en
refinedweb
So new to programming and a little confused I thought the input was random or a list that is changed to an int but apparently its always the same number ? I tried using max function to do the program but that didnt work since it doesnt give me a list. I just need someone to confirm the input never changes and its a single integer its not a list ? If that is the case then what about the list of mountains in the screen is there a way to get that to use as info or would that for some reason be considered cheating ? If the inputs do not give you a list, you may program to create a list yourself The list of mountains in the screen for a test case is valid for that particular test case only. If you hardcode the values in your program, you won't pass the hidden validators (which are used to verify your submitted code), because the validators are different from the test cases. I didnt want to hardcore those specific values. I was wondering if there was some way I could cause the program to read from or get the input values from whats printed for the mountain values if thats possible ? So that it could be used in any of the descent games. The default code you got when you first started the puzzle helps you read the inputs. So, you write additional code to either process the inputs right away, or store them somewhere before processing them. ah alright thanks for the info. Man, I wish they would add HTML to the roster of other codes. That would be so nice. Hi, im totally new to coding in general. I dont get While 1:what does 1 mean? and also the hints tab, suggest these variables:maximaxwhat are these stands for? While 1 means "While True", i.e. the loop is repeated for ever (unless something inside the loop breaks out of it).In the pseudocode, the variables "max" and "imax" are set up to store some values. Can you see what values will eventually go to "max" and "imax"? Hi, I have tried the below code but it doesn't work it stops at the montain with index = 1, I tried to put the index of the montagnes already destroyed in a list in order to avoid to fire on them again. Any comment is welcom. import java.util.*;import java.io.*;import java.math.*; /** class Player { public static void main(String args[]) { Scanner in = new Scanner(System.in); int hmax=0; int imax=-1; ArrayList<Integer> listDeMontagneDetruit = new ArrayList<Integer>(); while (true) { for (int i = imax ; i < 8; i++) { int mountainH = in.nextInt(); // represents the height of one mountain. { if (mountainH>hmax && !(listDeMontagneDetruit.contains(imax))) imax = i; listDeMontagneDetruit.add(imax); } } System.out.println(imax); // The index of the mountain to fire on } } } Please don't post full code on this forum. It may not be correct for the "for" loop to start with "i = imax". Please make use of the "System.err.println" syntax to print out your variables to help you debug your code. @tarekbel2d: You're overthinking it. You don't need to keep track of anything to solve this one. You just need to decide on each turn which mountain is currently the highest. (You also have a variable there that never gets updated.) [Python]Can someone PLEASE tell me what in the hell is going on in the code? My brain just BREAKS!This is ridiculous and frustrating. The web page never mentioned "Height of mountain 0 : x" yet, these lines get printed.I see no such code in the code. Why is is appearing in the console output? There is a comment in the code: # To debug: print >> sys.stderr, "Debug messages..." What does this mean? If I simply enter that in the editor, it doesn't output any useful information.What am I suppose to do with that code? The info on the left says "input for one game turn" but I'm the one who's suppose to provide input to the game, right? Why is the input already given?Also, on the input, it says i'm given 8 LINES with mountainH. How am I suppose to interpret this info? am I getting eight times mountainH as a given? On the out put it says "single line of integer" while in the code editor, the integer is between quotes, so doesn't that make it a string? Can someone help me? I realize I sound like a retard but I can't make heads or tails out of this and I WANT to like this website and the problems. HELP please.Thanks Reading your profile, I see that you've already passed 100% of the game, and also 100% of Temperatures, so I assume you've got no further questions now. But in case you don't understand the interface of the website, I suggest you go through the Onboarding puzzle again. The onboarding doesn't really clarify that either. I just moron'd my way through.What'd I'd love to see is an actual, literal explanation of what's going on in the interface. Which pieces of data are delivered by the game and what my options are to do with it. For example, it took me months to figure out that the "inputs" were being done by the server.I don't know if I can explain it any more clearly than that.At least I seem to be making progress with the puzzles, so there is that. Hi i am quite lost solving this puzzle could you help me ! what is the solutioon for this iam also having the same doubt.you know means pls tell me.first conditon only gets passed other may get failed how the loop works i dont no.pls help me Hi guys, I'm a beginner and i'm trying really hard to understand this but struggling. I'm doing it in Python3 cause thats the first language i'm learning. I've got a few questions: 1.mountain_h = int(input()) # i don't get how that works. From what i know, input asks the user for input from the keyboard, but once you submit or run the program, there is no input prompt asking you to enter anything. How is this finding and storing the height of the mountain? 2.What does it mean by game turn? Is that the same as one time around the while loop? 3.Correct me if im wrong, but you shoot by printing the number of the mountain you want to shoot? So the loop has to find the heights of each individual mountain, then decide which mountain is the tallest, and if it's, say, number 5, it will result in print("5")? Is that how the solution goes? Appreciate any advice, thanks guys. No idea why the hell this doesn't work. I've pumped it into a repl that mimics the codingame env, and the results are exactly as expected, but for some reason it simply will not work here. I cannot for the life of me figure out why... And yes, I've tried using Math.max.apply(null, arr) as well, to no avail. while (true) { for (var i = 0; i < 8; i++) { var mountainH = parseInt(readline()); // represents the height of one mountain. var height = []; height.push(mountainH); var maxHeight = Math.max(...height); var target = height.indexOf(maxHeight); print(target); } } EDIT - Interesting observation: When printErr the max of any array I supply the IDE, it will ALWAYS print the entire array, never the max value of the array as expected. I think there may be something wrong with array functions here...
http://forum.codingame.com/t/the-descent-puzzle-discussion/1332?page=11
CC-MAIN-2018-30
en
refinedweb
I want to find sum of all divisors of a number i.e. if the number is 6 i want to have 1+2+3+6=12. My attempt to approach it is: #include <iostream> using namespace std; int divisorsSum(int n){ int sum=0; for (int i=1; i<=n; i++){ if(n%i==0) i=sum+i; } return sum; } int main() { cout<<divisorsSum(6); } However surprisingly it does not work at all, it returns nothing and i am not able to figure out what is wrong with my code. Thus the question is how to make it works? BTW: There is no point in immediately down voting everything I am not an expert and yes i do make mistakes. You have several issues in your code. int i = i; and i is still not defined. You probably wanted i = 1 i = sum + i; sum is not updated above. You probably wanted sum += i You need to change your function divisorsSum to use the following code: int divisorsSum(int n) { int sum = 0; for (int i = 1; i <= n; i++) { if(n % i == 0) sum += i; } return sum; } for (int i=i; i<=n; i++) Change the i=i to i = 1 int divisorsSum(int n){ int sum=0; for (int i=1; i<=n; i++){ if(n%i==0) sum+=i; } return sum; } Maybe there are better algorithms of finding divisors of a number, but here is the correct version of your code. int divisorsSum(int n){ int sum=0; for (int i = 1; i <= n; ++i){ if(n % i == 0) sum += i; } return sum; } And here is a little optimized version, if a i is bigger than half of n then i cannot be a divisor of n. int divisorsSum(int n) { int sum=0; for (int i = n / 2; i >= 1; --i){ if(n % i == 0) sum += i; } return sum; }
http://www.dlxedu.com/askdetail/3/ec7427f2b8c1cec8ac7c1246c37b93cb.html
CC-MAIN-2018-30
en
refinedweb