On Mon, Aug 10, 2009 at 01:05:42PM -0700, Tommy Chheng wrote:
> It is a Ruby app using Couchrest (which uses restclient/net ruby lib)
>
> I'm basically comparing one document against all other documents (30K+
> documents in the dataset; so it's a huge number of connections if the
> connections aren't being closed properly) like this:
>
>   grants = NsfGrant.all.paginate(:page => current_page, :per_page => page_size)
>   grants.each do |doc2|
>     NsfGrantSimilarity.compute_and_store(doc1, doc2)

But presumably NsfGrant.all only makes a single HTTP request, not 30K
separate requests?

Looking at "netstat -n" will give you a rough idea, at least for seeing
how many sockets are left in TIME_WAIT state, but the surest way is with
tcpdump:

  tcpdump -i lo -n -s0 'host 127.0.0.1 and tcp dst port 5984 and (tcp[tcpflags] & tcp-syn != 0)'

should show you one line for each new HTTP connection made to CouchDB.

But in any case, for parsing 30K documents, you may not want to load all
30K into RAM and then compare them afterwards. Couchrest lets you do a
streaming view, so that one object is read at a time - I think if you call
view with a block, then it works this way automatically. You need to have
curl installed for this to work, as it shells out to a separate curl
process and then reads the response one line at a time.

  # Query a CouchDB view as defined by a <tt>_design</tt> document. Accepts
  # parameters as described in
  def view(name, params = {}, &block)
    keys = params.delete(:keys)
    name = name.split('/') # I think this will always be length == 2, but maybe not...
    dname = name.shift
    vname = name.join('/')
    url = CouchRest.paramify_url "#{@uri}/_design/#{dname}/_view/#{vname}", params
    if keys
      CouchRest.post(url, {:keys => keys})
    else
      if block_given?
        @streamer.view("_design/#{dname}/_view/#{vname}", params, &block)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      else
        CouchRest.get url
      end
    end
  end

HTH,

Brian.
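The "netstat -n" check Brian suggests can be scripted. Here is a rough sketch (in Python rather than the thread's Ruby, and assuming Linux-style `netstat -n` column layout) that counts sockets left in TIME_WAIT to CouchDB's port; the sample output below is made up for illustration:

```python
def count_time_wait(netstat_output, port=5984):
    """Count sockets in TIME_WAIT whose foreign address is the given port,
    given the text output of `netstat -n` (Linux-style columns:
    proto, recv-q, send-q, local address, foreign address, state)."""
    count = 0
    for line in netstat_output.splitlines():
        fields = line.split()
        if len(fields) >= 6 and fields[0].startswith("tcp") and fields[5] == "TIME_WAIT":
            if fields[4].endswith(":%d" % port):
                count += 1
    return count

# Hypothetical netstat output for demonstration.
sample = """\
tcp        0      0 127.0.0.1:53970         127.0.0.1:5984          TIME_WAIT
tcp        0      0 127.0.0.1:53971         127.0.0.1:5984          TIME_WAIT
tcp        0      0 127.0.0.1:40000         127.0.0.1:80            TIME_WAIT
tcp        0      0 127.0.0.1:53999         127.0.0.1:5984          ESTABLISHED"""

print(count_time_wait(sample))  # 2
```

As the email notes, this is only a rough proxy: some TIME_WAIT entries are normal for short-lived connections, and the real question — whether a new connection is opened per document — is answered directly by the tcpdump SYN filter above.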
http://mail-archives.apache.org/mod_mbox/couchdb-user/200908.mbox/%3C20090810201914.GA28582@uk.tiscali.com%3E
Slim down your view controllers the smart way

Part 2 in a series of tutorials on fixing massive view controllers.

One of the easiest ways to create messy and confused view controllers is to ignore the single responsibility principle – that each part of your program should be responsible for one thing at a time. A good sign that you're ignoring this principle is writing code like this:

    class MegaController: UIViewController, UITableViewDataSource, UITableViewDelegate, UIPickerViewDataSource, UIPickerViewDelegate, UITextFieldDelegate, WKNavigationDelegate, URLSessionDownloadDelegate {

If I asked you what that view controller does, could you answer without having to pause for breath? I'm not saying you must make everything do precisely one thing – sometimes sheer pragmatic development will stop that from being the case, as you'll see soon. However, there's no reason that view controller should act as so many delegates and data sources, and in fact doing so makes your view controllers less composable and less reusable. If you split those protocols off into separate objects you can then re-use those objects in other view controllers, or use different objects in the same view controller to get different behavior at runtime – it's a huge improvement.

In this article I want to walk you through examples of getting common data sources and delegates out of view controllers in a way you should be able to apply to your own projects without much hassle. Before we begin, please use Xcode to create a new iOS app using the Master-Detail App template. This creates a pretty disastrous app template for a number of reasons, and it's a thoroughly shaky foundation to use for any of your own work. I could write many articles about fixing the problems it contains, but here we're going to do the least amount of work required to fix two of its problems: the main view controller acts as its table view's data source and delegate.

Apple's default template has code in MasterViewController.swift to make it act as the table view's data source and delegate. While this is fine for simple apps or while you're learning, for serious apps you should always (always) split this off into its own class that can then be re-used as needed. The process here is quite simple, so let's walk through it step by step.

First, go to the File menu and choose New > File. Select Cocoa Touch Class from the list that Xcode offers you, then press Next. Make it a subclass of NSObject, give it the name "ObjectDataSource", then click Next and Create.

Note: I've called it "ObjectDataSource" because Apple's template code gave us var objects = [Any]() for the app data. This is one of many crimes that we won't be fixing here.

The next step is to move all the table view data source code from MasterViewController.swift into ObjectDataSource.swift. So, select all this code and cut it to your clipboard:

    // MARK: - Table View

    override func numberOfSections(in tableView: UITableView) -> Int {
        return 1
    }

    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return objects.count
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
        let object = objects[indexPath.row] as! NSDate
        cell.textLabel!.text = object.description
        return cell
    }

    override func tableView(_ tableView: UITableView, canEditRowAt indexPath: IndexPath) -> Bool {
        // Return false if you do not want the specified item to be editable.
        return true
    }

    override func tableView(_ tableView: UITableView, commit editingStyle: UITableViewCellEditingStyle, forRowAt indexPath: IndexPath) {
        if editingStyle == .delete {
            objects.remove(at: indexPath.row)
            tableView.deleteRows(at: [indexPath], with: .fade)
        } else if editingStyle == .insert {
            // Create a new instance of the appropriate class, insert it into the array, and add a new row to the table view.
        }
    }

None of that has any business being in the view controller, so open ObjectDataSource.swift and paste it inside that class instead. We need to make three small changes to ObjectDataSource before we can use it:

1. Remove override from all the method definitions. This was required in our view controller because we inherited from UITableViewController, but now we don't.
2. Make it conform to UITableViewDataSource by adding that next to NSObject, like this: class ObjectDataSource: NSObject, UITableViewDataSource {.
3. Move var objects = [Any]() from being a property on MasterViewController to being a property on ObjectDataSource.

That completes ObjectDataSource, but leaves problems inside MasterViewController because it's trying to refer to an objects array it no longer has. To fix this we must make two changes inside MasterViewController: give it a data source property using our new ObjectDataSource class, then refer to that data source wherever objects is used.

First, open MasterViewController.swift and give the class this new property:

    var dataSource = ObjectDataSource()

Second, change the two references to objects to be dataSource.objects. That means changing insertNewObject() to this:

    dataSource.objects.insert(NSDate(), at: 0)

And changing the prepare() method to this:

    let object = dataSource.objects[indexPath.row] as! NSDate

Yes, I know; Apple's template code here is really poor, but remember we're trying to do the least amount of work required to fix our two problems. At this point the code compiles cleanly, but it doesn't work yet.
For that we need one last change, inside the viewDidLoad() method of MasterViewController. Add this line:

    tableView.dataSource = dataSource

That tells the table view to load its data from our custom data source, and now the app will be back to the same state where it started. The difference is that the view controller has come down from 84 lines of code to 54 lines of code, plus you can now use that data source elsewhere. This is definitely an improvement, although in practice you would probably want to move the data model out to your coordinator if you're using one, or perhaps leave it in the view controller if that's where you handle data fetching.

The single responsibility principle helps us design apps in small, simple parts that can then be combined together to make more complex components. However, as I said earlier, sometimes being a pragmatic developer will make you take a different route, and I want to discuss that briefly before moving on.

You've seen how it's pretty straightforward to get table view data sources out into their own object, so you might very well think we'll create another object to be the table view delegate. However, this is more problematic, not least because the split of functionality between UITableViewDataSource and UITableViewDelegate is bizarre and seemingly arbitrary. For example, the data source has titleForHeaderInSection, whereas the delegate has viewForHeaderInSection and heightForRowAt. This means splitting UITableViewDelegate into its own class can be fraught with difficulties. As a result, I often see two solutions: moving both the UITableViewDataSource and UITableViewDelegate handling into a single class (this goes against the single responsibility principle, but if it avoids spaghetti code that's the bigger win), or leaving the delegate handling where it was in the view controller. Which you prefer depends on your personal style, but in my own projects I prefer to keep my view controllers as simple as possible.
That means they handle view lifecycle events (viewDidLoad(), etc), store some @IBOutlets and @IBActions, and occasionally handle model storage depending on what I'm doing. Remember: the goal here is to make your app design simpler, more maintainable, and more flexible – if you're adding complexity just to stick to a principle, you'll end up with problems.

Although UITableViewDataSource and UITableViewDelegate are tricky to separate cleanly, not all delegates are like that. Instead, many delegates are easy to carve off into separate classes, and in doing so you'll immediately benefit from the same kinds of reusability you already saw.

Let's look at a practical example: you want to embed a WKWebView that enables access to only a handful of websites that have been deemed safe for kids. In a naïve implementation you would add WKNavigationDelegate to your view controller, give it a childFriendlySites array as a property, then write a delegate method something like this:

    func webView(_ webView: WKWebView, decidePolicyFor navigationAction: WKNavigationAction, decisionHandler: @escaping (WKNavigationActionPolicy) -> Void) {
        if let host = navigationAction.request.url?.host {
            if childFriendlySites.contains(where: host.contains) {
                decisionHandler(.allow)
                return
            }
        }

        decisionHandler(.cancel)
    }

(If you haven't used contains(where:) before, you should really read my book Pro Swift.)

To reiterate, that approach is perfectly fine when you're building a small app, because either you're just learning and need momentum, or because you're building a prototype and just want to see what works. However, for any larger apps – particularly those suffering from massive view controllers – you should split this kind of code into its own type:

1. Create a new class called ChildFriendlyWebDelegate. This needs to inherit from NSObject so it can work with WebKit, and conform to WKNavigationDelegate.
2. Move your childFriendlySites property and navigation delegate code in there.
3. Create an instance of ChildFriendlyWebDelegate in your view controller, and make it the navigation delegate of your web view.
Here's a simple implementation of just that:

    import Foundation
    import WebKit

    class ChildFriendlyWebDelegate: NSObject, WKNavigationDelegate {
        var childFriendlySites = ["apple.com", "google.com"]

        func webView(_ webView: WKWebView, decidePolicyFor navigationAction: WKNavigationAction, decisionHandler: @escaping (WKNavigationActionPolicy) -> Void) {
            if let host = navigationAction.request.url?.host {
                if childFriendlySites.contains(where: host.contains) {
                    decisionHandler(.allow)
                    return
                }
            }

            decisionHandler(.cancel)
        }
    }

That solves the same problem, while neatly carving off a discrete chunk from our view controller. But you can – and should – go a step further, like this:

    func isAllowed(url: URL?) -> Bool {
        guard let host = url?.host else { return false }

        if childFriendlySites.contains(where: host.contains) {
            return true
        }

        return false
    }

    func webView(_ webView: WKWebView, decidePolicyFor navigationAction: WKNavigationAction, decisionHandler: @escaping (WKNavigationActionPolicy) -> Void) {
        if isAllowed(url: navigationAction.request.url) {
            decisionHandler(.allow)
        } else {
            decisionHandler(.cancel)
        }
    }

That separates your business logic ("is this website allowed?") from WebKit, which means you can now write tests without trying to mock up a WKWebView. I said it previously but it's worth repeating: any controller code that encapsulates any knowledge – anything more than sending a simple value back in a method – will be harder to test when it touches the user interface. In this refactored code, all the knowledge is stored in the isAllowed() method, so it's easy to test.

This change has introduced another, more subtle but no less important improvement to our app: if you want a child's guardian to enter their passcode to unlock the full web, you can now enable that just by setting webView.navigationDelegate to nil so that all sites are allowed. The end result is a simpler view controller, more testable code, and more flexible functionality – why wouldn't you carve off functionality like this?

As I said at the beginning of this article, Apple's Master-Detail App template is a pretty disastrous foundation for any real work, but in this article I've shown you how we can chip away at some of the rot to make the view controller simpler. If you found this article interesting then you should definitely read my book Swift Design Patterns – it's packed with similar tips for ways to simplify, streamline, and uncouple your components.
https://www.hackingwithswift.com/articles/86/how-to-move-data-sources-and-delegates-out-of-your-view-controllers
0.33 2020-04-04
 - Fix broken DEBE

0.32 2019-08-31
 - Added DOLUDOLU back, just in case
 - Updated with correct link of DEBE

0.31 2019-08-31
 - DEBE is back!
 - Add CoC.

0.30 2017-12-27
 - Remove unnecessary dependency

0.29 2017-12-27
 - Replace online test with an offline one

0.28 2017-12-25
 - Replace smartmatch with any (breaks on 5.27.8)

0.27 2017-07-19
 - Minor fix & some boring talk on timezones

0.26 2017-01-22
 - Added DOLUDOLU (alternative to DEBE)

0.25 2017-01-20
 - Fix failing tests
 - Use GitHub issue tracker

0.24 2017-01-15
 - Fix version number!

0.23 2017-01-15
 - Fix failing tests

0.22 2017-01-15
 - Fix typo

0.21 2017-01-15
 - Use WWW::Lengthen (WWW::Expand has failing tests)

0.20 2017-01-15
 - Renamed to WWW::Eksi (WWW::Eksisozluk is now an alias)
 - Removed DEBE (no more published since 2017-01-13)
 - Added GHEBE (top entries of last week)
 - Added politeness delay option
 - Replaced most regexps with DOM

0.13 2016-09-23
 - Added deprecation warning to WWW::Eksisozluk

0.12 2015-02-27
 - Updated regexps to match new eksisozluk style
 - increased default sleep time to 15
 - converted tabs to spaces in "changes"
 - this version is not published to cpan yet

0.11 2015-04-27
 - Trying to fix 'decreasing version number' problem

0.10 2015-04-26
 - move to dist:zilla
 - entry->number_in_topic is deprecated (as it is removed by eksisozluk)
 - entry->date_accessed is deprecated
 - entry->date_published, is_modified, date_modified are deprecated
 - entry->date_print is renamed to entry->date
 - gifs are no more embedded automatically in entry->body
 - popular is renamed; now you need to call topiclist with argument popular
 - list of today's topics is added (call topiclist with argument today)

0.09 2014-11-09
 - A semicolon was missing on dependency list

0.08 2014-11-09
 - List of popular topics (%popular)

0.07 2014-08-03
 - author_link is added
 - style max-width from body's img is removed

0.06 2014-07-22
 - Changed namespace from "Net" to "WWW" as proposed by PrePAN community
 - get_entry_by_id is renamed as entry
 - get_current_debe is renamed as debe_ids
 - debe_ids returns in 0..49; it was 1..50 where [0] was a dummy -1
 - Partial list problem is handled, at which you would get 60 entries. Now you don't: it simply doesn't re-add an already added value.
 - Script now has an object oriented interface. You can call my $eksi = WWW::Eksisozluk->new(); and work from there.

0.05 2014-07-21
 - get_entry_by_id($id)
 - get_current_debe

0.01 2014-07-08
 - original version; created by h2xs 1.23 with options
   -AX Net::Eksisozluk
https://metacpan.org/changes/distribution/WWW-Eksi
    from libpysal.weights.contiguity import Queen
    import libpysal
    from libpysal import examples
    import matplotlib.pyplot as plt
    import geopandas as gpd
    %matplotlib inline

    from splot.libpysal import plot_spatial_weights

    examples.explain('rio_grande_do_sul')

    {'name': 'Rio_Grande_do_Sul',
     'description': 'Cities of the Brazilian State of Rio Grande do Sul',
     'explanation': ['* 43MUE250GC_SIR.dbf: attribute data (k=2)',
      '* 43MUE250GC_SIR.shp: Polygon shapefile (n=499)',
      '* 43MUE250GC_SIR.shx: spatial index',
      '* 43MUE250GC_SIR.cpg: encoding file',
      '* 43MUE250GC_SIR.prj: projection information',
      '* map_RS_BR.dbf: attribute data (k=3)',
      '* map_RS_BR.shp: Polygon shapefile (no lakes) (n=497)',
      '* map_RS_BR.prj: projection information',
      '* map_RS_BR.shx: spatial index',
      'Source: Renan Xavier Cortes <[email protected]>',
      'Reference:']}

Load the data into a geopandas GeoDataFrame:

    gdf = gpd.read_file(examples.get_path('map_RS_BR.shp'))
    gdf.head()

    weights = Queen.from_dataframe(gdf)

    /Users/steffie/code/libpysal/libpysal/weights/weights.py:169: UserWarning: There are 29 disconnected observations
    Island ids: 0, 4, 23, 27, 80, 94, 101, 107, 109, 119, 122, 139, 169, 175, 223, 239, 247, 253, 254, 255, 256, 261, 276, 291, 294, 303, 321, 357, 374
      " Island ids: %s" % ', '.join(str(island) for island in self.islands))

This warning tells us that our dataset contains islands: polygons that do not share edges or nodes with adjacent polygons. This can, for example, be the case when polygons are truly not neighbouring, e.g. when two land parcels are separated by a river. Often, however, these islands stem from human error when digitizing features into polygons. This unwanted error can be assessed using splot.libpysal's plot_spatial_weights functionality:

    plot_spatial_weights(weights, gdf)
    plt.show()

This visualisation depicts the spatial weights network: a network of connections from the centroid of each polygon to the centroids of its neighbours.
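As an aside, here is what queen contiguity (the rule behind Queen.from_dataframe) means, sketched in pure Python on a toy 3x3 grid of cells so no shapefile or libpysal install is needed: rook contiguity links cells that share an edge, while queen contiguity additionally links cells that touch only at a corner.

```python
def grid_neighbors(rows, cols, queen=True):
    """Toy contiguity on a rows x cols grid of unit cells.
    Rook: edge-sharing neighbours only; queen: also corner-touching ones."""
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if queen:
        steps += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    nbrs = {}
    for r in range(rows):
        for c in range(cols):
            nbrs[(r, c)] = [(r + dr, c + dc) for dr, dc in steps
                            if 0 <= r + dr < rows and 0 <= c + dc < cols]
    return nbrs

queen = grid_neighbors(3, 3, queen=True)
rook = grid_neighbors(3, 3, queen=False)
print(len(queen[(1, 1)]), len(rook[(1, 1)]))  # 8 4
```

The centre cell has 8 queen neighbours but only 4 rook neighbours; real polygon geometries work the same way, just with shared vertices and edges instead of grid offsets.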
As we can see, there are many polygons in the south and west of this map that are not connected to their neighbours. This stems from digitization errors and needs to be corrected before we can start our statistical analysis. libpysal offers a tool to correct this error by 'snapping' incorrectly separated neighbours back together:

    wnp = libpysal.weights.util.nonplanar_neighbors(weights, gdf)

We can now visualise whether the nonplanar_neighbors tool corrected all errors:

    plot_spatial_weights(wnp, gdf)
    plt.show()

The visualisation shows that all erroneous islands are now stored as neighbours in our new weights object, depicted by the new joins displayed in orange. We can now adapt our visualisation to show all joins in the same color, by using the nonplanar_edge_kws argument of plot_spatial_weights:

    plot_spatial_weights(wnp, gdf, nonplanar_edge_kws=dict(color='#4393c3'))
    plt.show()
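As a mental model for the island warning and its repair: a spatial weights object is essentially a mapping from each observation id to a list of neighbour ids, and an island is simply an id whose list is empty. A pure-Python sketch (the adjacency below is made up for illustration, not the Rio Grande do Sul data):

```python
# Weights as a plain neighbours mapping; an island has no neighbours at all.
neighbors = {
    0: [1, 2],
    1: [0, 2],
    2: [0, 1],
    3: [],  # disconnected observation: this is what the UserWarning reports
}

islands = [i for i, nbrs in neighbors.items() if not nbrs]
print(islands)  # [3]

# The nonplanar_neighbors repair conceptually amounts to adding joins for
# each island; afterwards no empty neighbour lists remain.
repaired = dict(neighbors)
repaired[3] = [2]             # hypothetical snapped join
repaired[2] = repaired[2] + [3]
assert [i for i, n in repaired.items() if not n] == []
```

The real weights object exposes the same idea through its islands attribute, which is empty once the repair has succeeded.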
https://nbviewer.jupyter.org/github/pysal/splot/blob/master/notebooks/libpysal_non_planar_joins_viz.ipynb
- Struts Dispatch Action Example – Here in this example you will learn more about Struts DispatchAction, which will help you grasp the concept better. Let's develop Dispatch_Action...
- Dispatch Action - Struts – While I am working with Struts DispatchAction I am getting the following error: "Request does not contain handler parameter named 'function'. This may be caused by whitespace in the label text."
- Struts dispatch action - Struts – I am using dispatch action and send parameter="addUserAction" as a querystring; it works fine, but now I want to send another value as a querystring as well.
- dynamic method dispatch - Java Beginners – Can you give a good example of dynamic method dispatch (run-time polymorphism)? In dynamic method dispatch, a superclass reference refers to a subclass object and method overriding is applied.
- Create Action class – To create an action class you need to extend or import the Action classes or interface. In the above action class, userName and password are the two fields used to forward the action to the next page.
- Getting Started with JDBC – In this tutorial we explain how you can start working with the JDBC API to develop database applications.
- Login Action Class - Struts – Can anyone give an example of how a Struts login Action class communicates with iBATIS?
- Dynamic method dispatch – Dynamic dispatch is the process of selecting which method runs at run time; it is how polymorphism is achieved in Java. Suppose a class A contains a method...
- Dispatcher Result Example – Used to dispatch the request data to the desired action; to use the dispatcher result you need the corresponding <action> mapping.
- AsyncContext Interface dispatch method – In this section you will learn about the dispatch method of the AsyncContext interface.
- User Registration Action Class and DAO code – How to write code for the action class and for performing database operations (saving data into the database).
- Adduser.jsp in Struts Dispatch – A JSP form posting to the dispatch action: <form name="user" action="Test.do?goto...">
- Understanding Struts Action Class – In this lesson I will show you how to use a Struts Action class and how to create one.
- Password – The forgot-password action requires a user name and password, the same as you entered during registration.
- Struts Action - Aggregating Actions in Struts – For aggregation, the action class must extend DispatchAction; note that you cannot use this method if your class already extends another class.
- What is Action Class? – What is an Action class? Explain with an example.
- action tag - Struts – Is it possible to add parameters to a Struts 2 action tag, and how can I get them in an Action class?
- DispatchAction class? - Struts – Which is best and why: Action class or DispatchAction class? Likewise, ActionForm or DynaActionForm?
- Struts Action Class – What happens if we do not write execute() in an Action class?
- Class, Object and Methods – In the above example, sq is an object of the Square class and rect...
- ActionScript custom components – You can create Flex custom components by defining an ActionScript class; you can create two types of custom components.
- servlet action not available – I am new to Struts and I am getting the error "servlet action not available"; why is it displaying this error?
- Standard Action "jsp:plugin" – The file that the plugin will execute; you must include the .class extension.
- Chain Action Result Example – Struts 2.2.1 provides the feature to chain many actions in an application by applying a Chain Result to an action.
- Introduction to the Action interface – The Action interface contains a single method, execute(); the business logic of the action is executed within it.
- attribute in action tag - Java Beginners – Used with the action class, but I'm not clear about the attribute tag (attribute="bookListForm"); could you please explain its use?
- Spring 3 MVC hello world – Getting started with Spring 3.0 Web MVC; make sure you have JDK 5 or above.
- Redirect Action Result Example – To redirect the action to a specified location you need to map it in struts.xml.
- Capturing JSP Content Within My Struts Action Servlet – My end goal is a dispatch to the JSP that can somehow hold onto the response; does JBoss/Tomcat's container handle that implicitly?
- HTML action attribute – In the action attribute I want to give the emp_event.java class; since you are calling a servlet from the action attribute, you need to import the package.
- Struts Action Classes – 1) Is it necessary to create an ActionForm for LookupDispatchAction? 2) What is the beauty of MappingDispatchAction?
- Action and ActionSupport – Difference between Action and ActionSupport: the developer implements the Action interface, while com.opensymphony.xwork2.ActionSupport is a class providing a default implementation.
- Example of ActionSupport class – The ActionSupport class provides a default execute() method, called automatically when the action is invoked.
- Struts 2 Redirect Action – Get familiar with the Struts 2 redirect action; step 3 is to create an Action class, Login.java.
- Action Listeners – Could someone help me with how to use action listeners? I am creating a GUI with four buttons and would like to apply an action listener to them.
- Developing Login Action Class – Code for a login action class and database code for validating the user against the database.
- javascript call action class method in Struts – In Struts 2, how can an onchange event call a method in the Action class with the selected value as a parameter?
- Action Event Listener – Action listeners can be implemented in two ways; the first limits you to using only one.
- Implementing Actions in Struts 2 – When an action is called, the execute() method is executed; you can create an action class by implementing the Action interface, e.g. TestAction.java.
- Create Chart using ActionScript in Flex 4 – How to create and manipulate a chart using ActionScript and the new keyword.
- download file Error in struts2 action class – I am using the following block of code to download a file: public void downloadGreeting(String filename, HttpServletRequest request, HttpServletResponse response)...
- Struts 2.2.1 Action Tag Example – We can call an action class directly from a JSP page by specifying it in the action tag.
- Fetch data using JSP standard action – Using a JavaBean (EmpBean with a dataList() method) from a servlet and JSP.
- Action classes in Struts – How many types of action classes are there in Struts? There are 8 types, including ForwardAction, DispatchAction, IncludeAction and LookupDispatchAction.
- Action or DispatchAction – An Action class has execute() only, whereas a DispatchAction class has multiple methods; when should each be used, and can we use multiple actions in an Action class?
- iPhone UISwitch Action – The UISwitch class provides a method to control on and off state, for example to toggle whether your application shows an alert on updates.
- Action tag - JSP-Servlet – Can I use two actions in the same form? I want an HTML page on submitting the feedback form.
- class medals – MedalTally.java is a class to model a medal tally; another class represents a country in the tally, with constructors and accessor/mutator methods.
- ActionSupport – The class extended by most Action classes is ActionSupport; its default execute() method runs automatically when the action is called.
- jsp include action tag – The <jsp:include> element allows you to include either a static or dynamic resource in a JSP page; when the include action finishes, the JSP continues.
- The ActionForm Class – After validation, the data is sent to the model (the Action class), where the business processing logic would normally be added.
- UIButton action example – How can I capture a button click action, and what do I write in the .h and .m files? In the .h file: - (void)buttonPress:(id)sender;
- standard action - JSP-Servlet – Actions allow you to specify components from a tag library or a standard tag; an action may display output or write a value to a servlet without showing output.
- Class names don't identify a class - Java Tutorials – A feature that allows you to identify a class not only by its name, with the help of a ClassLoader.
- String Class in ActionScript 3 – The String class is a data type providing methods for accessing strings, with several types of constructors.
- ActionScript 'include' statement example – The example below shows two consequent ActionScript files.
- No action instance for path – public class StrutsUploadAndSaveAction extends Action... <html:form action="/FileUploadAndSave">
- Date Class in ActionScript 3 – The Date class is a top-level class that provides date- and time-related information, including the current date.
- Introduction to the ModelDriven interface with example – To create a model-driven action your class should extend the appropriate base and be mapped to a JSP, e.g. <action name="studentAction" class="...">.
- Java BigDecimal class example – The BigDecimal class comes under the java.math library and handles arbitrary-precision digits; if you haven't used it, it's one you should master.
- Can you create an object of an abstract class?
- How do you call a constructor for a parent class?
- Can you instantiate the Math class? - Java Beginners – All the methods in the Math class are static and the constructor is not public, so we can't create an object of it.
- Toolbar: Change of action - Swing AWT – How do you change the action of a button in a JToolBar so that clicking it shows the corresponding page, like a link?
- jsp:include action tag – The JSP "include" action tag is used to include content; the servlet code is compiled into a .class file by the JSP container.
- Configuring Actions in Struts application – First write a simple Action class such as SimpleAction.java which returns success, then map it in struts.xml.
- How can you calculate your age in days? – I am a beginner... class AgeInDays { public static void main(String[] args) throws Exception ... }
- Struts Forward Action Example – Here you will learn more about Struts ForwardAction; note that we do not create any action class, only the action mapping in struts-config.xml.
- java swing: action on checkbox selection – I am working in NetBeans and need the action performed on checkbox selection, using javax.swing.event.*.
- Introduction to the Math Class in Java – In this tutorial we read about the java.lang.Math class, a final class; compile and execute the example to see the output.
- radio button value on edit action – The problem I'm facing is that on the edit action I'm not retrieving the radio button value in the edit_customer form.
- Indexed Array in ActionScript 3.0 – In this example we create an Array class instance and insert values using array access.
- Abstract class and abstract method – An abstract method cannot be private because it is defined for other classes; an abstract class cannot be instantiated, so it needs a subclass.
- Action Submit Html – Action Submit in HTML is used to submit a form when a user clicks.
- how to forward select query result to jsp page using struts action class
- Package in ActionScript 3 – A package is a collection of related classes.
Package describe the directory structure of your component of your application. You... create a simple class name hello.as within hello package name. The package thread class - Java Beginners the Incrementor thread and above mentioned step is repeated. - But if values of cnt1... the following code: class Incrementor extends Thread{ int cnt1 = 0; boolean...; } } class Decrementor extends Thread{ int cnt2 = 100; int cnt1 Java Wrapper Class Example When you will compile and execute the above example you will get the output...Java Wrapper Class Example In this section we will read about how to use... to objects of that class. In Java there are eight primitive data types and each data PHP Class Object PHP Class Object: In object oriented programming a class can be an abstract data type, blue print or template. You could consider the name of a class as noun...: In the above programming we create a class named A, it has a publically Validations using Struts 2 Annotations validation is done in action class and if user enters Admin/Admin in the user...; Developing Action class... the action class to handle the login request. The Struts 2 framework provides Class SALE - Java Beginners !"); } } while (!quit); } } Hope that the above code will be helpful for you...Class SALE A company sale 10 items at different rates and details... Friend, We have create a loop for only 3 items, you can modify it to 10
http://www.roseindia.net/tutorialhelp/comment/13330
CC-MAIN-2015-22
refinedweb
3,043
57.98
[TODO] - Convert internal representation of http request and response to PSGI. [CHANGELOG] 2012-05-07 1.087 - Marking ASP4 and its entire ecosystem "DEPRECATED" until further notice. 2012-03-01 1.086 - Use of $api->get within a handler is disabled at this time. 2012-02-24 1.085 - Fixed ASP4::UserAgent to take advantage of new subrequest option. - Use of the ASP4::API within an existing asp script, handler or request filter is fully-functional now. Before, the behavior was unpredictable.! 2012-02-24 1.082 - Response->Redirect after Response->TrapInclude was causing the redirect to fail. - This release introduces a hack to fix it, by writing a meta tag to the client. 2012-02-13 1.081 - Updated logging of errors so that it outputs something interesting, instead of a blank line. - Running under mod_perl should now correctly support full RESTful interfaces. 2012-02-12 1.080 - Added support for multiple external "routes.json" files. 2012-02-07 1.079 - Errors output to the stderr are now derived directly from $@ not from any parsed version of it. 2012-02-02 1.078 - Fixed installation problem that came up in v1.075 (compilation root was missing leading forward slash on non-windows systems). 2012-02-01 1.077 - Loath to add a mime-types-all-knowing dependency, we have a small list of common mime-types (html, css, js, etc). - Added mime for html and svg. 2012-02-01 1.076 - Now, the 'content-type' header is set correctly for ASP4::UserAgent responses. - Works correctly under ASP4::PSGI (images, css, javascript all show up). 2012-02-01 1.075 - Now, works on Windows! - eric.hayes++ 2012-01-30 1.074 - Explicit calls to $Session->save() are no longer necessary. 2012-01-23 1.073 - Added $Request->Header($name) (Somehow we've gotten along all this time without it.) 2012-01-23 1.072 - More tweaks on ASP4::SessionStateManager's default internal behavior has resulted in some more-than-modest performance gains. 
- Disabling session-state can result in 630 requests/second on a simple master/child "content" page, and 475 requests/second on a page that includes reading data from a database (using Class::DBI::Lite of course). * Results from `ab` on a Dell E6510 (quad-dual-core-i7, 8GB RAM, Ubuntu 10.10) 2012-01-22 1.071 - ASP4::HTTPContext now triggers the SessionStateManager's save() method when the context is DESTROY'd.. 2011-12-01 1.067 - ASP4::GlobalASA is completely deprecated. - ASP4::ErrorHandler contained a bug (cannot find method 'context'). Thanks to ray.baksh++ for discovering it. 2011-11-16 1.066 - Fixed a POD error in ASP4::ErrorHandler::Remote. - ASP4::ErrorHandler::Remote now correctly clones the error object before POSTing it. 2011-11-15 1.065 - Documented asp4-prep and asp4-deploy. These are deployment tools for ASP4 apps. - Other POD updates here and there. 2011-11-15 1.064 - 1.063 was broken - please upgrade. 2011-11-15 1.063 - Stealth-mode ASP4::ErrorHandler::Remote will send your error messages to a remote server via http. - Added ASP4::Error - Refactored ASP4::ErrorHandler to be more easily sub-classable. - GlobalASA is now officially removed. 2011-11-13 1.062 - The httpd.conf produced by asphelper had an incorrect DocumentRoot. Fixed now. 2011-11-07 1.061 - asphelper now creates new ASP4 apps using the proper structure. 2011-11-06 1.060 - $Response->Include on a missing file will no longer result in a 404 on the calling file. This goes for $Response->TrapInclude as well as <!-- #include virtual="..." --> 2011-11-05 1.059 - ASP4::API: - No longer have to do this: use ASP4::API; my $api; BEGIN { $api = ASP4::API->new } # Now load classes: use MyApp::Foo; - You can do this instead: use ASP4::API; use MyApp::Foo; my $api = ASP4::API->new; - Also no need for BEGIN { ASP4::API->init } - Added requirement Data::Properties::JSON. - JSON is a better format for test fixtures. - YAML can still be used.' 
to the new folder, copies the existing config files from latest/*/conf/* (if it exists) or copies conf/*.template config files and renames them without the *.template suffix. If a 'latest/*' folder was found, asp4-deploy will run any unit tests found. If all tests pass, then 'deploying' is removed and 'latest' is changed to point to the new folder. - TODO: Add POD for asp4-prep and asp4-deploy. This is delayed until it's proven that this is the correct way for onesie-twosie deployments. websites. RayBaksh++ - This fixes the dreaded "This website uses an invalid form of compression" error that you may have gotten after trying to $Response->Status(404) within an asp script. - Added in-memory mock sessionstate handler for faster testing and easier installation. 2011-09-22 1.055 - Giving credit where credit is due :-) - Erikdj++ - Added *experimental* memcached session storage backend. 2011-09-20 1.054 - Added @AppRoot@ macro for asp4-config.json. It is 1 folder "up" from @ServerRoot@. - If your @ServerRoot@ is: /home/bart/Projects/facebook/www Then your @AppRoot@ is: /home/bart/Projects/facebook - Erikdj pointed out the need for the @AppRoot@ macro and suggested the (excellent) name. Thanks Erik! 2011-09-19 1.053 - Updated asphelper script to generate sample app conforming to new App::db::* root namespace. 2011-09-19 1.052 [Bug Fixes] - Blank lines in asp4-config.json no longer cause an exception to be thrown. - Update documentation to reflect preference change from app::* to App::db::* root namespace for database classes. 2011-08-14 1.051 [Bug Fixes] - 'Redirect Loop' fixed! Under mod_perl, $context->send_headers() was not called for non-200 http responses. Now it is. This means that if you had `return $Response->Redirect("/foo/")` in a RequestFilter you may have gotten a "redirect loop" because although the '301 Moved' status was set, the `location: /foo/` header was *not* set. This would result in a redirect loop.
2011-07-11 v1.050 [Bug Fixes] - v1.049 Caused script execution to cease after any $Response->Include or ssi include finished. - Upgrade required if you know what's good for you. 2011-07-09 v1.049 [Bug Fixes] - v1.048 broke session cookies. - Upgrade to v1.049 (quick). 2011-07-07 v1.048 [Bug Fixes] - <% $Response->Status(404); return $Response->End; %> DID NOT WORK. Instead it continued processing other ContentPlaceHolders. Now we check to see if $Response->End was called before we process anything else. - Still getting some "Content encoding error" messages from FF/Chrome/MSIE but we're almost there. 2011-05-19 v1.047 [Bug Fixes] - $Response->Expires("30M") wasn't documented. Now it is. - $Response->Expires wasn't working properly. Now it is. (Always ended up with pre-epoch expiration times). 2011-05-03 v1.046 [Bug Fixes] - $Response->Redirect(...) wasn't returning '301' - now it does. 2011-05-03 v1.045 [Bug Fixes] - Actually it turned out that setting $Session->is_read_only(1) *DID* prevent $Session->save() from working. This is now fixed to match the documentation. 2011-05-01 v1.044 [Bug Fixes] - ASP4::ModPerl now does the Right Thing when a non-200 response is encountered. - 500 response does not result in an "encoding error" in firefox. - 200 (or 0 response) does the right thing. - non-200 (and non-500) response does the right thing (eg: 401) - ASP4::SessionStateManager now checks $s->is_changed *before* checking $s->{__lastMod} date before deciding whether is should persist its changes in ->save(). [New Features] - $Session->is_read_only(1) is new. Setting it to a true value (eg: 1) will prevent the default behavior of calling $Session->save() at the end of each successful request. 2011-04-08 v1.043 - Documentation overhaul. 2011-03-23 v1.042 - Fixed sporadic error in master pages that looks like this: Can't call method "Write" on an undefined value at /tmp/PAGE_CACHE/BStat/_masters_global_asp.pm line 1. 
- Apparently $s->init_asp_objects($context) was not getting called before the master page's run() method was called, resulting in a call to $Response->Write(...) before $Response had been initialized. 2010-11-11 v1.041 - ASP4::UserAgent calls all cleanup handlers registered via $Server->RegisterCleanup(sub { }, @args) at the end of each request, not when the ASP4::Mock::Pool object's DESTROY method is called. This fixes a condition which caused conflict when a Class::DBI::Lite ORM is used and the ASP4 application is executed via the `asp4` helper script. 2010-10-25 v1.040 - 1.039 introduced a bug that could cause session-id conflicts in the asp_sessions table. - This release fixes that bug. 2010-10-25 v1.039 - Session expiration now happens exclusively on the server, not as the result of an expiring session cookie. 2010-10-21 v1.038 - Another stab at getting http response codes right for errors. 2010-09-25 v1.037 - Added a couple tweaks here and there to make ASP4 run on Windows a little easier: * $Config->web->page_cache_root now does the Right Thing on linux & win32. * $Config->web->page_cache_root is automatically created if it does not exist. 2010-09-21 v1.036 - Added ASP4::StaticHandler to process requests for static files - like images, css, etc. 2010-09-17 v1.035 - It turns out that if you close the client socket, some browsers complain (Chrome). Upgrade recommended. 2010-09-17 v1.034 - Non-2xx responses are now returned more correctly, albeit with empty bodies. - HTTPHandler now caches the @ISA tree in RAM, offering a slight performance boost. - Added missing '$r->headers_in' method to ASP4::Mock::RequestRec. 2010-05-26 v1.033 - Fixed more issues related to running multiple web applications under different VirtualHosts on the same server. This time related to how Filters and Handlers are cached - now not only by URL but also by $ENV{DOCUMENT_ROOT}.
2010-05-20 v1.032 - Fixed several issues related to running multiple web applications under different VirtualHosts on the same server. 2010-05-19 v1.031 - Migrated from Ima::DBI to Ima::DBI::Contextual. 2010-05-18 v1.030 - $ENV{HTTP_HOST} is set to $r->hostname or $ENV{DOCUMENT_ROOT} in ASP4::ModPerl and ASP4::UserAgent, respectively. 2010-04-18 v1.029 - The document root was not always set properly in some very, very strange circumstances. - Upgrade recommended. 2010-04-18 v1.028 - $Request->Reroute($uri) no longer changes $ENV{REQUEST_URI} to $uri. 2010 there, and foo=abc would be lost. 2010-04-06 v1.025 - If Router::Generic is installed, ASP4::ConfigNode::Web will create $Config->web->router based on the "routes" segment of asp4-config.json. - No documentation about this yet. 2010-03-22 v1.024 - $Request->Reroute() with additional querystring parameters was not adding those extra parameters to $Form. Now it does. 2010-03-08 v1.023 - ASP4::HTTPContext now checks to see if any RequestFilters match a uri before returning a 404. This is helpful for SEO optimizations. - New feature: $Request->Reroute("/new-uri/?foo=bar") * Also very useful for SEO. 2010-03-08 v1.022 - asphelper's final instructions are now more clear and concise. - Fixes a bug that caused active sessions to timeout as though inactive simply because they were not changed before the timeout occurred. Now, $Session->save() checks to see if it's been more than 60 seconds since the last time the __lastMod was changed - and if it has been more than 60 seconds, the session is saved and the __lastMod value is updated to time() - thus preventing expiry of active sessions. 2010 structure of a web application. - If $Config->web->data_connections->session->session_timeout is set to '*' then the session lasts as long as the browser keeps the cookie around. - 20% performance increase by using Cwd::fastcwd() instead of Cwd::cwd() and a few other minor tweaks. 
2010-03-02 v1.019 - Fixed a bug in asphelper that caused some problems creating a skeleton website. 2010-03-01 v1.018 - Updated asphelper script so that the POD on CPAN is not contaminated with POD from within one of the modules that asphelper generates. - Now asphelper will not create a Class::DBI::Lite model class unless Class::DBI::Lite is installed. 2010-03-01 v1.017 - Updated asphelper script to only accept options on the command-line, like "normal" scripts. 2010-02-28 v1.016 - A vestigial "use encoding 'utf8'" was removed from ASP4::Server. - It was causing Apache to segfault on ubuntu 9.10. 2010-02-19 v1.015 - Hostnames like were not setting session cookies properly. - $Config->data_connections->session->cookie_domain should set to "*" in these cases. - $Response->SetCookie accepts the "*" value for domain also. - The result is that no "domain=xyz" attribute is given to these cookies. 2010-02-18 v1.014 - $Response->ContentType now functions correctly. - Upgrade mandatory! 2010-02-18 v1.013 - ASP4::HandlerResolver was not properly remembering timestamps on handler files. This resulted in unnecessary reloads of handlers that had not been changed. 2010-02-18 v1.012 - MANIFEST was missing a few files that caused tests to fail. 2010-02-17 v1.011 ! Upgrade Recommended ! - $Response->SetCookie and $Response->ContentType were not functioning properly. - Added new method $Response->SetHeader. 2010-02-10 v1.010 ! UPGRADE *SERIOUSLY* RECOMMENDED ! - In an environment with multiple VirtualHosts running ASP4 web applications, ASP4::HandlerResolver's %HandlerCache and %FileTimes hashes were shared between all VirtualHosts. This means that if you had 2 web apps (Foo and Bar) then "/index.asp" on "Foo" might get handled by "Bar::_index_asp" or vice versa. 2010-02-08 v1.009 ! Upgrade Recommended ! - ASP4::ModPerl sets $ENV{DOCUMENT_ROOT} = $r->document_root before doing anything else. 
- The scaffold website output by 'asphelper' had some minor bugs: * email was sometimes referred to as email_address * The error message for the 'message' field was displaying the wrong error. 2010-02-07 v1.008 - Multi-value form parameters (eg 3 checkboxes with the same name) will now *correctly* appear as an arrayref in $Form, instead of 3 values joined with a null byte. 2010-01-31 v1.007 - $FileUpload->SaveAs("/path/to/file.txt") will now create "/path" and "/path/to" before writing "/path/to/file.txt". 2010-01-27 v1.006 - Sometimes changes in MasterPages are not immediately reflected in child pages. This release attempts to correct this bug. 2010-01-25 v1.005 - Request Filters were not always matching properly because of a regexp bug in ASP4::FilterResolver. 2010-01-22 v1.004 - $ENV{REQUEST_URI} was not getting set properly - this is now fixed. 2009-12-22 v1.003 - $ENV{HTTP_REFERER} can be set and preserved properly. - conf/asp4-config.json will be reloaded if it is modified. This means that the server does not have to be restarted for changes to asp4-config.json to take effect. - Added ASP4::TransHandler 2009-12-17 v1.002 - %ENV is no longer clobbered by ASP4::UserAgent. 2009-12-16 v1.001 - Fixed a bug that prevented ASP4 for reliably detecting when an ASP script had been updated. 2009-12-15 v1.000 - Ready for production use. 2009-12-14 v0.001_03 .. v0.001_05 - Just getting the Makefile.PL prerequisites correct. 2009-12-13 v0.001_02 - Added POD. 2009-12-13 v0.001_01 * Initial release
https://metacpan.org/changes/release/JOHND/ASP4-1.087
CC-MAIN-2019-13
refinedweb
2,512
59.7
this addAction["<t color='#ff1111'>Virtual Ammobox</t>", "VAS\open.sqf"];

#include "VAS\menu.hpp"

class CfgFunctions
{
	#include "VAS\cfgfunctions.hpp"
};

# DarkXess : This is Operation Arrowhead right? because you posted in the ArmA 3 section...

# BloodxGusher : Hey Tonic. Thanks for releasing this. Great tool. Questions. How would I go about limiting weapon access to Blufor and Opfor? How would I also limit the amount of mags and weapons available. For example, limiting 50 guns total to be taken out of the menu? Maybe even a refresher so the crate fills with maybe 5 or 10 guns every 30 mins. There are other scripts that use the gear box method and just refresh but like someone stated, this method allows for more FPS while playing and I would like to explore and edit it if I can. Thanks

# AlltimeHIGH : Tried to use script in the init field of ammo box, and it says "invalid number expression". It also crashes if i try to put in ext and sqf files first, and gives back error saying "commonfiles.hpp" not found

#include "gear\common.hpp"
#include "gear\menu.hpp"

this addAction["<t color='#ff1111'>Virtual Ammobox</t>", "gear\open.sqf"];
http://www.armaholic.com/forums.php?m=posts&q=20990
CC-MAIN-2019-47
refinedweb
197
69.38
system()

Execute a system command

Synopsis:

#include <stdlib.h>

int system( const char *command );

Since: BlackBerry 10.0.0.

The shell used is always /bin/sh, regardless of the setting of the SHELL environment variable, because applications may rely on features of the standard shell, and may fail as a result of running a different shell. This means that any command that can be entered to the BlackBerry 10 OS can be executed, including programs.

Examples:

#include <stdlib.h>
#include <stdio.h>
#include <sys/wait.h>

int main( void )
{
    int rc;

    rc = system( "ls" );
    if( rc == -1 ) {
        printf( "shell could not be run\n" );
    } else {
        printf( "result of running command is %d\n", WEXITSTATUS( rc ) );
    }
    return EXIT_SUCCESS;
}

Classification:

Last modified: 2014-11-17
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/s/system.html
CC-MAIN-2015-32
refinedweb
143
59.5
ESP-8266 gateway with sensor can't communicate with controller

- Senne Vande Sompele last edited by

hi everybody,

I've read a lot of topics but can't seem to find out what i'm doing wrong. I've a wemos D1 mini with a switch connected to it and i want to send the state of this switch to Home Assistant. The following code is running on my d1:

#include <EEPROM.h>
#include <SPI.h>
#include <Bounce2.h>

// Enable debug prints to serial monitor
#define MY_DEBUG

// Use a bit lower baudrate for serial prints on ESP8266 than default in MyConfig.h
#define MY_BAUD_RATE 9600

#define MY_GATEWAY_ESP8266
#define MY_ESP8266_SSID "xxxxxxx"
#define MY_ESP8266_PASSWORD "xxxxxxx"

// If using static ip you need to define Gateway and Subnet address as well
#define MY_IP_ADDRESS 192,168,0

#define CHILD_ID 1
#define BUTTON_PIN 0  // Arduino Digital I/O pin for button/reed switch

#include <ESP8266WiFi.h>
#include <MySensors.h>

Bounce debouncer = Bounce();
int oldValue=-1;
bool initialValueSent = false;

MyMessage msg(CHILD_ID,V_STATUS);

void setup()
{
  pinMode(BUTTON_PIN,INPUT_PULLUP); // Activate internal pull-up
  //digitalWrite(BUTTON_PIN,HIGH);

  // After setting up the button, setup debouncer
  debouncer.attach(BUTTON_PIN);
  debouncer.interval(5);
}

void presentation()
{
  // Present locally attached sensors here
  Serial.println("Presentation:");
  // Send the sketch version information to the gateway and Controller
  sendSketchInfo("Switch", "1.1");
  present(CHILD_ID, S_BINARY);
}

void loop()
{
  debouncer.update();
  // Get the update value
  int value = debouncer.read();
  if (value != oldValue) {
    // Send in the new value
    Serial.println(value);
    send(msg.set(value==HIGH ? 1 : 0));
    oldValue = value;
  }
  // Send locally attached sensors data here
}

void receive(const MyMessage &message)
{
  if (message.isAck()) {
    Serial.println("This is an ack from gateway");
  }
  Serial.println("something was received: "+ message.type);
}

and got following output:

0;255;3;0;9;Starting gateway (R-NGE-, 2.0.0)
scandone
f 0, scandone
state: 0 -> 2 (b0)
state: 2 -> 3 (0)
state: 3 -> 5 (10)
add 0
aid 3
cnt
connected with telenet-CED24D5, channel 11
ip:192.168.0.9,mask:255.255.255.0,gw:192.168.0.1
.IP: 192.168.0.9
0;255;3;0;9;No registration required
0;255;3;0;9;Init complete, id=0, parent=0, distance=0, registration=1
1
pm open,type:2 0

and in home assistant i get following in the debug:

16-12-31 11:45:08 mysensors.mysensors: Trying to connect to ('192.168.0.9', 5003)
16-12-31 11:45:08 mysensors.mysensors: Connected to ('192.168.0.9', 5003)
16-12-31 11:45:09 netdisco.service: Scanning
16-12-31 11:45:09 mysensors.mysensors: Received 0;255;3;0;14;Gateway startup complete. 0;255;3;0;11;Switch 0;255;3;0;12;1.1 0;1;0;0, will not add child 1.
16-12-31 11:45:09 mysensors.mysensors: Sending 0;255;3;0;19;
16-12-31 11:45:09 mysensors.mysensors: Sending 0;255;3;0;19;
16-12-31 11:45:09 mysensors.mysensors: Sending 0;255;3;0;19;

can someone help me getting this working

@Senne-Vande-Sompele try upgrading to MySensors 2.1 (which was released yesterday). 2.0 has a problem that causes the receive function to not be called on nodes that are gateways.

- martinhjelmare Plugin Developer last edited by

MySensors 2.0 has a bug that leads to gateways without a radio not presenting themselves, which is why node 0 (gateway) is unknown in your log. This is fixed in MySensors 2.1.

- Senne Vande Sompele last edited by

I've upgraded and everything works now thank you very much

Great work @Senne-Vande-Sompele, thanks for reporting back!
https://forum.mysensors.org/topic/5662/esp-8266-gateway-with-sensor-can-t-communicate-with-controller
CC-MAIN-2021-39
refinedweb
608
51.04
Hi

I would like to write a small program that takes the input of integers and puts them in an array, from where one can do things like calculating their mean, median, standard deviation and so forth. My problem is that, after extracting the numbers one by one, I am stuck at putting them in an array. One problem is that the array where I stored the numbers is only accessible in the for-loop. I would appreciate if someone could help. Here it is:

Code:
#include <iostream>
using namespace std;

int main(){
    int array[50], x, n, sum = 0;
    cout << "Enter a number" << endl;
    cin >> x;
    int num = x;
    for(x; x != 0; x = x/10)
        sum++; //Trying to find the number of digits given
    for(num; num != 0; num = num/10){
        n = num%10;
        array[sum - 1] = n; //I am trying to put in the numbers in an array
        //in a back-to-front manner so I start with filling the last number in the array
        //and looping to the first array, i.e array[0]
        sum--;
        cout << array[sum];
    }
    //But my problem is that these variables are only valid within the for
    //loop so I can't access them outside of the for-loop although I need them
    //I would be grateful if someone can tell me how I can access array[sum]
    //outside of the for-loop.
    return 0;
}
http://cboard.cprogramming.com/cplusplus-programming/136471-putting-number-cin-into-arry.html
CC-MAIN-2015-35
refinedweb
233
65.69
[The HTML version of this Summary is available at] ===================== Summary Announcements ===================== ---- QOTF ---- We have our first ever Quote of the Fortnight (QOTF), thanks to the wave of discussion over `PEP 343`_ and Jack Diederich: I still haven't gotten used to Guido's heart-attack inducing early enthusiasm for strange things followed later by a simple proclamation I like. Some day I'll learn that the sound of fingernails on the chalkboard is frequently followed by candy for the whole class. See, even threads about anonymous block statements can end happily! ;) .. _PEP 343: Contributing thread: - `PEP 343 - Abstract Block Redux <>`__ [SJB] ------------------ First PyPy Release ------------------ The first release of `PyPy`_, the Python implementation of Python, is finally available. The PyPy team has made impressive progress, and the current release of PyPy now passes around 90% of the Python language regression tests that do not depend deeply on C-extensions. The PyPy interpreter still runs on top of a CPython interpreter though, so it is still quite slow due to the double-interpretation penalty. .. _PyPy: Contributing thread: - `First PyPy (preview) release <>`__ [SJB] -------------------------------- Thesis: Type Inference in Python -------------------------------- Brett C. successfully defended his masters thesis `Localized Type Inference of Atomic Types in Python`_, which investigates some of the issues of applying type inference to the current Python language, as well as to the Python language augmented with type annotations. Congrats Brett! .. 
Localized Type Inference of Atomic Types in Python:

Contributing thread:

- `Localized Type Inference of Atomic Types in Python <- May/053993.html>`__

[SJB]

=========
Summaries
=========

---------------------------
PEP 343 and With Statements
---------------------------

The discussion on "anonymous block statements" brought itself closer to a
real conclusion this fortnight, with the discussion around `PEP 343`_ and
`PEP 3XX`_ converging not only on the semantics for "with statements", but
also on semantics for using generators as with-statement templates. To aid
in the adaptation of generators to with-statements, Guido proposed adding
close() and throw() methods to generator objects, similar to the ones
suggested by `PEP 325`_ and `PEP 288`_. The throw() method would cause an
exception to be raised at the point where the generator is currently
suspended, and the close() method would use throw() to signal the generator
to clean itself up by raising a GeneratorExit exception. People seemed
generally happy with this proposal and -- believe it or not -- we actually
went an entire eight days without an email about anonymous block
statements!! It looked as if an updated `PEP 343`_, including the new
generator functionality, would be coming early the next month. So stay
tuned. ;)

.. _PEP 288:
.. _PEP 325:
.. _PEP 343:
.. _PEP 3XX:

Contributing threads:

- `PEP 343 - Abstract Block Redux <>`__
- `Simpler finalization semantics (was Re: PEP 343 - Abstract Block Redux) < -dev/2005-May/053812.html>`__
- `Example for PEP 343 <>`__
- `Combining the best of PEP 288 and PEP 325: generator exceptions and cleanup <>`__
- `PEP 343 - New kind of yield statement? <>`__
- `PEP 342/343 status? <>`__
- `PEP 346: User defined statements (formerly known as PEP 3XX) <- May/054014.html>`__

[SJB]

-----------
Decimal FAQ
-----------

Raymond Hettinger suggested that a decimal FAQ would shorten the module's
learning curve, and drafted one. There were no objections, but a few
adjustments (to the list, at least). Raymond will probably make the FAQ
available at some point.

Contributing thread:

- `Decimal FAQ <>`__

[TAM]

---------------------
Constructing Decimals
---------------------

A long discussion took place regarding whether the decimal constructor
should or should not respect context settings, and whether matching the
standard (and what the standard says) should be a priority. Raymond
Hettinger took the lead in the status-quo (does not) corner, with Tim
Peters leading the opposition. Tim and Guido eventually called in the
standard's expert, Mike Cowlishaw. He gave a very thorough explanation of
the history behind his decisions in this matter, and eventually weighed in
on Raymond's side. As such, it seems that the status-quo has won (not that
it was a competition, of course <wink>).

For those that need to know: the unary plus operation, as strange as it
looks, forces a rounding using the current context. As such, context-aware
construction can be written::

    val = +Decimal(string_repr)

Contributing threads:

- `Adventures with Decimal <>`__
- `Decimal construction <>`__
- `[Python-checkins] python/nondist/peps pep-0343.txt, 1.8, 1.9 <- May/053766.html>`__

[TAM]

------------------------
Handling old bug reports
------------------------

Facundo Batista continued with his progress checking the open bug reports,
looking for bugs that are specific to 2.2.1 or 2.2.2. The aim is to verify
whether these bugs exist in current CVS, or are out of date. There are no
longer any bugs in the 2.1.x or 2.2.x categories, and Facundo wondered
whether removing those categories would be a good idea. The consensus was
that there was no harm in leaving the categories there, but that changing
the text to indicate that those versions are unmaintained would be a good
idea.

Raymond Hettinger reminded us that care needs to be taken in closing old
bug reports. In particular, a bug report should only be closed if (a)
there are no means of reproducing the error, (b) it is impossible to tell
what the poster meant, and they are no longer contactable, or (c) the bug
is no longer present in current CVS.

Contributing threads:

- `Deprecating old bugs, now from 2.2.2 <>`__
- `Closing old bugs <>`__
- `Old Python version categories in Bug Tracker <- May/054020.html>`__

[TAM]

------------------
Exception chaining
------------------

Ka-Ping Yee has submitted `PEP 344`_, which is a concrete proposal for
exception chaining. It proposes three standard attributes on traceback
objects: __context__ for implicit chaining, __cause__ for explicit
chaining, and __traceback__ to point to the traceback. Guido likes the
motivation and rationale, but feels that the specification needs more
work. A lot of discussion about the specifics of the PEP took place, and
Ka-Ping is working these into a revised version. One of the major
questions was whether there is a need for both __context__ and __cause__
(to differentiate between explicit and implicit chaining). Guido didn't
feel that there was, but others disagreed.

Discussion branched off into which attributes should be double-underscored,
and which should not. Guido's opinion is that it depends on who "owns" the
namespace, and on whether "magic" behaviour is caused (or indicated) by the
presence of the attribute. He felt that the underscores in the proposed
exception attributes should remain.

.. _PEP 344:

Contributing threads:

- `PEP 344: Exception Chaining and Embedded Tracebacks <- May/053821.html>`__
- `PEP 344: Implicit Chaining Semantics <>`__
- `PEP 344: Explicit vs. Implicit Chaining <>`__
- `Tidier Exceptions <>`__

[TAM]

------------------------------------
Adding content to exception messages
------------------------------------

Nicolas Fleury suggested that there should be a standard method of adding
information to an existing exception (to re-raise it). Nick Coghlan
suggested that this would be reasonably simple to do with PEP 344, if all
exceptions were also new-style classes, but Nicolas indicated that this
wouldn't work in some cases.

Contributing threads:

- `Adding content to exception messages <>`__

[TAM]

===============
Skipped Threads
===============

- `Loading compiled modules under MSYS/MingGW? <>`__
- `RFC: rewrite fileinput module to use itertools. <>`__
- `Multiple interpreters not compatible with current thread module <>`__
- `Weekly Python Patch/Bug Summary <>`__
- `Request for dev permissions <>`__
- `python-dev Summary for 2005-05-01 through 2005-05-15 [draft] <>`__
- `AST manipulation and source code generation <>`__
- `Weekly Python Patch/Bug Summary <>`__
- `AST branch patches (was Re: PEP 342/343 status?) <>`__
- `[Python-checkins] python/dist/src/Lib/test test_site.py, 1.6, 1.7 <>`__
- `Split MIME headers into multiple lines near a space <>`__

========
Epilogue
========

------------
Introduction
------------

This is a summary of traffic on the `python-dev mailing list`_ from May 16,
2005 through May 31, 2005.
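As a coda: the close() and throw() generator methods discussed in the PEP 343 section above did ship (in Python 2.5), and their finalization behaviour can be seen in a few lines of present-day Python. This is a sketch for illustration, not code from the summarized threads:

```python
def guarded(log):
    """A generator used as a cleanup template, PEP 343 style."""
    log.append("setup")
    try:
        yield "resource"
    except GeneratorExit:
        # close() works by throw()-ing GeneratorExit at the paused yield
        log.append("cleanup")
        raise  # re-raising lets close() finish normally

log = []
gen = guarded(log)
assert next(gen) == "resource"  # run the generator up to its yield
gen.close()                     # triggers the GeneratorExit cleanup path
assert log == ["setup", "cleanup"]
```

If the generator swallowed GeneratorExit and kept yielding instead of re-raising, close() would complain with a RuntimeError, which is exactly the "signal the generator to clean itself up" contract described above.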
https://mail.python.org/pipermail/python-list/2005-June/321759.html
I have a very strange problem. With my classes client and password I can include them separately by including the cpp file, i.e. #include "client.cpp" and #include "password.cpp". I have tested them separately. If I include the header I don't get a compile error, but it doesn't recognize the class and I can't declare objects. If I include them together I get the error:

[C++ Error] CLIENT.CPP(5): E2090 Qualifier 'client' is not a class or namespace name.

As I said, if I use them separately they are fine. Can anybody help? I am using Borland C++ Builder 5.

Thanks: retretret
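In case it helps: errors like this usually come from including the .cpp files themselves rather than headers, often combined with missing include guards, so that one translation unit ends up seeing definitions twice (or sees a name before its class is declared). A sketch of the usual layout — the class names follow the post, but every member shown here is invented for illustration:

```cpp
// client.h -- declarations only, wrapped in an include guard so the
// compiler sees each class definition exactly once per translation unit.
#ifndef CLIENT_H
#define CLIENT_H

#include <cassert>   // only needed for the quick self-check below
#include <string>
#include <utility>

// In a real project this class lives in its own password.h (with its own
// PASSWORD_H guard) and client.h does #include "password.h".
class password {
public:
    explicit password(std::string v) : value_(std::move(v)) {}
    bool matches(const std::string& attempt) const { return attempt == value_; }
private:
    std::string value_;
};

class client {
public:
    client(std::string name, password pw)
        : name_(std::move(name)), pw_(std::move(pw)) {}
    const std::string& name() const { return name_; }
    bool login(const std::string& attempt) const { return pw_.matches(attempt); }
private:
    std::string name_;
    password pw_;  // client can hold a password because its header was included
};

#endif // CLIENT_H
```

Then client.cpp starts with #include "client.h" (never #include "password.cpp"), and both .cpp files are added to the C++ Builder project so the linker pulls them in together. Including one .cpp from another gives the compiler a second copy of the definitions, which tends to produce exactly this kind of E2090.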
https://cboard.cprogramming.com/cplusplus-programming/12327-cant-include-classes-very-strange.html
package org.apache.tomcat.jni;

/** Local socket
 *
 * @author Mladen Turk
 * @version $Revision: 467222 $, $Date: 2006-10-24 05:17:11 +0200 (mar., 24 oct. 2006) $
 */

public class Local {

    /**
     * Create a socket.
     * @param path The address of the new socket.
     * @param cont The parent pool to use
     * @return The new socket that has been set up.
     */
    public static native long create(String path, long cont)
        throws Exception;

    /**
     * Bind the socket to its associated port
     * @param sock The socket to bind
     * @param sa The socket address to bind to
     * This may be where we will find out if there is any other process
     * using the selected port.
     */
    public static native int bind(long sock, long sa);

    /**
     * Listen to a bound socket for connections.
     * @param sock The socket to listen on
     * @param backlog The number of outstanding connections allowed in the
     *                socket's listen queue. If this value is less than zero,
     *                for NT pipes the number of instances is unlimited.
     */
    public static native int listen(long sock, int backlog);

    /**
     * Accept a new connection request
     * @param sock The socket we are listening on.
     * @return A copy of the socket that is connected to the socket that
     *         made the connection request. This is the socket which should
     *         be used for all future communication.
     */
    public static native long accept(long sock)
        throws Exception;

    /**
     * Issue a connection request to a socket either on the same machine
     * or a different one.
     * @param sock The socket we wish to use for our side of the connection
     * @param sa The address of the machine we wish to connect to.
     *           Unused for NT Pipes.
     */
    public static native int connect(long sock, long sa);

}
http://kickjava.com/src/org/apache/tomcat/jni/Local.java.htm
Batch sending/merge and ESP templates

If your ESP offers templates and batch-sending/merge capabilities, Anymail can simplify using them in a portable way. Anymail doesn't translate template syntax between ESPs, but it does normalize using templates and providing merge data for batch sends.

Here's an example using both an ESP stored template and merge data:

    from django.core.mail import EmailMessage

    message = EmailMessage(
        subject=None,  # use the subject in our stored template
        from_email="marketing@example.com",
        to=["Wile E. <wile@example.com>", "rr@example.com"])
    message.template_id = "after_sale_followup_offer"  # use this ESP stored template
    message.merge_data = {  # per-recipient data to merge into the template
        'wile@example.com': {'NAME': "Wile E.", 'OFFER': "15% off anvils"},
        'rr@example.com': {'NAME': "Mr. Runner"},
    }
    message.merge_global_data = {  # merge data for all recipients
        'PARTNER': "Acme, Inc.",
        'OFFER': "5% off any Acme product",  # a default if OFFER missing for recipient
    }
    message.send()

The message's template_id identifies a template stored at your ESP which provides the message body and subject. (Assuming the ESP supports those features.)

The message's merge_data supplies the per-recipient data to substitute for merge fields in your template. Setting this attribute also lets Anymail know it should use the ESP's batch sending feature to deliver separate, individually-customized messages to each address on the "to" list. (Again, assuming your ESP supports that.)

Note: Templates and batch sending capabilities can vary widely between ESPs, as can the syntax for merge fields. Be sure to read the notes for your specific ESP, and test carefully with a small recipient list before launching a gigantic batch send.

Although related and often used together, ESP stored templates and merge data are actually independent features. For example, some ESPs will let you use merge field syntax directly in your message body, so you can do customized batch sending without needing to define a stored template at the ESP.

ESP stored templates

Many ESPs support transactional email templates that are stored and managed within your ESP account. To use an ESP stored template with Anymail, set template_id on an AnymailMessage.

template_id
    The identifier of the ESP stored template you want to use. For most ESPs, this is a string name or unique id. (See the notes for your specific ESP.)

        message.template_id = "after_sale_followup_offer"

With most ESPs, using a stored template will ignore any body (plain-text or HTML) from the EmailMessage object. A few ESPs also allow you to define the message's subject as part of the template, but any subject you set on the message will override the template subject. To use the subject stored with the ESP template, set the message's subject to None:

    message.subject = None  # use subject from template (if supported)

Similarly, some ESPs can also specify the "from" address in the template definition. Set message.from_email = None to use the template's "from." (You must set this attribute after constructing the message; passing from_email=None to the constructor will use Django's DEFAULT_FROM_EMAIL setting, overriding your template value.)

Batch sending with merge data

Several ESPs support "batch transactional sending," where a single API call can send messages to multiple recipients. The message is customized for each email on the "to" list by merging per-recipient data into the body and other message fields.

To use batch sending with Anymail (for ESPs that support it):

- Use "merge fields" (sometimes called "substitution variables" or similar) in your message. This could be in an ESP stored template referenced by template_id, or with some ESPs you can use merge fields directly in your message body.
- Set the message's merge_data attribute to define merge field substitutions for each recipient, and optionally set merge_global_data to defaults or values to use for all recipients.
- Specify all of the recipients for the batch in the message's to list.

Caution: It's critical to set the merge_data attribute: this is how Anymail recognizes the message as a batch send. When you provide merge_data, Anymail will tell the ESP to send an individual customized message to each "to" address. Without it, you may get a single message to everyone, exposing all of the email addresses to all recipients. (If you don't have any per-recipient customizations, but still want individual messages, just set merge_data to an empty dict.)

The exact syntax for merge fields varies by ESP. It might be something like *|NAME|* or -name- or <%name%>. (Check the notes for your ESP, and remember you'll need to change the template if you later switch ESPs.)

AnymailMessage.merge_data
    A dict of per-recipient template substitution/merge data. Each key in the dict is a recipient email address, and its value is a dict of merge field names and values to use for that recipient:

        message.merge_data = {
            'wile@example.com': {'NAME': "Wile E.", 'OFFER': "15% off anvils"},
            'rr@example.com': {'NAME': "Mr. Runner", 'OFFER': "instant tunnel paint"},
        }

    When merge_data is set, Anymail will use the ESP's batch sending option, so that each to recipient gets an individual message (and doesn't see the other emails on the to list).

AnymailMessage.merge_global_data
    A dict of template substitution/merge data to use for all recipients. Keys are merge field names in your message template:

        message.merge_global_data = {
            'PARTNER': "Acme, Inc.",
            'OFFER': "5% off any Acme product",  # a default OFFER
        }

Merge data values must be strings. (Some ESPs also allow other JSON-serializable types like lists or dicts.) See Formatting merge data for more information.

Like all Anymail additions, you can use these extended template and merge attributes with any Django EmailMessage. (It doesn't have to be an AnymailMessage.)

Tip: you can add merge_global_data to your global Anymail send defaults to supply merge data available to all batch sends (e.g., site name, contact info). The global defaults will be merged with any per-message merge_global_data.

Formatting merge data

If you're using a date, datetime, Decimal, or anything other than strings and integers, you'll need to format them into strings for use as merge data:

    product = Product.objects.get(123)  # A Django model
    total_cost = Decimal('19.99')
    ship_date = date(2015, 11, 18)

    # Won't work -- you'll get "not JSON serializable" errors at send time:
    message.merge_global_data = {
        'PRODUCT': product,
        'TOTAL_COST': total_cost,
        'SHIP_DATE': ship_date
    }

    # Do something like this instead:
    message.merge_global_data = {
        'PRODUCT': product.name,  # assuming name is a CharField
        'TOTAL_COST': "%.2f" % total_cost,
        'SHIP_DATE': ship_date.strftime('%B %d, %Y')  # US-style "November 18, 2015"
    }

These are just examples. You'll need to determine the best way to format your merge data as strings.

Although floats are usually allowed in merge data, you'll generally want to format them into strings yourself to avoid surprises with floating-point precision.

Anymail will raise AnymailSerializationError if you attempt to send a message with merge data (or metadata) that can't be sent to your ESP.

ESP templates vs. Django templates

ESP templating languages are generally proprietary, which makes them inherently non-portable. Anymail only exposes the stored template capabilities that your ESP already offers, and then simplifies providing merge data in a portable way. It won't translate between different ESP template syntaxes, and it can't do a batch send if your ESP doesn't support it.

There are two common cases where ESP template and merge features are particularly useful with Anymail:

- When the people who develop and maintain your transactional email templates are different from the people who maintain your Django page templates. (For example, you use a single ESP for both marketing and transactional email, and your marketing team manages all the ESP email templates.)
- When you want to use your ESP's batch-sending capabilities for performance reasons, where a single API call can trigger individualized messages to hundreds or thousands of recipients. (For example, sending a daily batch of shipping notifications.)

If neither of these cases applies, you may find that using Django templates can be a more portable and maintainable approach for building transactional email.
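One more note on the Formatting merge data advice above: if many views build merge data, the stringifying can be centralized. This helper is not part of Anymail — just a sketch of one way to coerce common types before assigning them to merge_data or merge_global_data:

```python
from datetime import date, datetime
from decimal import Decimal


def stringify_merge_data(data):
    """Return a copy of ``data`` with values coerced to merge-safe strings.

    Strings pass through unchanged; dates/datetimes use ISO format; Decimals
    and floats are rendered with two decimal places (adjust to taste);
    everything else falls back to str().
    """
    def coerce(value):
        if isinstance(value, str):
            return value
        if isinstance(value, (datetime, date)):
            return value.isoformat()
        if isinstance(value, (Decimal, float)):
            return "%.2f" % value
        return str(value)  # ints and anything else

    return {field: coerce(value) for field, value in data.items()}


# e.g. message.merge_global_data = stringify_merge_data(
#     {'TOTAL_COST': Decimal('19.99'), 'SHIP_DATE': date(2015, 11, 18)})
```

Whether ISO dates or locale-formatted dates are right depends on the template; the point is simply to do the conversion in one place instead of in every view.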
http://anymail.readthedocs.io/en/stable/sending/templates/
Hi, after a long time I was able to find time to write the blog, because these days I'm very busy with my exam and it will end 29th September. So until then I doubt I'd be able to write much. Sorry for that, my friends who read this.

Related articles..
1) How to create a GUI (Graphical User Interface) using C programming Language..
2) How to create a GUI (Graphical User Interface) using C programming Language.. (part 2)
3) How to create a GUI (Graphical User Interface) using C programming Language.. (part 3)

This article focuses on how to use a Text Entry with button action signals. The purpose of this program is to display, in the label, the string which the user enters in the text entry.

Contents…
1) Open a Code Blocks project
2) Open a Glade project
3) Set properties for the components
4) Code the C Gtk project
5) Run the project

Components
1) Window = mainWindow
2) Label = displayLabel
3) Text Entry = textEntry
4) Button = displayButton
5) Button = exitButton

Set properties

- For the main Window
  General -> Name = mainWindow
  General -> Resizable = No
  Common -> Height request = 400
  Common -> Width request = 200

- For Display Label
  General -> Name = displayLabel
  General -> Label = Display

- For Display Button
  General -> Name = displayButton
  General -> Label = Display
  Signals -> Clicked = on_displayButton_clicked

- For Exit Button
  General -> Name = exitButton
  General -> Label = Exit
  Signals -> Clicked = on_exitButton_clicked

- For Text Entry
  General -> Name = textEntry

Then save it as TextEntry.glade, in libglade format, in the Code Blocks project folder.

In this C project I divided the whole project into three files.
1) main.c
2) callback.c
3) callback.h

The source code of main.c

    #include <stdio.h>
    #include <gtk/gtk.h>
    #include <glade/glade.h>

    /*
       Author : Gihan De Silva
       gihansblog.com
       Purpose: This program displays, in the label, the string which the
       user enters in the text entry.
    */

    GladeXML *xml;
    GtkWidget *widget;

    int main(int argc, char *argv[])
    {
        gtk_init(&argc, &argv);

        /* import glade file */
        xml = glade_xml_new("TextEntry.glade", NULL, NULL);

        /* get a widget (useful if you want to change something) */
        widget = glade_xml_get_widget(xml, "mainWindow");

        /* connect signal handlers */
        glade_xml_signal_autoconnect(xml);

        /* show widget */
        gtk_widget_show(widget);

        gtk_main();
        return 0;
    }

The source code of callback.c

    #include <stdio.h>
    #include <gtk/gtk.h>
    #include <glade/glade.h>

    extern GladeXML *xml;

    G_MODULE_EXPORT void on_displayButton_clicked(GtkButton *button, gpointer *data)
    {
        GtkWidget *display = glade_xml_get_widget(xml, "displayLabel");
        GtkWidget *textValue = glade_xml_get_widget(xml, "textEntry");
        const gchar *d_string;

        /* Get the string value from the Entry widget */
        d_string = gtk_entry_get_text(GTK_ENTRY(textValue));
        gtk_label_set_text(GTK_LABEL(display), d_string);
    }

    G_MODULE_EXPORT void on_exitButton_clicked(GtkButton *button, gpointer *data)
    {
        gtk_main_quit();
    }

The source code of callback.h

    G_MODULE_EXPORT void on_displayButton_clicked(GtkButton *button, gpointer *data);
    G_MODULE_EXPORT void on_exitButton_clicked(GtkButton *button, gpointer *data);

Now run the Gtk project and it will look like this. And when you hit the Display button, it will display the content in the text entry..

Display Button Code Explained…

Here I'm not going to explain all the code, because I have already explained it in previous articles. If you haven't read the previous articles, you'd better read them first. But I am going to explain the specific things related to this article.

    display = glade_xml_get_widget(xml, "displayLabel");
    textValue = glade_xml_get_widget(xml, "textEntry");

In these two lines, the program takes the Glade widgets into Gtk widgets.

    /* Get the string value from the Entry widget */
    d_string = gtk_entry_get_text(GTK_ENTRY(textValue));

We can't directly use the GtkWidget to set the Gtk label, because it needs gchar* type data. So here we convert it into a gchar*.

    gtk_label_set_text(GTK_LABEL(display), d_string);

Now with the above line we can set it to the label.

Ok, that's all for today. If you want, you can DOWNLOAD my Code Blocks project here!
In the next post I will show you how to create a simple Calculator using Gtk and Glade.

Thank you
Gihan De Silva
https://gihansblog.wordpress.com/2011/09/09/how-to-create-a-guigraphical-user-interface-using-c-programming-language-part-4/
/* $Id: KnuthGlue.java 426576 2006-07-28 15:44:37Z jeremias $ */

package org.apache.fop.layoutmgr;

/**
 * An instance of this class represents a piece of content with adjustable
 * width: for example a space between words of justified text.
 *
 * A KnuthGlue is a feasible breaking point only if it immediately follows
 * a KnuthBox.
 *
 * The represented piece of content is suppressed if either the KnuthGlue
 * is a chosen breaking point or there isn't any KnuthBox between the
 * previous breaking point and the KnuthGlue itself.
 *
 * So, an unsuppressible piece of content with adjustable width, for example
 * a leader or a word with adjustable letter space, cannot be represented
 * by a single KnuthGlue; it can be represented using the sequence:
 *   KnuthBox(width = 0)
 *   KnuthPenalty(width = 0, penalty = infinity)
 *   KnuthGlue(...)
 *   KnuthBox(width = 0)
 * where the infinity penalty avoids choosing the KnuthGlue as a breaking point
 * and the 0-width KnuthBoxes prevent suppression.
 *
 * Besides the inherited methods and attributes, this class has two attributes
 * used to store the stretchability (difference between max and opt width) and
 * the shrinkability (difference between opt and min width), and the methods
 * to get these values.
 */
public class KnuthGlue extends KnuthElement {

    private int stretchability;
    private int shrinkability;
    private int adjustmentClass = -1;

    /**
     * Create a new KnuthGlue.
     *
     * @param w the width of this glue
     * @param y the stretchability of this glue
     * @param z the shrinkability of this glue
     * @param pos the Position stored in this glue
     * @param bAux is this glue auxiliary?
     */
    public KnuthGlue(int w, int y, int z, Position pos, boolean bAux) {
        super(w, pos, bAux);
        stretchability = y;
        shrinkability = z;
    }

    public KnuthGlue(int w, int y, int z,
                     int iAdjClass, Position pos, boolean bAux) {
        super(w, pos, bAux);
        stretchability = y;
        shrinkability = z;
        adjustmentClass = iAdjClass;
    }

    /** @see org.apache.fop.layoutmgr.KnuthElement#isGlue() */
    public boolean isGlue() {
        return true;
    }

    /** @return the stretchability of this glue. */
    public int getY() {
        return stretchability;
    }

    /** @return the shrinkability of this glue. */
    public int getZ() {
        return shrinkability;
    }

    /** @return the adjustment class (or role) of this glue. */
    public int getAdjustmentClass() {
        return adjustmentClass;
    }

    /** @see java.lang.Object#toString() */
    public String toString() {
        StringBuffer sb = new StringBuffer(64);
        if (isAuxiliary()) {
            sb.append("aux. ");
        }
        sb.append("glue");
        sb.append(" w=").append(getW());
        sb.append(" stretch=").append(getY());
        sb.append(" shrink=").append(getZ());
        if (getAdjustmentClass() >= 0) {
            sb.append(" adj-class=").append(getAdjustmentClass());
        }
        return sb.toString();
    }

}
http://kickjava.com/src/org/apache/fop/layoutmgr/KnuthGlue.java.htm
On Thu, Mar 27, 2014 at 06:22:50PM +0400, Kirill Smelkov wrote:
> On Mon, Mar 24, 2014 at 02:47:24PM -0700, Junio C Hamano wrote:
> > K.
>
> Yes, that is all correct - that version works and we can improve it in
> the future with platform-specific follow-up patches, if needed.

Junio, thanks for merging this and other diff-tree patches to next. It so
happened that I'm wrestling with MSysGit today, so please also find the
alloca-for-mingw patch attached below.

Thanks,
Kirill

---- 8< ----
Subject: [PATCH] mingw: activate alloca

Both MSVC and MINGW have alloca(3) definitions in malloc.h, so by moving
win32-compat alloca.h from compat/vcbuild/include/ to compat/win32/,
which is included by both MSVC and MINGW CFLAGS, we can make alloca()
work on both those Windows environments.

In MINGW, malloc.h has an explicit check for GNUC and if it is so,
defines alloca to __builtin_alloca, so it looks like we don't need to add
any code to the here-shipped alloca.h to get optimum performance.

Compile-tested on Windows in MSysGit.

Signed-off-by: Kirill Smelkov <k...@mns.spb.ru>
---
 compat/{vcbuild/include => win32}/alloca.h | 0
 config.mak.uname | 1 +
 2 files changed, 1 insertion(+)
 rename compat/{vcbuild/include => win32}/alloca.h (100%)

diff --git a/compat/vcbuild/include/alloca.h b/compat/win32/alloca.h
similarity index 100%
rename from compat/vcbuild/include/alloca.h
rename to compat/win32/alloca.h
diff --git a/config.mak.uname b/config.mak.uname
index 17ef893..67bc054 100644
--- a/config.mak.uname
+++ b/config.mak.uname
@@ -480,6 +480,7 @@ ifeq ($(uname_S),NONSTOP_KERNEL)
 endif
 ifneq (,$(findstring MINGW,$(uname_S)))
 	pathsep = ;
+	HAVE_ALLOCA_H = YesPlease
 	NO_PREAD = YesPlease
 	NEEDS_CRYPTO_WITH_SSL = YesPlease
 	NO_LIBGEN_H = YesPlease
--
1.9.0.msysgit.0.31.g74d1b9a.dirty
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at
https://www.mail-archive.com/git@vger.kernel.org/msg47570.html
Hi, thanks for checking out my first instructable. After a year with the mk1, I realised it's time to upgrade. The main things I concentrated on for the upgrade were LIGHTING and SIZE. On the mk1 I had a single throw switch which worked fine, but keeping up with modern tech, I decided to add touch buttons instead.

Sorry, no real plans or schematic exist for this build; it is all in my head. But I have every bit of faith that this is a walk in the park for you.

Step 1: BOM (parts)

- An enclosure (I used a box that held 12x cat food pouches)
- 3/4 PC fans (I salvaged a twin fan from my Xbox 360)
- Arduino Nano
- Nano DIP socket (so you don't solder your Arduino to the PCB like I did; makes it much easier to change)
- 3x TIP122 transistors
- 360 degree LED, 12v W5W
- 9g servo
- 5 inch piece of wire (16 gauge enamel copper will do)
- 2 inch wire (20-22 gauge)
- 12x 5mm LEDs (you choose the colour)
- 12x 180 ohm resistors
- 1 diode
- 8 pairs of female and male connectors
- perfboard, 3 inch x 4 inch
- 12v 3A+ DC supply adapter (I didn't have one, so I used an old laptop power brick with a buck step-down)
- 5v regulator
- 2x 10uF electrolytic capacitors
- air filter
- soldering iron
- foil
- 2x 6.6Mohm resistors
- heat shrink tubing
- hot glue gun
- superglue
- duct tape
- 3x roughly A4 pieces of reasonably strong cardboard

If you decide to add a backlit sign:

- CD case
- spray paint (any colour)
- electrical tape
- circle cutter (relevant to my design)
- white paper
- coloured light filter (red in my case)

Step 2: The Shell

Before you start putting it all together, make sure all the components will fit in your enclosure. Make sure your PCB and its components fit. If they don't, change the box or find a way to enlarge it.

Pull the box apart at the joints; my box is joined together with hot glue, so I just pulled those tabs. Now that your box is flat, measure the diameter of the fan and cut holes where you want the intake fans and exhaust fans with the circle cutter. After this, you could spray paint the exterior.

Step 3: The Circuitry

This is one of the easiest circuits out there, but it was a challenge for me. I'm not going to give exact details of the circuit, for two reasons:

1. If you're no stranger to electronics, then this is easy for you (you could probably make it better).
2. If you are learning, then this is brilliant for you, as it will give you a chance to develop.

If you understand transistors and resistors and have basic Arduino knowledge, this should be a walk in the park.

Using the CapacitiveSensor library: jump the 6.6Mohm resistor across pin 2 of the Arduino and pin 4. Attach a long wire to pin 2; this serves as the sense pin. This will be the power button (fans and all LEDs). For the light, jump the 6.6Mohm resistor across pin 2 of the Arduino and pin 6 and, like last time, attach a long wire to pin 6 - this will be the light sense. Keep in mind you can change the pin designation in the code, so the power could be light and vice versa.

The wires as they are will work as capsense, but we need pads to touch. For this we use a tiny bit of foil wrapped around exposed wire. This foil will be attached to your control panel. Test this circuit on a breadboard before soldering.

SOLDERING

I started with the Nano, followed by the 5v regulator and capacitors. After soldering in the transistors, I added the resistors for the LED array. Most of the time for me went into soldering and debugging 20%, writing code 45%, debugging code 35%.

Step 4: Control Panel and Backlit Sign

The control panel and backlit sign are on the same piece of material. Using a clear piece of a CD case and electrical tape, stick strips of the tape horizontally, leaving no gaps, and repeat the process with vertical strips, leaving no gaps. Draw your design on paper, or if you cannot draw you can always print one. REMEMBER to draw a mirrored image. Using extra tape, secure your design on top of the taped CD case and cut out the sections you do not require.
Now carefully remove your template design and stick it where you want it to go. REMEMBER: stick it under the panel (the side you won't touch), leaving enough room for buttons. Before spraying, also keep in mind the shape of your buttons and cover those bits, unless you want your buttons the same as the panel (I went for clear so I can attach status LEDs). Now you can paint. Using the back end of a 5mm drill bit, heat it up and make a hole; this is where your foil-wrapped wire will go.

Lighting: using a small perfboard, I soldered a few LEDs (the number changes with the brightness you want to achieve, or how bright the LEDs are to begin with). Now grab your painted panel and make sure it's dry. You should have (if you chose my design) the "A" see-through, as well as the circle around it. On the underside, where you have sprayed, stick a white piece of paper followed by the light filter of your choice.

Step 5: Light and Mechanism
The light for when I'm soldering, and its deployment method, is unique; the first on the internet to my knowledge, but I digress. First, extend the cathode and anode, make a tube to hold the light and secure it with hot glue. Second, make a tube slightly bigger than the light holder; this will act as a guide, so test it. Secure the bigger tube to a piece of card (the same size as the side of the enclosure). Cut the 16 gauge wire to size (wherever you decide to put the servo). Bend a 90 degree angle 3mm from each end, in the same direction. Using a 2 inch piece of thinner wire (20-22 gauge), make a small ring (small enough for the 16 gauge wire), twist the remaining wire together into a tail, and hot glue it just inside the smaller tube, behind the LED. Slip one end of the 90 degree bends into this ring and the other into the servo horn (cut off the arms and leave one; cut that one short, leaving two holes; the second hole will be where the other end of the 90 degree bend will go into.
If there is a fitting issue, heat the end and slip it in; the plastic will melt into place). Line up and secure with hot glue. Connect your servo to your Arduino and test; keep note of the two positions you need. Mine: ON 20, OFF 165.

Step 6: Software and Testing
This code is basic; you will have to develop it further yourself (learning curve and all that). If you do have any difficulties, I will offer help.

#include <CapacitiveSensor.h>

CapacitiveSensor cs_4_2 = CapacitiveSensor(4, 2);
const int ledPin1 = 13;

void setup()
{
  cs_4_2.set_CS_AutocaL_Millis(0xFFFFFFFF); // turn off autocalibrate on channel 1 - just as an example
  pinMode(ledPin1, OUTPUT);
  Serial.begin(9600);
}

void loop()
{
  static boolean lastSensorHit = false;
  static boolean LEDvalue = LOW;
  bool sensorHit = cs_4_2.capacitiveSensor(30) > 1100;
  if (sensorHit && !lastSensorHit) // now true, was false
  {
    LEDvalue = !LEDvalue; // toggle the LED
    digitalWrite(ledPin1, LEDvalue);
  }
  lastSensorHit = sensorHit;
  delay(100);
}

Step 7: Putting It Together
Put everything together, securing the fans with hot glue and making sure they are blowing in the same direction. I had drawn a skull and crossbones with 2 red LEDs for eyes, but I find it pointless now that the project is complete; also, I think it's messing up the airflow. It's stuck to the base with gorilla tape. Once again, I'm sorry not to give full details. The way I see it: if you're building one of these, you like doing this and want to learn more, and having a full manual doesn't do you any good in the long run. I will offer help if you're stuck.

6 Discussions
3 years ago: I love it. Now I just need to break down and buy a soldering gun/knife/torch... I'm not sure what it's called.
Reply, 3 years ago: the torch -... Circle cutter -... is this what you need?
Reply, 3 years ago: Almost... more like this: Soldering Iron
Reply, 3 years ago: you can get a cheap one for £15
3 years ago: Nicely done on your first instructable! This looks like a great little fume extractor.
Reply, 3 years ago: thank you very much
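The toggle-on-rising-edge pattern in the sketch above (the output flips only when the sensor transitions from untouched to touched, so holding a finger on the pad doesn't make things flicker) can be pulled out as plain C++ and tested on a desktop, with the capacitive reading simulated by a number. This is a sketch for illustration, not Arduino code:

```cpp
// Edge-detecting toggle, as in loop() above. update() plays the role of
// one pass through loop(): `reading > threshold` stands in for
// cs_4_2.capacitiveSensor(30) > 1100.
struct TouchToggle {
    bool lastHit = false;
    bool state = false;   // e.g. fans + LEDs power, or the light

    bool update(long reading, long threshold = 1100) {
        bool hit = reading > threshold;
        if (hit && !lastHit)
            state = !state;   // rising edge: toggle once per touch
        lastHit = hit;
        return state;
    }
};
```

Holding the pad keeps `hit` true, so only the first sample of each touch toggles the state.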
https://www.instructables.com/id/ARDUINO-fume-extractor-with-touch-sensitive-button/
I'm trying to log a few values to a text file, but every time I do it, it gives me this error:

[Error] 'save' was not declared in this scope

Here's the code:

#include "iostream"
#include "fstream"
using namespace std;

double n1, n2;

int main()
{
    n1 = 1;
    n2 = 2;
    save(n1, n2)
}

int save(double a, double b)
{
    ofstream log;
    log.open("test.txt");
    log << 1 << 2 << "\n";
    log.close();
}

The code in main needs to know that such a function exists. It needs either a declaration or a definition above it:

int save(double a, double b);

int main()
//...

By the way: you should eschew global variables, you're missing a semicolon after the call to save(n1, n2), and you're not using the parameters in your save function (you write the literals 1 and 2 instead of a and b).
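Putting the fixes together, here is one way the corrected code can look. It is a sketch: run() stands in for main() so the pieces stay testable, save() is declared before it is called, the missing semicolon and return values are added, and the parameters are actually written to the file:

```cpp
#include <fstream>
#include <string>

// Forward declaration: code earlier in the file can now call save().
int save(double a, double b);

// Plays the role of main() in the question.
int run() {
    double n1 = 1;
    double n2 = 2;
    return save(n1, n2);   // note the semicolon
}

int save(double a, double b) {
    std::ofstream log("test.txt");
    log << a << " " << b << "\n";   // use the parameters, not literals
    return log ? 0 : 1;             // report whether the write worked
}
```

Alternatively, simply move the whole definition of save() above main(); the separate declaration is only needed when you want to keep main() first.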
https://codedump.io/share/FUexiJ0sJ7uT/1/error-function-not-declared-in-scope
Den lör 16 juni 2018 kl 17:47 skrev Stefan Bodewig <bodewig@apache.org>: > On 2018-06-16, Gintautas Grigelionis wrote: > > > if services would become a new mechanism for adding tasks, modules > > would be of help by providing an API to discover them. > > Which may be a bit more tricky than it looks as currently tasks don't > provide metadata about themselves (tag name, namespace uri) as this is > handled by the surounding concepts like the antlib task. > Namespace is usually derived from package name and can be derived from module name. Tag name can be derived from service name, and it looks like provider method [1] could be used as a mechanism for aliasing. > Since modules are a jar with a twist, it seems that they could be > > abstracted as resources if one would like to expose what's happening > > under the hood/bonnet. > > The term resource might be overloaded and mean different things to you > and me. In Ant terms a module would be a ResourceCollection, probably. > I see. Thanks for correction. > > The question was more about whether that exposure would be useful (so > > that there could be ModuleSets, etc). > > Something like a ZipGroupFileset? > Exactly. Gintas [1]
https://mail-archives.eu.apache.org/mod_mbox/ant-dev/201806.mbox/%3CCALVNWHWJ1Mz5_CwRzdFMv3abAQAYO3B1miZqz+Nj8j7Gt5gK0A@mail.gmail.com%3E
new-run-webkit-tests: suppress extraneous pretty patch warnings Created attachment 89144 [details] Patch Comment on attachment 89144 [details] Patch Doesn't python logging already have built in ways to filter based on where the messages came from? (In reply to comment #2) > (From update of attachment 89144 [details]) > Doesn't python logging already have built in ways to filter based on where the messages came from? Yeah, but the messages are being logged in the same place in both cases, so I'm not sure how that would help? Comment on attachment 89144 [details] Patch View in context: > Tools/Scripts/webkitpy/layout_tests/port/base.py:178 > + def check_pretty_patch(self, logger=None): It seems wrong that we're calling this method multiple times. How about making a pretty_patch_available method for pretty_patch_text and check_pretty_patch to call? Created attachment 89431 [details] revise check_pretty_patch() to return a list of error strings So, the latest patch revises check_pretty_patch() to return a list of strings to log if prettypatch is not available, and an empty list == success. This works okay in this case, although the naming and return value is a little weird, since you have: self._pretty_patch_available = not bool(self._check_pretty_patch()). There are several other check_XX() routines in the port interface. check_image_diff() takes a flag indicating whether or not to log messages, and returns a bool. check_sys_deps() and check_build() unconditionally call log() internally as they see fit. It seems like all of the check_* methods should behave similarly. This is complicated by the fact that some of the routines log and fail, and some (like check_build()) might log warnings or informational messages but not fail. Extending all of these routines so that they return (error_messages, warning_messages, info_messages, sucess_or_failure) gets increasingly wonky. 
I feel like I'm inventing monads :) Alternatively, we can pass in a log object (like the first patch did). Or we can pass a flag (like check_image_diff() currently does, in the chromium port). Or we can provide two methods, one that logs and one that doesn't, for both pretty_patch and image_diff. Further thoughts?

Created attachment 89446 [details]: version using a boolean flag param.

(In reply to comment #8)
Ah, that's definitely more readable than my second version.

(In reply to comment #7)
> Created an attachment (id=89446) [details]
> version using a boolean flag param

Note that this version should have check_pretty_patch(logging=False) on line 759 (fixed during the commit).

Committed r83759: <>
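The boolean-flag approach that was eventually committed can be sketched in isolation. This is a hypothetical miniature, not the actual webkitpy code: the caller passes logging=False when a failed check should stay quiet, which is how the internal pretty_patch_text path suppresses the repeated warnings:

```python
import logging

_log = logging.getLogger(__name__)

class Port:
    """Miniature of the check_* pattern under discussion: the caller decides
    whether a failed check should emit log output."""

    def __init__(self, pretty_patch_ok=False):
        self._pretty_patch_ok = pretty_patch_ok

    def check_pretty_patch(self, logging=True):
        # The flag name shadows the logging module on purpose, mirroring
        # the check_pretty_patch(logging=False) call mentioned above.
        if not self._pretty_patch_ok and logging:
            _log.warning("Unable to find PrettyPatch; output will be plain")
        return self._pretty_patch_ok

    def pretty_patch_text(self, diff):
        # Internal call site: suppress the warning with logging=False.
        if not self.check_pretty_patch(logging=False):
            return diff  # fall back to the plain diff
        return "<pretty>" + diff + "</pretty>"
```

The alternative designs mentioned (passing a logger object, or returning a list of error strings) trade this flag for more moving parts, which is why the flag version read best.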
https://bugs.webkit.org/show_bug.cgi?id=58295
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.

How to use res_id in Odoo?
I can't understand that concept. Please give me a simple, clear example.

Answer: let's take an example:

def redirect_partner_form(self, cr, uid, partner_id, context=None):
    value = {
        'domain': "[]",
        'view_type': 'form',
        'view_mode': 'form,tree',
        'res_model': 'res.partner',
        'res_id': int(partner_id),
        'view_id': False,
        'context': context,
        'type': 'ir.actions.act_window',
    }
    return value

Here in Odoo, while opening a specific record of a model, we can pass the model name and its res_id to open that record. I hope this may help in your case.

About This Community
Odoo Training Center: access our e-learning platform and experience all Odoo apps through learning videos, exercises and quizzes. Test it now
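To make the answer concrete, the action dictionary can be built and inspected outside Odoo too. This plain-Python sketch (the helper name is illustrative, not an Odoo API) shows res_id selecting which single record of the model the form view opens:

```python
def open_record_action(model, record_id, context=None):
    """Illustrative helper mirroring the answer above: build an
    ir.actions.act_window dict. res_id is the database id of the one
    record of `model` that the form view should open."""
    return {
        'type': 'ir.actions.act_window',
        'res_model': model,
        'res_id': int(record_id),   # must be an integer database id
        'view_type': 'form',
        'view_mode': 'form,tree',
        'view_id': False,
        'domain': "[]",
        'context': context or {},
    }

action = open_record_action('res.partner', '42')
print(action['res_id'])  # 42
```

Without res_id (or with res_id falsy), the same action would open the model's list/form views on all records matching the domain instead of jumping to one record.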
https://www.odoo.com/forum/help-1/question/how-to-use-res-id-in-odoo-102004
HOWTO take a screenshot

Taking a screenshot of your application or desktop in GNU/Linux can be done in countless ways, using a broad variety of specialized graphical screenshot programs, advanced image editors and small terminal tools. Here are some of the many ways you can take a screenshot in a Linux environment.

The Print Screen Button

While it may seem obvious that pressing the Print Screen key (sometimes labelled "PrtSc") will let you take a screenshot in XFCE4 and KDE Plasma, it's actually something few people know. Now you know: pressing that fine key, located at the right-hand end of the top row of your keyboard (the first key after the F1-F12 row), will take a screenshot and open a small program which allows you to open that full-screen screenshot in a program like Krita or GIMP, or save it to a file. This is what the dialog you get when pressing Print Screen in Xfce4 looks like:

Using a graphical program

The functionality which allows Xfce to take a screenshot of the entire screen when the Print Screen key is pressed is provided by a program called xfce4-screenshooter, and it can be used as a stand-alone tool for taking screenshots of the entire screen, an active window or a region of the screen, with a set delay before capturing. That binary can be assigned to a hotkey in the settings. It does not appear to be anywhere in the menus of Xfce or any other desktop environment.

Every single desktop environment appears to have its own GUI tool which works like the xfce4-screenshooter described above, and these can be installed and used on any desktop environment. Most of them are actually NOT as good as the screenshot tools provided by KDE Plasma and Xfce.

- Deepin has a tool called deepin-screenshot which can be installed on any desktop. It will, unlike other similar tools, turn the mouse cursor into a selection tool which can be used to select an area of the screen and save a screenshot. We do NOT like this tool at all. Everything else is easier and quicker.
- MATE's got a tool called mate-screenshot. It will take a screenshot and open a dialog where you can "Save" or "Copy to Clipboard" or take a "New" screenshot. It's fine, but MATE's tool lacks the option of directly opening a screenshot in another program like Krita or GIMP.

- KDE Plasma activates a screenshot program called spectacle when the PrintScreen button is pressed. spectacle lets you capture screenshots of the entire screen, a single window, the window under the mouse pointer or a selected area, with or without a delay. It is the most feature-complete and easy-to-use screenshot program of all, and a good choice to bind to the PrtSc key in any desktop environment.

Most graphics programs and editors have a screenshot function built in.

- The simple paint program KolourPaint, which you probably have if you are using KDE Plasma, allows you to take a screenshot by selecting the menu File, then Acquire screenshot. It allows you to select a delay but not a specific window; it can only be used to take a screenshot of the entire screen.

- You can take a screenshot with GIMP by going to the main menu File, then Create, then Screenshot. GIMP's dialog box for taking screenshots has the option of taking a screenshot of the entire screen or a single window, with or without window decorations. It supports taking a screenshot with a delay.

Using the command-line in a terminal

If you have ImageMagick installed, and you probably do (click here to install on Ubuntu if you don't), then you can use its import program (see its manual page) to take a snapshot of your desktop. Use it like this:

import -window root screenshot.png

You can also use xv. xv is in principle for interactive image display, but it secretly has quite a few image conversion and manipulation features listed in xv --help (though not in its manual page, for some reason). You can use it to take a screenshot with:

xv -grabdelay 2 myimage.jpg

There is also scrot.
It is a standalone program you probably don't have pre-installed, but it's easy to install since it's in all distributions' repositories. It's been around since 2000. One interesting advantage it has is that you can save filenames using patterns; scrot '%Y-%m-%d_$wx$h.png' will create a screenshot file named something like 2019-10-30_2560x1024.png, which makes it easy to know the time and resolution later on.

The oldest program for taking a screenshot of X from a terminal that's still around is probably xwd. It's been around since forever and comes with X, so you probably have it installed. You probably don't want to use it, but if you do, you can use it like this:

xwd -root -out test.xwd

Screenshots created by xwd have to be saved as .xwd files. GIMP supports those, and so does some obscure thing called xwud, but that's about it.

Taking a screenshot of a framebuffer terminal

The best option for taking a screenshot of your framebuffer terminal at VT1 appears to be fbgrab. Using it once you have it is straightforward:

fbgrab screenshot.png

However, getting it is not. Distributions do not come with it, and most don't have it in their repositories. Thus, you need to get it from its homepage and compile it yourself.
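As a small illustration of why scrot's name patterns are handy, here is a hypothetical POSIX-shell helper that builds the same kind of date-and-resolution filename for tools like import, which have no pattern support. The resolution is passed in by hand here as an assumption (in practice you might read it from xdpyinfo):

```shell
#!/bin/sh
# Mimic scrot's '%Y-%m-%d_$wx$h.png' naming for pattern-less tools.
shot_name() {
    width=$1
    height=$2
    printf '%s_%sx%s.png\n' "$(date +%Y-%m-%d)" "$width" "$height"
}

# Usage (commented out; needs a running X session):
# import -window root "$(shot_name 2560 1024)"
shot_name 2560 1024
```

Running it prints a name like 2019-10-30_2560x1024.png with today's date.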
https://linuxreviews.org/HOWTO_take_a_screenshot
javascript typeof operator

JavaScript's typeof operator tells you what type of data you're dealing with:

var BooleanValue = true;
var NumericalValue = 354;
var StringValue = "This is a String";
alert(typeof BooleanValue)   // displays "boolean"
alert(typeof NumericalValue) // displays "number"
alert(typeof StringValue)    // displays "string"

And does it work on objects too?

since this is the #1 google result for javascript typeof, it should probably include a reasonable list of javascript types, such as "object", "undefined", "function"

Mozilla Javascript Doc (link)

Please read the four articles by Matthias Reuter on finding and converting objects in JavaScript instead of the "trial and error" method!!!

The following site gives more details on the "typeof" operator: ...

i agree with cory, such a waste of time. Google: FAIL.

typeof is almost never needed in good OOP. The times it is "needed" can usually be avoided by using a simple flag, and the remaining times are almost always a result of poor encapsulation and piss-poor programming ability. The thing is, JavaScript isn't OOP; it's actually not even OBP as some people believe. JavaScript is a type-insensitive language that has few rules on object malleability. That means you can change almost any aspect of any "new Object()" anywhere in the program. You can do OOP with this, but the type insensitivity upon which it relies means that any object will always be 'type' Object. Now, recall that I said that good OOP tries to avoid the need for typeof? Well, I didn't explain that the reason people use typeof is that they want to know what their OBJECT's type is. Now wait a minute: JavaScript's objects are all type Object! So you're saying you don't know the TYPE of a VARIABLE that YOU defined (unlike objects, which are defined by the program). Stop using overloaded functions.
JavaScript does NOT support function overloading; JavaScript ONLY supports "argument insensitive" functions (meaning you can leave some arguments off and check whether they are undefined, NOT that they are false):

("0" == false) && (0 == false) && ([0] == false)

Three very good reasons to use (0 == undefined) (for argument's sake, null is the value that undefined holds... so either or). JavaScript has type-insensitive functions so you can pass OBJECTS that have the same INTERFACE without hassle, not so you can avoid writing an additional function.

Stop redefining namespaces (variables)! A variable should NEVER change its type... if it was a string at the beginning of the program, it should be a string at the end of the program. The type insensitivity of variables exists mostly so functions can pass fail flags around (such as returning a -1) instead of returning null. Note, however, that even if you pass a fail flag to a variable that should hold a string, you are not changing the variable's type (other than from being undefined). By checking if the fail flag is there before using it as a string, you inherently confirm that the object is, indeed, a string.

Zoroastro, you said "A variable should NEVER change its type... if it was a string at the beginning of the program, it should be a string at the end of the program." How about when I want to increment a counter on a page? I read the existing value with innerHTML or get the value of a form field. Either gives me a string. If I don't change the type of the variable to a number, I concatenate rather than add.

since this is the #1 google result for javascript typeof, it should probably include a reasonable list of javascript types, such as "object", "undefined", "function"

man, you are honest!! this is because you are rated #1 ;)

Zoroastro -- What about the case in which you want to know whether a passed-in parameter is of the right type?
This especially applies in the case of something like a jQuery plugin, where oftentimes the end-user of the plugin can pass in custom (and quite possibly wrong) settings.

Daniel, completely agree; surely typeof is essential when relying on parameters passed in by an unknown user.

@Zoroastro "javascript isn't OOP" - true. It's way better than OOP! Also, you're confusing OOP and static type safety. Smalltalk is a hugely influential OO language, but like JS it is dynamically typed: a variable may store anything. The need to check the type of an object is not automatically a sign of poor design. Yes, often the operation to be performed depends on the type of a single object, and so by calling a method on the object we can avoid an explicit "type-switch": just call the method and let the object decide how to implement it. But it is also often true that the operation to be performed depends on the types of two or more objects. Traditional OO (as exemplified by Java and C++) has no answer to this problem. E.g. a "window" receives an "event". The operation to perform depends on both the type of the window and the type of the event. If you make a "handleEvent" method on the window, you need a type-switch to examine the type of event. OO is not quite the ultimate solution to all programming problems that you may have been told! :)

Zoroastro has demonstrated fairly well that he doesn't understand much about JavaScript at all. For instance, he says that EVERY object in JavaScript is typeof "object". That is simply not true. What about a function?
var func = Function();
"object" == typeof func; // false
func instanceof Object;  // true

Since I have effectively knocked down the central argument of all that gobbledygook, that every JS object is type "object", I'll stop there.

typeof should be avoided for other reasons.

It is very useful when you don't have 100% control of the environment. Done.

@Zoroastro Or... here's a thought: use the === operator to check value AND type.

@Zoroastro: That's outrageous. What about a JavaScript model setter that needs to check for associated user-defined hooks (functions) that may or may not be present for a given attribute? You need to determine the type to avoid executing the function in cases where it is absent. Just had to join in.

@Zoroastro There are times when you can say "this is" in coding, but using the typeof operator is not one of them. If you had a point, it got well and truly lost in your incoherent, misleading, opinionated and fairly inaccurate rant. I won't bother to knock down each of your arguments as I'm busy; my role here is to highlight this for others who may think you know what you're talking about, not to get sucked in.
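Since several commenters ask for the actual list of results typeof can return, here is a quick runnable rundown (ES5-era values; a sketch, not exhaustive for exotic host objects):

```javascript
// The typeof results available in ES5 JavaScript.
var samples = {
  boolean:   typeof true,
  number:    typeof 354,
  string:    typeof "This is a String",
  object:    typeof {},            // also: typeof null, typeof []
  function:  typeof function () {},
  undefined: typeof undefined,
};

// The point made above: functions are NOT typeof "object"...
console.log(typeof Function() === "object");  // false
// ...yet a function still IS an Object:
console.log(Function() instanceof Object);    // true

console.log(samples);
```

Note the two famous quirks: typeof null is "object", and arrays are also "object" (use Array.isArray for those).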
http://hubpages.com/technology/javascript-typeof-operator
On Fri, 2010-06-11 at 05:20 +0100, Ben Dooks wrote:
> [...]

Well, in this case the goal is to unify things, both within ARM and
between architectures, so I fail to see Linus complaining about that :-)

> What do people think of just changing everyone who is currently using
> clk support to using this new implementation?

It's risky I suppose... there aren't many users of struct clk in powerpc
land today (I think only one SoC platform uses it upstream at the
moment) so I won't mind getting moved over at once, but on ARM you have
to deal with a lot more cruft that might warrant a more progressive
migration approach... but I'll let you guys judge.

> > struct clk {
> >         const struct clk_ops *ops;
> >         unsigned int enable_count;
> >         struct mutex mutex;
>
> I'm a little worried about the size of struct clk if all of them
> have a mutex in them. If I'm right, this is 40 bytes of data each
> clock has to take on board.
>
> Doing the following:
>
> find arch/arm -type f -name "*.c" | xargs grep -c -E "struct.*clk.*=.*{" | grep -v ":0" | awk 'BEGIN { count=0; [...]
>
> > diff --git a/include/linux/clk.h b/include/linux/clk.h
> > index 1d37f42..bb6957a 100644
> > --- a/include/linux/clk.h
> > +++ b/include/linux/clk.h
> > @@ -3,6 +3,7 @@
> >   *
> >   * Copyright (C) 2004 ARM Limited.
> >   * Written by Deep Blue Solutions Limited.
> > + * Copyright (c) 2010 Jeremy Kerr <jeremy.kerr@canonical.com>
> >   *
> >   * This program is free software; you can redistribute it and/or modify
> >   * it under the terms of the GNU General Public License version 2 as
> > @@ -11,36 +12,125 @@
> >  #ifndef __LINUX_CLK_H
> >  #define __LINUX_CLK_H
> >
> > -struct device;
> > +#include <linux/err.h>
> > +#include <linux/mutex.h>
> >
> > -/*
> > - * The base API.
> > + */
> > +#ifdef CONFIG_USE_COMMON_STRUCT_CLK
> > +
> > +/* If we're using the common struct clk, we define the base clk object here,
> > + * which will be 'subclassed' by device-specific implementations. For example:
> > + *
> > + *   struct clk_foo {
> > + *           struct clk;
> > + *           [device specific fields]
> > + *   };
> > + *
> > + * We define the common clock API through a set of static inlines that call the
> > + * corresponding clk_operations. The API is exactly the same as that documented
> > + * in the !CONFIG_USE_COMMON_STRUCT_CLK case.
> > + */
> >
> > +struct clk {
> > +       const struct clk_ops *ops;
> > +       unsigned int enable_count;
> > +       struct mutex mutex;
> > +};
>
> how about defining a nice kerneldoc for this.
>
> > +#define INIT_CLK(name, o) \
> > +       { .ops = &o, .enable_count = 0, \
> > +         .mutex = __MUTEX_INITIALIZER(name.mutex) }
>
> how about doing the mutex initialisation at registration
> time, will save a pile of non-zero code in the image to mess up
> the compression.
>
> > +struct clk_ops {
> > +       int (*enable)(struct clk *);
> > +       void (*disable)(struct clk *);
> > +       unsigned long (*get_rate)(struct clk *);
> > +       void (*put)(struct clk *);
> > +       long (*round_rate)(struct clk *, unsigned long);
> > +       int (*set_rate)(struct clk *, unsigned long);
> > +       int (*set_parent)(struct clk *, struct clk *);
> > +       struct clk* (*get_parent)(struct clk *);
>
> should each clock carry a parent field, and this is returned by
> the get parent call?
>
> > +};
> > +
> > +static inline int clk_enable(struct clk *clk)
> > +{
> > +       int ret = 0;
> > +
> > +       if (!clk->ops->enable)
> > +               return 0;
> > +
> > +       mutex_lock(&clk->mutex);
> > +       if (!clk->enable_count)
> > +               ret = clk->ops->enable(clk);
> > +
> > +       if (!ret)
> > +               clk->enable_count++;
> > +       mutex_unlock(&clk->mutex);
> > +
> > +       return ret;
> > +}
>
> So we're leaving the enable parent code now to each implementation?
>
> I think this is a really bad decision, it leaves so much open to bad
> code repetition, as well as something the core should really be doing
> if it had a parent clock field.
>
> > +static inline void clk_disable(struct clk *clk)
> > +{
> > +       if (!clk->ops->enable)
> > +               return;
>
> so if we've no enable call we ignore disable too?
>
> also, we don't keep an enable count if these fields are in use;
> could people rely on this being correct even if the clock has
> no enable/disable fields?
>
> Would much rather see the enable_count being kept up-to-date
> no matter what, given it may be watched by other parts of the
> implementation, useful for debug info, and possibly useful if
> later in the start sequence the clk_ops get changed to have this
> field.
>
> > +       mutex_lock(&clk->mutex);
> > +
> > +       if (!--clk->enable_count)
> > +               clk->ops->disable(clk);
> > +
> > +       mutex_unlock(&clk->mutex);
> > +}
> > +
> > +static inline unsigned long clk_get_rate(struct clk *clk)
> > +{
> > +       if (clk->ops->get_rate)
> > +               return clk->ops->get_rate(clk);
> > +       return 0;
> > +}
> > +
> > +static inline void clk_put(struct clk *clk)
> > +{
> > +       if (clk->ops->put)
> > +               clk->ops->put(clk);
> > +}
>
> I'm beginning to wonder if we don't just have a set of default ops
> that get set into the clk+ops at registration time if these do
> not have an implementation.
>
> > +static inline long clk_round_rate(struct clk *clk, unsigned long rate)
> > +{
> > +       if (clk->ops->round_rate)
> > +               return clk->ops->round_rate(clk, rate);
> > +       return -ENOSYS;
> > +}
> > +
> > +static inline int clk_set_rate(struct clk *clk, unsigned long rate)
> > +{
> > +       if (clk->ops->set_rate)
> > +               return clk->ops->set_rate(clk, rate);
> > +       return -ENOSYS;
> > +}
> > +
> > +static inline int clk_set_parent(struct clk *clk, struct clk *parent)
> > +{
> > +       if (clk->ops->set_parent)
> > +               return clk->ops->set_parent(clk, parent);
> > +       return -ENOSYS;
> > +}
>
> We have an interesting problem here which I believe should be dealt
> with: what happens when the clock's parent is changed with respect
> to the enable count of the parent.
>
> With the following instance:
>
> we have clocks a, b, c;
> a and b are possible parents for c;
> c starts off with a as parent
>
> then the driver comes along:
>
> 1) gets clocks a, b, c;
> 2) clk_enable(c);
> 3) clk_set_parent(c, b);
>
> now we have the following:
>
> A) clk a now has an enable count of non-zero
> B) clk b may not be enabled
> C) even though clk a may now be unused, it is still running
> D) even though clk c was enabled, it isn't running since step 3
>
> this means that any driver that is using a multi-parent clock
> has to deal with the proper enable/disable of the parents (this
> is going to lead to code repetition, and I bet people will get it
> badly wrong).
>
> I believe the core of the clock code should deal with this, since
> otherwise we end up with the situation of the same code being
> repeated throughout the kernel.
>
> > +static inline struct clk *clk_get_parent(struct clk *clk)
> > +{
> > +       if (clk->ops->get_parent)
> > +               return clk->ops->get_parent(clk);
> > +       return ERR_PTR(-ENOSYS);
> > +}
> >
> > +#else /* !CONFIG_USE_COMMON_STRUCT_CLK */
> >
> >  /*
> > - * struct clk - an machine class defined object / cookie.
> > + * Global clock object, actual structure is declared per-machine
> >   */
> >  struct clk;
> >
> >  /**
> > - *_enable - inform the system when the clock source should be running.
> >   * @clk: clock source
> >   *
> > @@ -83,12 +173,6 @@ unsigned long clk_get_rate(struct clk *clk);
> >   */
> >  void clk_put(struct clk *clk);
> >
> > -
> > -/*
> > - * The remaining APIs are optional for machine class support.
> > - */
> > -
> > -
> >  /**
> >   * clk_round_rate - adjust a rate to the exact rate a clock can provide
> >   * @clk: clock source
> > @@ -125,6 +209,27 @@ int clk_set_parent(struct clk *clk, struct clk *parent);
> >   */
> >  struct clk *clk_get_parent(struct clk *clk);
> >
> > +#endif /* !CONFIG_USE_COMMON_STRUCT_CLK */
> > +
> > +struct device;
> > +
> > +/**
> > + *_get_sys - get a clock based upon the device name
> >   * @dev_id: device name
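The enable/disable refcounting being reviewed above boils down to a simple pattern: the hardware op fires only on the 0-to-1 and 1-to-0 edges of enable_count. Here it is as a user-space sketch (hypothetical miniature types, single-threaded so the mutex is elided; this is not the kernel code):

```c
/* Miniature of the common struct clk under discussion. */
struct clk_ops {
    int  (*enable)(void);
    void (*disable)(void);
};

struct clk {
    const struct clk_ops *ops;
    unsigned int enable_count;
};

static int clk_enable(struct clk *clk)
{
    int ret = 0;
    if (clk->enable_count == 0 && clk->ops->enable)
        ret = clk->ops->enable();   /* only the first user touches hardware */
    if (ret == 0)
        clk->enable_count++;
    return ret;
}

static void clk_disable(struct clk *clk)
{
    if (clk->enable_count == 0)
        return;                     /* unbalanced disable: ignore */
    if (--clk->enable_count == 0 && clk->ops->disable)
        clk->ops->disable();        /* last user gates the clock */
}

/* Fake hardware for demonstration. */
static int hw_state = 0;
static int  hw_enable(void)  { hw_state = 1; return 0; }
static void hw_disable(void) { hw_state = 0; }
static const struct clk_ops fake_ops = { hw_enable, hw_disable };
static struct clk fake_clk = { &fake_ops, 0 };
```

The reparenting problem raised in the review is exactly that this counter lives per clock: switching c from parent a to parent b moves nothing between a's and b's counts unless set_parent does it explicitly.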
http://lkml.org/lkml/2010/6/11/52
In the current implementation of Rack::Builder, the method :use dictates that middlewares must be instances of some class, by internally invoking the :new initializer. Thus Rack's Builder demands delegating the creation of the middleware object, and demands that a special class exist for the middleware. Yet in most cases a middleware conforms to the singleton pattern, or is simply a callable object. Moreover, it is often clearer to plug in an already existing middleware, created outside of the builder. Is middleware worse than an ordinary app in this respect? Unneeded delegation of creation makes for unneeded coupling and unneeded restrictions, as Vidar H. wrote in his post "Cohesion vs. coupling".

Looking at the implementation of Builder's methods, we can do something like this.

First variant:

module MiddleHello
  # this method exists only to prevent duplicating code in the second example
  def self.invoke app, env, who, whom
    status, header, resp = app.call env
    add_str = ""
    if resp.kind_of? Rack::Response
      new_resp = Rack::Response.new(add_str, status, header)
      new_resp.write resp.body
      return new_resp.finish
    end
    [status, header, (add_str + resp)]
  end

  def self.creator app
    @app = app
    self
  end

  def self.call env
    invoke @app, env, self.method(:call), "#{self.class}: #{self}"
  end
end

Second variant:

middle_hello = proc do |app|
  proc do |env|
    # just for preventing duplication of code
    MiddleHello.invoke app, env, Proc, Proc
  end
end

builder = Rack::Builder.new do
  use ...
  use ...
  map '/some_path' do
    run MiddleHello.method(:creator)  # :run instead of :use !
    run middle_hello                  # :run instead of :use !
    run OurApp
  end
end

It works. For now I cannot find a better solution without changing the Builder class. But it looks like a hack. First, there are no guarantees that the implementation of Builder's methods will not change in the future. Second, an additional object is still needed to create a middleware object. Third, and maybe most important, this breaks the semantics of Builder's language.
And it loses the distinction between declarations of middlewares and ordinary apps. Of course we could rework the Builder or subclass it. But could that cause conflicts with hostings where Rack is used for administration?
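To make the use-requires-a-class point concrete, here is a self-contained sketch (no Rack gem needed; TinyBuilder and Shouter are invented names for illustration) of how a Builder defers creation via .new, which is exactly why an already-built callable can only be wired in with run:

```ruby
# Minimal stand-in for Rack::Builder, showing why #use insists on a class:
# it always stores a deferred `.new(app)` call.
class TinyBuilder
  def initialize
    @middlewares = []
    @app = nil
  end

  # Mirrors Rack::Builder#use: the middleware is instantiated later, by us.
  def use(middleware_class, *args)
    @middlewares << proc { |app| middleware_class.new(app, *args) }
  end

  def run(app)
    @app = app
  end

  # Wrap the app right to left, as Rack::Builder#to_app does.
  def to_app
    @middlewares.reverse.inject(@app) { |app, wrapper| wrapper.call(app) }
  end
end

# A conventional middleware class: the builder instantiates it for us.
class Shouter
  def initialize(app)
    @app = app
  end

  def call(env)
    @app.call(env).upcase
  end
end

app = proc { |env| "hello #{env}" }

builder = TinyBuilder.new
builder.use(Shouter)   # works: Shouter is a class, .new is called internally
builder.run(app)
stack = builder.to_app

# An already-built callable middleware cannot go through #use, because
# #use would call .new on it; that is the restriction discussed above.
```

Calling `stack.call("world")` runs the proc app through the wrapped Shouter and returns "HELLO WORLD", which is the same wrapping that Rack::Builder performs internally.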
https://www.ruby-forum.com/t/rack-must-not-dictate-how-to-create-a-middleware/174453
CC-MAIN-2021-25
refinedweb
346
61.93
The user’s question was Hi, We have dynamic indexes in elastic search like signal-YYYY.MM.DD => (which resolved as signal-2017.03.31) I was try hard to find the way to specify the dynamic index in nifi processor. I already have a field in my data which i want to use as dynamic index. for example "index_name". I just wanted to use that data field in the index parameter of nifi processor. It might be simple, but i tried hard with different options like '${index_name}' , "${index_name}" , '${INDEX_NAME}' , $INDEX_NAME , $index_name Can anyone just refer me how to use data field in the nifi's elasticsearch processor property of index name? As I understand the user’s question, he has a key-value pair in his file, index_name, that he wants to extract and use as the index for the PutElasticSearch Processor. I responded with how I thought best to reach this goal, which was using the ExecuteScript processor to extract the field and set it as an attribute. So our goal can be stated as… Given a flowfile with a key-value pair, find that key and set the value as an attribute to be used later in the flow. I’m going to make a few assumptions here since we don’t have the exact use case, mainly that the file is JSON. Since we are working with Elasticsearch, the flowfile here is pretty simple: a single JSON object. Creating the flow Let’s first start with creating our flow. We are going to pick up a local JSON file, grab a field from it and then set it as an attribute. We’ll then insert into Elasticsearch using the PutElasticSearch processor. Pretty simple flow. This is the configured flow. I’ve copied the Groovy script below into the ExecuteScript Processor’s ‘Script Body’ property. We are going to auto-terminate failures since this is just a quick example. I have the basic Elasticsearch Docker image running locally that I’m connecting to and am going to read in from a local directory, /tmp/nifi.rocks/example.json.
The file contents are: { "data": "some-data", "index_name": "test-index.2017.04.01", "an_int": 1, "more_data": "yet more data" } The ExecuteScript Processor - Groovy This processor ends up being pretty simple. - Read in the flow file - Find the field - Set the flowfile attribute - Transfer to success Groovy code for the ExecuteScript:

import org.apache.commons.io.IOUtils
import java.nio.charset.StandardCharsets
import groovy.json.JsonSlurper

def flowFile = session.get()
if (!flowFile) return
def jsonSlurper = new JsonSlurper()
def indexName = ''
session.read(flowFile, { inputStream ->
    def row = jsonSlurper.parseText(IOUtils.toString(inputStream, StandardCharsets.UTF_8))
    indexName = row.index_name
} as InputStreamCallback)
flowFile = session.putAttribute(flowFile, 'index_name', indexName)
session.transfer(flowFile, REL_SUCCESS)

Now if we run the edited flow, with the PutElasticsearch index set to ${index_name}, we should be able to query the _cat endpoint of our local Elasticsearch and see the new index! Again, this curl is specific to the default Elasticsearch Docker image running in development mode. The default password is ‘changeme’. curl -u elastic And we get an output of: yellow open .monitoring-es-2-2017.04.02 XK092VdTQJCjl9Gjrd396w 1 1 4072 98 1.7mb 1.7mb yellow open .monitoring-data-2 NGf5b0nyR1WV13cjs0oNZA 1 1 2 0 4.1kb 4.1kb yellow open test-index.2017.04.01 wnVIqCZhTPOK1F-5A1YVUg 5 1 1 0 5.5kb 5.5kb Success!! We have inserted our index! If you see any interesting problems on the mailing list or have your own issues you’d like tackled, let us know via email or in the comments below!
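For readers who want to check the extraction logic without a NiFi/Groovy environment, the same find-the-key step can be sketched in plain Python (the function name here is made up; this is not the NiFi API):

```python
import json

def extract_index_name(flowfile_bytes, key="index_name"):
    """Parse a one-object JSON flowfile and pull out the index field,
    mirroring what the Groovy JsonSlurper snippet does."""
    record = json.loads(flowfile_bytes.decode("utf-8"))
    return record[key]

# The same example payload used in the article.
flowfile = b'{"data": "some-data", "index_name": "test-index.2017.04.01", "an_int": 1, "more_data": "yet more data"}'
```

In the NiFi flow, the returned value is what gets stored as the `index_name` attribute and later referenced as `${index_name}` in the PutElasticsearch processor.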
http://www.nifi.rocks/executescript-groovy-example/
CC-MAIN-2017-17
refinedweb
601
58.69
Can two classes exist in a single .java file? Reply quick, please. Yes they can... you can have unlimited classes in one file, but you can only have one public class per file, and that class needs to have the same name as the Java file. Public?? But the book I'm learning from has more than one public class. I have attached the two pages I want to show you. It is a little difficult to read, but open it up (Right click --> Open with --> Paint) with Paint. Thank you. that's not in one java file. they print the code after each other, but they assume you know where the new file starts. you mean :- the first class-one file and so on? if so, which one am i supposed to compile and run with javac and java? You compile in the order of dependency, I'm not sure if javac handles that in some way, try it or google. If ClassA depends on ClassB, compile ClassB first and then ClassA. To run you need your main class, i.e. the class where you have the method "public static void main(String[] args)", if that is ClassB then you write... java ClassB Thank you! I am closing this thread. Further doubts of mine ( I'm a newbie **:-(** ) will be discussed in other threads. ...
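To make the answers above concrete, here is a minimal two-classes-in-one-file example (class names are invented for illustration). Only `Main` is public, so the file must be named Main.java; `Helper` is package-private and perfectly legal in the same file:

```java
// File: Main.java -- only one public class is allowed per file,
// and the file name must match it.
public class Main {
    public static void main(String[] args) {
        System.out.println(Helper.greet("world"));
    }
}

// A second, package-private class in the same file: allowed.
class Helper {
    static String greet(String name) {
        return "Hello, " + name + "!";
    }
}
```

Compiling Main.java produces both Main.class and Helper.class; you run the one with the `main` method via `java Main`.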
https://www.daniweb.com/programming/software-development/threads/407401/is-this-allowed
CC-MAIN-2017-43
refinedweb
226
92.93
im trying to get a list of ipsec tunnels from each template stack i have. but anytime i attempt to use the class pandevice.network.IpsecTunnel i receive an error that pandevice doesnt have an attribute network.... according to the documentation here: it does have a class pandevice.network.IpsecTunnel do i need to define something before i can reference this class ? import sys import io import json import pandevice from pandevice import base from pandevice import firewall from pandevice import panorama from pandevice import objects from pandevice import policies from pandevice.panorama import Panorama devices = [ device1, device2, device3 ] templates = [Temp1, Temp2, Temp3, Temp4] auth_key = "mykey" fw = firewall.Firewall() #to define which panorama you will connect to for device in devices: pano = panorama.Panorama(device, api_key=auth_key) # when you need to fetch objects in the Device Group for temp in templates: tempgrp = pandevice.panorama.Template(temp) pano.add(tempgrp) ipsecgrp = pandevice.network.IpsecTunnel.refreshall(panogrp) pano.add(ipsecgrp) objects_list = [] for object_element in ipsec_object_list: obj = { 'name': object_element.name, 'tunnel_interface': object_element.tunnel_interface, 'type': object_element.type, } objects_list.append(obj) for object in objects_list: print (object) print() Looks like the problem is that you haven't imported pandevice.network at all. Once you do that, you should be ok. Two other comments: First is that you shouldn't do pano.add(ipsecgrp) after the refresh. The objects are already in the pandevice object tree (as children of the template), so you don't need to add them in again. If you want to print out details of an object, you can use the .about() function that all pandevice objects have. If you just want to see a few params and not everything, then making a separate dict and printing that is fine. thank you for the additional guidance.. ill work on this more today and let you know my!
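The underlying rule is general Python behavior, not specific to pandevice: importing a package does not automatically bind its submodules as attributes, so `pandevice.network` is undefined until `from pandevice import network` (or `import pandevice.network`) has run. A standard-library sketch of the same effect:

```python
import logging

# Right after `import logging`, the `handlers` submodule is usually not
# yet bound as an attribute of the package...
print(hasattr(logging, "handlers"))  # typically False in a fresh interpreter

# ...until it is imported explicitly, which binds it onto the package.
import logging.handlers
print(hasattr(logging, "handlers"))
```

The same applies in the script above: adding `from pandevice import network` makes `network.IpsecTunnel` (or `pandevice.network.IpsecTunnel`) resolvable.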
https://live.paloaltonetworks.com/t5/Automation-API-Discussions/IpsecTunnel-api-refresh/m-p/280821/highlight/true
CC-MAIN-2020-16
refinedweb
305
51.44
12-02-2010 09:24 AM The new SDK (0.9.1) does not add any new features, just support of Burrito and some very minor bug fixes. Only about a month to go (early January) to submit to App World. A lot has to happen between now and then. I hope everyone is ready to hook themselves to an IV caffeine drip 12-02-2010 01:03 PM I noticed that there is no size set for any of these buttons: qnxButton = new qnx.ui.buttons.Button(); qnxButton.x = 10; qnxButton.y = 10; qnxButton.setSize(75,42); this.addChild(qnxButton); I have a qnx button on top of a sprite object and it shows up fine. See if that fixes it. 12-02-2010 01:18 PM You can use MX components, even with 0.9.0. You just need a top-level application. I'm working on an app right now and it's exactly like developing for AIR anywhere else. I created a normal AIR application and then switched the project to use the SDK for BlackBerry. There was some problem creating a project using that SDK that was anything but a .as file for the main app. And add the compiler args (your library path may vary): -locale en_US -library-path+="C:/Program Files/Adobe/Adobe Flash Builder 4/sdks/blackberry-tablet-sdk-0.9.0/frameworks/loca And configure a launch profile of course. Then you should be able to deploy something like this: <?xml version="1.0" encoding="utf-8"?> <s:Application xmlns:fx="" xmlns:s="library://ns.adobe.com/flex/spark" xmlns: <mx:Button </s:Application> 12-02-2010 01:32 PM Have you been able to check the resulting SWF filesize from this method against the Sprite-only method? I don't have the number with me, but the file size difference was significant (~2x) and in the case of the WindowedApplication base class, the load time in the simulator was just too slow. 
I would rather work in MXML land, because I feel I am writing less code and have increased productivity, but the download and load-up by this method did not seem to warrant proceeding further in that direction. If you can show that an MXML approach is still lightweight, I would love to change gears and go back to that approach. Also had trouble setting the SWF parameters above the main Sprite definition in any approach other than the "standard" approach. 12-02-2010 01:46 PM If you want to use MX or Spark components, you MUST have an Application component as root. MX and Spark components inherit from the UIComponent class, which initialises a lot of things and communicates with a top Manager. It is not possible to add Sprites on MX or Spark components (except UIComponent with some tricks), because they don't use the same display events. And you can't use MX or Spark components on Sprites, because components need an initialisation to display, and Sprite can't do that. When developing real desktop AIR programs or web Flex programs, if you use Shared Libraries, your SWF will be >40kB. If you add static libraries, it will be bigger. 12-02-2010 01:57 PM - edited 12-02-2010 02:00 PM @jtegen Planning to check the size of similar-function apps with Sprite + QNX and Spark, but the apps should be substantial, not just one button in a container! Will post my results here. I agree with you that using x,y for mobile devices is bad, I always use relative coordinates, containers (like vbox, hbox) and non-absolute layout managers. Always running orientation tests to see how the UI looks with different orientations. 12-02-2010 02:53 PM - edited 12-02-2010 02:53 PM Forgot to mention, the new Burrito SDK actually has a <MobileApplication/> root element, I suspect there are a bunch of mobile-optimised components as well. As part of my test I will check how big a mobile-optimised Spark Application is. 
12-02-2010 03:07 PM - edited 12-02-2010 03:08 PM @jtegen Note in the above example, I'm using Application rather than WindowedApplication. Of course the SWF loads quickly, but the Flex init time is a few seconds, so I put a preloader on it to get rid of the 'white box' progress bar for the few seconds it's shown. I'm fine with it not being absolutely instantaneous. It doesn't take as long as Firefox does to launch on my desktop, -=Cliff> 12-02-2010 05:05 PM I just did a quick sample between an Application root class and a Sprite root class. The Sprite root class, to just show a label, is 127KB. The Application-based application is 540KB; 425% larger. Any suggestions to get it smaller?

Sprite:

import qnx.ui.text.Label;

[SWF(height="600", width="1024", frameRate="30", backgroundColor="#FFFFFF")]
public class SampleAway extends Sprite {
    public function SampleAway() {
        this.stage.nativeWindow.visible = true;
        var label : Label = new Label();
        label.text = 'Hello World';
        addChild( label );
    }
}

Application: <s:Application xmlns: <s:Label </s:Application> 12-02-2010 05:22 PM You can try to use shared libraries with Flex. It's a compiler option.
https://supportforums.blackberry.com/t5/Adobe-AIR-Development/HelloWorld-Example-with-MX-Spark-components/m-p/662781
CC-MAIN-2017-04
refinedweb
883
63.9
IoT on AWS: Machine Learning Models and Dashboards from Sensor Data. By Rubens Zimbres, Data Scientist. Google Colab has open source projects that help Data Scientists everywhere. Inspired by this mindset, I developed my first IoT project using my notebook as an IoT device and AWS IoT as infrastructure. So, I had a "simple" idea: collect CPU temperature from my notebook running on Ubuntu, send it to Amazon AWS IoT, save the data, and make it available for Machine Learning models and dashboards. However, the operationalization of this idea is quite complex: first, develop a Python notebook that runs the Ubuntu command line internally ('sensors'), collecting CPU temperature, and is able to connect to AWS IoT via proper security protocols using MQTT, without using an MQTT broker like Mosquitto. It is necessary to create a Thing at AWS IoT, get the Certificates, create and attach the Policy and create a SQL Rule to send data (JSON) to Cloud Watch and Dynamo DB. Then, create a Data Pipeline from Dynamo DB to S3, so that the data becomes available for a Machine Learning model and also for an AWS Quick Sight dashboard. Let's get started by installing 'sensors' in Ubuntu 16.04 and the 'AWSIoTPythonSDK' library in Anaconda 3: $ sudo apt-get install lm-sensors $ sudo service kmod start Let’s see what the ‘sensors’ command looks like: Now, install the AWSIoTPythonSDK library: $ pip install AWSIoTPythonSDK Let's start with the Python notebook: the following function was developed to collect CPU temperature with a delay of 5 seconds:

import subprocess
import shlex
import time

def measure_temp():
    temp = subprocess.Popen(shlex.split('sensors -u'),
                            stdout=subprocess.PIPE,
                            bufsize=10,
                            universal_newlines=True)
    return temp.communicate()

while True:
    string = measure_temp()[0]
    print(string.split()[8])
    time.sleep(5)

Then, we run the notebook from the Linux command line: Good. 
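A note on the `string.split()[8]` indexing: it works, but it silently depends on the exact layout of `sensors -u` output. A slightly more defensive parse can target the `tempN_input` fields directly. The sample output below is illustrative and will differ per machine:

```python
import re

# Sample `sensors -u` output; field names vary by hardware,
# so this text is illustrative only.
SAMPLE = """\
coretemp-isa-0000
Adapter: ISA adapter
Package id 0:
  temp1_input: 55.000
  temp1_max: 84.000
Core 0:
  temp2_input: 53.000
"""

def parse_cpu_temps(sensors_output):
    """Pull every tempN_input reading instead of trusting a fixed
    whitespace offset like split()[8]."""
    return [float(v) for v in
            re.findall(r"temp\d+_input:\s+([\d.]+)", sensors_output)]

print(parse_cpu_temps(SAMPLE))  # [55.0, 53.0] for the sample above
```

Feeding `measure_temp()[0]` into `parse_cpu_temps` would return all core readings at once, which also makes the payload less fragile if the sensor layout changes.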
Now this code is inserted in basicPubSub.py notebook from AWSIoTPythonSDK library like this:

while True:
    if args.mode == 'both' or args.mode == 'publish':
        args.message = measure_temp()[0].split()[8]
        mess = {"reported": {"light": "blue",
                             "Temperature": measure_temp()[0].split()[8],
                             "timestamp": time.time()},
                "timestamp": 1526519248}
        args.message = mess
        print(measure_temp()[0].split()[8], (time.time() - start) / 60, 'min')
        print(mess, '\n')
        message = {}
        message['message'] = args.message
        message['sequence'] = loopCount
        messageJson = json.dumps(message)
        myAWSIoTMQTTClient.publish(topic, messageJson, 1)
        if args.mode == 'publish':
            print('Published topic %s: %s\n' % (topic, messageJson))
        loopCount += 1
    time.sleep(5)

Cool. We have a Python notebook that will connect to AWS IoT Core via MQTT protocol. Now we set up the shadow (JSON file) at AWS IoT, that is similar to the 'device twin' from Microsoft. Note that as I had only one device, I didn’t insert a device ID in the JSON file. { "desired": { "light": "green", "Temperature": 55, "timestamp": 1526323886 }, "reported": { "light": "blue", "Temperature": 55, "timestamp": 1526323886 }, "delta": { "light": "green" } } Now we get the certificates .pem, .key files and rootCA.pem for a safe connection. 
We type CTRL+ALT+T at Ubuntu to enter the command line and publish to a topic '-t': $ python basicPubSub_adapted.py -e 1212345.iot.us-east-1.amazonaws.com -r rootCA.pem -c 2212345-certificate.pem.crt -k 2212345-private.pem.key -id arn:aws:iot:us-east-1:11231112345:thing/CPUUbuntu -t 'Teste' We will receive feedback from the AWS IoT connection in the Linux shell, and check in the AWS IoT monitoring tool (after 1 minute) if the connections were successful: It is also possible to see if the messages are being published (orange area) and also the protocol used for the connection (on the left): Also, we see that the 'shadow' is being updated (center): Now we create a SQL rule to send data to Cloud Watch and also to Dynamo DB, creating IAM roles, policies and permissions: Data is then saved in DynamoDB, as a JSON file. Instead of timestamp, you can use MessageID as the Primary Key. Now we can visualize Cloud dynamics and data transfer in CloudWatch: Then we create a Data Pipeline from DynamoDB to S3 to be used by QuickSight: We also need to create a JSON file and set up IAM permissions so that Quick Sight can read from the S3 bucket: { "fileLocations": [ { "URIs": [ "" ] }, { "URIPrefixes": [ "" ] } ], "globalUploadSettings": { "format": "JSON", "delimiter": "\n","textqualifier":"'" } } Now we have our static plot of CPU Temperature in Quick Sight. Also, S3 data (.JSON file) is now available for Machine Learning models, like anomaly detection, prediction and classification, making it possible to create a pipeline with Sage Maker and Deep Learning libraries = FUN. This was a very nice way to get in touch with Amazon AWS services, like EC2, IoT, Cloud Watch, DynamoDB, S3, Quick Sight and Lambda. It's definitely not easy to set up everything and all the dependencies, but this part of the project cost less than 1 USD. And generated a lot of fun! 
This is the flowchart of the first part of the project at AWS: Project Part 2 – Near Real-Time Dashboard Now let's develop a second solution, using streaming data from AWS IoT that is sent to Kinesis / Firehose and then to AWS Elasticsearch, and finally to Kibana, a near real-time dashboard. You can opt to clean and extract data with Lambda (or not), using AWS IoT as input and AWS Batch as output to connect with Kinesis. Either way, Kibana is able to interpret your JSON file. First we must set up another rule for AWS IoT to send telemetry to a Kinesis Firehose delivery stream: Then create an Elasticsearch domain, setting up access for a specific IP: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": "es:*", "Resource": "arn:aws:es:us-east-1:12345:domain/domain/*", "Condition": { "IpAddress": { "aws:SourceIp": "178.042.222.33" } } } ] } Then we create the stream and stream delivery with Kinesis Firehose. Finally, we connect AWS Elasticsearch with Kibana, adjusting the mapping in Kibana’s 'Dev Tools': PUT /data { "mappings": { "doc": { "properties": { "light":{"type":"text"}, "Temperature": {"type": "integer"}, "timestamp": {"type": "integer"} } } } } Note that Elasticsearch will provide a Kibana endpoint. Finally, we have our Near Real-Time Dashboard of CPU Temperature. It’s important to notice that we are almost in a real-time environment. The issue here is that Kibana updates the graphic every 5 seconds (or 15 if you want) but Elasticsearch has a minimum latency of 60 seconds. We can now visualize our fancy dashboard: More info and files at my GitHub - Repo 2018 (CPU Temperature – IoT Project): Bio: Rubens Zimbres is a Data Scientist, PhD in Business Administration with emphasis in Artificial Intelligence and Cellular Automata. Currently works in the Telecommunications area, developing Machine Learning and Deep Learning models and IoT solutions for the financial sector and agriculture. 
Related: - GANs in TensorFlow from the Command Line: Creating Your First GitHub Project - Putting the “Science” Back in Data Science - Machine Learning Applied to Big Data, Explained
https://www.kdnuggets.com/2018/06/zimbres-iot-aws-machine-learning-dashboard.html
CC-MAIN-2018-30
refinedweb
1,127
52.7
Fault Tolerant Shell 234 Paul Howe writes "Roaming around my school's computer science webserver I ran across what struck me as a great (and very prescient) idea: a fault tolerant scripting language. This makes a lot of sense when programming in environments that are almost fundamentally unstable i.e. distributed systems etc. I'm not sure how active this project is, but its clear that this is an idea whose time has come. Fault Tolerant Shell." Python . Re:You're dealing with the problem too high up (Score:5, Insightful) IMHO as someone who works in a complex web server / database server environment, there are many interdependancies brought by different software, different platforms and different applications. Whilst 100% uptime on all servers is a nice to have, it's a complex goal to achieve and requires not just expertise in the operating systems & web / database server software but an indepth understanding of the applications. A system such as this fault tolerant shell is actually quite a neat idea. It allows for flexibility in system performance and availability, without requiring complex (and therefore possibly error prone or difficult to maintain) management jobs. An example would be server which replicates images using rsync. If one of the targets is busy serving web pages or running another application, ftsh would allow for that kind of unforeseen error to be catered for relatively easily. Re:You're dealing with the problem too high up (Score:3, Insightful) It depends how you organise your systems. If you push to them then yes you need something like ftsh. If you organise them so that they pull updates, pull scripts to execute and arrange those scripts so that they fail safe (as they all should anyway) then you'll have something? Re:You're dealing with the problem too high up (Score:4, Insightful) Resources DO become unavailable in most systems. 
It simply doesn't pay to ensure everything is duplicated, and to set up infrastructures that make it transparent to the end user - there are almost always cheaper ways of meeting your business goals by looking at what level of fault tolerance you actually need. For most people hours, sometimes even days, of outages can be tolerable for many of their systems, and minutes mostly not noticeable if the tools can handle it. The cost difference in providing a system where unavailabilities are treated as a normal, acceptable condition within some parameters, and one where failures are made transparent to the user can be astronomical. To this date, I have NEVER seen a computer system that would come close to the transparency you are suggesting, simply because for most "normal" uses it doesn't make economic sense. Bad Idea (Score:5, Insightful) Re:Bad Idea (Score:2, Interesting) Re:Bad Idea (Score:5, Insightful) Re:Bad Idea (Score:2) Re:Bad Idea (Score:2, Insightful) The only thing I can add at this point is an analogy: Think of it along the lines of IE and HTML; if you don't want to close your tags, say your table td and tr tags, it's fine, the IE browser will do it for you. Nevermind that it will break most any W3C compliant browser on the planet. (insert deity here) help the person that gets used to this style of programming and then joins the real world.) Missing the point (Score:5, Insightful) Shell scripts should be short and easy to write. I have seen plenty of them fail due to some resource or another being temporarily down. At first people are neat and then send an email to notify the admin. When this then results in a ton of emails every time some dodo knocks out the DNS, they turn it off and forget about it. Every scripting language has its own special little niche. BASH for simple things, Perl for heavy text manipulation, PHP for creating HTML output. This scripting language is pretty much like BASH but takes failure as a given. 
The example shows clearly how it works. Instead of ending up with Perl-like scripts to catch all the possible errors, you add two lines and you get a wonderful small script, which is what shell scripts should be, that is nonetheless capable of recovering from an error. This script will simply retry when someone knocks out the DNS again. This new language will not catch your errors. It will catch other people's errors. Sure, a really good programmer can do this himself. A really good programmer can also create his own libraries. Most of us in admin jobs find it easier to use somebody else's code rather than constantly reinvent the wheel. Re:Missing the point (Score:5, Insightful) On a side note for Perl, one thing I always hated were the examples that had something like "open( FH, "file/path" ) || die "Could not open file!" . $!; I mean, come on, you don't want your script to just quit if it encounters an error...how about putting in an example of error handling other than the script throwing up its hands and quitting! LOL. Please excuse any grammatical/other typos above, I was on 4 hrs sleep when I wrote this. Thank You. 
It also provides an optional feature that let you easily retry commands that are likely to fail sometimes and where the likely error handling would be to stop processing and retry without having to write the logic yourself. Each time you have to write logic to handle exponential backoff and to retry according to specific patterns is one more chance of introducing errors. No offense, but I would rather trust a SINGLE implementation that I can hammer the hell out of until I trust it and reuse again and again than trust you (or anyone else) to check the return code of every command and every function they call. This shell does not remove the responsibility to for handling errors. It a) chooses a default behaviour that reduces the chance of catastrophic errors when an unhandled error occurs, and b) provides a mechanism for automatic recovery from a class of errors that occur frequently in a particular type of systems (distributed systems where network problems DO happen on a regular basis), and by that leave developers free to spend their time on more sensible things (I'd rather have my team doing testing than writing more code than they need to) Mod up! (Score:4, Informative) Re:Missing the point (Score:2) Write your scripts to fail safe, then don't perform ad-hoc updates, schedule them regularly.:Bad Idea (Score:5, Insightful) Done correctly, spellcheckers can be the best spelling-learning tool there is. "Correctly" here means the spell-checkers that give you red underlines when you've finished typing the word and it's wrong. Right-clicking lets you see suggestions, add it to your personal dict, etc. "Incorrectly" is when you have to run the spell-checker manually at the "end" of typing. That's when people lean on it. The reason, of course, is feedback; feedback is absolutely vital to learning and spell-checkers that highlight are the only thing I know of that cuts the feedback loop down to zero seconds. 
Compared to this, spelling tests in school where the teacher hands back the test three days from now are a complete waste of time. (This is one of many places where out of the box thinking with computers would greatly improve the education process but nobody has the guts to say, "We need to stop 'testing' spelling and start using proper spell-checkers, and come up with some way to encourage kids to use words they don't necessarily know how to spell instead of punishing them." The primary use of computers in education is to cut the feedback loop down to no time at all. But I digress...) 'gaim' is pretty close but it really ticks me off how it always spellchecks a word immediately, so if you're typing along and you're going to send the word "unfortunately", but you've only typed as far as "unfortun", it highlights it as a misspelled word. Bad program! Wait until I've left the word! Spell checkers and homonyms (Score:2) To, too Re:Spell checkers and homonyms (Score:3, Insightful) Of course not. But using a spell checker means having time to learn about the homonyms, instead of endlessly playing catch up. You still predicated your post on "relying" on spell checkers; I'm saying that people learn from good spell checkers. That people can't learn everything from a spell checker is hardly a reason to throw the baby out with the bath water and insist that people use inferior learning techniques anyhow! A kid that. 
Re:Wouldn't be much work in Tcl (Score:3, Insightful)

def college_try(limit, seq = 0, &block)
  yield
rescue => e
  raise e unless seq < limit
  college_try(limit, seq + 1, &block)
end

college_try(50) do
  begin
    # do some work
  rescue => e
    # do error clean up here
    raise e
  ensure
    # cleanup that should always run here
  end
end

Anyways, I agree with the notion that most popular scripting languages have advanced error handling that is up to the task Re:Wouldn't be much work in Tcl (Score:3, Interesting) Your example only does a fraction of what ftsh does. Re:Wouldn't be much work in Tcl (Score:3, Insightful) > of what ftsh does. yawn, so we didn't post a 100-500 line library in our slashdot comment. the point is, this stuff would be trivial to implement in language like ruby. plus, using a full scripting language you get lots of other useful features like regular expressions, classes, etc, etc It's a good idea, but it's a library implemented as a language. Re:Wouldn't be much work in Tcl (Score:2) Worst idea since spell checkers (Score:4, Insightful) This si even worse. Computers will try to second guess what the user means, get get it wrong half tyhe time. A qualified shell scripter will be not make these mistakes in the first place. Anyone who thinks they need this shell actually just need to learn to spell and to ytype accuratly. 
That way if a system is up, it just works, if it's down, it'll get updated next time it's up. I won't go into details but I'll point you at re. Sounds like good way to do some serious damage (Score:3, Interesting) Re:Sounds like good way to do some serious damage (Score:3, Insightful) This would be nice... (Score:5, Insightful) This would REALLY be useful when you're connecting to services external to yourself - network glitches cause more problems with my code than ANYTHING else, and it's a pain in the arse to write code to deal with it gracefully. i'd really really like to see a universal "try this for 5 minutes" wrapper, which, if it still failed, you'd only have one exit condition to worry about. hey, what the hell, maybe i'll spend a few days and write one myself.:5, Insightful) All the programmers who need the environment to compensate for their inadequacies, step on one side. All the programmers who want to learn from their mistakes and become better at their craft, get on the other side. Most of us know where this line is located. Re:5, Insightful) So what happens if the files are crucial (let's use the toy example of kernel modules being updated): The modules get deleted, then the update fails because the remote host is down. Presumably the shell can't rollback the changes a la DBMS, as that would involve either hooks into the FS or every file util ever written. Now I think it's a nice idea, but it could easily lead to such sloppy coding; if your shell automatically tries, backs off and cleans up, why would people bother doing it the 'correct' way and downloading the new files before removing the old ones?, Insightful) Uh.. it's just you. You should, y'know, maybe try using Windows 2000 or XP sometime... Windows has a perfectly good command line. Point at the "Start" menu, click "Run" -> type "cmd", and away you go. You can turn on command line completion (search for "TweakUI" or "Windows Powertoys", I can't be bothered to link to them). 
And even pipes work just fine (as they have since the DOS days).

FTSH is an exception system for shell programming (Score:5, Insightful)

What's with all of the people claiming that FTSH will ruin the world because it makes it easier to be a sloppy programmer? Did you freaking read the documentation? To massively oversimplify, FTSH adds exceptions to shell scripting. Is that really so horrible? Is line after line of "if [ $? -eq 0 ]; then" really an improvement? Welcome to the 1980's: we've discovered that programming languages should try to minimize the amount of time you spend typing the same thing over and over again. Human beings are bad at repetitive behavior; avoid repetition if you can. Similarly, FTSH provides looping constructs to simplify the common case of "try until it works, or until some timer or counter runs out." Less programmer time wasted coding Yet Another Loop, fewer opportunities for a stupid slip-up while coding that loop. If you're so bothered by the possibility of people ignoring return codes, it should please you to know that FTSH forces you to appreciate that return codes are very uncertain things. Did diff return 1 because the files are different, or because the linker failed to find a required library? Ultimately all you can say is that diff failed. Christ, did C++ and Java get this sort of reaming early on? "How horrible, exceptions mean that you don't have to check return codes at every single level."
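The retry loop ftsh builds in, and the universal "try this for 5 minutes" wrapper the comment above wishes for, can be sketched in a few lines of Python. This is a hedged illustration, not ftsh itself (which expresses this as a shell-level construct); `try_for` and the `flaky` action are invented names:

```python
import time

def try_for(seconds, action, delay=1.0):
    """Keep retrying `action` until it succeeds or the time budget runs out,
    roughly like ftsh's timed retry blocks."""
    deadline = time.monotonic() + seconds
    while True:
        try:
            return action()
        except Exception:
            if time.monotonic() >= deadline:
                raise  # budget exhausted: propagate the last failure
            time.sleep(delay)

# Usage: an action that fails twice, then succeeds on the third attempt.
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(try_for(10, flaky, delay=0.01))  # → ok, after two retries
```

The single exit condition the commenter wants falls out naturally: either `try_for` returns a value, or the final exception propagates after the timeout.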
https://developers.slashdot.org/story/04/03/15/0051221/fault-tolerant-shell
You can specify the items of your user script metadata in any order. I like @name, @namespace, @description, @include, and finally @exclude, but there is nothing special about this order. If you omit @name, the default is the filename of the user script, minus the .user.js extension.

User scripts are written in JavaScript. On every page you visit, Greasemonkey looks through the list of installed user scripts, determines which ones apply to this page (based on the @include and @exclude directives), and executes them after the page is loaded but before it is rendered. The scripts themselves can do anything you like. Butler works on a number of different Google pages, and does different things to each type of page, so it contains some code to check where exactly we are and calls the appropriate methods. For example, on Google web search results pages, Butler adds the "Enhanced by Butler" banner along the top, removes the ads along the top and right sides of the results, adds the "try your search on..." line, and may also add other "try your search on..." lines for inline movie reviews, news headlines, weather forecasts, and product results. The code to do this is straightforward:

var href = window.location.href;

// Google web search
if (href.match(/^http:\/\/www\.google\.[\w\.]+\/search/i)) {
    Butler.addLogo();
    Butler.removeSponsoredLinks();
    Butler.addOtherWebSearches();
    Butler.addOtherInlineMovieReviews();
    Butler.addOtherInlineNewsResults();
    Butler.addOtherInlineForecasts();
    Butler.addOtherInlineProductSearches();
}

Each of these functions is defined elsewhere in the Butler user script. (Greasemonkey user scripts are always self-contained. If you need to bundle multiple interdependent scripts, you're probably better off writing a browser extension.) The Butler user script barely scratches the surface of what Greasemonkey can do. There are literally thousands of user scripts, some targeting a single page or a single site, others that work on every page.
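The way Greasemonkey decides whether a script applies to a page, matching the URL against @include/@exclude patterns in which * acts as a wildcard, can be approximated with a small Python sketch. `include_to_regex` is an invented helper for illustration, not Greasemonkey's actual matcher:

```python
import re

def include_to_regex(pattern):
    """Translate a Greasemonkey-style @include pattern into an anchored
    regular expression: '*' matches anything, everything else is literal."""
    parts = pattern.split("*")
    return re.compile("^" + ".*".join(re.escape(p) for p in parts) + "$")

# An @include line like the ones in Butler's metadata block:
google_search = include_to_regex("http://www.google.*/search*")

print(bool(google_search.match("http://www.google.com/search?q=butler")))  # True
print(bool(google_search.match("http://www.example.com/search")))          # False
```

A script is then run on a page when its URL matches at least one @include pattern and no @exclude pattern.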
Because you can have multiple user scripts installed, Greasemonkey provides a graphical interface for managing them. From the Firefox Tools menu, select "Manage User Scripts" (see Figure 6). Here you can see all the user scripts you have installed, change their configuration, disable them temporarily, or uninstall them completely. You can even edit a user script "live" and see your changes immediately, without restarting Firefox. This is enormously helpful while you're developing your own user scripts.

Figure 6: Manage User Scripts

Many user scripts are available at the Greasemonkey script repository. Here are some of my favorites: Have fun remixing the web!

Mark Pilgrim is an accessibility architect who can be found stirring up trouble at diveintomark.org.
http://www.oreillynet.com/lpt/a/6117
Google Docs is one of the most widely used tools across the industry, and its spreadsheets hold a lot of data that we may want to access at any time for data analysis or other purposes. Check my previous posts, where I have talked about analyzing and visualizing data using Google spreadsheets. Often there is a need to access this data at run time, and there are different ways to get it out of a Google spreadsheet. One way is to use the Google APIs, which requires you to turn on the Google Sheets API, install the Google client library, authenticate, and then write a script to access the data. This is a wonderful and secure way to access data from the drive. However, I was looking for a much simpler, more Pythonic way: two or three lines of code whose result can be consumed directly by a Pandas DataFrame for further analysis, without spending much time writing a script to access the spreadsheet. In this post we will see how Python Requests and a Pandas DataFrame can be used together to pull data from a Google spreadsheet straight into a DataFrame. Pandas is an open source, fast, high-performance data analysis library developed by Wes McKinney; you can find more details on its official page. Pandas has two main data structures, Series and DataFrame: a Series is a one-dimensional object like an array or list, and a DataFrame is a two-dimensional spreadsheet-like structure with rows and columns. We are going to use DataFrames for this post. Requests is a simple-to-use HTTP library for Python, which we will use to fetch the contents of our Google spreadsheet. The dataset for this post is the Booker Prize winners list from Wikipedia. Check my previous posts on how to import data from the web into a Google spreadsheet in one simple step.
Once you have the data from the Wikipedia link in your Google spreadsheet, save it on your drive, navigate to File > Publish to the Web, select Comma-separated values (.csv), and copy the link. Now we will write two simple lines of code to get this data into a pandas DataFrame:

import pandas as pd

# Paste copied link here
pathtoCsv = r''
df = pd.read_csv(pathtoCsv, encoding='utf8')

The data is now in the pandas DataFrame (df) and can be used for further analysis. Let's see how to explore the data using the DataFrame. I'm using an IPython notebook for the analysis, but you can do this exercise in any editor of your choice. Let's find out some info about the DataFrame: there are 5 columns and 51 rows in this dataset, and you can also check the datatype of each column.

Basic analysis using the Pandas DataFrame:
- Find the number of authors from India who won the Booker Prize using this dataset
- Find the count of authors from each country

The United Kingdom tops the list, followed by Australia and South Africa. We can further use the DataFrame APIs and functions together with plotting libraries to plot graphs and visualize the dataset.
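Since the published-sheet link is left blank above, the two analysis questions can be sketched on a tiny stand-in DataFrame. The column names and rows here are illustrative assumptions (a few real Booker winners); the actual df would come from pd.read_csv on your copied link:

```python
import pandas as pd

# A small stand-in for the Booker Prize sheet loaded above.
df = pd.DataFrame({
    "Author":  ["Arundhati Roy", "Kiran Desai", "Peter Carey", "Hilary Mantel"],
    "Country": ["India", "India", "Australia", "United Kingdom"],
})

# Number of winning authors from India.
india = (df["Country"] == "India").sum()
print(india)  # → 2

# Winners per country, most frequent first.
print(df["Country"].value_counts())
```

On the full 51-row dataset, `value_counts()` is what surfaces the United Kingdom at the top of the list.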
https://kanoki.org/2017/07/04/reading-google-sheets-data-using-python/
Post Message In servlet

In this example, we are going to implement posting a message to a servlet. In the following program, you will learn how to post a message. This is the method that defines an object to assist a servlet in sending a response.
Example: $name = 'Bharat servlet com.ilp.tsi.um.bean.BankBean; import com.ilp.tsi.um.service.BankService; /** * Servlet...{ request.setAttribute("Message",BankConstant.MESSAGE); RequestDispatcher rd SERVLET to give an alert message when I try to submit the form. Employee id is a primary Pass message controller to viiew with the help of Model in Spring 3 Pass message from Controller to View with help of Model in Spring 3.0: In this example, we will discuss about the pass message or different types of value...;/display-name> <servlet> <servlet-name>dispatcher< Servlet Error Message based on user input Servlet Error Message based on user input  ... the sucess message otherwise it displays the error message. So, you can use Servlet... the error message on the page showing the wrong entry. Here is the code for servlet Java Message Services - JMS Java Message Services What is Java Message Services Context Log Example Using Servlet of Context Log in servlet. Context Log is used to write specified message... have simply taken a text area where user give his/her message and post the form...;/servlet-mapping> User enters the message in the text area servlet - Servlet Interview Questions method POST is not supported by this URL -------------------------------------------------------------------------- type Status report message HTTP method POST...servlet The given below is the output of my servlet program plz Objective C Message Passing Objective C Message Passing What exactly a message passing means in Objective C? and how can i pass a message to a method Message Context in AXIS Message Context in AXIS Is there any method in Axis classes to het the size of IN and OUT Message Conetext Javascript alert message Javascript alert message How to send an alert message to a user in JavaScript Message Resource Bundle work. Message Resource Bundle work. 
How does Value replacement in Message Resource Bundle work spelling mistakeManivannan.P November 19, 2011 at 5:17 PM n this example, we are going to implement posting massage to servlet. In the following program, you will learn how to post massage. Post your Comment
http://www.roseindia.net/discussion/18998-Post-Message-In-servlet.html
Import used to run some script code works only once --- Python-defined behavior

The very first time I run utest.sikuli, the two.sikuli test gets executed and I am able to see the result (report from the HTML test runner). But when I run utest.sikuli a second time, two.sikuli does not run, yet the result counts it as a pass without performing the test. If I restart the Sikuli IDE, it runs fine the first time again. Am I using the import statement in a wrong manner?

utest.sikuli:

class SikuliUI(
    def test_1_
        import two

two.sikuli:

from sikuli import *
popup("Check for greyedout option in menu")
click(Pattern(
assert exists(
popup("Match found")

Status: Solved (2017-11-03)

Answer (RaiMan): import is not intended to "run" other scripts. Per Jython session (and the IDE is only one session), import is only done once (the first time it is seen). So step back and learn how to load and reference stuff that is in other modules.

Thanks RaiMan, I got the problem in my approach.
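RaiMan's point, that a module's top-level code runs only on the first import per (Jython) session, can be demonstrated in plain Python. The module name `two` and the temp directory are fabricated for the demo; `importlib.reload` is the standard way to force the body to run again:

```python
import importlib
import pathlib
import sys
import tempfile

# Create a throwaway module whose body has a visible side effect.
tmpdir = tempfile.mkdtemp()
pathlib.Path(tmpdir, "two.py").write_text(
    "import sys\n"
    "sys._two_runs = getattr(sys, '_two_runs', 0) + 1  # runs on import\n"
)
sys.path.insert(0, tmpdir)

import two   # first import: the module body executes
import two   # already cached in sys.modules: the body does NOT run again
print(sys._two_runs)   # → 1

importlib.reload(two)  # reload explicitly re-executes the body
print(sys._two_runs)   # → 2
```

This is why the second run of utest.sikuli "passed" without doing anything: the `import two` found the cached module and never re-ran the test steps.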
https://answers.launchpad.net/sikuli/+question/660256
PROJECT FORTRESS SUBVERSION REPOSITORY
--------------------------------------

This README exists in the top-level directory of the Fortress project. Information about Fortress can be found at the following web site:

If you have Subversion installed, you can check out the Fortress repository by going to the directory in which you want to check it out and issuing the following command:

svn checkout PFC

(The name "PFC" merely specifies the name of the directory you want to check the code into. Feel free to substitute another directory name if you prefer.) You'll now have a subdirectory named 'PFC'. Go into that directory and you'll see several subdirectories:

Fortify: The Fortify tool for converting Fortress code into LaTeX, both interactively and in batch mode. Scripts are provided for conveniently producing rendered Fortress code in LaTeX documents, for producing PDF "doc" files from Fortress source code, etc. See Fortify/fortify-doc.txt for more information.

ProjectFortress: The Fortress interpreter. You'll need to build the interpreter by following the instructions below for setting up your environment in order to have a complete Fortress installation.

Emacs: A directory holding the Emacs Lisp file fortress-mode.el, which defines a Fortress mode for Emacs. To use this file, load it from your .emacs file with the following commands:

(load (concat (getenv "FORTRESS_HOME") "/contrib/Emacs/fortress-mode.el"))
(push '("\\.fs[si]$" . fortress-mode) auto-mode-alist)

If you wish to use the Fortify package to format Fortress source code into LaTeX, you should also add the following to your .emacs (for more information about fortify, see Fortify/fortify-doc.txt):

(load (concat (getenv "FORTRESS_HOME") "/Fortify/fortify.el"))

Vim: A directory containing vim script files for syntax highlighting.
To enable syntax highlighting for fortress code copy the sub-directories under Vim/ to your ~/.vim directory. $ mkdir ~/.vim $ cp -a Vim/ftdetect Vim/syntax Vim/ftplugin ~/.vim/. If your cp command does not accept the -a option then use -r $ cp -r Vim/ftdetect Vim/syntax Vim/ftplugin ~/.vim/. You should also add the following line to your ~/.vimrc file au BufNewFile,BufRead *.fsi,*.fss set ft=fortress SpecData: Machine-readable files used by the Fortress Language Specification (e.g., a list of all reserved words). Editors and other tools may also benefit from using these files. Moreover, all examples included in the language specification are included in the directory SpecData/examples. Specification: A directory containing a PDF of the Fortress Language Specification, Version 1.0. Library: The home for all of the Fortress standard libraries. bin: Shell scripts for our various projects. These are bash scripts; you will need an installation of Bash on your system to run them. To make these scripts "auto-homing", script "forfoobar" begins with the line FORTRESS_HOME=`${0%forfoobar}fortress_home` This replaces 'forfoobar' in whatever was used to invoke the script with 'fortress_home', runs that command, and assigns its output to FORTRESS_HOME for the remainder of the scripts. 'fortress_home' determines the location of fortress_home if it is not otherwise specified. This command can also be used in your own build files; for example, if you include the fortify macros in a LaTeX file \input{$FORTRESS_HOME/Fortify/fortify-macros} you might precede the latex command with FORTRESS_HOME="`fortress_home`" It is also possible to set FORTRESS_HOME in your environment, but if you have multiple versions of Fortress installed this can cause confusion and build problems. You will also see the following files: ant: A small bash script used for invoking the build.xml with specific Ant options. (This script defers to the script with the same name in directory ProjectFortress.) 
build.xml: The interpreter build script, written in Ant. (This script defers to the script with the same name in the directory ProjectFortress.) fortress.properties: This file defines several environment variables used by the internals of the Fortress interpreter. (Normally, there is no reason to override the settings in this file.) SETTING UP YOUR ENVIRONMENT --------------------------- We assume you are using an operating system with a Unix-style shell (for example, Solaris, Linux, Mac OS X, or Cygwin on Windows). You will need to have access to the following: * J2SDK 1.6 or later. See * Ant 1.6.5 or later. See * Bash version 2.5 or later, installed at /bin/bash. See Assume FORTRESS_HOME points to the PFC directory you checked out. On Unix-like systems this should be a matter of using export or setenv. If you are using Cygwin, one user reports success with the following command line for setting FORTRESS_HOME: export FORTRESS_HOME=`cygpath -am cygwin/path/to/fortress/install/directory` e.g.: export FORTRESS_HOME=`cygpath -am ${HOME}/tools/fortress` In your shell startup script, add $FORTRESS_HOME/bin to your path. The shell scripts in this directory are Bash scripts. To run them, you must have Bash accessible in /bin/bash. Make sure the following environment variables are set in your startup script: JAVA_HOME ANT_HOME (Although our scripts are sometimes able to guess the locations of JAVA_HOME and ANT_HOME, it is preferred that you set them manually.) Once all of these environment variables are set, build the interpreter by going to the directory FORTRESS_HOME and typing the command: ./ant clean compile If that doesn't work, there's a bug in the interpreter; please issue a bug report. Once you have built the interpreter, you can call it from any directory, on any Fortress file, simply by typing one of the following commands at a command line: fortress [walk] [-test] [-debug interpreter] somefile.fss arg... 
fortress help The first time you run a Fortress program, the static checker is called on the given file and the results are stored in a cache directory (by default this cache is kept in default_repository/caches in the root of your Fortress distribution). No user-visible object file is generated. A file with suffix .fsi should contain a single API definition. The name of the API should match the name of the file. Similarly, a file with the suffix .fss should contain a single component definition. The name of the component should match the name of the file. A command of the form "fortress walk somefile.fss" checks whether a cached and up to date result of compiling the given file exists. If so, it runs the cached file. Otherwise, it processes the given file and runs the result. This command can be abbreviated as "fortress somefile.fss". If the optional flag -test is given, all test functions defined in the given file are run instead. If the optional flag "-debug interpreter" is given, stack traces from the underlying interpreter are displayed when errors are signaled. If all else fails, look at the script bin/fortress to see if your system has peculiarities (for example cygwin requires ; separators in the classpath). USING ECLIPSE ------------- There exists a .project file in the directory ${FORTRESS_HOME}. Import this project into Eclipse. There exists a file called ${FORTRESS_HOME}/DOTCLASSPATH in the repository. Copy this file to ${FORTRESS_HOME}/.classpath. If you are using the Java 5.0 jdk under Windows or Linux, you will need to add an entry to ${JAVA_HOME}/lib/tools.jar to the classpath. Setting up Eclipse to follow the Fortress project coding style conventions is a two-step process. The following instructions are known to work on Eclipse 3.4, and should work on Eclipse 3.3 as well. These will change preferences for all your Eclipse projects. Open up Eclipse Preferences to start configuring your global settings. 
First select General --> Editors --> Text Editors and make sure the checkbox is enabled for "Insert spaces for tabs". Second select Java --> Code Style --> Formatter and click on the "Edit..." button. Change the Tab policy to "Spaces only" and give the profile a new name (recommended name: "Spaces only"). Click "OK" and you are finished. DEMO PROGRAMS ------------- The directory ProjectFortress/demos/ contains some demonstration Fortress programs. Among them are: buffons.fss: Buffon's needle. Estimates pi using a Monte Carlo simulation. lutx.fss: Naive dense LU decomposition. Demonstrates how to define new subclasses of Array2. conjGrad.fss: Conjugate gradient, including the snapshot from the NAS CG benchmark that you've seen in many Fortress talks. Uses the Sparse library for sparse matrices and vectors. sudoku.fss: Solve a simple sudoku by elimination. Includes a tree-based set implementation. aStar.fss: Generic A* search, accompanied by a specific instance for solving sudoku that cannot be solved by elimination alone. Lambda.fss: A simple interpreter for the lambda calculus that permits top-level binding and reduces to both WNHF and NF. If you're curious how to parse text using the Fortress libraries, you should look here (it's presently far more painful than we'd like). TEST PROGRAMS ------------- The directory ProjectFortress/tests/ contains some Fortress programs to test the interpreter. Test programs that are supposed to fail (for example, storing a String into a ZZ32-typed mutable) have names that are prefixed with XXX. The directory ProjectFortress/static_tests/ contains some Fortress programs to test the static end. Test programs that are supposed to fail have names that are prefixed with XXX. Test programs that are supposed to pass static disambiguation then fail have names that are prefixed with DXX. The directory ProjectFortress/parser_tests/ contains some Fortress programs to test the parser. 
Test programs that are supposed to fail to be parsed have names that are prefixed with XXX. The directory ProjectFortress/not_passing_yet/ contains some Fortress programs that should pass, but do not. For example, if we had a test file containing an error that should be detected, but it isn't, that would be contained in ProjectFortress/not_passing_yet with a name prefixed with XXX. Test programs in this directory should pass the parser. COMPONENTS ---------- Fortress currently lacks a full-blown component system. All the code in your Fortress program should reside in API and component file pairs. If you take a look at the Fortress programs in ProjectFortress/tests/, ProjectFortress/demos/, or SpecData/examples, you'll see that they have the same overall structure: component MyComponent exports Executable ... Your program here ... run():() = ... end LANGUAGE FEATURES THAT ARE IMPLEMENTED -------------------------------------- * Object and trait declarations, including polymorphic traits. Constructor invocations must *always* provide the static arguments explicitly. * Overloaded functions and ordinary methods. Top-level overloaded functions can be polymorphic. Nested functions and methods must be monomorphic. * Polymorphic top-level functions and methods, so long as the methods are not overloaded. * Checking and inference of argument types to functions, methods, and operators. These checks use the dynamic types of the arguments. Return types are NOT checked. Inference of static parameters is not complete yet; it is often necessary to provide static arguments explicitly. It is *always* necessary to do so in a constructor call and in any situation where a static parameter occurs only in the result and not in the arguments to a function. For example, you must always provide the array element type E and size n when invoking the factory array1[\E,n\](). * Arrays of up to three dimensions. Note that there isn't yet a single overarching Array type. 
For more details on the array types and operations defined see below. In particular, note that array comprehensions are not yet implemented; the array types provide functions to work around this lack. Another caveat: due to a bug we haven't fully understood, some (but not all) uses of the compact notation T[n,m] for an array type cause the interpreter to fail. Desugaring the code by hand to e.g. Array2[\T,0,n,0,m\] works around this bug. * Array aggregates except singleton arrays. * Parallel tupling and argument evaluation. * Parallel for loops over simple ranges such as 0#n. * Sequential for loops over simple ranges. The functional method seq() and the equivalent function sequential() can be used to turn any Generator into a SequentialGenerator. * While loops, typecase, if, etc. Note that for parametric types typecase isn't nearly as useful as you might think, since it cannot bind type variables; we are working to address this shortcoming. * The "atomic" construct uses code based on the DSTM2 library. Nested transactions are flattened. We use their obstruction free algorithm with a simple lowest-thread-wins contention manager. Reduction variables in for loops are not yet implemented, so perform an explicit atomic update or just use a reduction expression instead. * throw and catch expressions. * Generators. * at expressions. 
* spawn
* also (multiple parallel blocks)

LANGUAGE FEATURES THAT ARE NOT IMPLEMENTED
------------------------------------------

* Numerals with radix specifiers (which implies that some numerals may be recognized as identifiers)
* Unicode names
* Dimensions and units
* Static arguments: nat (using minus), int, bool, dimension, and unit
* Modifiers
* Keyword arguments
* Where clauses
* Coercion
* Constraint solving for nat parameters
* Reduction variables
* Distributions
* Any of the types which classify operator properties
* Any of the bits and storage types
* Non-RR64 floats
* Integers other than ZZ32 and ZZ64
* Use of ZZ64 for indexing (the JVM uses 32-bit indices)

CHANGES SINCE FORTRESS LANGUAGE SPECIFICATION v.1.0 BETA
--------------------------------------------------------

* Fortress 1.0 is the first release of the Fortress language interpreter to be released in tandem with the language specification, available as open source and online at: Each example in the specification is automatically generated from a corresponding working Fortress program which is run by every test run of the interpreter.

* To synchronize the specification with the implementation, it was necessary to temporarily drop the following features from the specification:
  - Static checks (including static overloading checks)
  - Static type inference
  - Qualified names (including aliases of imported APIs)
  - Getters and setters
  - Array comprehensions
  - Keyword parameters and keyword expressions
  - Most modifiers
  - Dimensions and units
  - Type aliases
  - Where clauses
  - Coercions
  - Distributions
  - Parallel nested transactions
  - Abstract function declarations
  - Tests and properties
  - Syntactic abstraction

* Libraries have significantly changed.
* Syntax and semantics of the following features have changed:
  - Tuple and functional arguments
  - Operator rules: associativity, precedence, fixity, and juxtaposition
  - Operator declaration
  - Extremum expression
  - Import statement
  - Multiple variable declaration
  - Typecase expression

* The following features have been added to the language:
  - "native" modifier
  - Operator associativity
  - Explicit static arguments to big operator applications

* The following features have been eliminated from the language:
  - Identifier parameters
  - Explicit self parameters of dotted methods
  - Empty extends clauses
  - Local operator declarations
  - Shorthands for Set, List, and Map types
  - Tuple type encompassing all tuple types

* Significantly more examples have been added.

THE DEFAULT LIBRARIES
---------------------

The components ProjectFortress/LibraryBuiltin/FortressBuiltin.fsi, ProjectFortress/LibraryBuiltin/NativeSimpleTypes.fss and Library/FortressLibrary.fss are imported implicitly whenever any Fortress program is run.

BUILT-IN TYPES
--------------

There are a bunch of types that are defined internally by the Fortress interpreter. With the exception of Any, these cannot be overridden. The built-in types are found in ProjectFortress/LibraryBuiltin/FortressBuiltin.fsi and NativeSimpleTypes.fsi; documentation for the released version of these libraries can be found in the accompanying specification release. Most built-in types, including all those found in FortressBuiltin, do not have any methods. Tuple and arrow types are always built in, and cannot be overridden in any way. Note that there isn't (yet) a trait Object! Eventually user-written trait and object declarations will extend Object by default; right now, they instead extend Any by default. We plan to migrate to a new infrastructure for primitive objects (based on the one used for Boolean in NativeSimpleTypes) at which point we will remedy this situation.
Meanwhile, operations on the primitive types in FortressBuiltin can be found in Library/FortressLibrary.fsi; again, these primitives are documented in the specification as well. Note in particular that in the absence of coercion, you may occasionally need to make use of widen and narrow to convert between ZZ32 and ZZ64.

LIBRARY HIGH POINTS
-------------------

Your best guide to library functionality is the library code itself; this can be found in Library/ and in ProjectFortress/LibraryBuiltin. The APIs for these libraries can also be found in the language specification (note, though, that if you downloaded the latest version of the Fortress implementation then the two may differ). This section provides an overview of things you may not immediately realize are there.

Juxtaposition of strings means string append. You may also find the BIG STRING operation (that concatenates strings) useful.

Several functions attempt to convert data of type Any to a string. These include print(), println(), assert(), and juxtaposition of Any with a string. Right now, the FortressBuiltin types are printed using internal magic, and object types are printed using the toString method. The consequence of this is that you will see a runtime error if you attempt to print an object without first defining a toString method.

In the absence of array comprehensions, there are several ways to create and initialize an array (in these examples a 1-D array, but the 2- and 3-D arrays work the same way):

The simplest is to use an aggregate expression (this seems to fail at top level in your program, which is a known bug):

z : ZZ32[3] = [1 2 3]

If you know the size statically (it is a static parameter to your function, or is fixed at compile time):

a : T[size] = array1[\T,size\]() (* lower bound 0 *)
a[i] := f(i), i <- a.bounds()

or:

a : T[size] = array1[\T,size\](initialValue)

or:

a : T[size] = array1[\T,size\](fn (index:ZZ32) => ...)
If you are computing the size at run time: a = array[\T\](size) a[i] := f(i), i <- a.bounds() or: a = array[\T\](size).fill(initialValue) or: a = array[\T\](size).fill(fn (index:ZZ32) => ...) At the moment, to create a non-0-indexed array you need to create a correctly-sized 0-indexed array as described above, then use the shift(newlower) method to shift the lower index. Thus, to create an nxn 1-indexed array you can do something like this: a = array2[\T,n,n\]().shift(1,1) The replicate[\T\]() method on arrays is a little unintuitive at first. It creates a fresh array whose element type is T but whose bounds are the same as the bounds of the array being replicated. When data distribution is fully implemented, it should respect that as well. It is a bit like saying array[\T\](a.bounds().upper()) for 0-indexed arrays but is slightly more graceful and deals well with non-0-indexed arrays. You can convert any array to use 0 indexing simply by indexing it with an empty range: a[:] or a[#] ==> a, only 0-indexed. Any operation that yields a subarray of an underlying array shares structure. If you want a fresh copy of the data, use the copy() method. To assign the contents of array a to array b, you can use: a.assign(b) if a is freshly allocated. The following should work all the time: a[:] := b[:] Right now type-level ranges don't really exist, so if you want to operate on subarrays with statically type-checked bounds, you'll need to work with the subarray method: subarray[\nat b, nat s, nat o\]():Array1[\T, b, s\] This returns a structure-sharing subarray with base b and size s starting from offset o in the current array. 
The special factory functions vector and matrix are restricted to numeric argument types and static dimensionality:

  x' : ZZ64[1000] = vector[\ZZ64,1000\](17)

At the moment, any Array1 or Array2 whose element type extends Number is considered to be a valid vector or matrix respectively (this will eventually be accomplished by coercion, and vectors will be a distinct type).

Note that the t() method on matrices is transposition, and will eventually be replaced by opr ()^T.

GENERATORS, REDUCTIONS, and COMPREHENSIONS
------------------------------------------

Defining new generators is discussed in detail in the Fortress language specification, but if you're trying it yourself for the first time, you may find it instructive to browse the source code of the libraries.

DEFINING NEW PRIMITIVE FUNCTIONS
--------------------------------

It is relatively easy to add new primitive functions to Fortress. To do this, you simply invoke the builtinPrimitive function with the name of a loadable Java class which extends glue.NativeApp. Useful subclasses are NativeFn1 and NativeFn2, and any of the classes in glue.prim (particularly the classes in glue.prim.Util). Here's a sample native binding, which defines the floor operator which returns an integer:

  opr |\a:RR64/|:ZZ64 = builtinPrimitive("glue.prim.Float$IFloor")

You should *not* mention the type parameter to builtinPrimitive when invoking it; doing so will confuse the interpreter. Note also that the interpreter requires that you declare appropriate argument and return types for your native functions as shown above. If you give an incorrect type declaration on the Fortress side, you'll get non-user-friendly error messages when the Java code is run.

DEFINING NEW PRIMITIVE CLASSES
------------------------------

To define a new primitive class, you will need to write a native component. Examples of these can be found in Library; anything that starts with "native component" is a native component.
Here's the first few lines of File.fss:

  native component File
  import FileSupport.{...}
  export File

  private language="java"
  private package="com.sun.fortress.interpreter.glue.prim"

  object FileReadStream(filename:String) extends { ReadStream, FileStream }
      getter fileName():String =
        builtinPrimitive(
          "com.sun.fortress.interpreter.glue.prim.FileReadStream$fileName")
  ....

Note that we import a non-native component that defines traits mentioned in the extends clause. The first two bindings must be language and package, in that order; right now only language="java" is supported, and the package is where the backing class will be found.

The class com.sun.fortress.interpreter.glue.prim.FileReadStream defines the corresponding backing data type. Note that FileReadStream extends Constructor, and defines an inner class that extends FOrdinaryObject that represents the actual values that get passed around at run time. The methods must extend NativeMethod, but are otherwise referenced using builtinPrimitive just as for top-level functions. A native class can contain a mix of native and non-native method code.

Note, however, that the namespace in which a native object is defined is slightly odd from the perspective of library name visibility. For this reason, some primitive classes extend a parent trait (defined in a non-native component) that contains most of their non-native functionality and that has full access to the libraries. For example, FileStream provides a number of generator definitions that are inherited by FileReadStream.
Definition

We consider a Markov chain to be a graph G in which each edge has an associated non-negative integer weight w[e]. For every node (with at least one outgoing edge), the total weight of the outgoing edges must be positive.

A random walk in a Markov chain starts at some node s and then performs steps according to the following rule: Initially, s is the current node. Suppose node v is the current node and that e0, ..., ed-1 are the edges out of v. If v has no outgoing edge, no further step can be taken. Otherwise, the walk follows edge ei with probability proportional to w[ei] for all i, 0 <= i < d. The target node of the chosen edge becomes the new current node.

#include <LEDA/graph/markov_chain.h>

Creation

Operations
In this section of the C++ tutorial, you will learn the following:

- What is polymorphism
- Polymorphism in the C++ programming language
- Function Overloading
- Operator Overloading

9.1 C++ Polymorphism

The C++ programming language allows you to implement polymorphism by overloading functions and operators.

9.2 C++ Function Overloading

In C++, function overloading allows you to create two or more functions with the same name. The compiler decides which function to call based on the type, number, or sequence of parameters that are passed to these functions.

Note: In C++, the return type of a function is not used to distinguish between functions with the same name. The following statements are not valid:

void display (int i);
long display (int i);

9.2.1 Function Overloading Example

Consider that you need to create a function that displays an integer value and a function that displays a decimal value. Now, instead of creating two functions with different names, you can use function overloading. This allows you to create two functions (say, display) with the same name. Depending on the type of value you pass to the display function, the compiler invokes the appropriate one. The user of the program needs to remember only one function name (display). The user doesn't have to worry about how the compiler figures out which display function to call. This makes the program more user-friendly.

You can overload the display function as follows:

#include <iostream>
using namespace std;

void display (int x)
{
    cout << "Displaying integer value " << x << endl;
}

void display (float y)
{
    cout << "Displaying decimal value " << y << endl;
}

int main( )
{
    int a = 100;
    float b = 10.54;
    display(a);
    display(b);
}

The output of the program is as follows:

Displaying integer value 100
Displaying decimal value 10.54

In the above example, we overload the display function; there are two functions with the name display. However, one displays an integer value and the other displays a decimal value.
The compiler, in the above example, figures out the appropriate display function to call based on the data type being passed to it.

9.3 C++ Operator Overloading

An operator performs a specific task. For example, the '+' operator adds two numbers. In C++, all operators are defined for built-in data types and you cannot create new operators. Thus, you need to overload the existing operators to make them work with user-defined data types. When you overload an operator, similar to function overloading, you define the behavior of the operator in the context of a user-defined data type.

Let's say you want to add two objects of a class. To do this, you need to overload the '+' operator. When you overload the operator, you write code that specifies how to add the two objects. You can look at an operator as a function; the difference is in the name. The name of the operator has a symbol that must be preceded by the operator keyword. Just as you pass parameters to a function, you pass operands to an operator. In essence, operator overloading allows you to use existing operators with user-defined data types.
9.3.1 Operator Overloading Example

The following code defines a class MyNum that has the following variable and functions:

- A public variable val
- A constructor that initializes the variable val
- A member function sum that adds two objects and returns an object of the class MyNum

In main() we do the following:

- We create two objects of the class MyNum, a and b, and initialize them with the values 10 and 5 respectively
- Next, we use the object a to call the member function sum and pass object b as a parameter
- The result of the addition of the two objects is stored in object c of the class MyNum

The class definition and the main() function are as follows:

class MyNum
{
public:
    int val;
    MyNum(int i)            // constructor
    {
        val = i;
    }
    MyNum sum(MyNum &a)     // function sum that returns an object of the class MyNum
    {
        return MyNum(val + a.val);   // adding two objects
    }
};

int main()
{
    MyNum a = MyNum(10);    // creating and initializing object a
    MyNum b = MyNum(5);     // creating and initializing object b
    MyNum c = a.sum(b);
}

A more intuitive and easier way to add two objects is as follows:

MyNum c = a + b;

This can be done by overloading the '+' operator as follows:

MyNum operator+(MyNum &a)
{
    return MyNum(val + a.val);
}

Now in main(), you can write the following statement:

MyNum c = a + b;

The name of the function sum is replaced by the word operator+. The operator '+' is an alternate syntax for calling the function sum. The class MyNum with an overloaded operator+ and the main() function are as follows:

class MyNum
{
public:
    int val;
    MyNum(int i)                 // constructor with parameter
    {
        val = i;
    }
    MyNum operator+(MyNum &a)    // overloading the + operator
    {
        return MyNum(val + a.val);   // adding two objects of the class MyNum
    }
};

int main()
{
    MyNum a = MyNum(10);    // creating and initializing object a
    MyNum b = MyNum(5);     // creating and initializing object b
    MyNum c = a + b;
}
Input events queue with an initial size.

#include <deInputEventQueue.h>

The queue does not grow, since processing a large backlog of events accumulated during a brief stall is more of a problem than discarding some events in such a situation.

- Create a new event queue.
- Clean up the event queue.
- Add a copy of an event to the end of the queue. If the queue is full, the event is discarded.
- Event by index.
- Number of events in the queue.
- Determine if the queue contains any events.
- Remove all events from the queue.
This article is the second part of a "deep dive" series on Hot Module Replacement with webpack.

The module.hot API

In the first blog post of the HMR series, we discussed the four stages of the Hot Module Replacement process. Today, we will focus on the last stage. We will learn how to instruct the modules in our application to refresh themselves when they receive a hot update. The hot update handlers can either be injected by a webpack loader during the build, or be manually added by you. We will discuss only the second way in this article. Webpack exposes a public interface from the module.hot object. Let's explore it!

For our demos, we'll tinker with a simple webpage. It's best if you clone the project and follow the instructions, but it's not mandatory. You could also just read the blog post and trust me that everything works.

Clone the repository from. If you're a fan of the command-line interface, execute:

git clone

Navigate to the cloned folder and install the dependencies:

cd christmas-tree
npm install

To run the development server, execute:

npm run watch

After the build finishes, a new tab in your browser will open. Toggle the devtools console. Merry Christmas! Yeah, I know it's February. But you haven't put down your Christmas decorations either, have you?

I would like you to notice two things here: npm run watch starts the webpack development server, provided by the webpack-dev-server package.

The image below shows the project structure (excluding node_modules):

The source directory is where we will be making changes:

- index.js imports all source files. That's the entry module for webpack;
- lights.js creates the blinking effect for the Christmas tree;
- tree.js draws the tree itself.

The dist directory hosts the ready-to-run application:

- main.js is the single output bundle, produced by webpack;
- index.html is the web page that loads main.js.

We won't discuss package.json and package-lock.json in this post. If you want to learn more about them, check out the npm docs.
And finally, webpack.config.js - the place where we instruct webpack how to bundle our application. The application runs with HMR because the Hot Module Replacement plugin is part of the configuration: const path = require('path'); const webpack = require('webpack'); module.exports = (env, argv) => { const config = { entry: './src/index.js', output: { filename: 'main.js', path: path.resolve(__dirname, 'dist') }, devServer: { contentBase: './dist', hot: true, }, plugins: [], }; if (argv.mode === 'development') { config.plugins.push( new webpack.HotModuleReplacementPlugin() ); } return config; }; Enough theory, let's go back to the browser. The devtools console says that hot module replacement is running. But does it really work? 🤔 Open src/tree.js and increase the value of rowsCount. We get a bigger tree, but also a full page reload. It can be hard to notice it. Look at the messages in the console - they disappear when the reload starts. When the scripts on the page are executed anew, the messages appear again. The goal of HMR is to avoid full page reloads. Currently, the application doesn't accept hot updates, because we haven't instructed it to do so. Therefore, the webpack-dev-server falls back to a full page reload. The easiest way to handle an incoming update is by self-accepting it from the changed module. This will cause webpack to execute the new version of the module. All we need to add is: src/tree.js module.hot.accept(); However, the module.hot property is defined only when HMR is enabled. If we build the application for production, without HMR, the above code will throw an error. We need a check: src/tree.js if (module.hot) { module.hot.accept(); } One final note before trying it out - when we build for production, webpack knows that module.hot is undefined and the code block, guarded by the if statement, will never be executed. The UglifyJS/Terser plugin for webpack will remove it from the bundle. 
We don't have to worry that our development settings will end up in production. Let's change the rowsCount again and see what happens. The page is not fully reloaded, but the tree still updates, because the new tree.js module is executed. There's one more module in our simple app - src/lights.js, which "illuminates" the tree. src/lights.js import fir from './tree.js'; /** * Changes the look of * some 'needles' in the tree * every 1000ms */ function turnOn() { const blinkRate = 1000; const rowsCount = fir.rowsCount; const needles = fir.getNeedles(); setInterval(() => blink(rowsCount, needles), blinkRate ); } turnOn(); // ... To make lights.js a self-accepted module, we need to extend it with the same code that we used for tree.js earlier: if (module.hot) { module.hot.accept(); } Now, let's try decreasing the blinkRate to make the lights go faster and increasing it for the opposite effect. We didn't quite get the behavior we wanted. The light bulbs become more and more with every change. The self-accept causes webpack to execute the module whenever a hot update is needed. src/lights.js function turnOn() { ... setInterval(() => blink(rowsCount, needles), blinkRate ); } turnOn(); // <-- gets called every time the module is changed Executing the code above has a side effect - it triggers a repeating action with the setInterval call. We never cancel the already started actions, but keep triggering new ones. Luckily, webpack provides a mechanism for disposing old modules before replacing them. First, we need to keep the ID of the started action: src/lights.js let lightsInterval; function turnOn() { ... 
lightsInterval = setInterval(() =>
    blink(rowsCount, needles),
    blinkRate
  );
}

Then, we can clear it before the new module is executed:

src/lights.js

if (module.hot) {
  module.hot.accept();
  module.hot.dispose(_data => {
    clearInterval(lightsInterval);
  });
}

And finally, we can try changing the blinkRate again:

So far, we instructed webpack to execute the tree.js and lights.js modules whenever we change their code. The hot module replacement seems to work surprisingly well when the changed data is internal. But what would happen if, instead, we modify the public interface of a module? What would happen to the other modules depending on that interface 😱?

The tree.js module exports a single object - fir:

- The fir.draw() function visualizes the tree inside a container DOM element. It replaces the contents of the container with span elements (the needles). Each needle gets a className equal to the value of the NEEDLE_CLASS constant;
- The fir.getNeedles() function returns all DOM elements with the above className.

The lights.js module imports tree.js and uses fir.getNeedles() to obtain a list of the newly drawn DOM elements. The list is critically important for illuminating the Christmas tree. This is what the dependency graph looks like:

Let's put the HMR process to the test by modifying the value of NEEDLE_CLASS in tree.js.

Our lights went out! The HMR process failed the test miserably. Webpack executed tree.js when we changed it. The fir.draw() function created brand-new needles with classNames matching the new value of NEEDLE_CLASS. It also got rid of the previous needles. However, nothing happened in lights.js. Its list of needles still references the old, already removed needles. We should refresh that list when tree.js is changed. The parent accept API allows us to handle hot updates for a module from other modules that import it.
We can extend the HMR logic in lights.js to restart the bulbs whenever tree.js is changed:

src/lights.js

if (module.hot) {
  module.hot.accept(['./tree.js'], function() {
    clearInterval(lightsInterval);
    turnOn();
  });
  ...
}

Now we have update handling logic for tree.js in two places:

- lights.js;
- the tree.js module itself.

But which one will be preferred? Let's visualize all possible update handling scenarios for tree.js:

If there is a self accept in tree.js, webpack executes the module. The modules that import tree.js (its "parents") are not notified of the change.

If there is no self accept in tree.js, webpack looks for update handlers inside the modules that import it. The lights.js module imports tree.js. Let's say that it has a handler for it:

module.hot.accept(['./tree.js'], function updateHandler() { ... });

Webpack will:

- execute tree.js;
- update the tree.js import inside lights.js to point to the new module;
- call updateHandler().

If there is no handler for tree.js in lights.js, webpack will continue looking up the dependency graph. The index.js module imports lights.js. Webpack will check if it contains a handler for lights.js. I want to highlight this part - it won't check for a handler for tree.js (the changed module), but for lights.js (the module it actually imports). Let's imagine for a moment that index.js has a handler:

src/index.js

module.hot.accept(['./lights.js'], function updateHandler() { ... });

Webpack will:

- execute tree.js;
- execute lights.js (the imported tree.js module is updated);
- update the import of lights.js inside index.js;
- call updateHandler().

Webpack continues looking for handlers until it finds a 'root' module - a module that's not imported in any other module. In that case, the webpack-dev-server will fall back to a full page reload, and in the case of NativeScript - an app restart.

Back to the tree - we noticed that there are two update handlers for tree.js. We want webpack to use the new logic in lights.js. That's why we have to remove the self accept from tree.js.
src/tree.js // comment or simply delete the code below // if (module.hot) { // module.hot.accept(); // } Let's try changing the value of NEEDLE_CLASS again: Aaaand...the HMR process fails. Instead of a refresh, we get a full page reload. I must admit that I lied to you. The tree.js module is actually imported not only in lights.js but also in index.js. This is the real dependency graph: The changed module should have an update handler in every branch of its dependency graph. Currently, we are not accepting the changes for tree.js in index.js and the upcoming hot update is rejected. Notice that if we have only one root module, we can add an application-wide update handler in it. We won't do that in our project for now. For example, if you're using Angular your task is a little easier, as most Angular apps have a single entry module - main.ts, which bootstraps the app. If there are no lazy loaded NgModules, main.ts will be the only root module. Adding the following handler to it will catch all hot updates in the app: if (module.hot) { module.hot.accept(["./app/app.module"], function() { ... }); } import { AppModule } from "./app/app.module"; ... Back on our project again - we need a parent accept for tree.js inside index.js. And we don't even need a callback. src/index.js if (module.hot) { module.hot.accept(['./tree.js']); } Now the handler look-up process will be successful, because the hot updates for tree.js are accepted in all of its parents. Let's try changing the value of NEEDLE_CLASS one last time before we give up: Yeyyy! It works! But... The code we wrote doesn't really...feel good to me. The parent accept in index.js seems a bit artificial - it's there only because we have to accept the upcoming update. And what if we add a new module that imports tree.js? We will have to add an update handler inside it too! It's time to refactor. Currently, we need to accept the tree.js changes in index.js and lights.js, because both modules import it. 
Let's take a look at how lights.js uses tree.js:

src/lights.js

import fir from './tree.js';

function turnOn() {
  const rowsCount = fir.rowsCount;
  const needles = fir.getNeedles();
  ...
}

turnOn();

Instead of importing fir, we can make it a parameter of the turnOn function. In that case, the function shouldn't be called, but exported instead.

src/lights.js

// import fir from './tree.js';

export function turnOn(fir) {
  const rowsCount = fir.rowsCount;
  const needles = fir.getNeedles();
  ...
}

// turnOn();

We are not importing tree.js anymore and we can also remove the handler for it:

src/lights.js

if (module.hot) {
  // module.hot.accept(['./tree.js'], function() {
  //   clearInterval(lightsInterval);
  //   turnOn();
  // });
  module.hot.accept();
  module.hot.dispose(_data => {
    clearInterval(lightsInterval);
  });
}

Whoever uses lights.js will have to import the turnOn function, call it, and provide the fir object as an argument. In our case the importer is index.js. After doing the necessary modifications, index.js should look like this:

src/index.js

import fir from './tree.js';
import { turnOn } from './lights.js';

turnOn(fir);

if (module.hot) {
  module.hot.accept(['./tree.js']);
}

Now we have an update handler for tree.js only in index.js. We can move the refresh logic for lights.js inside it. Let's export a function that "restarts" the lights:

src/lights.js

export function restart(fir) {
  clearInterval(lightsInterval);
  turnOn(fir);
}

And call that function from the update handler in index.js:

src/index.js

import fir from './tree.js';
import { turnOn, restart } from './lights.js';

turnOn(fir);

if (module.hot) {
  module.hot.accept(['./tree.js'], () => {
    restart(fir);
  });
}

Let's test out the changes we made by modifying the tree.js file. Editing tree.js still works! However, now the lights.js module is self-accepted and no longer imports tree.js. If we try to modify it, the tree won't be redrawn. We need to move the lights.js update handler to index.js as well.
src/index.js if (module.hot) { // add ./lights.js to the list of accepted dependencies module.hot.accept(['./tree.js', './lights.js'], () => { restart(fir); }); } Don't forget to remove the self accept from lights.js. src/lights.js if (module.hot) { // Comment or remove the line below // module.hot.accept(); // Keep the disposal logic! module.hot.dispose(_data => { clearInterval(lightsInterval); }); } That's it! Refactoring done! Let's stop making changes before we break something... You can find the HMR-charged version of the demo in the branch, called finished: There is (at least) one bug in the disposal logic of the application. Try to find it! Feel free to open a PR in the Github repo. The first to do it, may (or may not) win something nice 🙂. We learned how to use the module.hot API to manually handle hot updates. Some frameworks, like React, Vue, Angular, and NativeScript, provide built-in HMR support. In a dedicated article, we will explore how each framework solves the problem of refreshing the application and keeping its state intact.
On Thu, 2009-01-01 at 14:25 -0500, Brian Hurt wrote: > First off, let me apologize for this ongoing series of stupid newbie > questions. Haskell just recently went from that theoretically interesting > language I really need to learn some day to a language I actually kinda > understand and can write real code in (thanks to Real World Haskell). Of > course, this gives rise to a whole bunch of "wait- why is it this way?" > sort of questions. > > So today's question is: why isn't there a Strict monad? Something like: > > data Strict a = X a (Note that you can eliminate the `seq`s below by saying data Strict a = X !a instead, or, if X is going to be strict, newtype Strict a = X a). > instance Monad Strict where > ( >>= ) (X m) f = let x = f m in x `seq` (X x) > return a = a `seq` (X a) > > (although this code doesn't compile for reasons I'm not clear on- I keep > getting: > temp.hs:4:0: > Occurs check: cannot construct the infinite type: b = Strict b > When trying to generalise the type inferred for `>>=' > Signature type: forall a b1. > Strict a -> (a -> Strict b1) -> Strict b1 > Type to generalise: forall a b1. > Strict a -> (a -> Strict b1) -> Strict b1 > In the instance declaration for `Monad Strict' > as a type error. Feel free to jump in and tell me what I'm doing > wrong.) Since you asked, The simplest fix (that makes the code you posted compile) is to pattern match on the result of f (to do what you want): (>>=) (X m) f = let X x = f m in x `seq` X x (I would write X m >>= f = let X x = f m in X $! x) But unless you export the X constructor (and don't make it strict), then f should never return (X undefined), so the above is equivalent, for all f in practice, to X m >>= f = f m I think what you really want to say is newtype Strict alpha = X alpha instance Monad Strict where return = X X x >>= f = f $! x jcc
This topic describes how a CDN is used within the Episerver Digital Experience Cloud services, and provides general recommendations for how to configure caching specifically for CDN environments.

The purpose of a content delivery network or content distribution network (CDN) is to ensure high availability when serving content to visitors. A CDN consists of a globally distributed network of proxy servers deployed in multiple data centers. CDNs deliver content from the fastest and closest server available.

The CDN has its own URL pointing to the original URL. The visitor navigates to something like. The request goes to the CDN, which in turn goes to the origin URL for non-cached data. The request returns to the CDN, which caches the response; subsequent requests are served from these caches.

The CDN servers are designed to cache content according to the cache rules you define in the HTTP cache headers for your web application. It is critical that these caching rules are correctly set for your solution to scale properly. Setting up a CDN is fairly easy and, even with minor configuration work, you can get better performance. You should develop your application with the CDN in mind from the start, and add proper cache headers based on what you want to achieve. This way, when you hook up the CDN, it will work automatically.

You should control the cache settings in the web application rather than writing overriding rules in the CDN, because the latter can quickly get quite complex. However, there may be situations where rules are needed, such as when special configurations are required, or if you do not want to do a deployment with these changes.

Evaluate whether frequently-requested objects can be cached. Static objects can be stored indefinitely in the caches. For cloud-based solutions, you should use a version identifier so that the CDN uses the latest CSS file right after deployment. If you cannot use version identifiers for frequently-requested objects, you can set a maximum lifespan for an object.
This is one of the values that are typically set in the cache-control header described below. Priority should be version identifier plus indefinite caching first, and no version identifier plus maximum lifespan second.

Using the correct cache headers in your application ensures that you get the most out of the CDN service, regardless of whether you want to cache just static content, or text/HTML. You can use Last-Modified, cache-control settings, ETags, or a combination of these, to determine whether content has changed. There are many HTTP header settings, but cache-control is one of the most important ones, because this setting defines the time for which a cached object should remain cached. A CDN monitors the cache headers in the response, and caches content according to these settings:

Note: Cache-control must be set to public (not private, which means that only the browser is allowed to cache the content).

Typical values:

Note: You should not set the cache to public on page requests if the page has private information, such as a cart page in e-commerce.

Example: Caching an object for a maximum of 20 minutes.

Cache-Control: public, max-age=1200, must-revalidate

Use as long a max-age as possible, at least for static content, to minimize the trips to the web server. In web.config, there are specific sections for setting cache headers for static content. You can see an example of settings when you install an Episerver site through Visual Studio. The tag <staticContent> and the subtag <clientCache> set cache headers in the HTTP response. In the following example, static files are cached for one day. The <staticContent> section controls caching for the files that are part of the web application code and cannot be changed by editors. The <caching> tag and its subsettings control the IIS cache.

If you use SetMaxAge, this sets cache-control to public. The following example sets the max-age using a TimeSpan of one hour that is converted automatically to 3600 seconds.
public class StartPageController : PageControllerBase { public ActionResult Index(StartPage currentPage) { Response.Cache.SetCacheability(HttpCacheability.Public); Response.Cache.SetMaxAge(TimeSpan.FromHours(1)); return View(currentPage); } } If you use SetExpires, this sets cache-control to public, and Expires to your selected date and time. public class StartPageController : PageControllerBase { public ActionResult Index(StartPage currentPage) { Response.Cache.SetCacheability(HttpCacheability.Public); Response.Cache.SetExpires(DateTime.Now.AddHours(1)); return View(currentPage); } } The ETag or entity tag is a part of the HTTP protocol determining cache validation, and is best used for static content, such as files, uploaded assets, and pages without output caching. For requests to resources composed from dynamic content objects such as a landing page, using ETags is not recommended. You can combine ETags with other settings. You can use both Cache-Control "max-age" and ETags to fine-tune cache management and optimize performance. You can also use only ETags, if you have specific caching control requirements. If you need to disable ETag headers from the HTTP response, do this by adding a setEtag="false" attribute setting to the web.config file under the application root directory, as in the following example. <configuration> ... <system.webServer> ... <staticContent> <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="1.00:00:00" setEtag="false" /> </staticContent> ... </system.webServer> ... </configuration> The default cache layer setting in DXC Service for query strings is "Standard", meaning that a different resource is delivered each time the query string changes. Proper encryption practices are necessary to prevent bad actors from accessing data. The CDN used in DXC service includes web encryption based on TLS (Transport Layer Security)/SSL (Secure Sockets Layer) protocols, for communication with other services over HTTP (HTTPS). 
The service includes a default certificate provided by Episerver, but you can replace this with your own. If you replace the certificate, the following requirements apply:

When the application is deployed to a DXC environment, there can be some lag time for assets to update when replacing them in the CMS, due to the asset still being cached in the CDN or in browsers. One solution to overcome this is versioning the URLs to assets. See for an (unsupported) example of how to accomplish this.

Last updated: Apr 08, 2019
Brian Noyes gave a presentation to the Cleveland .NET SIG last night on WPF for ASP.NET Developers. I took some notes, which I’ll present a few of here in case they’re helpful to anyone else.

- WPF
  - Logical Pixels, not physical pixels, each 1/96 of an inch
  - Vector Graphics based
- Containers
  - Many controls are containers
  - Support composition (images within buttons within buttons etc.)
- Declarative XAML
- Documents and Media are 1st class objects
  - Video/Audio
  - Word DOCs, PDF, etc.
- Supports Interop (both ways) with Windows Forms in about 4 LOC
- Requires .NET 2.0
  - New features will require .NET 3.0/3.5
- Application Types
  - Windows Application
    - Same as Windows Forms
    - Can support ClickOnce deployment
  - XBAP - XAML Browser Application
    - Runs in the browser transparent to the user
    - Behind scenes, uses ClickOnce to deploy
    - Limited Security Context
  - XAML in the Browser
    - Static – no script/DLLs
    - Basically a way to render without resorting to HTML
- Silverlight 1.1 – What’s needed?
  - Runtime
  - SDK
  - Orcas B1 and Tools
  - Expression Blend 2
- Silverlight 1.1 – Features
  - .NET Codebehind
  - Some Controls
  - Access to BCL
  - LINQ
  - Networking stack including REST/RSS
  - Dynamic Language Support
  - Some DRM story
- XAML Basics
  - Elements define objects or set properties – similar to ASP.NET markup
  - XML namespaces scope objects defined in markup
    - Think Imports or using statements
  - Dependency Properties – Attached Property
    - Objects can refer to properties of their containers
    - e.g. <Grid><Image Grid.Column="1" … /></Grid>
    - Also used to affect behavior, particularly in WF
  - Note calls to InitializeComponent() at design time, even though this doesn’t exist until runtime.
- Data Binding
  - Example: Text="{Binding Path=Title}"
  - Window.DataContext is the property to assign collections or objects to
- Blend 2
  - Supports adding events to controls if VS is used at the same time
    - Switches focus to VS and adds the event
    - Requires VS to compile the app before Blend can preview it
  - My take – better than hand-writing the events but very klugey to have to jump between the tools, especially when MSBuild would be so easy to call from Blend.

Overall it was a good overview. I missed the very beginning. The slides and demos and such are available on Brian’s blog. If that doesn’t work, try the .NET SIG Presentations area.

[categories: XAML,WPF,Silverlight]
This is an example C program demonstrating the quicksort algorithm.

#include <stdio.h>
/* For "strcmp". */
#include <string.h>

/* This swaps two elements of "array" indexed by "a" and "b". */

static void swap (const char ** array, int a, int b)
{
    const char * holder;

    printf ("Swapping entry %d, '%s' to %d, and entry %d, '%s' to %d.\n",
            a, array[a], b, b, array[b], a);
    holder = array[a];
    array[a] = array[b];
    array[b] = holder;
}

/* This is an example implementation of the quick sort algorithm. */

static void quick_sort (const char ** array, int left, int right)
{
    int pivot;
    int i;
    int j;
    const char * key;

    /* Catch the trivial case. */
    if (right - left == 1) {
        if (strcmp (array[left], array[right]) > 0) {
            printf ("Only one entry: ");
            swap (array, left, right);
        }
        return;
    }
    /* Pick a mid point for the pivot. */
    pivot = (left + right) / 2;
    key = array[pivot];
    printf ("Sorting from %d to %d: pivot (midpoint) is at %d, '%s'\n",
            left, right, pivot, key);
    /* Put the pivot key at the left of the list. */
    swap (array, left, pivot);
    i = left + 1;
    j = right;
    while (i < j) {
        while (i <= right && strcmp (array[i], key) < 0) {
            /* Leave the parts on the left of "key" in place if they are
               smaller than or equal to "key". */
            i++;
        }
        while (j >= left && strcmp (array[j], key) > 0) {
            /* Leave the parts on the right of "key" in place if they are
               greater than "key". */
            j--;
        }
        if (i < j) {
            /* "array[i]" is greater than "key", and "array[j]" is less
               than or equal to "key", so swap them. */
            printf ("Out of order: '%s' > '%s', but %d < %d: ",
                    array[i], array[j], i, j);
            swap (array, i, j);
        }
    }
    /* Put the pivot key back between the two sorted halves. */
    printf ("Putting the pivot back: ");
    swap (array, left, j);
    if (left < j - 1) {
        printf ("Sub-sorting lower entries.\n");
        /* Sort the left half using this function recursively. */
        quick_sort (array, left, j - 1);
    }
    if (j + 1 < right) {
        printf ("Sub-sorting upper entries.\n");
        /* Sort the right half using this function recursively. */
        quick_sort (array, j + 1, right);
    }
}

/* This is the example data to sort. */

const char * monsters[] = {
    "jabberwocky",
    "werewolf",
    "dracula",
    "zebedee",
    "captain pugwash",
    "the clangers",
    "magilla gorilla",
    "hong kong phooey",
    "spartacus",
    "billy the silly billy",
};

/* "n_monsters" is the number of things to sort. */

#define n_monsters (sizeof monsters)/(sizeof (const char *))

/* This prints the contents of "array". */

static void print_array (const char ** array, int size)
{
    int i;

    for (i = 0; i < size; i++) {
        printf ("%d: %s\n", i, array[i]);
    }
    printf ("\n");
}

int main ()
{
    printf ("Before sorting:\n\n");
    print_array (monsters, n_monsters);
    quick_sort (monsters, 0, n_monsters - 1);
    printf ("\nAfter sorting:\n\n");
    print_array (monsters, n_monsters);
    return 0;
}

(download)

The output of the example looks like this:

Before sorting:

0: jabberwocky
1: werewolf
2: dracula
3: zebedee
4: captain pugwash
5: the clangers
6: magilla gorilla
7: hong kong phooey
8: spartacus
9: billy the silly billy

Sorting from 0 to 9: pivot (midpoint) is at 4, 'captain pugwash'
Swapping entry 0, 'jabberwocky' to 4, and entry 4, 'captain pugwash' to 0.
Out of order: 'werewolf' > 'billy the silly billy', but 1 < 9: Swapping entry 1, 'werewolf' to 9, and entry 9, 'billy the silly billy' to 1.
Putting the pivot back: Swapping entry 0, 'captain pugwash' to 1, and entry 1, 'billy the silly billy' to 0.
Sub-sorting upper entries.
Sorting from 2 to 9: pivot (midpoint) is at 5, 'the clangers'
Swapping entry 2, 'dracula' to 5, and entry 5, 'the clangers' to 2.
Out of order: 'zebedee' > 'spartacus', but 3 < 8: Swapping entry 3, 'zebedee' to 8, and entry 8, 'spartacus' to 3.
Putting the pivot back: Swapping entry 2, 'the clangers' to 7, and entry 7, 'hong kong phooey' to 2.
Sub-sorting lower entries.
Sorting from 2 to 6: pivot (midpoint) is at 4, 'jabberwocky'
Swapping entry 2, 'hong kong phooey' to 4, and entry 4, 'jabberwocky' to 2.
Out of order: 'spartacus' > 'dracula', but 3 < 5: Swapping entry 3, 'spartacus' to 5, and entry 5, 'dracula' to 3.
Putting the pivot back: Swapping entry 2, 'jabberwocky' to 4, and entry 4, 'hong kong phooey' to 2.
Sub-sorting lower entries.
Only one entry: Swapping entry 2, 'hong kong phooey' to 3, and entry 3, 'dracula' to 2.
Sub-sorting upper entries.
Only one entry: Swapping entry 5, 'spartacus' to 6, and entry 6, 'magilla gorilla' to 5.
Sub-sorting upper entries.
Only one entry: Swapping entry 8, 'zebedee' to 9, and entry 9, 'werewolf' to 8.

After sorting:

0: billy the silly billy
1: captain pugwash
2: dracula
3: hong kong phooey
4: jabberwocky
5: magilla gorilla
6: spartacus
7: the clangers
8: werewolf
9: zebedee

If you are very interested in performance, you may be interested in An example of an inline qsort, which compares the performance of qsort, an inline sorting routine in C, and the C++ standard template library's template-based sorting.
I use the RListDisplay widget in Ivy 4.1.2 and 4.1.3 to display other components dynamically, and it works well so far. The RListDisplay shows a vertical scrollbar as soon as there is not enough vertical space to show all the components. It is included in a Rich Dialog, and the list of components to load in the RListDisplay is built in the Rich Dialog start method.

The problem is that the scrollbar scrolls automatically to the end of the ListDisplay after the RDC has been loaded. I cannot get the scrollbar to stay "quiet" at the 0.0 position. I have tried the "ListDisplay.setAutoscrolls(false);" method and "panel.ListDisplay.setVerticalScrollBarPosition(0.0);" in the last step of the start method: the scrollbar always scrolls automatically to the end. Is there any solution to this problem?

asked 06.01.2011 at 07:46 by Emmanuel

In our projects we use the following solution that was found by my colleague: at the end of the start method we have an RD step with the following code:

import ch.ivyteam.ivy.richdialog.exec.panel.IRichDialogPanel;

List<IRichDialogPanel> panels = panel.ListDisplay.getPanels();
if (panels.size() > 0)
{
    IRichDialogPanel firstPanel = panels.get(0);
    panel.ListDisplay.selectPanel(firstPanel.getName());
    panel.ListDisplay.scrollToVisible();
}

answered 14.01.2011 at 03:29 by Andriy Petri...
Hello all. I've recently started doing a Java encryption/decryption project using the crypto and security packages. I have very little experience with Java, although I do have a lot of experience in C and C++, so I am no stranger to OO programming. Anyway, I have this one line of code which is causing me a problem.

import java.util.*;
import DataLog.*;
import java.security.*;
import javax.crypto.*;
import javax.crypto.spec.*;

public class JCipher
{
    private static Cipher jc;

    public static void main(String[] args)
    {
        /* Apparently this method does not exist, how should I use the Cipher class? */
        jc = Cipher.getInstance("DES/CBC/PKCS5Padding");
    }
}

Obviously I have more code than that, but it shows the principle. Here is the compiler message:

JCipher.java:79: unreported exception java.security.NoSuchAlgorithmException; must be caught or declared to be thrown
        jc = Cipher.getInstance("DES/CBC/PKCS5Padding");
                               ^
1 error

I am using JDK 6.27. Did they change the interface for accessing the classes? All the info I can find says that I should get an instance rather than doing something like:

jc = new Cipher(...);

The only other idea I have is that my imports are wrong, but all the examples I've looked at have the same imports.
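The compiler message points at the real issue: Cipher.getInstance does exist, but it declares checked exceptions (NoSuchAlgorithmException and NoSuchPaddingException) which the calling code must catch or declare. A minimal sketch of one way to satisfy the compiler (the class name here is illustrative, not from the original post):

```java
import java.security.GeneralSecurityException;
import javax.crypto.Cipher;

public class JCipherExample {
    private static Cipher jc;

    public static void main(String[] args) {
        try {
            // getInstance throws the checked exceptions
            // NoSuchAlgorithmException and NoSuchPaddingException;
            // GeneralSecurityException is a superclass of both.
            jc = Cipher.getInstance("DES/CBC/PKCS5Padding");
            System.out.println("Created cipher: " + jc.getAlgorithm());
        } catch (GeneralSecurityException e) {
            e.printStackTrace();
        }
    }
}
```

Declaring the exceptions instead, e.g. `public static void main(String[] args) throws GeneralSecurityException`, would also work.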
The Windows Mobile operating system provides a Notifications feature which can be utilised to display prompts to the user in the form of toast or popup balloons (depending upon operating system version) even when your application is running in the background. These are ideal for when your application must inform the user that background tasks have completed, or that particular actions must be performed. This blog entry discusses how you can utilise notifications within your own applications.

Supported Platforms

Notifications are only supported on Windows Mobile Professional or Classic (i.e. Pocket PC) devices; they are not supported on Windows Mobile Standard (i.e. Smartphone) devices. If you develop a Compact Framework application which uses notifications, the application will compile when you target a Windows Mobile Standard device, but will cause a NotSupportedException when the application is run on the device.

To utilise the Microsoft.WindowsCE.Forms.Notification class you must add a reference to the Microsoft.WindowsCE.Forms.dll assembly. One of the easiest ways to obtain this reference, and to configure your notification, is to drag a "Notification" component from the Toolbox within Visual Studio 2005 (it is located within the Device Components category). Doing this will automatically add the needed reference to your project, and allow you to configure the notification's properties graphically via the Properties window.

Notification Properties

The notification class has a number of properties which control the look and feel of the notification balloon. These properties are as follows:

- Caption – the text displayed in the caption bar at the top of the notification balloon.
- Text – a string containing the HTML content to display in the main part of the notification popup. This enables you to use formatting, and different colours, within your notifications.
- Critical – a boolean flag; if set to true this indicates that the notification is of urgent importance. This has the effect of using a different colour border around the edge of the notification.
- Icon – the icon to display on the nav bar for the notification. The user can tap on this icon to display a hidden notification, so it should be representative of the meaning of the notification.
- InitialDuration – the number of seconds the notification should appear on the screen when initially made visible. After this duration has expired the notification will hide itself, and the user will have to tap on the icon on the nav bar to make it re-appear. If this value is set to zero the popup notification is not displayed, instead going straight to the icon on the nav bar.
- Visible – setting the Visible property to true will make the notification balloon appear on the screen.

Displaying a notification

To display a notification to the user is as simple as creating an instance of the Notification class, setting up the properties for the desired effect, and then finally setting the Visible property to true to make the notification visible to the user. This is demonstrated in the following sample:

using Microsoft.WindowsCE.Forms;

private Notification n = null;

private void button1_Click(object sender, EventArgs e)
{
    // Create an instance of the notification class and configure
    // its properties for the desired effect.
    n = new Notification();
    n.Caption = "Hello World!";
    n.Text = "<b>This</b> is a <u>sample</u> notification!";
    n.Critical = false;
    n.Icon = Properties.Resources.NotificationIcon;

    // Finally to make the notification appear on screen
    // set the Visible property to true.
    n.Visible = true;
}

One important thing to notice is that we have not used a local variable within the button1_Click method to hold the Notification we are displaying to the user. The reason for this has to do with the CLR and its Garbage Collection behavior.
Although the code may work if you use a local variable, it is not guaranteed and will potentially lead to unpredictable behavior (more on this in a later section).

The only property which deserves further discussion is the Icon one. You can store an icon in a number of ways. Perhaps the easiest way to store an icon in your executable is to use the Resources.resx file the Visual Studio 2005 project wizard will have created for you. The following screenshot demonstrates where you can find this file within Solution Explorer. If you open the file you can add new icon(s) into it, and these icons will be accessible via strongly typed properties within the Properties.Resources class, as demonstrated by the code sample above.

Hiding a notification

There are two different ways you can remove a notification which is visible on the screen. You can simply set the Visible property to false, as the following example demonstrates:

// Using this approach to hide a notification will allow you
// to re-display it by changing the Visible property back to true.
n.Visible = false;

This has the benefit that you can decide to re-display the notification by simply resetting the Visible property back to true. You can change the Visible property as many times as you like.

The alternative approach is to call the Dispose method of the Notification class, as the following example demonstrates:

// Using this approach will hide the notification but won't
// allow you to re-display it without creating a new instance
// of the Notification class.
n.Dispose();

Once you have done this you will not be able to display the notification again without creating a new instance of the Notification class.

Previously we mentioned that you should not use a local variable to reference your Notification object. This last code sample demonstrates the reason why.
If you had stored your notification in a local variable within the button1_Click method, the garbage collector would detect your variable as potential garbage when the method completed. If a garbage collection occurred, and decided to collect this reference, the garbage collector would call the Dispose method on the notification, which would remove it from the screen. By keeping a reference to the notification "alive" for the lifetime of the form (by using a member variable to reference it) the garbage collector will not be able to dispose of it until the form is closed.

Detecting when the notification is hidden

The notification class has a BalloonChanged event which fires whenever the notification balloon is made visible or hidden. The following example demonstrates how you can listen to this event, in order to perform a task when the popup balloon is hidden:

Notification n = new Notification();
// configure the notification properties...
n.BalloonChanged += new BalloonChangedEventHandler(n_BalloonChanged);

void n_BalloonChanged(object sender, BalloonChangedEventArgs e)
{
    // The Visible property indicates the current state of the
    // popup notification balloon
    if (e.Visible == false)
    {
        // If the balloon has now been hidden, display a message box
        // to the user.
        MessageBox.Show("The balloon has been closed", "Status");
    }
}

Sample Application

A sample application is available for download, which demonstrates the use of the Notification class. It enables you to experiment with the various properties of the notification class, and see how they alter a notification.

There is one member of the Notification class which we have not discussed in this blog entry. This is the ResponseSubmitted event, which can be used to process feedback provided by the user when they dismiss the popup notification. For example, in the HTML text of a notification you could create a couple of radio buttons and a text field.
By handling the ResponseSubmitted event you can determine what values the user has entered and use them to alter the behavior of your application. Covering how to utilise the ResponseSubmitted event to process HTML based forms will be the topic of a future blog entry.

Great article. However, I just can’t make my notifications go away. I mean, when I click on “Hide”, the notification stays minimized (in ballon form), and it is not removed no matter what I do (only way to do it is to reset device). I’m using WM 6.0. Is there something I’m missing? Thanks!

Hi hacqua, I’m glad you liked the article. I’m blogging about these topics as a way to give back to the community, so I’m glad I’ve been of help to someone. This is “by design”, the hide button simply hides the popup ballon part of the notification, but will leave the notification icon on the nav bar at the top of the form. This is to support scenarios such as “you are currently downloading a file: 42KB out of 5923KB completed so far” style progress notifications, i.e. long running notifications which should still be accessable to the user, but which the user may want to temporarily hide in order to work with the device before the process has completed. To get the functionality you want (clicking the Hide button will remove the notification completely), you should listen to the BallonChanged event and do something like the following: See if making that change to the sample application does what you desire. Hope this helps

Yes, now it works!!! Thank you very much! :-)

On my WM6 device, there are numerous pop-ups that look identical to the Notification popup except they have custom text for the 2 soft keys rather than just having “Hide”. How can this be done?

Hi Dan, Welcome to the world of the .NET Compact Framework :-) The Notification class within the Microsoft.WindowsCE.Forms assembly is a .NET wrapper around the native functionality present within the Operating System. Support for having custom menu bars associated with notifications is something which is missing from this wrapper class. I.e. it is supported by the operating system, but is not exposed by the .NET wrapper class. On MSDN you can see within the documentation for the SHNOTIFICATIONDATA structure used by the native API references to fields called skm and rgskn which allow control over the menu bar present. So at present setting custom menu bars up for a notification is not possible from a .NET Compact Framework application. It is possible you could PInvoke the entire notification API yourself in order to expose the required functionality. I might have a little bit of an investigation and see if I can come back with a better answer; it is possible for instance that someone has already produced such a wrapper.

Hi Dan, As promised I have taken a look at what is required to get custom soft keys on a notification within the confines of a .NET Compact Framework application. Please see my newer blog entry “Custom soft keys for Notifications” for further details and an example application. This approach also provides an alternative way for hacqua to get a notification to remove itself when the “hide” button is pressed (make a custom soft key of type “Dismiss” instead of “Hide”).

Hi, I am trying to implement the sample program provided in “Northwind Pocket Delivery: Transportation for Windows Mobile-based Pocket PCs”. It seems the notification APIs used in this project are obsolete now. The target platform is Windows Mobile 6. My requirement is to get the notification in the Mobile device which originates from the web interface. I have gone through your well explained posts here but could not find something related to this. Thanks for preparing and sharing your vast programming experience with us…:) _Aneesh

Hello, Is there a way to detect programmatically when a Notification Ballon is displayed. For instance a “Main batery Low” balloon? Thanks a lot in advance. Alex.

Hello, I have a problem using notification in an application on Pocket PC 2003 SE. when i click on the icon of may notification in the title bar the device resets. With pocket pc 2003, windows mobile 5.0 or 6.0 the problem doesn’t manifest.

Can we have notification control for smartphone.

Hello Sir, I create a custom notification for my application and notification icon will show up in the title bar of the PDA phone. If there is only one ballon notificaiton on the title bar, then i can click on the notification icon to show up the ballon message, but when there is also a sms notification icon , then when i try to click on one of the icons on the title bar, then the whole system crashed. Can anyone know what happeded to it. thanks a lot anne

Hello, It’s a good exemple, i’m just a question, when i create my notification in my application, if my application close, so my notification disapeard, it’s normal or not ? I want to send a notification, close my apps, but i want to acces to my notification, it’s possible or not ? Thank a lot Fetch

I really appreciate the info posted in your Notification articles. These are some of the things that I have been playing with the last day or so. The two questions that I have that have not been answered in your posts or in the comments are:

1. Do you know if it is possible to initiate a Notification and not have the balloon show up initially? In other words can I get the icon to display and not show the balloon? I want to let the user click on the icon to display the balloon when they want to.

2. How can I change the icon without having the balloon pop up? I need to change the icon based on the current state of my app so that the user can see what is going on without popping the balloon up. I can change the icon through the standard C# API but every time I do that the balloon pops up.

Thanks for the info, Jay

Hi, I would like to know if i can change the look and feel of the notification? i mean instead of that default box; can i change the apeareance (creating my own box with a different design). thx. If not is there any way to do it with another API?. Regards,

Hi, Thanks for the article, I’m new to windows mobile development, so this definitely helped. One question: Is there a way to configure the location of the notification? I would like mine to stay at the top ‘Start’ menu bar, but by default it ends up at the bottom of the screen, even though the icon is at the top. I don’t see an obvious way to configure this, and when I originally tried it in the Pocket PC 2003 platform it was at the top. Regards, Jon

Great article with a simple and practical example. I used this today to get notifications working in my app.

Hi Christopher, Is it possible to create the popup window similar to the one when you clicked the icons in the taskbar such as sound icon and Time. It looks like a callout window. Regards, Roy

[...] I grabbed an application called AKToggleWifi from akarnik and created a small program that makes use of the Notifications class based on the code posted by Christopher Fairbairn. [...]

Hi Christopher, Can i display this notification during a call? will it beep like when a message is received? Thank a lot. Kareem Ayoub.

Hi there, Great example. Is there a way to disable the balloon and just keep the icon on the top. Reason is that I want to use the icon to indicate wifi availability to the user from the app. I change the icon when the device has no signal. I can change the notification icon dynamically just fine but everytime it changes, the balloon also pops up and I don’t want that. Appreciate your help. Thanks

Hi Christopher, Very nice example. I want to know how to implement the WM 6.1 short message notification UI (using C#). The rigth softkey is not a menuItem but a menu which has some menuItems. thank a lot. Li Hong Bo

Hi, Great example, but is it possible to change native SMS notification with this code ?

Nice work!

Excellent article! Keep it up!
to help convert ASP.NET to .NET Core.

1. Using xproj & csproj files together

There doesn’t seem to be any way for these two project types to reference each other. You move everything to xproj, but then you can no longer use MSBuild. If you are like us, that means your current setup with your build server won’t work. It is possible to use xproj and csproj files both at the same time, which is ultimately what we ended up doing for our Windows-targeted builds of Prefix. Check out our other blog post on this topic:

2. Building for deployment

If you are planning to build an app that targets non-Windows platforms, you have to build it on the target platform. In other words, you can’t build your app on Windows and then deploy it to a Mac. You can do that with a netstandard library, but not a netcoreapp. They are hoping to remove this limitation in the future.

3. NetStandard vs NetCoreApp1.0

What is the difference? NetStandard is designed as a common standard so that .NET 4.5, Core, UWP, Xamarin, and everything else has a standard to target. So, if you are making a shared library that will be a NuGet package, it should be based on NetStandard. Learn more about NetStandard here:

If you are making an actual application, you are supposed to target NetCoreApp1.0 as the framework IF you plan on deploying it to Macs or Linux. If you are targeting Windows, you can also just target .NET 4.5.1 or later.

4. IIS is dead, well sort of

As part of .NET Core, Microsoft (and the community) has created a whole new web server called Kestrel. The goal behind it has been to make it as lean, mean, and fast as possible. IIS is awesome but comes with a very dated pipeline model and carries a lot of bloat and weight with it. In some benchmarks, I have seen Kestrel handle up to 20x more requests per second. Yowzers!

Kestrel is essentially part of .NET Core, which makes deploying your web app as easy as deploying any console app. As a matter of fact, every app in .NET Core is essentially a console app.
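The console-app hosting model can be sketched with a minimal Program.cs in the style of the ASP.NET Core 1.x project templates (a sketch only; Startup is assumed to be your own startup class, and UseIISIntegration is only needed when IIS fronts the app):

```csharp
using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        // The app is just a console app: build a web host that uses
        // the Kestrel web server, then block in Run() until shutdown.
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}
```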
When your ASP.NET Core app starts up, it activates the Kestrel web server, sets up the HTTP bindings, and handles everything. This is similar to how self-hosted Web API projects worked with Owin.

IIS isn’t actually dead. You can use IIS as a reverse proxy sitting in front of Kestrel to take advantage of some of its features that Kestrel does not have: things like virtual hosts, logging, security, etc. Microsoft still recommends using IIS to sit in front of your ASP.NET Core apps. Check out this blog post about deploying to IIS: Publishing and Running ASP.NET Core Applications with IIS

If you have ever made a self-hosted web app in a Windows service or console app, it all works much differently now. You simply use Kestrel. All the self-hosted packages for Web API, SignalR, and others are no longer needed. Every web app is basically self-hosted now.

5. HttpModules and HttpHandlers are replaced by new “middleware”

Middleware has been designed to replace modules and handlers. It is similar to how Owin and other languages handle this sort of functionality. They are very easy to work with. Check out the ASP.NET docs to learn more. The good (and bad) news is you can’t configure them in a config file either. They are all set in code.

6. FileStream moved to System.IO.FileSystem ???

Some basic classes that everyone uses on a daily basis have been moved around to different packages. Something as common as FileStream is no longer in the System.IO assembly reference/package. You now have to add the package System.IO.FileSystem. This is confusing because we are using class namespaces that don’t directly match the packages. This website is very valuable for figuring out where some classes or methods have been moved to:

7. StreamReader constructor no longer works with a file path

Some simple uses of standard libraries have changed. A good example is the StreamReader, which was often used by passing in a file path to the constructor. Now you have to pass in a stream.
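A small sketch of the resulting refactoring (the helper name and file path are illustrative, not from the original article):

```csharp
using System.IO;

public static class ReadFileSample
{
    public static string ReadAllText(string path)
    {
        // Pre-Core style, no longer available in netcoreapp1.0:
        //   using (var reader = new StreamReader(path)) { ... }

        // .NET Core style: open the FileStream yourself and hand
        // the stream to the StreamReader.
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read))
        using (var reader = new StreamReader(stream))
        {
            return reader.ReadToEnd();
        }
    }
}
```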
This will cause small refactorings to use a FileStream in addition to the StreamReader everywhere. Another good example of this is around reflection: GetType() now returns a more simplified object for performance reasons, and you must call GetTypeInfo() to get the full details. Luckily that is backwards compatible to .NET 4.5.

8. Platform specific code… like Microsoft specific RSA

.NET Core is designed to run on Windows, Macs, and Linux. But some of your code could potentially compile on Windows and then fail at runtime when you run it on a Mac or Linux. A good example of this is RSACryptoServiceProvider, which appears to be usable. At runtime on a Mac you will get a “platform not supported” type exception. Evidently this RSA provider API is Windows specific. Instead you have to use RSA.Create(), which is a more generic implementation and has slightly different methods. Both are in System.Security.Cryptography. Confusing, huh? The old “If it builds, ship it!” mentality totally falls apart on this one!

9. Newtonsoft changed to default to camel case on field names 🙁

This has to be one of the biggest headaches of the conversion. Newtonsoft now defaults to camelCase. This will cause all sorts of REST APIs to break if you were using PascalCase. We ended up using the JsonProperty attribute on some things to force their casing how we needed them. This one is a big land mine, so watch out for it. #TeamPascalCase

10. Log4net doesn’t (didn’t) work and neither do countless other dependencies, unless you target .NET 4.5!

Log4net is a pretty fundamental library used by countless developers. It has not been ported to core, yet. NLog and Serilog work, and you will have to switch logging providers.
If you have to go cross-platform… watch out for dependency problems. Be sure to check out our entire article about ASP.NET Core Logging.

Update: log4net has been updated to work with .NET Core.

11. System.Drawing doesn’t exist

Need to resize images? Not with the .NET framework currently. There are some community projects that you can use. Check out Hanselman’s blog post: Server-side Image and Graphics Processing with .NET Core and ASP.NET 5.

12. DataSet and DataTable don’t exist

People still use these? Actually, some do. We have used DataTables for sending a table of data to a SQL stored procedure as an input parameter. Works like a charm.

13. Visual Studio Tooling

We have seen a lot of weirdness with IntelliSense and Visual Studio in general. Sometimes it highlights code like it is wrong, but it compiles just fine. Being able to switch between your framework targets is awesome for testing your code against each. Although we just removed net451 as a target framework from my project, Visual Studio still thinks we are targeting it… There are still a few bugs to be worked out.

14. HttpWebRequest weird changes

In .NET 4.5 there are some properties you have to set on the HttpWebRequest object, and you can’t just set them in the headers. Well, in Core they decided to reverse course, and you have to use the header collection. This requires some hackery and compiler directives… Otherwise you get errors like this from your .NET 4.5 code: “The ‘User-Agent’ header must be modified using the appropriate property or method.” We need some extension methods for Core to make it backwards compatible.

15. Creating a Windows Service in .NET Core

Windows Services can be easily created with Visual Studio 2017 as long as your code targets the full .NET Framework. You will need to move your .NET Core code to a class library that targets .NET Standard, which can then be shared by other .NET Core apps and your Windows Service.
Read More: How to Create .NET Core Windows Services With Visual Studio 2017

16. Web API is Gone/Part of MVC Now

With .NET Core, Microsoft and the community decided to merge Web API and MVC together. They have always been very similar to work with, and either could be used for API-type applications. So in a lot of ways, merging them made sense. Check out our detailed article on this subject: Bye Bye ASP.NET Core Web API.

BONUS – Database access

Low-level access via SqlConnection and SqlCommand works the same. My favorite two ORMs, Dapper and Entity Framework, both work with .NET Core. Entity Framework Core has quite a few differences. Learn more here:

BONUS – Need help?

If you are working with .NET Core and have questions, try joining the community Slack account at Stackify.
https://stackify.com/15-lessons-learned-while-converting-from-asp-net-to-net-core/
The following Python code plots the shielded and unshielded Coulomb potential due to a point test charge $q_\mathrm{T} = +e$, assuming an electron temperature and density typical of a tokamak magnetic confinement nuclear fusion device.

import numpy as np
from scipy.constants import k as kB, epsilon_0, e
from matplotlib import rc
import matplotlib.pyplot as plt

rc('font', **{'family': 'serif', 'serif': ['Computer Modern'], 'size': 16})
rc('text', usetex=True)
# We need the following so that the legend labels are vertically centred on
# their indicator lines.
rc('text.latex', preview=True)

def calc_debye_length(Te, n0):
    """Return the Debye length for a plasma characterised by Te, n0.

    The electron temperature Te should be given in eV and density, n0
    in cm-3. The Debye length is returned in m.

    """
    return np.sqrt(epsilon_0 * Te / e / n0 / 1.e-6)

def calc_unscreened_potential(r, qT):
    return qT * e / 4 / np.pi / epsilon_0 / r

def calc_e_potential(r, lam_De, qT):
    return calc_unscreened_potential(r, qT) * np.exp(-r / lam_De)

# Plasma electron temperature (eV) and density (cm-3) for a typical tokamak.
Te, n0 = 1.e8 * kB / e, 1.e26
lam_De = calc_debye_length(Te, n0)
print(lam_De)

# Range of distances to plot phi for, in m.
rmin = lam_De / 10
rmax = lam_De * 5
r = np.linspace(rmin, rmax, 100)

qT = 1
phi_unscreened = calc_unscreened_potential(r, qT)
phi = calc_e_potential(r, lam_De, qT)

# Plot the figure. Apologies for the ugly and repetitive unit conversions from
# m to µm and from V to mV.
fig, ax = plt.subplots()
ax.plot(r*1.e6, phi_unscreened * 1000,
        label=r'Unscreened: $\phi = \frac{e}{4\pi\epsilon_0 r}$')
ax.plot(r*1.e6, phi * 1000,
        label=r'Screened: $\phi = \frac{e}{4\pi\epsilon_0 r}'
              r'e^{-r/\lambda_\mathrm{D}}$')
ax.axvline(lam_De*1.e6, ls='--', c='k')
ax.annotate(xy=(lam_De*1.1*1.e6, max(phi_unscreened)/2 * 1000),
            s=r'$\lambda_\mathrm{D} = %.1f \mathrm{\mu m}$' % (lam_De*1.e6))
ax.legend()
ax.set_xlabel(r'$r/\mathrm{\mu m}$')
ax.set_ylabel(r'$\phi/\mathrm{mV}$')
plt.savefig('debye_length.png')
plt.show()

Comment from Dominik Stańczak: If you'd like to avoid doing the annoying unit conversions, Python has a bunch of packages like astropy.units (which I tend to use) or unyt (which I've heard good things about). :)
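As a sanity check on the value printed by the script above, the Debye length can also be evaluated directly in SI units from $\lambda_\mathrm{D} = \sqrt{\epsilon_0 k_\mathrm{B} T_\mathrm{e} / (n_\mathrm{e} e^2)}$. The temperature below matches the script's $10^8\,\mathrm{K}$; the density of $10^{20}\,\mathrm{m^{-3}}$ is our own assumed "typical tokamak" value, not a variable taken from the script:

```python
# Cross-check of the Debye length in SI units (assumed tokamak-like values).
import numpy as np
from scipy.constants import k as kB, epsilon_0, e

Te_K = 1.e8   # electron temperature in kelvin (as in the script above)
ne = 1.e20    # assumed electron number density in m^-3

lam_De = np.sqrt(epsilon_0 * kB * Te_K / (ne * e**2))
print(lam_De)  # on the order of 1e-5 m, i.e. tens of microns
```

This lands in the tens-of-microns range, consistent with the µm axis used in the plot.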
https://scipython.com/blog/the-debye-length/
In this tutorial, you will learn about a simple C++ Hello World program with a step-by-step explanation. The Hello World program is the first step in learning any programming language, as it covers the basics and is the simplest program. It just prints “Hello World” on the screen. Here is the example C++ program to print Hello World, with a line-by-line explanation.

C++ Hello World Program

//"Hello, World!" program in C++
#include <iostream>
using namespace std;

int main()
{
    cout << "Hello, World!";
    return 0;
}

OR

C++ Hello World Program: Without including namespace

//"Hello, World!" program in C++
#include <iostream>

int main()
{
    std::cout << "Hello, World!";
    return 0;
}

Output

Both the programs above will yield the same output.

Hello, World!

Explanation

There are many similar things in C and C++.

Line 1: //"Hello, World!" program in C++

This is a single-line comment in C++. Everything in a line after the double forward slash // is a comment.

Line 2: #include <iostream>

Everything after hash # is called a directive and is processed by the preprocessor. The above line causes the compiler to include standard lines of C++ code, known as the header iostream, into the program. The file iostream contains definitions that are required for stream input or output, i.e. declarations for the identifiers cout and cin.

Line 3: using namespace std;

This is a concept introduced by ANSI C++ called namespace, which defines the scope of identifiers. std is the namespace where the standard class libraries are defined. The using keyword is used to include the already defined namespace std into our scope. If we didn’t include the using keyword, we would have to use the following format:

std::cout << "Hello, World!";

Line 4: int main()

Every C++ program must contain a main function that contains the actual code inside curly braces {}. The return type of the main function is int.

Line 5: cout << "Hello, World!";

This line instructs the compiler to display the string inside the quotation marks "" on the screen.
cout is an identifier which corresponds to the standard output stream, whereas << is the insertion operator.

Line 6: return 0;

This statement returns the value 0, which represents the exit status of the program.

This is all about the first program in C++.
http://www.trytoprogram.com/cpp-examples/cplusplus-hello-world-program/
Opened 17 years ago
Closed 14 years ago
Last modified 4 years ago

#191 enhancement closed invalid (invalid)

twisted.internet refactor to better support non-select reactors

Description

Change History (8)

comment:1 Changed 17 years ago by

comment:2 Changed 17 years ago by

Why are you using the classes in tcp.py? You're right in your other bug (the namespace is horrible) but those classes are for reactors which behave more or less like select. If CFRunLoop doesn't behave like select, then write your own transport classes; don't try to make the default ones infinitely flexible.

comment:3 Changed 17 years ago by

I'm using the classes in tcp and udp because the code for CoreFoundation TCP/UDP (when not using the CFNativeSocket method) is identical - except for the fact that I REALLY need to know when they get disconnected so I can invalidate the CFSocket. Which I did, by changing their connectionLost method (at runtime, from the reactor). Granted it's not really any nastier than what's already in tcp/udp, but it's not the kind of hack I'd want to put into Twisted. I also don't want to cut and paste 90% of tcp.py and udp.py into something else and pollute that namespace even more.

comment:4 Changed 17 years ago by

I agree with bob - the changes would be pretty minimal and the vast majority of the code would be the same in any case. Additional stuff like "register fd" would actually make some of the code cleaner. OTOH, it may cause some issues: for example, existing FileDescriptors don't have this concept. We could probably figure out something that gets them to DTRT (startReading could do the register the first time), or just update our code, but at least some people have written their own FileDescriptors already.

comment:5 Changed 17 years ago by

Subclass then, perhaps?

comment:6 Changed 14 years ago by

Ran across this ticket more or less by accident; it's hopelessly vague. Also, cfreactor is basically dead now, so the original motivating use-case is not relevant.
comment:7 Changed 9 years ago by

comment:8 Changed 4 years ago by
https://twistedmatrix.com/trac/ticket/191
In the last post we talked about Fragments in Android and how we can use them to support multiple screens. We described their lifecycle and how we can use it. In this post we want to go deeper and create an example that helps us to understand better how we can use fragments.

As an example we want to create a simple application that is built from two fragments:

- one that shows a list of links and
- another one that shows the web page in a WebView.

while, in the portrait mode, we want something like:

CREATE LAYOUT

The first step we have to do is create our layout. As we said, we need two different layouts, one for portrait and one for landscape. So we have to create two xml files, one under res/layout (for portrait mode) and one under res/layout-land (for landscape mode). Of course we could customize our layout more, including other specs, but it is enough for now. These two files are called activity_layout:

<fragment android:
</RelativeLayout>

This one is for the portrait mode, and as we notice we have only one fragment containing just the link list. We will see later how to create this fragment. We need, moreover, another layout, as is clear from the pic above: the one that contains the WebView.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:
<WebView android:
</LinearLayout>

For the landscape mode we have something very similar, plus the FrameLayout component in the same layout:

<fragment android:
<FrameLayout android:
</LinearLayout>

CREATE LINK LIST FRAGMENT

As you have already noticed, both layouts have a common fragment that is called LinkListFragment. We have to create it. If you haven't already read the post explaining the lifecycle, it is time to give it a look. In this case we don't have to override all the methods in the fragment lifecycle, but only those important to control its behaviour.
In our case we need to override:

- onCreateView
- onAttach

In this fragment we use a simple ListView to show the links and a simple adapter to customize the way the items are shown. We don't want to spend much time on how to create a custom adapter because it is outside this topic; you can refer here for more information. Just to remember: in the onCreateView fragment method we simply inflate the layout and initialize the custom adapter.

As the XML layout to inflate in the fragment we have:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:
<ListView android:
</LinearLayout>

while the method looks like:

@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
    Log.d("SwA", "LV onCreateView");
    View v = inflater.inflate(R.layout.linklist_layout, container, false);
    ListView lv = (ListView) v.findViewById(R.id.urls);
    la = new LinkAdapter(linkDataList, getActivity());
    lv.setAdapter(la);
    ());
        }
    });
    return v;
}

In this method we simply set our custom adapter and then set the listener for when the user clicks on an item. We will cover this later (if you are curious, see Inter-fragment communication).

In the onAttach method we verify that the activity that holds the fragment implements a specific interface:

@Override
public void onAttach(Activity activity) {
    // We verify that our activity implements the listener
    if ( ! (activity instanceof ChangeLinkListener) )
        throw new ClassCastException();
    super.onAttach(activity);
}

We will clarify why we need this control later.

FRAGMENT COMMUNICATION

Basically, in our example we have two fragments, and they need to exchange information so that when the user selects an item in fragment 1 (LinkListFragment), the other one (WebViewFragment) shows the web page corresponding to the link. So we need to find a way to let these fragments exchange data.
On the other hand, we know that a fragment is a piece of code that can be re-used inside other activities, so we don't want to bind our fragment to a specific activity and invalidate our work. In Java, if we want to decouple two classes we can use an interface, so this interface solution fits perfectly. At the same time, we don't want our fragments to exchange information directly, because each fragment can rely only on the activity that holds it. So the simplest solution is that the activity implements an interface. In our case we define an interface called ChangeLinkListener that has only one method:

public interface ChangeLinkListener {
    public void onLinkChange(String link);
}

We have, moreover, to verify that our activity implements this interface, to be sure we can call it. The best place to verify it is in the onAttach method (see above), and at the end we need to call this method when the user selects an item in the ListView:

());
    }
});

PROGRAMMING MAIN ACTIVITY: FIND FRAGMENT

By now we have talked about fragments only, but we know that fragments exist inside a "father" activity that controls them. So we have to create this activity, but we have to do much more. As we said before, this activity has to implement a custom interface so that it can receive data from the LinkListFragment. In this method (onLinkChange) we somehow have to check whether we are in landscape mode or in portrait mode, because in the first case we need to update the WebViewFragment, while in the second case we have to start another activity. How can we do it? The difference in the layout is the presence of the FrameLayout. If it is present it means we are in landscape mode, otherwise portrait mode.
So the code in the onLinkChange method is:

@Override
public void onLinkChange(String link) {
    System.out.println("Listener");
    // Here we detect if there's dual fragment
    if (findViewById(R.id.fragPage) != null) {
        WebViewFragment wvf = (WebViewFragment) getFragmentManager().findFragmentById(R.id.fragPage);
        if (wvf == null) {
            System.out.println("Dual fragment - 1");
            wvf = new WebViewFragment();
            wvf.init(link);
            // We are in dual fragment (Tablet and so on)
            FragmentManager fm = getFragmentManager();
            FragmentTransaction ft = fm.beginTransaction();
            //wvf.updateUrl(link);
            ft.replace(R.id.fragPage, wvf);
            ft.commit();
        }
        else {
            Log.d("SwA", "Dual Fragment update");
            wvf.updateUrl(link);
        }
    }
    else {
        System.out.println("Start Activity");
        Intent i = new Intent(this, WebViewActivity.class);
        i.putExtra("link", link);
        startActivity(i);
    }
}

Let's analyse this method. The first part (line 5) verifies that the FrameLayout exists. If it exists, we use the FragmentManager to find the fragment relative to the WebViewFragment. If this fragment is null (so it is the first time we use it), we simply create it and put this fragment "inside" the FrameLayout (lines 7-20). If this fragment already exists, we simply update the url (line 23). If we aren't in landscape mode, we can start a new activity, passing data in an Intent (lines 28-30).

WEBVIEW FRAGMENT AND WEBVIEWACTIVITY: The web page

Finally we analyze the WebViewFragment.
It is really simple; it just overrides some Fragment methods to customize its behaviour:

public class WebViewFragment extends Fragment {

    private String currentURL;

    public void init(String url) {
        currentURL = url;
    }

    @Override
    public void onActivityCreated(Bundle savedInstanceState) {
        super.onActivityCreated(savedInstanceState);
    }

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        Log.d("SwA", "WVF onCreateView");
        View v = inflater.inflate(R.layout.web_layout, container, false);
        if (currentURL != null) {
            Log.d("SwA", "Current URL 1[" + currentURL + "]");
            WebView wv = (WebView) v.findViewById(R.id.webPage);
            wv.getSettings().setJavaScriptEnabled(true);
            wv.setWebViewClient(new SwAWebClient());
            wv.loadUrl(currentURL);
        }
        return v;
    }

    public void updateUrl(String url) {
        Log.d("SwA", "Update URL [" + url + "] - View [" + getView() + "]");
        currentURL = url;
        WebView wv = (WebView) getView().findViewById(R.id.webPage);
        wv.getSettings().setJavaScriptEnabled(true);
        wv.loadUrl(url);
    }

    private class SwAWebClient extends WebViewClient {
        @Override
        public boolean shouldOverrideUrlLoading(WebView view, String url) {
            return false;
        }
    }
}

In the onCreateView method we simply inflate our layout inside the fragment and verify that the url to show is not null. If so, we simply show the page (lines 15-30). In updateUrl we simply find the WebView component and update its url. In portrait mode, we said, we need to start another activity to show the webpage, so we need an activity (WebViewActivity).
It is really simple, and I just show the code without any other comment on it:

public class WebViewActivity extends FragmentActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        WebViewFragment wvf = new WebViewFragment();
        Intent i = this.getIntent();
        String link = i.getExtras().getString("link");
        Log.d("SwA", "URL [" + link + "]");
        wvf.init(link);
        getFragmentManager().beginTransaction().add(android.R.id.content, wvf).commit();
    }
}

Source code @ github.

Comment: How do I set an expandable ListView layout on a tablet screen using a fragment? Please help me ASAP. Thank you in advance.
Reply: If I get your question correctly, you want to replace the ListView with an ExpandableListView. In this case you have to change the widget in the layout. If you want to know how to use ExpandableListView, you can give a look here.

Comment: The sources don't work; the class name is missing in the fragments (in activity_main.xml). You have to specify which class the inflater has to instantiate.
Reply: I will check the source code to know if I missed some classes. Thanks for your support.

Comment: I may be missing something, but where are LinkData, LinkAdapter, etc. found? I can't find what class to import here.
Reply: They are in the source code. What are you missing?
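The inter-fragment communication used throughout the tutorial (fragment talks only to whatever activity implements the listener interface) is the plain Java callback pattern. Stripped of the Android classes, it can be sketched like this; the stand-in class names below are our own, not from the tutorial:

```java
// Plain-Java sketch of the listener decoupling used in the tutorial;
// Android types are replaced by simple stand-ins.
interface ChangeLinkListener {
    void onLinkChange(String link);
}

// Stand-in for the fragment: it only knows the interface, not the activity.
class LinkListSender {
    private final ChangeLinkListener listener;

    LinkListSender(Object host) {
        // Mirrors the onAttach() check: fail fast if the host
        // does not implement the expected interface.
        if (!(host instanceof ChangeLinkListener)) {
            throw new ClassCastException("host must implement ChangeLinkListener");
        }
        this.listener = (ChangeLinkListener) host;
    }

    void userClicked(String link) {
        listener.onLinkChange(link);
    }
}

// Stand-in for the activity: receives the callback.
public class ListenerDemo implements ChangeLinkListener {
    String lastLink;

    @Override
    public void onLinkChange(String link) {
        lastLink = link;
    }

    public static void main(String[] args) {
        ListenerDemo host = new ListenerDemo();
        new LinkListSender(host).userClicked("https://example.com");
        System.out.println(host.lastLink);
    }
}
```

The same structure is what makes the fragment reusable: any hosting activity that implements the interface will compile and run unchanged.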
http://www.survivingwithandroid.com/2013/03/android-fragment-tutorial-webview-example.html
Older browsers (mainly Edge and IE11) as well as Node.js do not provide certain APIs (TextEncoder, fetch etc.) that loaders.gl depends on. The good news is that these APIs can be provided by the application using the polyfill technique. While there are many good polyfill modules for these classes available on npm, to make the search for a version that is guaranteed to work with loaders.gl a little easier, the @loaders.gl/polyfills module is provided.

To install these polyfills, just import the polyfills module before you start using loaders.gl:

import '@loaders.gl/polyfills';
import {parse} from '@loaders.gl/core';

loaders.gl only installs polyfills if the corresponding global symbol is undefined. This means that if another polyfill is already installed when @loaders.gl/polyfills is imported, the other polyfill will remain in effect. Since most polyfill libraries work this way, applications can mix and match polyfills by ordering the polyfill import statements appropriately (but see the remarks below for a possible caveat).

See the API Reference.

Applications should typically only install this module if they need to run under older environments. While the polyfills are only installed at runtime if the platform does not already support them, they will still be included in your application bundle, i.e. importing the polyfill module will increase your application's bundle size.

When importing polyfills for the same symbol from different libraries, the result can depend on how the other polyfill is written. To control the order of installation, you may want to use require rather than import when importing @loaders.gl/polyfills. As a general rule, import statements execute before require statements.
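The "only install if the global symbol is undefined" behavior described above is the standard polyfill guard pattern. A minimal sketch of how such a conditional install works (this is illustrative, not the actual @loaders.gl/polyfills source):

```javascript
// Sketch of the conditional-install pattern described above.
function installPolyfill(globalObject, name, implementation) {
  // Only install if the symbol is not already provided by the
  // platform or by a previously imported polyfill.
  if (typeof globalObject[name] === 'undefined') {
    globalObject[name] = implementation;
  }
}

// Demo on a stand-in global object rather than the real globalThis.
const fakeGlobal = {};
class FirstEncoder {}
class SecondEncoder {}
installPolyfill(fakeGlobal, 'TextEncoder', FirstEncoder);
installPolyfill(fakeGlobal, 'TextEncoder', SecondEncoder);
// The first installed implementation remains in effect,
// mirroring how an earlier-imported polyfill wins.
console.log(fakeGlobal.TextEncoder === FirstEncoder); // true
```

This is also why import order matters: whichever polyfill module runs first claims the symbol.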
https://loaders.gl/docs/developer-guide/polyfills/
Using XML in Visual Basic 2005: Chapter 12 of Professional VB 2005

This chapter introduces the five XML namespaces in the .NET Framework and demonstrates how to generate and manipulate XML in VB 2005.

For newcomers to Visual Basic 2005, the enhanced language can seem daunting. But books such as Professional VB 2005 can help you learn the language and all the new features included in it. This title covers Visual Basic virtually from start to finish. It starts by looking at the .NET Framework and ends by looking at best practices for deploying .NET applications.

Chapter 12, Using XML in Visual Basic 2005, discusses the five XML-specific namespaces exposed in the .NET Framework -- System.XML, System.XML.Serialization, System.XML.Schema, System.XML.XPath and System.XML.XSL. From there, the authors cover the objects and classes within those namespaces and offer an overview of how XML is used in other Microsoft technologies, primarily SQL Server and ADO.NET.

Read the rest of the excerpt in this PDF.

MORE: Click here to read Chapter 13 of this book, Security in the .NET Framework 2.0, courtesy of SearchAppSecurity.com.

Excerpted from the Wrox Press book, Professional VB 2005 (ISBN 0-7645-7536-8) by Bill Evjen, Billy Hollis, Rockford Lhotka, Tim McCarthy, Rama Ramachandran, Kent Sharkey and Bill Sheldon. Copyright © 2005. Published by John Wiley & Sons Inc. Reprinted with permission.
http://searchwindevelopment.techtarget.com/tip/Using-XML-in-Visual-Basic-2005-Chapter-12-of-Professional-VB-2005
avro

Avro serialization support for Haskell. Module documentation for 0.5.2.0.

Generating code from Avro schema

The preferred method to use Avro is to be "schema first". This library supports this idea by providing the ability to generate all the necessary entries (types, class instances, etc.) from Avro schemas.

import Data.Avro
import Data.Avro.Deriving (deriveAvroFromByteString, r)

deriveAvroFromByteString [r|
{
  "name": "Person",
  "type": "record",
  "fields": [
    { "name": "fullName", "type": "string" },
    { "name": "age", "type": "int" },
    { "name": "gender",
      "type": { "type": "enum", "symbols": ["Male", "Female"] }
    },
    { "name": "ssn", "type": ["null", "string"] }
  ]
}
|]

This code will generate the following entries:

data Gender = GenderMale | GenderFemale

schema'Gender :: Schema
schema'Gender = ...

data Person = Person
  { personFullName :: Text
  , personAge      :: Int32
  , personGender   :: Gender
  , personSsn      :: Maybe Text
  }

schema'Person :: Schema
schema'Person = ...

As well as all the useful instances for these types: Eq, Show, Generic, HasAvroSchema, FromAvro and ToAvro.

See the Data.Avro.Deriving module for more options like code generation from Avro schemas in files, specifying strictness and prefixes, etc.

Using Avro with existing Haskell types

Note: This is an advanced topic. Prefer generating from schemas unless it is required to make Avro work with manually defined Haskell types.
In this section we assume that the following Haskell type is manually defined:

data Person = Person
  { fullName :: Text
  , age      :: Int32
  , ssn      :: Maybe Text
  } deriving (Eq, Show, Generic)

For a Haskell type to be encodable to Avro it should have a ToAvro instance, and to be decodable from Avro it should have a FromAvro instance. There is also a HasAvroSchema class that is useful to have an instance of (although strictly speaking it is not required).

Creating a schema

A schema can still be generated using TH:

schema'Person :: Schema
schema'Person = $(makeSchemaFromByteString [r|
{
  "name": "Person",
  "type": "record",
  "fields": [
    { "name": "fullName", "type": "string" },
    { "name": "age", "type": "int" },
    { "name": "ssn", "type": ["null", "string"] }
  ]
}
|])

Alternatively, the schema can be defined manually:

import Data.Avro
import Data.Avro.Schema.Schema (mkUnion)

schema'Person :: Schema
schema'Person =
  Record "Person" [] Nothing
    [ fld "fullName" (String Nothing) Nothing
    , fld "age"      (Int Nothing)    Nothing
    , fld "ssn"      (mkUnion $ Null :| [(String Nothing)]) Nothing
    ]
  where fld nm ty def = Field nm [] Nothing Nothing ty def

NOTE: When a Schema is created separately from a data type, there is no way to guarantee that the schema actually matches the type. It will be up to the developer to make sure of that. Prefer generating data types with Data.Avro.Deriving when possible.

Instantiating FromAvro

When working with FromAvro directly it is important to understand the difference between Schema and ReadSchema. Schema (as in the example above) is just a regular data schema for an Avro type. ReadSchema is a similar type, but it is capable of capturing and resolving differences between "writer schema" and "reader schema". See the Specification to learn more about schema resolution and de-conflicting.

The FromAvro class requires a ReadSchema because with Avro it is possible to read data with a different schema compared to the schema that was used for writing this data.
ReadSchema can be obtained by converting an existing Schema with the readSchemaFromSchema function, or by actually deconflicting two schemas using the deconflict function.

Another important fact is that fields' values in an Avro payload are written and read in order with how these fields are defined in the schema. This fact can be exploited in writing a FromAvro instance for Person:

import           Data.Avro.Encoding.FromAvro (FromAvro (..))
import qualified Data.Avro.Encoding.FromAvro as FromAvro

instance FromAvro Person where
  fromAvro (FromAvro.Record _schema vs) = Person
    <$> fromAvro (vs Vector.! 0)
    <*> fromAvro (vs Vector.! 1)
    <*> fromAvro (vs Vector.! 2)

Field resolution by name can be performed here (since we have a reference to the schema). But in this case it is simpler (and faster) to exploit the fact that the order of values is known and to access the required values by their positions.

Instantiating ToAvro

The ToAvro class is defined as

class ToAvro a where
  toAvro :: Schema -> a -> Builder

A Schema is provided to help with disambiguating how exactly the specified value should be encoded. For example, UTCTime can be encoded as milliseconds or as microseconds depending on the schema's logical type according to the Specification:

instance ToAvro UTCTime where
  toAvro s = case s of
    Long (Just TimestampMicros) -> toAvro @Int64 s . fromIntegral . utcTimeToMicros
    Long (Just TimestampMillis) -> toAvro @Int64 s . fromIntegral . utcTimeToMillis

The ToAvro instance for the Person data type from the above could look like:

import Data.Avro.Encoding.ToAvro (ToAvro(..), record, ((.=)))

instance ToAvro Person where
  toAvro schema value = record schema
    [ "fullName" .= fullName value
    , "age"      .= age value
    , "ssn"      .= ssn value
    ]

The record helper function is responsible for propagating individual fields' schemas (found in the provided schema) when toAvro'ing nested values.

Type mapping

The full list of type mappings can be found in the ToAvro and FromAvro modules.
This library provides conversions between Haskell types and Avro types. User defined data types should provide HasAvroSchema / ToAvro / FromAvro instances to be encoded to and decoded from Avro.
https://www.stackage.org/nightly-2020-09-16/package/avro-0.5.2.0
Update: :) ################################ package CGI::Safe; ################################ $VERSION = 1.0; use strict; use Carp; use CGI; use Exporter; use vars qw/ @ISA @EXPORT_OK/; @ISA = qw/ CGI Exporter /; @EXPORT_OK = qw/ get_upload /; INIT { # Establish some defaults delete @ENV{ qw/ IFS CDPATH ENV BASH_ENV / }; # Clean up our Envir +onment $CGI::DISABLE_UPLOADS = 1; # Disable uploads $CGI::POST_MAX = 512 * 1024; # limit posts to 512 +K max } sub new { my ( $self, %args ) = @_; $CGI::DISABLE_UPLOADS = $args{ DISABLE_UPLOADS } if exists $args{ +DISABLE_UPLOADS }; $CGI::POST_MAX = $args{ POST_MAX } if exists $args{ +POST_MAX }; return ( exists $args{ source } ) ? CGI::new( $self, $args{ source + } ) : CGI::new( $self ); } sub get_upload { my $self; $self = shift if ref $_[0]; # can be tossed because hash keys can' +t be refs # this will occur if called in OO fash +ion my %specs = @_; if ( ! exists $specs{ cgi } ) { if ( defined $self ) { $specs{ cgi } = $self; } else { # Here, we're holding our breath and praying this doesn't +break in future releases. # CGI.pm uses objects internally, even if called through t +he functional interface. # self_or_default returns that object $specs{ cgi } = &CGI::self_or_default; } } # if the cgi value is not a reference and not a cgi object ... # This should *not* occur if ( ! ( ref $specs{ cgi } and $specs{ cgi }->isa( 'CGI' ) ) ) { croak '"cgi => $cgi_obj": The \'cgi\' value was not a CGI obje +ct'; } croak '&get_upload expects a hash with "file_name => $file_name"' +unless exists $specs{ file_name }; my %data = ( error => 0, file => undef, format => undef ); # Not using CGI::upload as I've had (and seen) problems with vario +us versions of this my $fh = $specs{ cgi }->param( $specs{ file_name } ); if ( $specs{ cgi }->cgi_error ) { $data{ error } = 'Error uploading file: ' . $specs{ cgi }->cgi +_error; return \%data; } if ( ! defined $fh ) { $data{ error } = 'No file uploaded.'; carp "No file uploaded. 
Did you remember 'enctype=\"multipart +/form-data\"' in your <form> tag?"; if ( $CGI::DISABLE_UPLOADS ) { carp "\$CGI::DISABLE_UPLOADS is set to $CGI::DISABLE_UPLOA +DS. This may be why no file was uploaded." } return \%data; } $data{ format } = $specs{ cgi }->uploadInfo( $fh )->{ 'Content-Typ +e' }; if ( exists $specs{ format } ) { my @format = ref $specs{ format } eq 'ARRAY' ? @{ $specs{ form +at } } : $specs{ form +at } ; my $re_format = quotemeta $data{ format }; if ( ! grep { /$re_format/ } @format ) { my $formats = ref $specs{ format } eq 'ARRAY' ? join ' or +', @{ $specs{ format } } : + $specs{ format } ; $data{ error } = "Illegal file format: $data{ format }. E +xpecting: $formats."; return \%data; } } binmode $fh; my $file = ''; binmode $file; { my $data = ''; while ( read( $fh, $data, 1024 ) ) { $file .= $data; } } if ( ! $file ) { $data{ error } = 'No file uploaded.'; return \%data; } $data{ file } = $file; return \%data; } "Ovid"; __END__ =head1 NAME CGI::Safe - Safe method of using CGI.pm. This is pretty much a two-li +ne change for most CGI scripts. =head1 SYNOPSIS use CGI::Safe; my $q = CGI::Safe->new(); =head1 DESCRIPTION If you've been working with CGI.pm for any length of time, you know th +at it allows uploads by default and does not have a maximum post size. Since it sav +es the uploads as a temp file, someone can simply upload enough data to fill up your +hard drive to initiate a DOS attack. To prevent this, we're regularly warned to incl +ude the following two lines at the top of our CGI scripts: $CGI::DISABLE_UPLOADS = 1; # Disable uploads $CGI::POST_MAX = 512 * 1024; # limit posts to 512K max As long as those are their before you instantiate a CGI object (or bef +ore you access param and related CGI functions with the function oriented interface), + you have pretty safely plugged this problem. However, most CGI scripts don't have thes +e lines of code. Some suggest changing these settings directly in CGI.pm. 
I dislike this for two reasons:

    1. If you upgrade CGI.pm, you might forget to make the change to the new version.

    2. You may break a lot of existing code (which may or may not be a good thing depending upon the security implications).

Hence, the C<CGI::Safe> module. It will establish the defaults for those variables and require virtually no code changes. Additionally, it will delete C<%ENV> variables listed in C<perlsec> as dangerous.

=head1 Objects vs. Functions

Some people prefer the object oriented interface for CGI.pm and others prefer the function oriented interface. Naturally, the C<CGI::Safe> module allows both. There is also a C<CGI::Safe::get_upload> function that can be imported or used in OO fashion.

    use CGI::Safe;
    my $q = CGI::Safe->new( DISABLE_UPLOADS => 0 );
    my $file = $q->get_upload( file_name => 'somefilename' );

Or:

    use CGI::Safe qw/ :standard get_upload /;
    $CGI::DISABLE_UPLOADS = 0;
    my $file = get_upload( file_name => 'somefilename' );

=head1 Uploads and Maximum post size

As mentioned earlier, most scripts that do not need uploading should have something like the following at the start of their code to disable uploads:

    $CGI::DISABLE_UPLOADS = 1;          # Disable uploads
    $CGI::POST_MAX        = 512 * 1024; # limit posts to 512K max

The C<CGI::Safe> module sets these values in an C<INIT{}> block. If necessary, the programmer can override these values two different ways.
When using the function oriented interface, programmers needing file uploads and wanting to allow up to a 1 megabyte upload would set these values directly I<before> using C<CGI::Safe::get_upload> or any of the CGI.pm CGI functions:

    use CGI::Safe qw/ :standard get_upload /;
    $CGI::DISABLE_UPLOADS = 0;
    $CGI::POST_MAX        = 1_024 * 1_024; # limit posts to 1 meg max
    my $file = get_upload( file_name => 'somefilename' );

If using the OO interface, you can set these explicitly I<or> pass them as parameters to the C<CGI::Safe> constructor:

    use CGI::Safe;
    my $q = CGI::Safe->new( DISABLE_UPLOADS => 0,
                            POST_MAX        => 1_024 * 1_024 );
    my $file = $q->get_upload( file_name => 'somefilename' );

=head1 CGI.pm objects from input files and other sources

You can instantiate a new CGI.pm object from an input file, a properly formatted query string passed directly to the object, or even a hash with name/value pairs representing the query string. To use this functionality with the C<CGI::Safe> module, pass this extra information in the C<source> key:

    use CGI::Safe;
    my $q = CGI::Safe->new( source => $some_file_handle );

Alternatively:

    use CGI::Safe;
    my $q = CGI::Safe->new( source => 'color=red&name=Ovid' );

=head1 File uploading

This is not really necessary in the C<CGI::Safe> module, but it is included as many, many programmers have difficulty with this. C<CGI::Safe::get_upload> takes three named parameters (i.e. pass it a hash), two of which are optional.

=over 4

=item 1 I<file_name>

This specifies the name of the file in the "file" field of the form.

=item 2 I<format>

This parameter is optional. Pass it a scalar with an allowed file type or a list reference with multiple allowed file types. If the uploaded file doesn't match one of the supplied types, C<get_upload> will return an error. By leaving this parameter off, C<CGI::Safe::get_upload> will return any type of file.
=item 3 I<cgi>

If, for some reason, you are using multiple CGI objects, you can specify the CGI object which has the file in question. This parameter is also optional. It should seldom, if ever, be used.

=back

=head2 Using file uploading

Basic use:

    use CGI::Safe;
    my $q = CGI::Safe->new( DISABLE_UPLOADS => 0 );
    my $file = $q->get_upload( file_name => 'somefilename' );

Here's an example with all parameters specified:

    use CGI::Safe;
    my $q = CGI::Safe->new( DISABLE_UPLOADS => 0 );
    my $file = $q->get_upload( file_name => 'somefilename',
                               format    => [ 'image/gif', 'image/jpeg' ],
                               cgi       => $cgi ); # use this only if you have another CGI
                                                    # object instantiated and it has the
                                                    # upload data that you need

=head2 Return value from uploading

C<CGI::Safe::get_upload> returns a reference to an anonymous hash with three keys:

=over 4

=item 1 error

This key will contain a human readable error message that will explain why the upload didn't succeed. Its value will be 0 (zero) if the upload was successful.

=item 2 file

This will be the actual contents of the file.

=item 3 format

This is the "content-type" of the file in question. For example, a GIF file will have a format of 'image/gif'.

=back

=head2 Using the return values from uploading

    use CGI::Safe;
    my $q = CGI::Safe->new( DISABLE_UPLOADS => 0 );
    my $file = $q->get_upload( file_name => 'somefilename' );
    if ( $file->{ error } ) {
        print $q->header,
              $q->start_html,
              $q->p( $file->{ error } ),
              $q->end_html;
    }
    else {
        print $q->header( -type => $file->{ format } ), $file->{ file };
    }

When reporting problems, please include the version of CGI::Safe, the version of Perl, and the version of the operating system you are using.

=head1 BUGS

2001/07/13 There are no known bugs at this time. However, I am somewhat concerned about the use of this module with the function oriented interface. CGI.pm uses objects internally, even when using the function oriented interface (which is part of the reason why the function oriented interface is not faster than the OO version).
In order for me to determine the file object, I took a short cut and used the C<CGI::self_or_default> method to capture that object. This simplifies my code, but it's possible that some versions of CGI.pm do not use this. If that is the case, I will need to pull the appropriate methods from the caller's namespace (maybe) to get access to the uploaded file.

=cut

use CGI;
use strict;
{
    my %CGI_Patch;
    local ($^I, @ARGV) = ('.bak', $INC{'CGI.pm'});
    while (<>) {
        s/^(\s*\$POST_MAX\s*=\s*)([^;]*);/${1}1024 * 100;/
            && $CGI_Patch{POSTMAX}++;
        s/^(\s*\$DISABLE_UPLOADS\s*=\s*)([^;]*);/${1}1;/
            && $CGI_Patch{NOUPLOADS}++;
        # I'll have my cake and eat it too!...
        my $cake = '\$query_string .= \(length\(\$query_string\) '.
                   '\? \'&\' : \'\'\) . \$ENV{\'QUERY_STRING\'}'.
                   ' if defined \$ENV{\'QUERY_STRING\'};';
        s/(\s*)#(\s*)($cake)/$1$2$3/ && $CGI_Patch{CAKE}++;
        print;
        close ARGV if eof;
    }
    print "CGI.pm ($INC{'CGI.pm'}) patch results...\n";
    print '$POSTMAX updated...........', $CGI_Patch{POSTMAX},   "\n";
    print '$DISABLE_UPLOADS updated...', $CGI_Patch{NOUPLOADS}, "\n";
    print 'Have your cake and eat it..', $CGI_Patch{CAKE},      "\n";
}
http://www.perlmonks.org/index.pl?node_id=104626
Created on 2010-02-16 20:48 by dabeaz, last changed 2014-09-03 06:45 by scoder.

Would the idea of priority-GIL requests that Antoine had in his original patch solve this issue?

Just a quick test under Linux (on a dual quad core machine):

- with iotest.py and echo_client.py both running Python 2.7: 25.562 seconds (410212.450 bytes/sec)
- with iotest.py and echo_client.py both running Python 3.2: 28.160 seconds (372362.459 bytes/sec)

As already said, the "spinning endlessly" loop is a best case for thread switching latency in 2.x, because the opcodes are very short. If each opcode in the loop has an average duration of 20 ns, and with the default check interval of 100, the GIL gets speculatively released every 2 us (yes, microseconds). That's why I suggested trying more "realistic" workloads, as in ccbench. Also, as I told you, there might also be interactions with the various timing heuristics the TCP stack of the kernel applies. It would be nice to test with UDP.

That said, the observations are interesting. The comment on the CPU-bound workload is valid--it is definitely true that Python 2.6 results will degrade as the workload of each tick is increased. Maybe a better way to interpret those results is as a baseline of what kind of I/O performance is possible if there is a quick I/O response time. However, ignoring that and the comparison between Python 2.6 and 3.2, there is still a serious performance issue with I/O in 3.2. For example, the dramatic decrease in I/O performance as there are more CPU-bound threads competing, and the fact that there is a huge performance gain when all but one CPU core is disabled.

I tried the test using UDP packets and get virtually the exact same behavior described. For instance, echoing 10MB (sent in 8k UDP packets) takes about 0.6s in Python 2.6 and 12.0s in Python 3.2. The time shoots up to more than 40s if there are two CPU-bound threads.
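For readers without the attachments: the kind of test being discussed — a UDP echo with a pure-Python CPU-bound thread spinning in the background — can be collapsed into one self-contained sketch. This is an illustration, not the actual iotest.py/echo_client.py scripts; the payload size, round-trip count, and timeout are arbitrary choices.

```python
import socket
import threading
import time

def spin(stop):
    # Pure-Python CPU-bound thread competing for the GIL.
    while not stop[0]:
        pass

def echo(sock):
    # Echo datagrams back to the sender until told to quit.
    while True:
        data, addr = sock.recvfrom(65536)
        if data == b"quit":
            return
        sock.sendto(data, addr)

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))           # let the OS pick a free port
server_addr = srv.getsockname()
et = threading.Thread(target=echo, args=(srv,))
et.daemon = True
et.start()

stop = [False]
st = threading.Thread(target=spin, args=(stop,))
st.daemon = True
st.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(10)                   # fail loudly instead of hanging on packet loss
payload = b"x" * 8192
start = time.time()
for _ in range(50):                  # 50 round trips of an 8 KB datagram
    cli.sendto(payload, server_addr)
    data, _ = cli.recvfrom(65536)
elapsed = time.time() - start
stop[0] = True
cli.sendto(b"quit", server_addr)
print("%.3f seconds (%.0f bytes/sec)" % (elapsed, 50 * len(payload) / elapsed))
```

Running it with and without starting the spinning thread reproduces the shape of the numbers above: each blocking recvfrom can cost up to a full switch interval while the CPU-bound thread holds the GIL.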
The problem being described really doesn't have anything to do with TCP vs. UDP or any part of the network stack. It has everything to do with how the operating system buffers I/O requests and how I/O operations such as sends and receives complete immediately without blocking depending on system buffer characteristics (e.g., if there is space in the send buffer, a send will return immediately without blocking). The fact that the GIL is released when it's not necessary in these cases is really the source of the problem. We could try not to release the GIL when socket methods are called on a non-blocking socket.

Regardless, I've re-run the tests under the Linux machine, with two spinning threads:

* python 2.7: 25.580 seconds (409914.612 bytes/sec)
* python 3.2: 32.216 seconds (325485.029 bytes/sec)

(and as someone mentioned, the "priority requests" mechanism which was in the original new GIL patch might improve things. It's not an ideal time for me to test, right now :-))

I'm attaching Dave's new UDP-based benchmarks, which eliminate the dependency on the TCP stack's behaviour. See also issue7993 for a patch adding a similar bandwidth benchmark to ccbench.

Here is an improved version of the priority requests patch. With this patch I get the following result (2 background threads): 0.255 seconds (41109347.194 bytes/sec)

Here is another patch based on a slightly different approach. Instead of being explicitly triggered in I/O methods, priority requests are decided based on the computed "interactiveness" of a thread. Interactiveness itself is a simple saturated counter (incremented when the GIL is dropped without request, decremented when the GIL is dropped after a request). Benchmark numbers are basically the same as with gilprio2.patch.

On some platforms the difference is not so important. I ran it in Debian Lenny AMD64 "Core2 Duo P9600 @2.66GHz".
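An aside for readers following along: the interval after which a waiting thread forces the GIL holder off — the knob all of these patches are tuning — is exposed to Python code in 3.2+ and can be shrunk at runtime to trade CPU-bound throughput for I/O latency:

```python
import sys

# The GIL switch interval is a runtime setting in Python 3.2+.
print(sys.getswitchinterval())   # 0.005 seconds by default
sys.setswitchinterval(0.0005)    # ask for 0.5 ms to reduce I/O thread latency
```

This doesn't fix the convoy effect itself, but it bounds how long an I/O thread waits behind a CPU-bound one.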
# Python 3.2a0 (py3k:78982M, Mar 15 2010, 15:40:42)
# [GCC 4.3.4] on linux2
0.67s without thread
0.84s with spinning thread

With line buffering, I see the issue.

* 6 s without thread
* 115 s with the spinning thread (varying: 60 s, 98 s)
* 16 s with the spinning thread and the last "gilinter.patch"

# Modified version of the test case, with bufsize=1
from threading import Thread
import time

def writenums(f, n):
    start = time.time()
    for x in range(n):
        f.write("%d\n" % x)
    end = time.time()
    print(end - start)

def spin():
    while True:
        pass

t1 = Thread(target=spin)
t1.daemon = True
# Uncomment to add a thread
#t1.start()

# With line buffering
writenums(open("./nums", "w", 1), 1000000)

Whoa, that's pretty diabolically evil with bufsize=1. On my machine, doing that just absolutely kills the performance (13 seconds without the spinning thread versus 557 seconds with the thread!). Or, put another way, the writing performance drops from about 0.5 Mbyte/sec down to 12 Kbytes/sec with the thread. With my priority GIL, the time is about 29 seconds with the thread (consistent with your experiment using the gilinter patch).

In terms of practical relevance, this test again represents a simple situation where computation is overlapped with I/O processing. Perhaps the program has just computed a big result which is now being streamed somewhere else by a background thread. In the meantime, the program is now working on computing the next result (the spinning thread). Think queues, actors, or any number of similar things---there are programs that try to operate like this.

Almost forgot--if I turn off one of the CPU cores, the time drops from 557 seconds to 32 seconds. Gotta love it!

Oh, the situation definitely matters. Although, in the big picture, most programmers would probably prefer to have fast I/O performance over slow I/O performance :-).
Other than having this added preemption, do nothing else---just throw it all back to the user to come up with the proper "priorities." If there was something like this, it would completely fix the overlapped compute and I/O problem I mentioned. I'd just set a higher priority on the background I/O threads and be done with it. Problem solved. Ok, it's only a thought.

I tried Florent's modification to the write test and did not see the effect on my machine with an updated revision of Python 3.2. I am running Ubuntu Karmic 64 bit.

7s - no background threads.
20s - one background thread.

According to the following documentation, the libc condition uses the scheduling policy when waking a thread, not FIFO order. The following documentation suggests ordering in Linux is not FIFO.

I upload a quick and dirty patch (linux-7946.patch) to the new GIL just to reflect this by avoiding the timed waits. On my machine it behaves reasonably both with the TCP server and with the write test, but so does unpatched Python 3.2. I noticed a high context switching rate with Dave's priority GIL - with both tests it goes above 40K/s context switches.

I updated the patch with a small fix and increased the ticks countdown-to-release considerably. This seems to help the OS classify CPU-bound threads as such and actually improves IO performance.

> I upload bfs.patch

Interesting patch, but:

- Please give understandable benchmark numbers, including an explicit comparison with baseline 3.2, and patched 3.2 (e.g. gilinter.patch)
- Please also measure single-thread performance, because it looks like you are adding significant work inside the core eval loop
- Do you need a hi-res clock? gettimeofday() already gives you microseconds. It looks like a bit of imprecision shouldn't be detrimental.
- The magic number DEADLINE_FACTOR looks gratuitous (why 1.1^20 ?)
- By the way, I would put COND_SIGNAL inside the LOCK_MUTEX / UNLOCK_MUTEX pair in bfs_yield().
If this gets accepted there will be cosmetic issues to watch out for (and the patch should be cross-platform).

> I use clock_gettime() to get the thread running time to calculate slice depletion.

Ouch. CLOCK_THREAD_CPUTIME_ID is not a required part of the standard. Only CLOCK_REALTIME is guaranteed to exist.

> Ouch. CLOCK_THREAD_CPUTIME_ID is not a required part of the standard. Only CLOCK_REALTIME is guaranteed to exist.

Right, however the man page at kernel.org says the following on CLOCK_THREAD_CPUTIME_ID: "Sufficiently recent versions of glibc and the Linux kernel support the following clocks". The same statement shows up as early as 2003. However, if this is indeed a problem on some systems (non-Linux?), then a fallback could be attempted for them. There could also be a problem on systems where the counter exists but has low resolution (10ms+). What platforms do you think this could be a problem on?

> came up with cpued.py after reading the patches in an attempt to understand how they behave.

In this case one thread is pure Python while the other occasionally releases the GIL, both CPU bound. I don't claim this is a real-world situation. However, it is a case in which bfs.patch behaves as expected.

> I've tried ccbench with your patch and there's a clear regression in latency numbers.

Please specify system and test details so I can try to look into it. On my system ccbench behaves as expected:

$ ~/build/python/bfs/python ccbench.py
== CPython 3.2a0.0 (py3k) ==
== x86_64 Linux on '' ==

--- Throughput ---

Pi calculation (Python)

threads=1: 1252 iterations/s.
threads=2: 1199 ( 95 %)
threads=3: 1178 ( 94 %)
threads=4: 1173 ( 93 %)

regular expression (C)

threads=1: 491 iterations/s.
threads=2: 478 ( 97 %)
threads=3: 472 ( 96 %)
threads=4: 477 ( 97 %)

SHA1 hashing (C)

threads=1: 2239 iterations/s.
threads=2: 3719 ( 166 %)
threads=3: 3772 ( 168 %)
threads=4: 3464 ( 154 %)

--- Latency ---

Background CPU task: Pi calculation (Python)

CPU threads=0: 0 ms. (std dev: 1 ms.)
CPU threads=1: 0 ms. (std dev: 1 ms.)
CPU threads=2: 0 ms. (std dev: 1 ms.)
CPU threads=3: 0 ms. (std dev: 1 ms.)
CPU threads=4: 0 ms. (std dev: 1 ms.)

Background CPU task: regular expression (C)

CPU threads=0: 0 ms. (std dev: 0 ms.)
CPU threads=1: 6 ms. (std dev: 0 ms.)
CPU threads=2: 2 ms. (std dev: 2 ms.)
CPU threads=3: 1 ms. (std dev: 0 ms.)
CPU threads=4: 5 ms. (std dev: 7 ms.)

Background CPU task: SHA1 hashing (C)

CPU threads=0: 0 ms. (std dev: 1 ms.)
CPU threads=1: 0 ms. (std dev: 1 ms.)
CPU threads=2: 0 ms. (std dev: 1 ms.)
CPU threads=3: 1 ms. (std dev: 1 ms.)
CPU threads=4: 1 ms. (std dev: 0 ms.)

> Please specify system and test details so I can try to look into it.

By the way, I configure --with-computed-gotos.

Well, on initial check the scheduler seems to work well with regular gettimeofday() wall clock instead of clock_gettime(). :)

/* Return thread running time in seconds (with nsec precision). */
static inline long double
get_thread_timestamp(void)
{
    return get_timestamp(); // wall clock via gettimeofday()
    /*struct timespec ts;
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
    return (long double) ts.tv_sec + ts.tv_nsec * 0.000000001;*/
}

Does it make things better on your system?

Uploaded an updated bfs.patch. The latency problem was related to the --with-computed-gotos flag. I fixed it and it seems to work fine now. I also switched to gettimeofday() so it should work now on all Posix with a high resolution timer.

> I also switched to gettimeofday() so it should work now on all Posix
> with high resolution timer

But on a busy system, won't measuring wall clock time rather than CPU time give bogus results?

> But on a busy system, won't measuring wall clock time rather than CPU time give bogus results?

This was the motivation for using clock_gettime(). I tried the wall clock version under load (including on a single core system) and it seems to behave.
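The wall-clock vs. per-thread CPU-clock distinction being argued here is easy to observe from Python itself (3.3+ exposes both clocks; CLOCK_THREAD_CPUTIME_ID is assumed to be available, i.e. a Linux/glibc system):

```python
import time

# Wall-clock time advances while the thread is blocked; the per-thread
# CPU clock only advances while the thread actually runs on a core.
wall0 = time.monotonic()
cpu0 = time.clock_gettime(time.CLOCK_THREAD_CPUTIME_ID)
time.sleep(0.2)                  # blocked: consumes (almost) no CPU time
wall = time.monotonic() - wall0
cpu = time.clock_gettime(time.CLOCK_THREAD_CPUTIME_ID) - cpu0
print("wall: %.3f s, thread cpu: %.6f s" % (wall, cpu))
```

On a busy system the gap widens further, which is exactly why a wall-clock-based slice can overcharge a thread that was merely preempted.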
Now it remains to rationalize it :)

gilinter.patch has good IO latency in the UDP test on my system when built with --with-computed-gotos:

In [34]: %timeit -n3 client.work()
0.320 seconds (32782026.509 bytes/sec)
0.343 seconds (30561727.443 bytes/sec)
0.496 seconds (21154075.417 bytes/sec)
0.326 seconds (32171215.998 bytes/sec)
0.462 seconds (22701809.421 bytes/sec)
0.378 seconds (27722146.793 bytes/sec)
0.391 seconds (26826713.409 bytes/sec)
0.315 seconds (33335858.720 bytes/sec)
0.281 seconds (37349508.136 bytes/sec)
3 loops, best of 3: 329 ms per loop

Hmm, the gilinter patch shouldn't be sensitive to whether computed gotos are enabled or not. Here is an updated patch, though; the previous one didn't apply cleanly anymore. I've also made the priority condition a bit stricter.

I updated bfs.patch. It now builds on Windows (and Posix). I upload a new update to bfs.patch which improves scheduling and reduces overhead. Uploaded an update.

A couple of remarks on the BFS-based patch:

- nothing guarantees that you'll get a msec resolution; the product is computed as double, and then promoted as (long double)
- the code uses a lot of floating point calculation, which is slower than integer

The problem with this type of fixed priority is starvation. And it shouldn't be up to the user to set the priorities. And some threads can mix I/O and CPU intensive tasks.

Some more remarks:

- the COND_TIMED_WAIT macro modifies timeout_result when pthread_cond_timedwait expires. But timeout_result is not an int pointer, just an int. So it is never updated, and as a result, bfs_check_depleted is never set after a thread has waited for the current running thread to schedule it in vain (in _bfs_timed_wait).
- calls to COND_WAIT/COND_TIMED_WAIT should be run in loops checking for the predicate, since they might return even when the predicate is false (spurious wakeups, etc).
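The predicate-loop rule in that last remark is the standard condition-variable idiom, and it holds at the Python level too. A minimal sketch (illustrative, not code from any of the patches):

```python
import threading

cond = threading.Condition()
ready = False
result = []

def waiter():
    with cond:
        while not ready:         # always re-check the predicate in a loop:
            cond.wait()          # wait() may return without the condition
        result.append("done")    # actually being true (spurious wakeup)

t = threading.Thread(target=waiter)
t.start()
with cond:
    ready = True                 # change the predicate *before* signalling
    cond.notify()
t.join()
print(result)
```

A bare `cond.wait()` without the `while` loop is exactly the bug being pointed out in the C macros.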
Please disregard my remark on COND_TIMED_WAIT not updating timeout_result, it's wrong (it's really a macro, not a function...)

Comparison to BFS
-----------------

Still need to test. Would be curious.

One comment on that patch I just submitted. Basically, it's an attempt to make an extremely simple tweak to the GIL that fixes most of the problems discussed here in an extremely simple manner. I don't have any special religious attachment to it though. Would love to see a BFS comparison.

Here is the result of running the writes.py test with the patch I submitted. This is on OS X.

bash-3.2$ ./python.exe writes.py
t1 2.83990693092 0
t2 3.27937912941 0
t1 5.54346394539 1
t2 6.68237304688 1
t1 8.9648039341 2
t2 9.60041999817 2
t1 12.1856160164 3
t2 12.5866689682 3
t1 15.3869640827 4
t2 15.7042851448 4
t1 18.4115200043 5
t2 18.5771169662 5
t2 21.4922711849 6
t1 21.6835460663 6
t2 24.6117911339 7
t1 24.9126679897 7
t1 27.1683580875 8
t2 28.2728791237 8
t1 29.4513950348 9
t1 32.2438161373 10
t2 32.5283250809 9
t1 34.8905010223 11
t2 36.0952250957 10
t1 38.109760046 12
t2 39.3465380669 11
t1 41.5758800507 13
t2 42.587772131 12
t1 45.1536290646 14
t2 45.8339021206 13
t1 48.6495029926 15
t2 49.1581180096 14
t1 51.5414950848 16
t2 52.6768190861 15
t1 54.818582058 17
t2 56.1163961887 16
t1 58.1549630165 18
t2 59.6944830418 17
t1 61.4515309334 19
t2 62.7685520649 18
t1 64.3223180771 20
t2 65.8158640862 19
65.8578810692

Nice dabeaz.

One potential concern with "dabeaz_gil.patch 2010-04-25 21:13" is that it appears to always leave the gil_monitor thread running. This is bad on mobile/embedded platforms where waking up at regular intervals prevents advanced sleep states and wastes power/battery. (Practical example: the OLPC project has run into this issue in other code in the past.) Could this be modified so that gil_monitor stops looping (blocks) so long as there are only IO-bound Python threads running or while no Python thread owns the GIL?
In that situation a multithreaded Python process that has either reverted to one thread or has all threads blocked in IO would be truly idle rather than leaving the gil_monitor polling.

Dave,

> In the current implementation, threads perform a timed-wait on a
> condition variable. If time expires and no thread switches have
> occurred, the currently running thread is forced to drop the GIL.

A problem, as far as I can see, is that these timeout sleeps run periodically, regardless of the actual times at which thread switching takes place. I'm not sure it's really an issue but it's a bit of a departure from the "ideal" behaviour of the switching interval.

Ok, so it's not very different, at least in principle, from what gilinter.patch does, right? (and actually, the benchmark results look very similar)

Greg, I like the idea of the monitor suspending if no thread owns the GIL. Let me work on that. Good point on embedded systems.

Antoine, Yes, the gil monitor is completely independent and simply ticks along every 5 ms. A worst case scenario is that an I/O bound thread is scheduled shortly after the 5 ms tick and then becomes CPU-bound afterwards. In that case, the monitor might let it run up to about 10 ms before switching it. Hard to say if it's a real problem though---the normal timeslice on many systems is 10 ms so it doesn't seem out of line.

As for the priority part, this patch should have similar behavior to the gilinter patch except for very subtle differences in thread scheduling due to the use of the GIL monitor. For instance, since threads never time out on the condition variable anymore, they tend to cycle execution in a purely round-robin fashion.
I've also attached a new file schedtest.py that illustrates a subtle difference between having the GIL monitor thread and not having the monitor. Without the monitor, every thread is responsible for its own scheduling. If you have a lot of threads running, you may have a lot of threads all performing a timed wait and then waking up only to find that the GIL is locked and that they have to go back to waiting. One side effect is that certain threads have a tendency to starve. For example, if you run the schedtest.py with the original GIL, you get a trace where three CPU-bound threads run like this: Thread-3 16632 Thread-2 16517 Thread-1 31669 Thread-2 16610 Thread-1 16256 Thread-2 16445 Thread-1 16643 Thread-2 16331 Thread-1 16494 Thread-3 16399 Thread-1 17090 Thread-1 20860 Thread-3 16306 Thread-1 19684 Thread-3 16258 Thread-1 16669 Thread-3 16515 Thread-1 16381 Thread-3 16600 Thread-1 16477 Thread-3 16507 Thread-1 16740 Thread-3 16626 Thread-1 16564 Thread-3 15954 Thread-2 16727 ... You will observe that Threads 1 and 2 alternate, but Thread 3 starves. Then at some point, Threads 1 and 3 alternate, but Thread 2 starves. By having a separate GIL monitor, threads are no longer responsible for making scheduling decisions concerning timeouts. Instead, the monitor is what times out and yanks threads off the GIL. If you run the same test with the GIL monitor, you get scheduling like this: Thread-1 33278 Thread-2 32278 Thread-3 31981 Thread-1 33760 Thread-2 32385 Thread-3 32019 Thread-1 32700 Thread-2 32085 Thread-3 32248 Thread-1 31630 Thread-2 32200 Thread-3 32054 Thread-1 32721 Thread-2 32659 Thread-3 34150 Threads nicely cycle round-robin. There also appears to be about half as much thread switching (for reasons I don't quite understand).() New version of patch that will probably fix Windows-XP problems. Was doing something stupid in the monitor (not sure how it worked on Unix). 
@dabeaz I'm getting random segfaults with your patch (even with the last one), pretty much everywhere malloc or free is called. After skimming through the code, I think the problem is due to gil_last_holder: in drop_gil and take_gil, you dereference gil_last_holder->cpu_bound, but it might very well happen that gil_last_holder points to a thread that has been deleted (through tstate_delete_common). Dereferencing is not risky, because there's a high chance that the address is still valid, but in drop_gil, you do this:

/* Mark the thread as CPU-bound or not depending on whether it was forced off */
gil_last_holder->cpu_bound = gil_drop_request;

Here, if the thread has been deleted in the meantime, you end up writing to a random location on the heap, and probably corrupting malloc administration data, which would explain why I get segfaults sometimes later on unrelated malloc() or free() calls. I looked at it really quickly though, so please forgive me if I missed something obvious ;-)

- pretty much all your variables are declared as volatile, but volatile was never meant as a thread-synchronization primitive. Since your variables are protected by mutexes, you already have all necessary memory barriers and synchronization, so volatile just prevents optimization
- you use some functions just to perform a comparison or subtraction; maybe it would be better to just remove those functions and perform the subtractions/comparisons inline (you declared the functions inline but there's no guarantee that the compiler will honor it)
- did you experiment with the time slice? I tried some higher values and got better results, without penalizing the latency. Maybe it could be interesting to look at it in more detail (and on various platforms).

Added extra pointer check to avoid possible segfault.

That second access of gil_last_holder->cpu_bound is safe because that block of code is never entered unless some other thread currently holds the GIL.
If a thread holds the GIL, then gil_last_holder is guaranteed to have a valid value.

Didn't have much sleep last night, so please forgive me if I say something stupid, but:

Python/pystate.c:

void
PyThreadState_DeleteCurrent()
{
    PyThreadState *tstate = _PyThreadState_Current;
    if (tstate == NULL)
        Py_FatalError(
            "PyThreadState_DeleteCurrent: no current tstate");
    _PyThreadState_Current = NULL;
    tstate_delete_common(tstate);
    if (autoTLSkey && PyThread_get_key_value(autoTLSkey) == tstate)
        PyThread_delete_key_value(autoTLSkey);
    PyEval_ReleaseLock();
}

The current tstate is deleted and freed before releasing the GIL, so if another thread calls take_gil after the current thread has called tstate_delete_common but before it calls PyEval_ReleaseLock (which calls drop_gil and sets gil_locked to 0), then it will enter this section and dereference gil_last_holder. I just checked with valgrind, and it also reports an illegal dereference at this precise line.

I stand corrected. However, I'm going to have to think of a completely different approach for carrying out that functionality, as I don't know how the take_gil() function is able to determine whether gil_last_holder has been deleted or not. Will think about it and post an updated patch later.

Do you have any examples or insight you can provide about how these segfaults have shown up in Python code? I'm not able to observe any such behavior on OS-X or Linux. Is this happening while running the ccbench program? Some other program?

If you're talking about the first issue (segfaults due to writing to gil_last_holder->cpu_bound), it was occurring quite often during ccbench (pretty much anywhere malloc/free was called). I'm running a regular dual-core Linux box, nothing special.
For the second one, I didn't observe any segfault, I just figured this out reading the code and confirmed it with valgrind, but it's much less likely because the race window is very short and it also requires that the page is unmapped in between. If someone really wanted to get segfaults, I guess a good start would be:

- get a fast machine, multi-core is a bonus
- use a kernel with full preemption
- use a lot of threads (-n option with ccbench)
- use purify or valgrind's --free-fill option so that you're sure to jump into no-man's land if you dereference a previously-free'd pointer

One more attempt at fixing tricky segfaults. Glad someone had some eagle eyes on this :-).

On Tue, Apr 27, 2010 at 12:23 PM, Charles-Francois Natali wrote:
> I stand corrected. However, I'm going to have to think of a
> completely different approach for carrying out that functionality as I
> don't know how the take_gil() function is able to determine whether
> gil_last_holder has been deleted or not.

Please note take_gil() currently doesn't depend on the validity of the pointer. gil_last_holder is just used as an opaque value, equivalent to a thread id.

> In Windows the high-precision counter might return different results
> on different cores in some hardware configurations (older multi-core
> processors).

More specifically: some older multi-core processors where the HAL implements QueryPerformanceCounter using the TSC from the CPU, and the HAL doesn't keep the cores in sync and QPC doesn't otherwise account for it. This is rare; frequently QPC is implemented using another source of time. But it's true: QPC is not 100% reliable. QPC can unfortunately jump backwards (when using TSC and you switch cores), jump forwards (when using TSC and you switch cores, or when using the PCI bus timer on P3-era machines with a specific buggy PCI south bridge controller), speed up or slow down (when using TSC and not accounting for changing CPU speed via SpeedStep &c).
The simple solution: give up QPC and use timeGetTime() with timeBeginPeriod(1), which is totally reliable but only has millisecond accuracy at best. (See MS KB Q274323.)

On Wed, Apr 28, 2010 at 12:41 AM, Larry Hastings wrote:
> The simple solution: give up QPC and use timeGetTime() with timeBeginPeriod(1), which is totally
> reliable but only has millisecond accuracy at best.

It is preferable to use a high precision clock and I think the code addresses the multi-core time skew problem (pending testing).

Wow, that is a *really* intriguing performance result with radically different behavior than Unix. Do you have any ideas of what might be causing it?

Dave, the behavior of your patch on Windows XP/2003 (and earlier) might be related to the way Windows boosts thread priority when it is signaled. Try to increase the priority of the monitor thread and the slice size. Another thing to look at is how to prevent Python CPU-bound threads from (starving) messing up scheduling of threads of other processes. Maybe increasing the slice significantly can help in this too (50ms++ ?). XP/NT/CE scheduling and thread boosting affect all patches and the current GIL undesirably (in different ways). Maybe it is possible to make your patch work nicely on these systems. Vista and Windows 7 involve CPU cycle counting which results in more sensible scheduling.

Updated bfs.patch to patch cleanly against the updated py3k branch. Use: $ patch -p1 < bfs.patch

Attached ccbench-osx.log made today on OSX on latest svn checkout. Hope it helps.

Thanks for all your work Nir! I personally think the BFS approach is the best we've seen yet for this problem! Having read the thread you linked to in full (ignoring the tangents, bikeshedding and mudslinging that went on there), it sounds like the general consensus is that we should take thread scheduling changes slowly and let the existing new implementation bake in the 3.2 release.
That puts this issue as a possibility for 3.3 if users demonstrate real world application problems in 3.2. (Personally I'd say it is already obvious that there are problems and we should go ahead with your BFS based approach, but realistically we're still better off in 3.2 than we were in 3.1 and 2.x as is.)

> gettimeofday returns you wall clock time: if a process
> that modifies time is running, e.g. ntpd, you're likely
> to run into trouble. the value returned is _not_ monotonic,
> ...

The issue #12822 asks to use monotonic clocks when available.

What happened to this bug and patch?

Not much :) The patch is complex and the issue hasn't proved to be significant in production code. Do you have a (real-world) workload where this shows up?

On 15/07/2014 09:52, Dima Tisnek wrote:
> Dima Tisnek added the comment:
>
> What happened to this bug and patch?
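As context for the monotonic-clock point quoted above: since Python 3.3 (the outcome of the issue #12822 line of work), the stdlib exposes time.monotonic(), which is unaffected by wall-clock adjustments and is therefore the right base for GIL-style interval timing. A minimal sketch:

```python
import time

# Wall-clock time can be adjusted while you measure (ntpd, manual changes,
# DST), so intervals taken with time.time() can come out negative or
# wildly wrong. time.monotonic() (Python 3.3+) only ever moves forward.
start = time.monotonic()
time.sleep(0.01)
elapsed = time.monotonic() - start

# The measured interval is non-negative regardless of any wall-clock
# adjustments that happened in between.
print(elapsed >= 0)  # prints True
```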
https://bugs.python.org/issue7946
Sysinternals Suite: The entire set of Sysinternals Utilities rolled up into a single download.
AccessChk v5.11 (May 16, ...)
v1.01 (November 20, 2007): An LDAP (Light-weight Directory Access Protocol) real-time monitoring tool aimed at troubleshooting Active Directory client applications.
AdRestore v1.1 (November 1, 2006): Undelete Server 2003 Active Directory objects.
Autologon v3.01 (February 23, 2011): Bypass password screen during logon.
Autoruns v11.70 (August 1, 2013): See what programs are configured to startup automatically when your system boots and you login. Autoruns also shows you the full list of Registry and file locations where applications can configure auto-start settings.
BgInfo v4.20 (August 1, ...)
v1.0 (June 4, 2009): View the resolution of the system clock, which is also the maximum timer resolution.
Contig v1.7 (November 15, 2012): Wish you could quickly defragment your frequently used files? Use Contig to optimize individual files, or to create new files that are contiguous.
Coreinfo v3.21 (December 19, 2013)
NTFSInfo v1.0 (November 1, 2006): Use NTFSInfo to see detailed information about NTFS volumes, including the size and location of the Master File Table (MFT) and MFT-zone, as well as the sizes of the NTFS meta-data files.
PageDefrag v2.32 (November 1, 2006): Defragment your paging files and Registry hives.
PendMoves v1 (2012)
v6.0 (May 16, ...)
v1.02 (March ...)
v1.1 (March 7, ...)
v1.03 (March 7, ...)
TCPView v3.05 (July 25, 2011): Active socket command-line viewer.
VMMap v3.11 (September 10, 2012): VMMap is a process virtual and physical memory analysis utility.
VolumeId v2.0 (November 1, 2006): Set Volume ID of FAT or NTFS drives.
Whois v1.11 (October 17, 2012): See who owns an Internet address.
WinObj v2.22 (February 14, 2011): The ultimate Object Manager namespace viewer is here.
ZoomIt v4.5 (June 20, 2013): Presentation utility for zooming and drawing on the screen.
http://technet.microsoft.com/en-au/sysinternals/bb545027
PyX is a Python package for the creation of PostScript and PDF files. It seems like a more modern alternative to MetaPost so I was keen to try it out. I found that the experimental pyx-0.8.1 package in Sage 2.10 did not work (errors about a DVI file not finishing) so I created a Sage source package of the latest version: pyx-0.10.spkg. Download that package and then do

sage -i pyx-0.10.spkg

to install it. Here is the standard Hello World example (look for hello.pdf in your working directory):

import pyx
x = y = 0
c = pyx.canvas.canvas()
c.text(x, y, "Hello, world!")
c.stroke(pyx.path.line(x, y, x+5, y+0))
c.writePDFfile("hello")

The previous pyx spkg tried to put the pyxrc file into /etc but I prefer to run Sage as a normal user, and sudo-ing to install a package isn't completely straightforward (you have to set some environment variables, so it's not newbie-friendly). I made my spkg-install file put pyxrc into $SAGE_LOCAL/etc but I'm not sure if this is a suitable location. Any suggestions?

edit: On the sage-devel mailing list I was told to use ~/.sage so I have amended the pyx-0.10 package to do this.
https://carlo-hamalainen.net/2008/02/17/pyx-0-10-experimental-package/
The Magic Laravel Helper tap()

mohamed benhida
Mar 1

Hello,

In this post we will talk about how the tap() helper works and where it can be used in our projects. First of all, let's look at what tap() actually does behind the scenes.

<?php
function tap($value, $callback = null)
{
    if (is_null($callback)) {
        return new HigherOrderTapProxy($value);
    }

    $callback($value);

    return $value;
}

After seeing tap(), you must know that the very first thing we pass to it is always what is going to be returned to us. Let's work through an example so we can see the differences more clearly. For now, let's say the $callback is not null, so our tap() helper behaves like this:

<?php
function tap($value, $callback)
{
    $callback($value);

    return $value;
}

Our example is that we want to update a user and at the same time return the edited $user.

<?php
public function update(Request $request, User $user)
{
    $user = $user->update($request->all()); // returns a boolean
}

We all know that update() returns a boolean, but we want it to return our edited $user so we can pass it to our JSON response, for example. Here comes the role of tap():

<?php
public function update(Request $request, User $user)
{
    $user = tap($user, function ($user) use ($request) {
        $user->update($request->all());
    }); // returns the edited User
}

We passed $user and a closure that accepts $user and runs the update() method. The $callback($value); will be triggered and the $value is returned at the end; in our case it returns the $user model. But this is not ideal: it looks clumsy because we need to pass $request into the closure, adding two extra lines. So here comes the role of a null $callback, which returns a new HigherOrderTapProxy($value) that helps us solve the problem above. Let's understand together what HigherOrderTapProxy($value) really does.
<?php

namespace Illuminate\Support;

class HigherOrderTapProxy
{
    public $target;

    public function __construct($target)
    {
        $this->target = $target; // the target in our case is the $user model
    }

    // When we call a method on the proxy, it triggers this function
    public function __call($method, $parameters)
    {
        $this->target->{$method}(...$parameters); // $user->update($parameters)

        return $this->target; // return $user
    }
}

As we can see, HigherOrderTapProxy accepts a $value, and each time we call a method on the proxy it triggers the __call() function; the method is applied to the $value we passed, and then that value is returned, as you can see in the comments above. Now, if you really understand how HigherOrderTapProxy works, you can just do this and it will work:

<?php
public function update(Request $request, User $user)
{
    $user = tap($user)->update($request->all()); // returns the edited $user
}

And here we have the same result with the code on one line. I hope you enjoyed reading this article.
https://dev.to/simo_benhida/the-magic-laravel-helper-tap--1jc7
Oh, I see the error... I had been stripping namespaces off of params and variables, and was also stripping the xsl: prefix, and didn't notice until now.

Regards

Alex
Tilogeo.com

On Tue, Feb 10, 2015 at 11:25 AM, Alex Muir wrote:

> Greetings,
>
> I have a client request to create logic to merge included and imported
> xslt files into 1 main file.
>
> I've written a process that recurses through imports and includes
> collecting data and adds different priorities to imported templates
> depending on the import level and have left duplicate templates to be
> handled manually.
>
> One odd scope issue that I can't figure out, using some docbook xslt for
> testing, is that some variables that have a global variable declaration
> such as $qanda.defaultlabel in the output I've posted here
> are recognized as having been declared in all but one place, as if the
> scope in that particular location was not covered. I don't see a reason why
> this would be occurring. Can anyone figure it out?
>
> Regards
>
> Alex
> Tilogeo.com
https://www.oxygenxml.com/archives/xsl-list/201502/msg00034.html
Introduction

An infrared receiver is a component that receives infrared signals and can independently receive infrared rays and output signals compatible with TTL level. It is similar in size to a normal plastic-packaged transistor and is suitable for all kinds of infrared remote control and infrared transmission.

Components

Experimental Principle

By programming, we respond to a certain key (for example, the Power key) on a remote controller. When you press the key, the remote controller emits infrared rays, which are received by the infrared receiver, and the LED on the Mega 2560 board lights up.

The schematic diagram:

Experimental Procedures

Step 1: Build the circuit
Step 2: Open the code file.
Step 3: Select the Board and Port.
Step 4: Upload the sketch to the board.

Now, press Power on the remote control and the LED attached to pin 13 on the Mega 2560 board will light up. If you press other keys, the LED will go out.

Note:
- There is a transparent plastic piece at the back of the remote control to cut off the power; pull it out before you use the remote control.
- Please press the buttons on the remote gently to avoid the invalid data FFFFFFFF.

Code Analysis

Code Analysis 16-1: Initialize the infrared receiver

#include <IRremote.h>
const int irReceiverPin = 2;   // the infrared receiver is attached to pin 2
const int ledPin = 13;         // built-in LED
IRrecv irrecv(irReceiverPin);  // initialize the infrared receiver
decode_results results;        // the decoding result is placed in the decode_results structure
Code Analysis 16-2: Enable the infrared receiver

irrecv.enableIRIn(); // restart the receiver

Code Analysis 16-3: Receive and print the data

if (irrecv.decode(&results)) {        // if data has been received
  Serial.print("irCode: ");           // print "irCode: " on the serial monitor
  Serial.print(results.value, HEX);   // print the signal on the serial monitor in hexadecimal
  Serial.print(", bits: ");
  Serial.println(results.bits);       // print the number of data bits
  irrecv.resume();                    // receive the next value
}
delay(600);

decode(&results): Decodes the received IR message; returns 0 if no data is ready, 1 if data is ready. The results of decoding are stored in results.

Code Analysis 16-4: If the Power button is pressed

if (results.value == 0xFFA25D) // if the Power button on the remote control is pressed
{
  digitalWrite(ledPin, HIGH);  // turn on the LED
}
else
{
  digitalWrite(ledPin, LOW);   // otherwise turn off the LED
}

0xFFA25D is the code of the Power button on the remote control; if you want to use another button, you can read the code of every button from the serial monitor.
https://learn.sunfounder.com/lesson-16-infrared-receiver/
...style that bird a duck

In languages with dynamic typing, this feature allows creating functions that do not check the type of a passed object but instead rely on the existence of particular methods/properties within it, throwing a runtime exception when those properties are not found. For instance, in Groovy we could have a function for printing info about some entity:

def printEntity = { entity ->
    println "id: ${entity.id}, name: ${entity.name}"
}

Let's say we have the following class:

class Entity {
    Long id
    String name
}

So we can invoke our function:

printEntity(new Entity(id: 10L, name: 'MyName1'))
id: 10, name: MyName1

But at the same time we could pass a map as the argument:

printEntity(['id': 10L, 'name': 'MyName2'])
id: 10, name: MyName2

Using some metaprogramming magic we could even write the following:

class Ghost {
    def propertyMissing(String name) {
        if (name == 'id') {
            return -1L
        } else if (name == 'name') {
            return 'StubName'
        }
    }
}

And we will still be able to call our function:

printEntity(new Ghost())
id: -1, name: StubName

Welcome to the real world

Fortunately this concept can be used not only in languages with dynamic typing but also in ones with a stricter typing model, such as Java. Wikipedia has a good example of a duck typing implementation in Java using the Proxy class. Well, you say, what is the practical use of this except feeling like the wisest guru :) Let me show a real life task that was solved in Java using the duck typing technique.

In the beginning I had a simple report generator that queries a DB of products and outputs the id and name of a certain entity. But then the customer said: 'I'd like to also have a link to the entity detail page at our site. A beautiful, SEO friendly link. Could you do it for me?' 'Sure', I said. After digging through our codebase I discovered a cool function generateSeoUrl() that does the job. The function takes one argument of type Entity, which is an interface. So my intention was to look at the implementations of Entity and try to use one of them for the report generation.
How surprised I was to discover that all of them are part of some self-made ORM tool and their constructors query the DB to get all the information about the product. So if I used the Entity implementations I would have to deal with one extra query per row of my report, and this is unacceptable since the report comprises a huge number of rows. So I decided to try another approach and implement the Entity interface, overriding the methods that are used by generateSeoUrl(). I clicked my IDE shortcut and was surprised again: Entity had about 50 (!!!) methods. Well, I already knew that only getEntityId() and getName() are used by the generateSeoUrl() function, but then again, having a new class with 50 empty methods just to override 2 of them doing useful work did not seem like a good idea to me. Thus I decided to stop trying to code and start to think :) Extending one of the Entity implementations to prevent querying the DB, or copy-pasting generateSeoUrl() and adapting it for my needs, were options, but still not beautiful. Especially when I remembered duck typing. I said to myself: we have a function that takes an instance of Entity but only uses two methods of this interface, so to complete my task I need something that looks like an Entity and is able to handle the getEntityId() and getName() methods. Since entityId and name were already present in the data used for generating my report, I could reuse them in my new class to stub the data for getEntityId() and getName(). To achieve duck typing we need to create a Proxy whose handler implements the InvocationHandler interface, plus a static method to retrieve an instance of the Proxy.
The final code of my class looks like:

public class ReportEntitySupport implements InvocationHandler {

    public static Entity newInstance(Long entityId, String name) {
        return (Entity) Proxy.newProxyInstance(
            Product.class.getClassLoader(),
            Product.class.getInterfaces(),
            new ReportEntitySupport(entityId, name)
        );
    }

    private final String name;
    private final Long entityId;

    private ReportEntitySupport(Long entityId, String name) {
        this.name = name;
        this.entityId = entityId;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        if (method.getName().equals("getName")) {
            return this.name;
        } else if (method.getName().equals("getEntityId")) {
            return this.entityId;
        }
        return null;
    }
}

So how to use it? Inside my report generator class, while iterating over the ResultSet, I'm using the following:

Long entityId;
String name;
....
Entity entity = ReportEntitySupport.newInstance(entityId, name);
String seoUrl = generateSeoUrl(entity);
....

P.S. This post just illustrates that some concepts uncommon for the Java language can be successfully applied to completing real life tasks, improving your programming skills and making your code more beautiful.

Reference: "Duck typing in Java? Well, not exactly" from our JCG partner Evgeny Shepelyuk at the jk's blog.
http://www.javacodegeeks.com/2012/09/duck-typing-in-java-well-not-exactly.html
The most important changes in matplotlib 2.0 are the changes to the default style. While it is impossible to select the best default for all cases, these are designed to work well in the most common cases. A 'classic' style sheet is provided so reverting to the 1.x default values is a single line of python:

import matplotlib.style
import matplotlib as mpl
mpl.style.use('classic')

See The matplotlibrc file for details about how to persistently and selectively revert many of these changes.

Table of Contents: scatter, plot, errorbar, boxplot, fill_between and fill_betweenx, hexbin, bar and barh

The colors in the default property cycle have been changed from ['b', 'g', 'r', 'c', 'm', 'y', 'k'] to the category10 color palette used by Vega and d3, originally developed at Tableau.

(Source code, png, pdf)

In addition to changing the colors, an additional method to specify colors was added. Previously, the default colors were the single character short-hand notations for red, green, blue, cyan, magenta, yellow, and black. This made them easy to type and usable in the abbreviated style string in plot; however, the new default colors are only specified via hex values. To access these colors outside of the property cycling, the notation 'CN', where N takes values 0-9, was added to denote the first 10 colors in mpl.rcParams['axes.prop_cycle']. See Specifying Colors for more details.

To restore the old color cycle use

from cycler import cycler
mpl.rcParams['axes.prop_cycle'] = cycler(color='bgrcmyk')

or set

axes.prop_cycle : cycler('color', 'bgrcmyk')

in your matplotlibrc file.

The default colormap used by matplotlib.cm.ScalarMappable instances is now 'viridis' (aka option D).

(Source code, png, pdf)

For an introduction to color theory and how 'viridis' was generated watch Nathaniel Smith and Stéfan van der Walt's talk from SciPy2015. See here for many more details about the other alternatives and the tools used to create the color map.
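As a quick check of the 'CN' notation described above, the hex value behind each cycle color can be resolved with matplotlib.colors (a small sketch; the printed values assume an unmodified default rcParams):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display required
from matplotlib.colors import to_hex

# 'C0'..'C9' index into mpl.rcParams['axes.prop_cycle']; with the 2.0
# defaults they resolve to the category10 palette.
print(to_hex("C0"))  # '#1f77b4'
print(to_hex("C1"))  # '#ff7f0e'
```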
For details on all of the color maps available in matplotlib see Colormaps in Matplotlib.

The previous default can be restored using

mpl.rcParams['image.cmap'] = 'jet'

or by setting

image.cmap : 'jet'

in your matplotlibrc file; however, this is strongly discouraged.

The default interactive figure background color has changed from grey to white, which matches the default background color used when saving.

The previous defaults can be restored by

mpl.rcParams['figure.facecolor'] = '0.75'

or by setting

figure.facecolor : '0.75'

in your matplotlibrc file.

The default style of grid lines was changed from black dashed lines to thicker solid light grey lines.

(Source code, png, pdf)

The previous default can be restored by using:

mpl.rcParams['grid.color'] = 'k'
mpl.rcParams['grid.linestyle'] = ':'
mpl.rcParams['grid.linewidth'] = 0.5

or by setting:

grid.color : k  # grid color
grid.linestyle : :  # dotted
grid.linewidth : 0.5  # in points

in your matplotlibrc file.

The default dpi used for on-screen display was changed from 80 dpi to 100 dpi, the same as the default dpi for saving files. Due to this change, the on-screen display is now more what-you-see-is-what-you-get for saved files. To keep the figure the same size in terms of pixels, in order to maintain approximately the same size on the screen, the default figure size was reduced from 8x6 inches to 6.4x4.8 inches. As a consequence of this, the default font sizes used for the title, tick labels, and axes labels were reduced to maintain their size relative to the overall size of the figure. By default the dpi of the saved image is now the dpi of the Figure instance being saved. This will have consequences if you are trying to match text in a figure directly with external text.
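The inch/dpi bookkeeping above can be verified directly; a small sketch (the numbers are the stock 2.x defaults, so they assume an unmodified rcParams):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

fig = plt.figure()                     # stock defaults: 6.4 x 4.8 inches at 100 dpi
w_in, h_in = fig.get_size_inches()
print(w_in, h_in)                      # 6.4 4.8
print(w_in * fig.dpi, h_in * fig.dpi)  # 640.0 480.0 : pixel size on screen and on disk
```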
The previous defaults can be restored by

mpl.rcParams['figure.figsize'] = [8.0, 6.0]
mpl.rcParams['figure.dpi'] = 80
mpl.rcParams['savefig.dpi'] = 100
mpl.rcParams['font.size'] = 12
mpl.rcParams['legend.fontsize'] = 'large'
mpl.rcParams['figure.titlesize'] = 'medium'

or by setting:

figure.figsize : [8.0, 6.0]
figure.dpi : 80
savefig.dpi : 100
font.size : 12.0
legend.fontsize : 'large'
figure.titlesize : 'medium'

in your matplotlibrc file.

In addition, the forward kwarg to set_size_inches now defaults to True to improve the interactive experience. Backend canvases that adjust the size of their bound matplotlib.figure.Figure must pass forward=False to avoid circular behavior. This default is not configurable.

scatter

The following changes were made to the default behavior of scatter:

- The default size of the elements in a scatter plot is now based on the rcParam lines.markersize, so it is consistent with plot(X, Y, 'o'). The old value was 20, and the new value is 36 (6^2).
- scatter markers no longer have a black edge.
- If the color of the markers is not specified, it will follow the property cycle, pulling from the 'patches' cycle on the Axes.

(Source code, png, pdf)

The classic default behavior of scatter can only be recovered through mpl.style.use('classic'). The marker size can be recovered via

mpl.rcParams['lines.markersize'] = np.sqrt(20)

however, this will also affect the default marker size of plot.
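The markersize/area relationship above can be checked in a couple of lines (a sketch assuming default rcParams):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

# scatter's default area is lines.markersize ** 2 (in points^2), which
# matches the visual size of plot(X, Y, 'o') markers.
ms = plt.rcParams["lines.markersize"]   # 6.0 by default
pc = plt.scatter([1, 2, 3], [4, 5, 6])  # no 's' given: uses the default
print(ms ** 2)                          # 36.0
print(pc.get_sizes())                   # [36.]
```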
To recover the classic behavior on a per-call basis pass the following kwargs:

classic_kwargs = {'s': 20, 'edgecolors': 'k', 'c': 'b'}

plot

The following changes were made to the default behavior of plot:

- the default linewidth increased from 1 to 1.5
- the dash patterns associated with '--', ':', and '-.' have changed
- the dash patterns now scale with line width

(Source code, png, pdf)

The previous defaults can be restored by setting:

mpl.rcParams['lines.linewidth'] = 1.0
mpl.rcParams['lines.dashed_pattern'] = [6, 6]
mpl.rcParams['lines.dashdot_pattern'] = [3, 5, 1, 5]
mpl.rcParams['lines.dotted_pattern'] = [1, 3]
mpl.rcParams['lines.scale_dashes'] = False

or by setting:

lines.linewidth : 1.0
lines.dashed_pattern : 6, 6
lines.dashdot_pattern : 3, 5, 1, 5
lines.dotted_pattern : 1, 3
lines.scale_dashes : False

in your matplotlibrc file.

errorbar

By default, caps on the ends of errorbars are not present.

(Source code, png, pdf)

This also changes the return value of errorbar() as the list of 'caplines' will be empty by default. The previous defaults can be restored by setting:

mpl.rcParams['errorbar.capsize'] = 3

or by setting

errorbar.capsize : 3

in your matplotlibrc file.

boxplot

Previously, boxplots were composed of a mish-mash of styles that were, for better or worse, inherited from Matlab. Most of the elements were blue, but the medians were red. The fliers (outliers) were black plus-symbols (+) and the whiskers were dashed lines, which created ambiguity if the (solid and black) caps were not drawn.

For the new defaults, everything is black except for the median and mean lines (if drawn), which are set to the first two elements of the current color cycle. Also, the default flier markers are now hollow circles, which maintain the ability of the plus-symbols to overlap without obscuring data too much.
(Source code, png, pdf)

The previous defaults can be restored by setting:

mpl.rcParams['boxplot.flierprops.color'] = 'k'
mpl.rcParams['boxplot.flierprops.marker'] = '+'
mpl.rcParams['boxplot.flierprops.markerfacecolor'] = 'none'
mpl.rcParams['boxplot.flierprops.markeredgecolor'] = 'k'
mpl.rcParams['boxplot.boxprops.color'] = 'b'
mpl.rcParams['boxplot.whiskerprops.color'] = 'b'
mpl.rcParams['boxplot.whiskerprops.linestyle'] = '--'
mpl.rcParams['boxplot.medianprops.color'] = 'r'
mpl.rcParams['boxplot.meanprops.color'] = 'r'
mpl.rcParams['boxplot.meanprops.marker'] = '^'
mpl.rcParams['boxplot.meanprops.markerfacecolor'] = 'r'
mpl.rcParams['boxplot.meanprops.markeredgecolor'] = 'k'
mpl.rcParams['boxplot.meanprops.markersize'] = 6
mpl.rcParams['boxplot.meanprops.linestyle'] = '--'
mpl.rcParams['boxplot.meanprops.linewidth'] = 1.0

or by setting:

boxplot.flierprops.color: 'k'
boxplot.flierprops.marker: '+'
boxplot.flierprops.markerfacecolor: 'none'
boxplot.flierprops.markeredgecolor: 'k'
boxplot.boxprops.color: 'b'
boxplot.whiskerprops.color: 'b'
boxplot.whiskerprops.linestyle: '--'
boxplot.medianprops.color: 'r'
boxplot.meanprops.color: 'r'
boxplot.meanprops.marker: '^'
boxplot.meanprops.markerfacecolor: 'r'
boxplot.meanprops.markeredgecolor: 'k'
boxplot.meanprops.markersize: 6
boxplot.meanprops.linestyle: '--'
boxplot.meanprops.linewidth: 1.0

in your matplotlibrc file.

fill_between and fill_betweenx

fill_between and fill_betweenx both follow the patch color cycle.

(Source code, png, pdf)

If the facecolor is set via the facecolors or color keyword argument, then the color is not cycled. To restore the previous behavior, explicitly pass the keyword argument facecolors='C0' to the method call.

Most artists drawn with a patch (bar, pie, etc.) no longer have a black edge by default. The default face color is now 'C0' instead of 'b'.
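A sketch checking the new patch defaults (the values shown assume an unmodified rcParams):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
from matplotlib.colors import to_hex

print(plt.rcParams["patch.facecolor"])  # 'C0' rather than the old 'b'

fig, ax = plt.subplots()
bars = ax.bar([0, 1], [2, 3])           # no color given: picks up the cycle color
print(to_hex(bars[0].get_facecolor()))  # '#1f77b4', i.e. 'C0', with no forced black edge
```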
(Source code, png, pdf)

The previous defaults can be restored by setting:

mpl.rcParams['patch.force_edgecolor'] = True
mpl.rcParams['patch.facecolor'] = 'b'

or by setting:

patch.facecolor : b
patch.force_edgecolor : True

in your matplotlibrc file.

hexbin

The default value of the linecolor kwarg for hexbin has changed from 'none' to 'face'. If 'none' is now supplied, no line edges are drawn around the hexagons.

bar and barh

The default value of the align kwarg for both bar and barh is changed from 'edge' to 'center'.

(Source code, png, pdf)

To restore the previous behavior explicitly pass the keyword argument align='edge' to the method call.

The color of the lines in the hatch is now determined by:

- If an edge color is explicitly set, use that for the hatch color
- If the edge color is not explicitly set, use rcParams['hatch.color'], which is looked up at artist creation time.

The width of the lines in a hatch pattern is now configurable by the rcParam hatch.linewidth, which defaults to 1 point. The old behavior for the line width was different depending on backend:

- PDF: 0.1 pt
- SVG: 1.0 pt
- PS: 1 px
- Agg: 1 px

The old line width behavior can not be restored across all backends simultaneously, but can be restored for a single backend by setting:

mpl.rcParams['hatch.linewidth'] = 0.1  # previous pdf hatch linewidth
mpl.rcParams['hatch.linewidth'] = 1.0  # previous svg hatch linewidth

The behavior of the PS and Agg backends was DPI dependent, thus:

mpl.rcParams['figure.dpi'] = dpi
mpl.rcParams['savefig.dpi'] = dpi  # or leave as default 'figure'
mpl.rcParams['hatch.linewidth'] = 1.0 / dpi  # previous ps and Agg hatch linewidth

There is no direct API level control of the hatch color or linewidth.

Hatching patterns are now rendered at a consistent density, regardless of DPI. Formerly, high DPI figures would be more dense than the default, and low DPI figures would be less dense.
This old behavior cannot be directly restored, but the density may be increased by repeating the hatch specifier.

The default font has changed from "Bitstream Vera Sans" to "DejaVu Sans". DejaVu Sans has additional international and math characters, but otherwise has the same appearance as Bitstream Vera Sans. Latin, Greek, Cyrillic, Armenian, Georgian, Hebrew, and Arabic are all supported (but right-to-left rendering is still not handled by matplotlib). In addition, DejaVu contains a sub-set of emoji symbols.

(Source code, png, pdf)

See the DejaVu Sans PDF sample for full coverage.

The default math font when using the built-in math rendering engine (mathtext) has changed from "Computer Modern" (i.e. LaTeX-like) to "DejaVu Sans". This change has no effect if the TeX backend is used (i.e. text.usetex is True).

(Source code, png, pdf)

(Source code, png, pdf)

To revert to the old behavior set:

mpl.rcParams['mathtext.fontset'] = 'cm'
mpl.rcParams['mathtext.rm'] = 'serif'

or set:

mathtext.fontset : cm
mathtext.rm : serif

in your matplotlibrc file. This rcParam is consulted when the text is drawn, not when the artist is created. Thus all mathtext on a given canvas will use the same fontset.

The default legend location has changed from 'upper right' to 'best', so the legend will be automatically placed in a location to minimize overlap with data.
(Source code, png, pdf)

The previous defaults can be restored by setting:

mpl.rcParams['legend.fancybox'] = False
mpl.rcParams['legend.loc'] = 'upper right'
mpl.rcParams['legend.numpoints'] = 2
mpl.rcParams['legend.fontsize'] = 'large'
mpl.rcParams['legend.framealpha'] = None
mpl.rcParams['legend.scatterpoints'] = 3
mpl.rcParams['legend.edgecolor'] = 'inherit'

or by setting:

legend.fancybox : False
legend.loc : upper right
legend.numpoints : 2  # the number of points in the legend line
legend.fontsize : large
legend.framealpha : None  # opacity of legend frame
legend.scatterpoints : 3  # number of scatter points
legend.edgecolor : inherit  # legend edge color ('inherit' means it uses axes.edgecolor)

in your matplotlibrc file.

The default interpolation method for imshow is now 'nearest', and by default it resamples the data (both up and down sampling) before color mapping.

(Source code, png, pdf)

To restore the previous behavior set:

mpl.rcParams['image.interpolation'] = 'bilinear'
mpl.rcParams['image.resample'] = False

or set:

image.interpolation : bilinear  # see help(imshow) for options
image.resample : False

in your matplotlibrc file.

Previously, the input data was normalized, then color mapped, and then resampled to the resolution required for the screen. This meant that the final resampling was being done in color space. Because the color maps are not generally linear in RGB space, colors not in the color map may appear in the final image. This bug was addressed by an almost complete overhaul of the image handling code. The input data is now normalized, then resampled to the correct resolution (in normalized dataspace), and then color mapped to RGB space. This ensures that only colors from the color map appear in the final image. (If your viewer subsequently resamples the image, the artifact may reappear.) The previous behavior cannot be restored.
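Coming back to the legend change, the new placement default is visible straight from rcParams; a small sketch (assumes the stock defaults):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

print(plt.rcParams["legend.loc"])   # 'best'

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4], label="data")
leg = ax.legend()    # no loc given: placed automatically to avoid the data
fig.canvas.draw()    # the actual placement happens at draw time
print(leg.get_texts()[0].get_text())  # 'data'
```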
The previous auto-scaling behavior was to find 'nice' round numbers as view limits that enclosed the data limits, but this could produce bad plots if the data happened to fall on a vertical or horizontal line near the chosen 'round number' limit. The new default sets the view limits to 5% wider than the data range.

(Source code, png, pdf)

The size of the padding in the x and y directions is controlled by the 'axes.xmargin' and 'axes.ymargin' rcParams respectively. Whether the view limits should be 'round numbers' is controlled by the 'axes.autolimit_mode' rcParam. In the original 'round_numbers' mode, the view limits coincide with ticks.

The previous default can be restored by using:

mpl.rcParams['axes.autolimit_mode'] = 'round_numbers'
mpl.rcParams['axes.xmargin'] = 0
mpl.rcParams['axes.ymargin'] = 0

or setting:

axes.autolimit_mode : round_numbers
axes.xmargin : 0
axes.ymargin : 0

in your matplotlibrc file.

Ticks and grids are now plotted above solid elements such as filled contours, but below lines. The previous behavior of plotting ticks and grids above lines can be restored by setting rcParams['axes.axisbelow'] = False.

To reduce the collision of tick marks with data, the default ticks now point outward by default. In addition, ticks are now drawn only on the bottom and left spines to prevent a porcupine appearance, and for a cleaner separation between subplots.

(Source code, png, pdf)

To restore the previous behavior set:

mpl.rcParams['xtick.direction'] = 'in'
mpl.rcParams['ytick.direction'] = 'in'
mpl.rcParams['xtick.top'] = True
mpl.rcParams['ytick.right'] = True

or set:

xtick.top : True
xtick.direction : in
ytick.right : True
ytick.direction : in

in your matplotlibrc file.

The default Locator used for the x and y axis is AutoLocator, which tries to find, up to some maximum number, 'nicely' spaced ticks. The locator now includes an algorithm to estimate the maximum number of ticks that will leave room for the tick labels. By default it also ensures that there are at least two ticks visible.

(Source code, png, pdf)

There is no way, other than using mpl.style.use('classic'), to restore the previous behavior as the default.
On an axis-by-axis basis you may either control the existing locator via:

    ax.xaxis.get_major_locator().set_params(nbins=9, steps=[1, 2, 5, 10])

or create a new MaxNLocator:

    import matplotlib.ticker as mticker
    ax.xaxis.set_major_locator(mticker.MaxNLocator(nbins=9, steps=[1, 2, 5, 10]))

The algorithm used by MaxNLocator has been improved, and this may change the choice of tick locations in some cases. This also affects AutoLocator, which uses MaxNLocator internally.

For a log-scaled axis the default locator is the LogLocator. Previously the maximum number of ticks was set to 15 and could not be changed. Now there is a numticks kwarg for setting the maximum to any integer value, to the string 'auto', or to its default value of None, which is equivalent to 'auto'. With the 'auto' setting the maximum number will be no larger than 9, and will be reduced depending on the length of the axis in units of the tick font size. As in the case of the AutoLocator, the heuristic algorithm reduces the incidence of overlapping tick labels but does not prevent it.

LogFormatter labeling of minor ticks

Minor ticks on a log axis are now labeled when the axis view limits span a range less than or equal to the interval between two major ticks. See LogFormatter for details. The minor tick labeling is turned off when using mpl.style.use('classic'), but cannot be controlled independently via rcParams.

ScalarFormatter tick label formatting with offsets

With the default of rcParams['axes.formatter.useoffset'] = True, an offset will be used when it will save 4 or more digits. This can be controlled with the new rcParam axes.formatter.offset_threshold. To restore the previous behavior of using an offset to save 2 or more digits, use rcParams['axes.formatter.offset_threshold'] = 2.

AutoDateFormatter format strings

The default date formats are now all based on ISO format, i.e., with the slowest-moving value first.
The date formatters are configurable through the date.autoformatter.* rcParams. Python's %x and %X date formats may be of particular interest for formatting dates based on the current locale. The previous default can be restored by:

    mpl.rcParams['date.autoformatter.year'] = '%Y'
    mpl.rcParams['date.autoformatter.month'] = '%b %Y'
    mpl.rcParams['date.autoformatter.day'] = '%b %d %Y'
    mpl.rcParams['date.autoformatter.hour'] = '%H:%M:%S'
    mpl.rcParams['date.autoformatter.minute'] = '%H:%M:%S.%f'
    mpl.rcParams['date.autoformatter.second'] = '%H:%M:%S.%f'
    mpl.rcParams['date.autoformatter.microsecond'] = '%H:%M:%S.%f'

or setting:

    date.autoformatter.year : %Y
    date.autoformatter.month : %b %Y
    date.autoformatter.day : %b %d %Y
    date.autoformatter.hour : %H:%M:%S
    date.autoformatter.minute : %H:%M:%S.%f
    date.autoformatter.second : %H:%M:%S.%f
    date.autoformatter.microsecond : %H:%M:%S.%f

in your matplotlibrc file.
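These format strings are ordinary strftime directives, so their effect is easy to preview with the standard library alone, independent of matplotlib:

```python
from datetime import datetime

t = datetime(2017, 3, 5, 14, 30, 15, 250000)

# ISO-style, slowest value first (the spirit of the new defaults):
iso_day = t.strftime('%Y-%m-%d')

# The classic day-level format listed above for restoring old behavior
# (note: %b is locale-dependent):
classic_day = t.strftime('%b %d %Y')

# The sub-second format; %f always renders six digits:
subsecond = t.strftime('%H:%M:%S.%f')

print(iso_day, classic_day, subsecond)
```

Printing a few values like this is a quick way to sanity-check a custom date.autoformatter.* setting before putting it in your matplotlibrc.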
https://matplotlib.org/2.2.3/users/dflt_style_changes.html
This directory contains some STL-like containers. Things should be moved here that are generally applicable across the code base. Don‘t add things here just because you need them in one place and think others may someday want something similar. You can put specialized containers in your component’s directory and we can promote them here later if we feel there is broad applicability. Containers should adhere as closely to STL as possible. Functions and behaviors not present in STL should only be added when they are related to the specific data structure implemented by the container. For STL-like containers our policy is that they should use STL-like naming even when it may conflict with the style guide. So functions and class names should be lower case with underscores. Non-STL-like classes and functions should use Google naming. Be sure to use the base namespace. Generally avoid std::unordered_set and std::unordered_map. In the common case, query performance is unlikely to be sufficiently higher than std::map to make a difference, insert performance is slightly worse, and the memory overhead is high. This makes sense mostly for large tables where you expect a lot of lookups. Most maps and sets in Chrome are small and contain objects that can be moved efficiently. In this case, consider base::flat_map and base::flat_set. You need to be aware of the maximum expected size of the container since individual inserts and deletes are O(n), giving O(n^2) construction time for the entire map. But because it avoids mallocs in most cases, inserts are better or comparable to other containers even for several dozen items, and efficiently-moved types are unlikely to have performance problems for most cases until you have hundreds of items. If your container can be constructed in one shot, the constructor from vector gives O(n log n) construction times and it should be strictly better than a std::map. 
base::small_map has better runtime memory usage without the poor mutation performance of large containers that base::flat_map has. But this advantage is partially offset by additional code size. Prefer it in cases where you make many objects so that the code/heap tradeoff is good.

Use std::map and std::set if you can't decide. Even if they're not great, they're unlikely to be bad or surprising.

Sizes are on 64-bit platforms. Stable iterators aren't invalidated when the container is mutated.

Takeaways: std::unordered_map and std::unordered_set have high overhead for small container sizes, so prefer these only for larger workloads.

Code size comparisons for a block of code (see appendix) on Windows using strings as keys.

Takeaways: base::small_map generates more code because of the inlining of both brute-force and red-black tree searching. This makes it less attractive for random one-off uses. But if your code is called frequently, the runtime memory benefits will be more important. The code sizes of the other maps are close enough that it's not worth worrying about.

std::map and std::set: a red-black tree. Each inserted item requires the memory allocation of a node on the heap. Each node contains a left pointer, a right pointer, a parent pointer, and a "color" for the red-black tree (32 bytes per item on 64-bit platforms).

std::unordered_map and std::unordered_set: a hash table. Implemented on Windows as a std::vector + std::list and in libc++ as the equivalent of a std::vector + a std::forward_list. Both implementations allocate an 8-entry hash table (containing iterators into the list) on initialization, and grow to 64 entries once 8 items are inserted. Above 64 items, the size doubles every time the load factor exceeds 1. The empty size is sizeof(std::unordered_map) = 64 + the initial hash table size which is 8 pointers. The per-item overhead in the table above counts the list node (2 pointers on Windows, 1 pointer in libc++), plus amortizes the hash table assuming a 0.5 load factor on average.
In a microbenchmark on Windows, inserts of 1M integers into a std::unordered_set took 1.07x the time of std::set, and queries took 0.67x the time of std::set. For a typical 4-entry set (the statistical mode of map sizes in the browser), query performance is identical to std::set and base::flat_set. On ARM, std::unordered_set performance can be worse because integer division to compute the bucket is slow, and a few "less than" operations can be faster than computing a hash depending on the key type. The takeaway is that you should not default to using unordered maps because "they're faster."

base::flat_map and base::flat_set: a sorted std::vector. Searched via binary search; inserts in the middle require moving elements to make room. Good cache locality. For large objects and large set sizes, std::vector's doubling-when-full strategy can waste memory. Supports efficient construction from a vector of items, which avoids the O(n^2) insertion time of inserting each element separately. The per-item overhead will depend on the underlying std::vector's reallocation strategy and the memory access pattern. Assuming items are being linearly added, one would expect it to be 3/4 full, so per-item overhead will be 0.25 * sizeof(T).

flat_set and flat_map support a notion of transparent comparisons. Therefore you can, for example, look up a base::StringPiece in a set of std::strings without constructing a temporary std::string. This functionality is based on C++14 extensions to the std::set/std::map interface. You can find more information about transparent comparisons in the less<void> documentation.

Example, smart pointer set:

```cpp
// Declare a type alias using base::UniquePtrComparator.
template <typename T>
using UniquePtrSet = base::flat_set<std::unique_ptr<T>,
                                    base::UniquePtrComparator>;

// ...

// Collect data.
std::vector<std::unique_ptr<int>> ptr_vec;
ptr_vec.reserve(5);
std::generate_n(std::back_inserter(ptr_vec), 5,
                []{ return std::make_unique<int>(0); });

// Construct a set.
UniquePtrSet<int> ptr_set(std::move(ptr_vec), base::KEEP_FIRST_OF_DUPES);

// Use raw pointers to look up keys.
int* ptr = ptr_set.begin()->get();
EXPECT_TRUE(ptr_set.find(ptr) == ptr_set.begin());
```

Example flat_map<std::string, int>:

```cpp
base::flat_map<std::string, int> str_to_int({{"a", 1}, {"c", 2}, {"b", 2}},
                                            base::KEEP_FIRST_OF_DUPES);

// Does not construct temporary strings.
str_to_int.find("c")->second = 3;
str_to_int.erase("c");
EXPECT_EQ(str_to_int.end(), str_to_int.find("c"));

// NOTE: This does construct a temporary string. This happens because if the
// item is not in the container, then it needs to be constructed, which is
// something that transparent comparators don't have to guarantee.
str_to_int["c"] = 3;
```

base::small_map: a small inline buffer that is brute-force searched and that overflows into a full std::map or std::unordered_map. This gives the memory benefit of base::flat_map for small data sizes without the degenerate insertion performance for large container sizes. Since instantiations require both code for a std::map and a brute-force search of the inline container, plus a fancy iterator to cover both cases, code size is larger. The initial size in the above table assumes a very small inline table. The actual size will be sizeof(int) + min(sizeof(std::map), sizeof(T) * inline_size).

Chromium code should always use base::circular_deque or base::queue in preference to std::deque or std::queue due to memory usage and platform variation. The base::circular_deque implementation (and the base::queue which uses it) provides performance that is consistent across platforms and better matches most programmers' expectations (it doesn't waste as much space as libc++ and doesn't do as many heap allocations as MSVC). It also generates less code than std::queue: using it across the code base saves several hundred kilobytes. Since base::circular_deque does not have stable iterators and will move the objects it contains, it may not be appropriate for all uses.
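The trade-off described above — O(log n) lookup, O(n) single insert, cheap one-shot construction — is easy to see in a language-neutral sketch. The following is my own illustration of the sorted-vector idea behind base::flat_map, not Chromium's implementation:

```python
import bisect

class FlatMap:
    """A sorted-vector map in the spirit of base::flat_map."""

    def __init__(self, items=()):
        # One-shot construction: a single stable sort, O(n log n) total,
        # keeping the first of any duplicate keys (KEEP_FIRST_OF_DUPES).
        self._keys, self._vals = [], []
        for k, v in sorted(items, key=lambda kv: kv[0]):
            if not self._keys or self._keys[-1] != k:
                self._keys.append(k)
                self._vals.append(v)

    def get(self, key, default=None):
        i = bisect.bisect_left(self._keys, key)  # O(log n) binary search
        if i < len(self._keys) and self._keys[i] == key:
            return self._vals[i]
        return default

    def insert(self, key, value):
        # A single insert is O(n): everything after index i must shift.
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            self._vals[i] = value
        else:
            self._keys.insert(i, key)
            self._vals.insert(i, value)
```

Building the whole map in the constructor corresponds to the "constructor from vector" path the README recommends; calling insert in a loop corresponds to the O(n^2) pattern it warns about.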
If you need these, consider using a std::list, which will provide constant-time insert and erase.

The implementation of std::deque varies considerably, which makes it hard to reason about. All implementations use a sequence of data blocks referenced by an array of pointers. The standard guarantees random access, amortized constant operations at the ends, and linear mutations in the middle. In Microsoft's implementation, each block is the smaller of 16 bytes or the size of the contained element. This means in practice that every expansion of a deque of non-trivial classes requires a heap allocation. libc++ (on Android and Mac) uses 4K blocks, which eliminates the problem of many heap allocations but generally wastes a large amount of space (an Android analysis revealed more than 2.5MB wasted space from deque alone, resulting in some optimizations). libstdc++ uses an intermediate-size 512-byte buffer.

Microsoft's implementation never shrinks the deque capacity, so the capacity will always be the maximum number of elements ever contained. libstdc++ deallocates blocks as they are freed. libc++ keeps up to two empty blocks.

base::circular_deque: a deque implemented as a circular buffer in an array. The underlying array will grow like a std::vector, while the beginning and end of the deque move around. The items wrap around the underlying buffer, so the storage will not be contiguous, but fast random-access iterators are still possible. When the underlying buffer is filled, it will be reallocated and the contents moved (like a std::vector). The underlying buffer will be shrunk if there is too much wasted space (unlike a std::vector). As a result, iterators are not stable across mutations.

std::stack is like std::queue in that it is a wrapper around an underlying container. The default container is std::deque, so everything from the deque section applies. Chromium provides base/containers/stack.h, which defines base::stack that should be used in preference to std::stack.
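The wraparound-and-grow behavior described above can be sketched in a few lines. This is a toy illustration of the circular-buffer idea, not base::circular_deque itself:

```python
class RingDeque:
    """A deque over a fixed array; indices wrap around the buffer."""

    def __init__(self, capacity=4):
        self._buf = [None] * capacity
        self._head = 0          # index of the first element
        self._size = 0

    def push_back(self, item):
        if self._size == len(self._buf):
            self._grow()
        self._buf[(self._head + self._size) % len(self._buf)] = item
        self._size += 1

    def pop_front(self):
        item = self._buf[self._head]
        self._buf[self._head] = None
        self._head = (self._head + 1) % len(self._buf)
        self._size -= 1
        return item

    def _grow(self):
        # Like std::vector: reallocate and move, un-wrapping the contents
        # so the new buffer starts contiguous again. This move is why
        # iterators are not stable across mutations.
        new = [None] * (2 * len(self._buf))
        for i in range(self._size):
            new[i] = self._buf[(self._head + i) % len(self._buf)]
        self._buf, self._head = new, 0
```

The modulo arithmetic is what lets the logical front and back "move around" the array without ever shifting elements on push/pop at the ends.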
This changes the underlying container to base::circular_deque. The result will be very similar to manually specifying a std::vector for the underlying implementation, except that the storage will shrink when it gets too empty (vector will never reallocate to a smaller size).

Watch out: with some stack usage patterns it's easy to depend on unstable behavior:

```cpp
base::stack<Foo> stack;
for (...) {
  Foo& current = stack.top();
  DoStuff();            // May call stack.push(), say if writing a parser.
  current.done = true;  // Current may reference a deleted item!
}
```

Code throughout Chromium, running at any level of privilege, may directly or indirectly depend on these containers. Much calling code implicitly or explicitly assumes that these containers are safe and won't corrupt memory. Unfortunately, such assumptions have not always proven true.

Therefore, we are making an effort to ensure basic safety in these classes so that callers' assumptions are true. In particular, we are adding bounds checks, arithmetic overflow checks, and checks for internal invariants to the base containers where necessary. Here, safety means that the implementation will CHECK.

As of 8 August 2018, we have added checks to the following classes:

- base::StringPiece
- base::span
- base::Optional
- base::RingBuffer
- base::small_map

Ultimately, all base containers will have these checks.

Safety checks can affect performance at the micro-scale, although they do not always. On a larger scale, if we can have confidence that these fundamental classes and templates are minimally safe, we can sometimes avoid the security requirement to sandbox code that (for example) processes untrustworthy inputs. Sandboxing is a relatively heavyweight response to memory safety problems, and in our experience not all callers can afford to pay it. (However, where affordable, privilege separation and reduction remain Chrome Security Team's first approach to a variety of safety and security problems.)
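The kind of minimal safety check described above — fail fast and deterministically on a bad index rather than silently reading out of bounds — can be illustrated with a tiny wrapper. This is my own sketch of the idea, not Chromium code:

```python
class CheckedBuffer:
    """A span-like view that checks its bounds on every access."""

    def __init__(self, data):
        self._data = list(data)

    def at(self, i):
        # The moral equivalent of a CHECK: crash with a clear message
        # instead of corrupting memory or returning garbage.
        if not 0 <= i < len(self._data):
            raise AssertionError(
                f"index {i} out of bounds [0, {len(self._data)})")
        return self._data[i]
```

In C++ the failing branch would terminate the process; the point in both cases is that the caller never observes an out-of-bounds read.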
One can also imagine that the safety checks should be passed on to callers who require safety. There are several problems with that approach. Therefore, the minimal checks that we are adding to these base classes are the most efficient and effective way to achieve the beginning of the safety that we need. (Note that we cannot account for undefined behavior in callers.)

This just calls insert and query a number of times, with printfs that prevent things from being dead-code eliminated:

```cpp
TEST(Foo, Bar) {
  base::small_map<std::map<std::string, Flubber>> foo;
  foo.insert(std::make_pair("foo", Flubber(8, "bar")));
  foo.insert(std::make_pair("bar", Flubber(8, "bar")));
  foo.insert(std::make_pair("foo1", Flubber(8, "bar")));
  foo.insert(std::make_pair("bar1", Flubber(8, "bar")));
  foo.insert(std::make_pair("foo", Flubber(8, "bar")));
  foo.insert(std::make_pair("bar", Flubber(8, "bar")));

  auto found = foo.find("asdf");
  printf("Found is %d\n", (int)(found == foo.end()));
  found = foo.find("foo");
  printf("Found is %d\n", (int)(found == foo.end()));
  found = foo.find("bar");
  printf("Found is %d\n", (int)(found == foo.end()));
  found = foo.find("asdfhf");
  printf("Found is %d\n", (int)(found == foo.end()));
  found = foo.find("bar1");
  printf("Found is %d\n", (int)(found == foo.end()));
}
```
https://chromium.googlesource.com/chromium/src/+/master/base/containers/README.md
Startup time of arduino pro mini with mysensors 2.2 - took 10 sec (guess its the radio), can I execute some code immediately (<1sec)?

Hi, I just ran into an issue I need help with. I'm new to MySensors and am trying to upgrade some of my LEDs with DimmableLED sensors. I'm using the current development branch for 2.2, as I had issues with the radio on 2.1. All fine - I built a small Arduino Mini with the radio and a MOSFET for switching, and it works. But then I hit an issue: every time I switch on the LED with the wall switch, it takes more than 10 seconds to come ON. I want it to go ON immediately at 100%, then get the old value from the gateway. I tested with a simple sketch without MySensors - it switches almost immediately, especially without the bootloader. So I guess this is related to the startup time of the MySensors lib. Is there any way to place some code somewhere so it gets executed ASAP? Like "analogWrite( LED_PIN, 255 );" to switch the LED ON in milliseconds, not seconds? Thanks! Joerg

@alterfritz ... one thing to add: of course I tried adding it as the first line in setup(); that does not work.

@alterfritz said in Startup time of arduino pro mini with mysensors 2.2 - took 10 sec (guess its the radio), can I execute some code immediately (<1sec)?: […]

If I understood your setup correctly, you have the wall switch on the high-voltage AC side? Why not put it on the low-voltage side? Then the Arduino can be powered all the time and your problem is gone.

To help you figure this out... please share your sketch and serial output... which radio are you using... what makes you think it's the radio?

@rozpruwacz it's a normal wall switch, using 220 V. The LED uses an AC-DC converter with 5 V output. I use a Pro Mini with 3.3 V on the RAW input. Seems to work; however, when I switch it on it takes around 10 sec, even when I place analogWrite( LED_PIN, 255 ); as the first line in setup(). Here it is - mostly the example... I'm using an NRF24L01. Why the radio? Because I can check the messages on the gateway / MQTT (on a RaspPi).
They take around 10 sec to show.

    /*
     * February 15, 2014 - Bruce Lacey - Version 1.1
     * August 13, 2014 - Converted to 1.4 (hek)
     *
     * DESCRIPTION
     * This sketch provides a Dimmable LED Light using PWM and is based on
     * Henrik Ekblad's (henrik.ekblad@gmail.com) Vera Arduino Sensor project.
     * Developed by Bruce Lacey, inspired by Hek's MySensors example sketches.
     *
     * The circuit uses a MOSFET for pulse-width modulation to dim the attached
     * LED or LED strip. The MOSFET Gate pin is connected to Arduino pin 3
     * (LED_PIN), the MOSFET Drain pin is connected to the LED negative terminal
     * and the MOSFET Source pin is connected to ground. This sketch is
     * extensible to support more than one MOSFET/PWM dimmer per circuit.
     */

    // Enable debug prints to serial monitor
    #define MY_DEBUG
    //#define MY_NODE_ID

    // Enable and select radio type attached
    #define MY_RADIO_NRF24
    //#define MY_RADIO_NRF5_ESB
    //#define MY_RADIO_RFM69
    //#define MY_RADIO_RFM95

    #include <MySensors.h>

    #define EPROM_LIGHT_STATE 1
    #define EPROM_DIMMER_LEVEL 2

    #define LIGHT_OFF 0
    #define LIGHT_ON 1

    #define SN "WallLight6"
    #define SV "1.0"

    #define CHILD_ID_LIGHT 1
    #define LED_PIN 3      // Arduino pin attached to MOSFET Gate pin
    #define FADE_DELAY 10  // Delay in ms for each percentage fade up/down (10ms = 1s full-range dim)

    static int16_t currentLevel = 0;  // Current dim level...

    MyMessage dimmerMsg(CHILD_ID_LIGHT, V_DIMMER);
    MyMessage lightMsg(CHILD_ID_LIGHT, V_LIGHT);

    int16_t LastLightState = LIGHT_OFF;
    int16_t LastDimValue = 100;

    /***
     * Dimmable LED initialization method
     */
    void setup()
    {
      // Pull the gateway's current dim level - restore light level upon sensor node power-up
      //request( CHILD_ID_LIGHT, V_DIMMER );
      //Serial.println( "Node ready to receive messages..." );

      int LightState = loadState(EPROM_LIGHT_STATE);
      if (LightState <= 1) {
        LastLightState = LightState;
        int DimValue = loadState(EPROM_DIMMER_LEVEL);
        if ((DimValue > 0) && (DimValue <= 100)) {
          // There should be no Dim value of 0; this would mean LIGHT_OFF
          LastDimValue = DimValue;
        }
      }

      // Here you actually switch the light on/off with the last known dim level
      SetCurrentState2Hardware();

      Serial.println( "Node ready to receive messages..." );
    }

    void presentation()
    {
      // Register the LED Dimmable Light with the gateway
      sendSketchInfo(SN, SV);
      present( CHILD_ID_LIGHT, S_DIMMER );
    }

    /***
     * Dimmable LED main processing loop
     */
    void loop()
    {
    }

    // Note: the start of receive() was garbled in the original post; the
    // first lines below are reconstructed from the standard MySensors
    // DimmableLED example.
    void receive(const MyMessage &message)
    {
      if (message.type == V_LIGHT || message.type == V_DIMMER) {
        // Retrieve the power or dim level from the incoming request message
        int requestedLevel = atoi( message.data );

        // Adjust incoming level if this is a V_LIGHT update [0 = off, 1 = on]
        requestedLevel *= ( message.type == V_LIGHT ? 100 : 1 );

        // Clip incoming level to the valid range of 0 to 100
        requestedLevel = requestedLevel > 100 ? 100 : requestedLevel;
        requestedLevel = requestedLevel < 0   ? 0   : requestedLevel;

        Serial.print( "Changing level to " );
        Serial.print( requestedLevel );
        Serial.print( ", from " );
        Serial.println( currentLevel );

        fadeToLevel( requestedLevel );

        saveState(EPROM_DIMMER_LEVEL, requestedLevel);

        // Inform the gateway of the current DimmableLED's SwitchPower1 and
        // LoadLevelStatus value...
        send(lightMsg.set(currentLevel > 0));

        // hek comment: Is this really necessary?
        send( dimmerMsg.set(currentLevel) );
      }
    }

    /*** This method provides a graceful fade up/down effect */
    void fadeToLevel( int toLevel )
    {
      int delta = ( toLevel - currentLevel ) < 0 ? -1 : 1;
      while ( currentLevel != toLevel ) {
        currentLevel += delta;
        analogWrite( LED_PIN, (int)(currentLevel / 100. * 255) );
        delay( FADE_DELAY );
      }
    }

    void SetCurrentState2Hardware()
    {
      if (LastLightState == LIGHT_OFF) {
        Serial.println( "Light state: OFF" );
      } else {
        Serial.print( "Light state: ON, Level: " );
        Serial.println( LastDimValue );
      }
      // Send current state to the controller
      SendCurrentState2Controller();
    }

    void SendCurrentState2Controller()
    {
      if ((LastLightState == LIGHT_OFF) || (LastDimValue == 0)) {
        send(dimmerMsg.set((int16_t)0));
      } else {
        send(dimmerMsg.set(LastDimValue));
      }
    }

And why did you choose that design? You are not able to switch the lights on remotely.

@rozpruwacz This is the way it was constructed.
I plan to leave them on all the time, and maybe remove/replace them in the future - however, if somebody uses the switches, the lights should go on instantly, basically via analogWrite( LED_PIN, 255 ); like a normal switch. Any way to do so?

@alterfritz Try to add the "on" command not in setup() (this requires presentation() to have already passed) but in a before() routine instead.

So I would rather do it in hardware. A pull-up resistor on the switching transistor will make it turn on at power-up. And I would store the last dim value in EEPROM so as not to wait for the gateway response, because you will not get a response in less than 1 s - the protocol just doesn't allow for that.

@rozpruwacz Exactly. This is why the "on" should be issued in before() (or something like hwInit()). "before()" is executed before any MySensors-specific communication....

In addition to the other advice you're already receiving, and following your suspicion that the delay might be related to the radio, you could try adding the following defines to your sketch:

    #define MY_PARENT_NODE_ID 0
    #define MY_PARENT_NODE_IS_STATIC
    #define MY_TRANSPORT_MAX_TX_FAILURES 3

The first two keep the node from having to search for its parent. The last one might help you determine whether the problem is the radio not starting correctly, if the time is reduced after you add it. Let us know how it goes.

Yes, so in the before() function read the EEPROM value and set it with analogWrite. The hardware pull-up is optional because the delay between switching the power on and the before() call will be short enough, but if you use the pull-up the delay will be even shorter.

@rejoe2 like this... btw - this forum is fantastic... responses more or less in real time...

    void presentation() {
      analogWrite( LED_PIN, 255 );
    }

... sorry ... meant before ... like this:

    void before() {
      analogWrite( LED_PIN, 255 );
    }

This seems to have done the trick... the LED switches on immediately! Thanks!

Btw... I also tried the MYSBootloader.
This wireless update is great; however - does this bootloader slow down boot-up time?

Yes, because the code that will run on the node must first be uploaded to it. The boot-up time will depend on the code size.

@alterfritz Yes, the update bootloader tries to connect to the GW for updates during the first seconds (around 20). So it is not dependent on code size...

Additional note: waiting for the RF connection to be established can also be prevented by using #define MY_TRANSPORT_WAIT_READY_MS ...

@alterfritz Exactly. After trying to connect to the controller for 1 sec., the node will execute normal code like any other "just-Arduino". Please note: as there is no connection to the controller, the node may miss relevant info typically provided by the controller - things like myControllerConfig, last values for counters, etc... If you use this, you should make sure these are set correctly later on.

So - thanks for all your help. Great forum. Let me summarize what I understand so far:

1.) By using the before() procedure I can execute code immediately. Quickly tested - seems perfect for me... however...

2.) I really like this MYSBootloader. But when using it, it always takes around 20 sec to boot up. There is NO way to switch on my LED directly when I power it up... correct?

@alterfritz Speaking about software: not exactly, there is one thing you could still do: compile the bootloader yourself with the light switched on...

@rejoe2 Right... I may look into this. For now I will try to get a stable version and load it without the bootloader, directly via USBasp...
https://forum.mysensors.org/topic/7460/startup-time-of-arduino-pro-mini-with-mysensors-2-2-took-10-sec-guess-its-the-radio-can-i-execute-some-code-immediately-1sec/7
Re: String comparison using equals() and ==
From: Patricia Shanahan <pats@acm.org>
Newsgroups: comp.lang.java.programmer
Date: Sat, 22 Aug 2009 07:12:19 -0700
Message-ID: <M5ednS6UNOZKYRLXnZ2dnUVZ_h-dnZ2d@earthlink.com>

Christian wrote:
> Daniel Pitts schrieb:
> ....
>
> <sscce>
> public class Strings {
>     public static void main(String[] args) {
>         String a = "hello";
>         String b = "hello";
>         String b2 = new String("hello");
>         String b3 = new String(a);
>         String b4 = (Math.random() < 1.0f ? "h" : "") + "ello";
>         System.out.println("a = " + a);
>         System.out.println("b = " + b);
>         System.out.println("b2 = " + b2);
>         System.out.println("b3 = " + b3);
>         System.out.println("b4 = " + b4);
>         System.out.println("a==b = " + (a==b));
>         System.out.println("a==b2 = " + (a==b2));
>         System.out.println("a==b3 = " + (a==b3));
>         System.out.println("a==b4 = " + (a==b4));
>     }
> }
> </sscce>
>
> <output>
> a = hello
> b = hello
> b2 = hello
> b3 = hello
> b4 = hello
> a==b = true
> a==b2 = false
> a==b3 = false
> a==b4 = false
> </output>
>
> this output was actually what I had expected. Though I had this described
> above on some linux jdk (should be 1.6 but don't know which)
> Actually it gave me a hard time when I tried to show someone that string
> comparison with == does not work in Java.. May be it was just an error in
> recompilation.. though I remember that I finally had to read in a String
> from System.in to definately show that == does not work.

Could you see if you can reproduce the bugs, either using Daniel's program, or an SSCCE of your own that you can post? Do you remember the details such as the compiler and JVM you used?

Patricia
http://preciseinfo.org/Convert/Articles_Java/JVM_Code/Java-JVM-Code-090822171219.html
Compute weights in the manner described in the IMCF algorithm. More... #include <vcl_memory.h> #include "rgrl_weighter.h" Go to the source code of this file. Compute weights in the manner described in the IMCF algorithm. 20 Sept 2003, CS: Added the possibility that the intensity and signature weights are pre-computed. I am not sure that this is the most appropriate way to handle this. The weighting could be moved into the matching stage, but then the matcher needs to know whether or not to set the weights. 27 Jan 2004, CT: Intensity is no long considered. The class allows the freedom of using the absolute signature weight computed somewhere else (use_precomputed_signature_wgt ), or the robust signature weight from the signature error vector of a match (use_signature_error ), or neither. use_precomputed_signature_wgt has precedence over use_signature_error. Definition in file rgrl_weighter_m_est.h.
http://public.kitware.com/vxl/doc/release/contrib/rpl/rgrl/html/rgrl__weighter__m__est_8h.html
_GUIBox - Rubberband selection boxes using GUIs

Started by Ascend4nt, 4 posts in this topic

Similar Content

- By Katie_Deely Hey.

- By Joep86 Hi, I just started recently with AutoIt and I am trying to make two dropdown lists where the selectable values of the second dropdown list depend on what is selected in the first one. For example:

    Dropdown 1    Dropdown 2
    xxx        => 01-15 ("01", "02", "03", ...)
    yyy        => a-f ("a", "b", "c", "d", "e", "f")
    zzz        => "new", "old", "spare"

I started with this code that I've found in this forum:

    #include <GUIConstantsEx.au3>

    ; Here is the array
    Global $aArray[6] = ["SORT", "PCM", "UNIF", "KKE", "GMS", "CDY"]

    ; And here we get the elements into a list
    $sList = ""
    For $i = 0 To UBound($aArray) - 1
        $sList &= "|" & $aArray[$i]
    Next

    ; Create a GUI
    $hGUI = GUICreate("DropDown", 500, 500)

    ; Create the combo
    $hCombo = GUICtrlCreateCombo("", 10, 10, 200, 20)

    ; And fill it
    GUICtrlSetData($hCombo, $sList)

    GUISetState()

    While 1
        Switch GUIGetMsg()
            Case $GUI_EVENT_CLOSE
                Exit
        EndSwitch
    WEnd

Any idea how to start on this one... thanks upfront.

- By UEZ I'm stuck on how to use the .Cells function for range selection. Instead of

    _Excel_RangeSort($oWorkbook, Default, Default, "AD:AD", Default, Default, $xlYes, False, Default, "AE:AE", Default, "L:L", Default)

and

    $aResult = _Excel_RangeRead($oWorkbook, 1, "S2:AB" & $iRows)

which work properly, I want to use Cells to select the range. Why? Because the Excel sheet was modified and an additional row was inserted. I want to make the script more dynamic by selecting with Cells, because then I can search for the column headers. Any idea? Thanks.
- By Xandy Hi.

    GUICtrlSetData($control_combo, "a|b|c") ; Populate Combo

    While 1
        Switch GUIGetMsg()
            Case $control_clear_button
                GUICtrlSetData($control_combo, "") ; This clears the entire list, frustrating me
            Case $GUI_EVENT_CLOSE
                ExitLoop
        EndSwitch ; GUIGetMsg()
    WEnd ; main loop

Does anyone know how I can reset the selection to "" (clear, empty set)? I've tried SetEditText but it doesn't work with CBS_DROPDOWNLIST.

SOLVED: Sorry, I thought I had tried this: _GUICtrlComboBox_SetCurSel($control_combo, -1) is the solution.

- By javip.
https://www.autoitscript.com/forum/topic/112092-_guibox-rubberband-selection-boxes-using-guis/
sizeof & ternary operators in C

Besides the operators discussed above, there are a few other important operators, including sizeof and ? :, supported by the C language.

Example

Try the following example to understand all the miscellaneous operators available in C −

    #include <stdio.h>

    int main() {
       int a = 4;
       short b;
       double c;
       int* ptr;

       /* example of sizeof operator */
       printf("Line 1 - Size of variable a = %d\n", (int)sizeof(a) );
       printf("Line 2 - Size of variable b = %d\n", (int)sizeof(b) );
       printf("Line 3 - Size of variable c = %d\n", (int)sizeof(c) );

       /* example of & and * operators */
       ptr = &a;  /* 'ptr' now contains the address of 'a' */
       printf("value of a is %d\n", a);
       printf("*ptr is %d.\n", *ptr);

       /* example of ternary operator */
       a = 10;
       b = (a == 1) ? 20 : 30;
       printf( "Value of b is %d\n", b );

       b = (a == 10) ? 20 : 30;
       printf( "Value of b is %d\n", b );

       return 0;
    }

When you compile and execute the above program, it produces the following result −

    Line 1 - Size of variable a = 4
    Line 2 - Size of variable b = 2
    Line 3 - Size of variable c = 8
    value of a is 4
    *ptr is 4.
    Value of b is 30
    Value of b is 20
http://www.tutorialspoint.com/cprogramming/c_sizeof_operator.htm
Last week Apple presented a bunch of fantastic new software pieces and technologies for iOS/OS X developers: a new OS X, a new iOS, a new Xcode and even a new language! These innovations look really impressive and promising, and we know you undoubtedly want to try them. So we did our best to deliver Xcode 6 support together with basic Swift support in a short time. And here it comes - please welcome the AppCode 3.0.1 update!

To try the new features, check that you have Xcode 6 installed and selected in AppCode's Preferences | Xcode. Create a new project from an Xcode 6 template, then build and run your app on the simulator or a device. There are a couple of known issues with Xcode 6, however: XCTest does not work on iOS 7.1 and iOS 8 devices, and logic tests don't work on a simulator. These hopefully will be fixed in upcoming updates.

We were really thrilled with the new programming language shown by Apple. It's concise and full of modern concepts and ideas, which are familiar to us because of the Kotlin language that we develop here at JetBrains. What we provide in the 3.0.1 update is basic Swift support - you can open and edit colored *.swift files. Let's start with it and learn this fantastic new language together.

A few other improvements and fixes were also included, especially for the UI Designer, where we've fixed some exceptions, added UITableViewCell/UICollectionViewCell subviews (OC-9654) and improved the font size (OC-10161).

Develop with pleasure!
The AppCode Team

Fantastic language? Looks like Scala to me… really?

I looked through the examples and I wouldn't mix them up. Is that a bad thing?

It's not a bad thing. It's more like… I wish, for once, they would do something cool, like use a language that's popular and already exists, instead of creating a new one that doesn't bring any new feature to what's already available…

…like Java?

Apple did the right thing by building Swift on top of the Objective-C runtime and using the LLVM compiler tool suite.
Furthermore, they have a language that they truly own and can grow without being constrained by another language community. Swift not popular? Then you haven’t checked the top 20 popular languages from TIOBE Index which lists it at #16: “Most perplexing feedback on Swift: folks who see it (and judge it) as the end of a trek – but don’t realize it is the start of a new world.” — Chris Lattner, Swift Language creator It’s a revolution for us, but it’s not revolutionary. I love it and do think it is the right move for iOS programming and surely they will improve on it however quoting the creator saying that it’s creation is at the center of the “new world” is a weak point to make. When would a creator say otherwise? Looks like a lot of things, but then so does Scala. It’s not a bad start, but I suspect it might need to sprout exception handling and namespaces at some point. Well done Jetbrains for getting something out quickly. Structs and classes act as namespaces. For methods and properties, sure, but not for other classes, unless there’s something I’m missing. From the docs: “In Swift, you define a class or a structure in a single file, and the external interface to that class or structure is automatically made available for other code to use.” There doesn’t seem to be any way to limit access to that external interface. Nested types do seem to provide namespacing (if not visibility control), but your namespaces had better be pretty small or your source files are going to be pretty large. At the top level frameworks and bundles provide namespacing. For example: UIKit.UITableView… To disambiguate from: MyFramework.UITableView… JetBrains does a really good job ! Always swift (no pun intended) to match Apple announcement Looks like Rust () to me. What do you mean Rust? It looks like Kotlin. This looks like a great update! The AppCode team continues to impress! You guys rock! An awesome update! Thanks a lot! 😀 I’ve just created a Swift class in 3.0.1 and it looks good. 
Adding support for a new language with a 1 Mb patch is very impressive (even if it does also require 5 Gb of Xcode 6 to be present). I think that it’s too early to provide useful feedback. The code runs but the editor does little more than syntax colouring. There’s everything still to do. Good luck. Now that there are welcome proceedings in AppCode, amd 0xDBE was announced, is there any news on the C++ IDE? Really, really looking forward to that. We’ll start the public EAP as soon as the quality and performance are up to the expectations. No indication at all? 2014? 2015..? We hope to start public EAP this Fall but we can’t be 100%. We still have a lot to do. OK, thanx guys you are amazing. Thanks a lot, swift has just become even more interesting ! Thanks! It is all good things… swift, new languages… But c++11 support very-very bad… AppCode can’t parse not only complex things, but so simple as. Screenshot is here… Same as in 2.x releases… ((( I think that write my own refactoring browser for C++ with clang faster, then wait for fixes… Hi Vladimir, thank you for the comment. AppCode has not bad support support of C++11 😉 But we faced the problem with macros in exact cocos2dx project, please vote and follow the issue: Will you be able to provide full Swift support (refactoring and everything) even without Apple releasing the Swift compiler code? I saw that it is possible to get the AST from the Swift compiler. But is that sufficient to build your AppCode features on? It should not be a problem, Hendrik, we’ll do our best. WHY ALL THESE LANGUAGES? Kotlin, Swift, Java 8 & 9, JavaScript, etc. They’re all so similar. Obviously Java, Kotlin and Swift are orders of magnitude nicer than JavaScript and other scripting languages, but my point being, WHY CAN’T WE JUST DEFINE A SINGLE BEST OF BREED, STATE OF HE ART PROGRAMMING LANGUAGE FOR ALL PURPOSES AND PLATFORMS. 
Maybe Jetbrais will figure out a way to get all these languages to work together and run everywhere on everything. Yeah… But sadly somewhy they just don’t want to use Haskell!!!
https://blog.jetbrains.com/objc/2014/06/appcode-3-0-1-update-xcode-6-and-basic-swift-support/
This section illustrates the concept of a file exception. Exceptions are errors that occur at runtime; an exception is either generated by the Java Virtual Machine (JVM) in response to an unexpected condition or thrown explicitly by code. In the given example, we specify a file to read, but the system cannot find the file and throws an exception (a FileNotFoundException, which the catch block reports). Here is the code:

import java.io.*;

public class FileException {
    public static void main(String[] args) {
        try {
            File f = new File("C:/newFile.txt");
            BufferedReader reader = new BufferedReader(new FileReader(f));
            String str = "";
            while ((str = reader.readLine()) != null) {
                System.out.println(str);
            }
            reader.close();
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}

Through the above code, we can understand the concept of a file exception. Output:
http://www.roseindia.net/tutorial/java/core/files/fileException.html
Notes: Thursday 12 September 2002
- 12 September: Read Ch 5.3-5.10
- 19 September: Problem Set 3
- Upcoming lab hours (Small Hall): Thursday, 5-7pm (Sol); 7-9pm (Tiffany); Sunday 4-6pm (Mike); Monday 6-8pm (Serge); Wednesday (18 Sept), 5-7pm (Sol); Wednesday (18 Sept), 8-9pm (Tiffany).

Notes and Questions

What are the advantages and disadvantages of each approach to array bounds errors:
- No checking (C)
- Run-time checking (Java)
- Static checking (ESC/Java)

Graph Data Abstraction

In class Tuesday, we will work on implementing a Graph data abstraction that satisfies this specification:

public class Graph {
    // OVERVIEW:
    //   A Graph is a mutable type that represents an undirected
    //   graph. It consists of nodes that are named by Strings,
    //   and edges that connect a pair of nodes.
    //   A typical Graph is: < Nodes, Edges >
    //   where Nodes = { n1, n2, ..., nm }
    //   and Edges = { {from_1, to_1}, ..., {from_n, to_n} }

    // Creator
    public Graph ()
        // EFFECTS: Initializes this to a graph
        //   with no nodes or edges: < {}, {} >.

    // Mutators
    public void addNode (String name)
        // REQUIRES: name is not the name of a node in this
        // MODIFIES: this
        // EFFECTS: adds a node named name to this:
        //   this_post = < this_pre.nodes U { name }, this_pre.edges >

    public void addEdge (String fnode, String tnode)
        // REQUIRES: fnode and tnode are names of nodes in this.
        // MODIFIES: this
        // EFFECTS: Adds an edge from fnode to tnode to this:
        //   this_post = < this_pre.nodes, this_pre.edges U { {fnode, tnode} } >

    // Observers
    public boolean hasNode (String node)
        // EFFECTS: Returns true iff node is a node in this.

    public StringIterator nodes ()
        // EFFECTS: Returns the StringIterator that
        //   yields all nodes in this in arbitrary order.

    public StringSet getNeighbors (String node)
        // REQUIRES: node is a node in this
        // EFFECTS: Returns the StringSet consisting of all nodes in this
        //   that are directly connected to node:
        //   \result = { n | {node, n} is in this.edges }
}

Links

Buffer Overflows
- CAIDA Analysis of Code Red
- CERT Advisory. The request that exploits the buffer overflow vulnerability: /default.ida?N 3%u0003%u8b00%u531b%u53ff%u0078%u0000%u00=a
- Smashing the Stack for Fun and Profit, Aleph One
- Improving Security Using Extensible Lightweight Static Analysis (David Evans and David Larochelle), IEEE Software, Jan/Feb 2002.

Run-Time Exceptions
- Reports on the Ariane 5 run-time exception: Lions Report (official inquiry), Jean-Marc Jiziquel and Bertrand Meyer, Stephen Marshall (includes video of explosion)

"Our recommendation now is the same as our recommendation a month ago, if you haven't patched your software, do so now." – Scott Culp, security program manager for Microsoft's security response center
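The specification above is written in Java. Purely as a study aid (this is not part of the course hand-out), here is a minimal Python sketch of one representation that satisfies it, with method names adapted to Python style:

```python
class Graph:
    """A mutable undirected graph: nodes named by strings, edges as unordered pairs."""

    def __init__(self):
        # < {}, {} > : no nodes, no edges
        self._nodes = set()
        self._edges = set()

    def add_node(self, name):
        # REQUIRES: name is not already a node in this graph
        self._nodes.add(name)

    def add_edge(self, fnode, tnode):
        # REQUIRES: fnode and tnode are nodes in this graph.
        # A frozenset makes {a, b} and {b, a} the same edge.
        self._edges.add(frozenset((fnode, tnode)))

    def has_node(self, node):
        return node in self._nodes

    def nodes(self):
        # Yields all nodes in arbitrary order.
        return iter(self._nodes)

    def get_neighbors(self, node):
        # REQUIRES: node is a node in this graph
        # \result = { n | {node, n} is in this.edges }
        return {n for e in self._edges if node in e for n in e if n != node}
```

The frozenset representation mirrors the unordered-pair edges in the spec; a production version would also check the REQUIRES clauses instead of trusting the caller.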
http://www.cs.virginia.edu/cs201j-fall2002/lectures/0912.html
Stephen Rothwell writes:
> Hi Richard,
>
> Richard Stallman <rms@gnu.org> writes:
> > [...]
>
> Just to try something, could you please take the program below and change
> the 172 to be about 15-20 less than the number of MB of physical RAM
> you have, then compile and run it and then try to suspend to disk
> and see how long it takes. On my Thinkpad 600E, it changes the time
> to suspend from ~30 seconds to < 10 seconds.
>
> This is an experiment to see if your BIOS is faster to suspend
> when most of memory is zero'd.

Interesting. I've just tried this on my Dell Inspiron 3200 (144 MBytes) with a 110 MByte block of zeroes. The time went from 47 seconds (filled with 1s) to 20 seconds (filled with 0s). Filling programme appended.

Regards,
Richard....
Permanent: rgooch@atnf.csiro.au
Current: rgooch@ras.ucalgary.ca
===============================================================================

/*  memfill.c

    Source file for memfill (fill memory with pattern and wait).

    Copyright (C) 2000 Richard Gooch.

    Richard Gooch may be reached by email at rgooch@atnf.csiro.au
    The postal address is:
      Richard Gooch, c/o ATNF, P. O. Box 76, Epping, N.S.W., 2121, Australia.
*/

/*  This programme will fill memory with a specified pattern and wait.

    Written by Richard Gooch  30-AUG-2000
    Last updated by Richard Gooch  30-AUG-2000
*/

#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>  /* for strlen: missing from the original posting */
#include <ctype.h>   /* for isdigit: missing from the original posting */

int main (int argc, char **argv)
{
    char length_modifier;
    signed long size_bytes, count, size_multiplier;
    unsigned char pattern;
    char *size_ptr;
    unsigned char *array;
    static char usage_string[] = "Usage:\tmemfill length[b|k|m] value";

    if (argc != 3)
    {
        fprintf (stderr, "%s\n", usage_string);
        exit (1);
    }
    size_ptr = argv[1];
    length_modifier = size_ptr[strlen (size_ptr) - 1];
    if ( isdigit (length_modifier) )
    {
        size_bytes = strtol (size_ptr, NULL, 0);
    }
    else
    {
        switch (length_modifier)
        {
          case 'b':
          case 'B':
            size_multiplier = 1;
            break;
          case 'k':
          case 'K':
            size_multiplier = 1024;
            break;
          case 'm':
          case 'M':
            size_multiplier = 1024 * 1024;
            break;
          default:
            fprintf (stderr, "%s\n", usage_string);
            exit (1);
        }
        size_ptr[strlen (size_ptr) - 1] = '\0';
        size_bytes = strtol (size_ptr, NULL, 0) * size_multiplier;
    }
    pattern = strtol (argv[2], NULL, 0);
    if ( ( array = malloc (size_bytes) ) == NULL )
    {
        fprintf (stderr, "Error allocating %ld bytes\n", size_bytes);
        exit (1);
    }
    fprintf (stderr, "Filling...");
    for (count = 0; count < size_bytes; ++count) *array++ = pattern;
    fprintf (stderr, "\tfilled. Press control-C to stop\n");
    while (1) pause ();
    return (0);
}   /*  End Function main  */
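The argument handling in memfill (a count with an optional b/k/m multiplier suffix) translates to a few lines of Python. This is just an illustrative re-implementation, not code from the thread:

```python
def parse_size(text):
    """Parse a size like '110m', '64k', '512b' or '4096' into a byte count."""
    multipliers = {'b': 1, 'k': 1024, 'm': 1024 * 1024}
    suffix = text[-1].lower()
    if suffix.isdigit():
        return int(text)  # plain byte count, no suffix
    return int(text[:-1]) * multipliers[suffix]

print(parse_size('110m'))  # 115343360 - the 110 MByte block used in the test above
```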
https://lkml.org/lkml/2000/8/30/103
Well, I see my last post attracted some interesting comments and I'd like to follow up with just another Python thought. While some still like to relegate Python, Perl and Ruby to the niche of 'scripting languages', we should be clear that there are some very complex applications out there written, and maintained, in these languages.

So, this week I received my nice new laptop, a ThinkPad T60p, which I found out has a nice feature: the Active Protection System, a set of accelerometers that protect the hard drives from shock. Being curious, I found out that yes, you can access the data from the driver that manages the accelerometer programmatically... using IOCTLs. This doesn't sound like the usual realm for a scripting language, but I've seen others manage physical devices in both Perl and Python, so why not. Some Google searches later gave me some .NET code that accessed the driver, and away I went. First I had to use the Python Win32 package to access the Windows API, but the rest was pretty simple and my first quick hack turned into a fairly usable module in quick time.

    """
    This module implements a (Windows only) interface to the IBM/Lenovo
    ThinkPad Active Protection System (APS). The APS uses an accelerometer
    to monitor the motion of the ThinkPad and to stop the hard drives when
    the motion indicates that possible sudden movement may damage them.
    The module provides a simple class that allows the user to read the
    status and X/Y positions indicated by the device driver.

    Classes:
        ShockSensor
    """

    __author__ = 'Simon Johnston (skjohn@us.ibm.com)'
    __version__ = '1.0'

    import win32file
    from win32con import *
    import struct

    class ShockSensor:
        """ This is the only class provided by the module and acts as
            the programmer API to the ShockMgr device driver. """

        STATUS_RUNNING = range(0, 4)
        STATUS_STOPPED = range(8, 9)
        STATUS_AUTO_IGNORE = range(13, 14)  # the original's range(13, 13) is empty and can never match

        def __init__(self):
            """ Initialize an instance of ShockSensor. """
            self.hDevice = None

        def open(self):
            """ Open access to the device driver; this must be called
                before trying to read values from the driver. """
            self.hDevice = win32file.CreateFile(r'//./ShockMgr',
                                                GENERIC_READ, FILE_SHARE_READ,
                                                None, OPEN_EXISTING, 0, 0)

        def read(self):
            """ Read the current data from the driver; note that most
                users will want to use the status(), X() and Y() methods
                rather than accessing the raw buffer directly. """
            state = None
            if self.hDevice:
                data = win32file.DeviceIoControl(self.hDevice, 0x733FC, '', 0x24, None)
                state = struct.unpack('i16h', data)
            return state

        def status(self):
            """ Return the status of the hard drives; this status is set
                by ShockMgr based upon the accelerometer readings. """
            if self.hDevice:
                return self.read()[0]

        def X(self):
            """ Return the value of the accelerometer in the X axis. """
            if self.hDevice:
                return self.read()[1]

        def Y(self):
            """ Return the value of the accelerometer in the Y axis. """
            if self.hDevice:
                return self.read()[2]

        def close(self):
            """ Close the handle we use to read from the device driver. """
            if self.hDevice:
                self.hDevice.Close()
                self.hDevice = None

So, how does this work? The code below demonstrates it quite nicely and will loop reading the status of the accelerometers and showing their values (I put this in a loop so you can tilt your ThinkPad and watch the values change).

    import time

    sensor = ShockSensor()
    sensor.open()
    for i in range(10):
        status = sensor.status()
        if status in ShockSensor.STATUS_RUNNING:
            status_str = 'Running'
        elif status in ShockSensor.STATUS_STOPPED:
            status_str = 'Stopped'
        elif status in ShockSensor.STATUS_AUTO_IGNORE:
            status_str = 'Running (Auto-Ignore)'
        else:
            status_str = str(status)
        print 'Status=%s; X=%d; Y=%d' % (status_str, sensor.X(), sensor.Y())
        time.sleep(1)
    sensor.close()

So for all the Python-programming ThinkPad users out there - enjoy.
https://www.ibm.com/developerworks/community/blogs/johnston/date/200608?sortby=0&maxresults=50&lang=en
How to drive a 7-segment display directly on a Raspberry Pi in Python … In the next article, we use this 7-segment kit to make a countdown ticker.

Hi Alex – I have ordered the kit and look forward to tinkering with it. In the meanwhile, is it possible to get a detailed explanation of what the code does? In particular, the time.ctime()[11:13] etc. parts are beyond me. Many thanks and keep up the good work. KC

Hi Kieran. Good point. When I write my own code I usually do a full walk-through, don't I? ctime() is a new one on me also, but I'll look it up. I didn't tweak Bertwert's code much; I mainly switched it from BOARD (which I don't use) to BCM (which I do) and changed the wiring. Your kit is about to go in the post :)

Hi Alex, thanks for posting and sharing this. I've managed to make it work, but cannot figure out how to adjust the time displayed.

Hmm, it corrected the time automatically. I still have lots to learn :)

Here's a code walkthrough…

Lines 3-5 import required libraries and set BCM mode.
Lines 8-13 define and set up the 8 ports for the LED segments.
Lines 15-21 define and set up the 4 ports for the digits.
Lines 23-33 create a dictionary to hold the values for each segment to display each number 0-9 and null.
Lines 35-50 are wrapped in a try: finally: block to catch any errors or exceptions and clean up the GPIO ports on exit.
Lines 36-48 contain the main program loop, which carries on until we terminate with CTRL+C or some error/exception occurs.
Line 37: time.ctime() gives us the current time in the form 'Mon Nov 16 14:11:05 2015'. time.ctime()[11:13] gives us a 'slice' of this output starting at the 11th character and stopping at, but not including, the 13th. This gives us the hour. In the same way, time.ctime()[14:16] gives us the minute. Together, the whole expression n = time.ctime()[11:13]+time.ctime()[14:16] gives us a string variable, n, containing the current hour and minute values.
Line 38: s = str(n).rjust(4). Since the output of ctime() is a string, the str() is redundant (I didn't spot that before); n is already a string variable. rjust(4), as you might guess from the name, right-justifies our string n and pads it with space characters as required (not in this case). So now we have a string variable, s, containing our four-figure value of the current time: 1411.
Lines 39-48 iterate through each digit in turn, left to right.
Lines 40-45 determine which LED segments should be switched on; then lines 46-48 cause the current digit's enable pin to be grounded for a millisecond, so each of the active segments will display on the current digit. Then the loop iterates round the other digits before going back to the start of the main loop and checking the time again.
Lines 40-45 iterate through the segments to set up the ports to display each number.
Line 41: GPIO.output(segments[loop] sets the current segment's port according to the value it finds in the dictionary we created earlier. num[s[digit]][loop] looks horribly complex, but let's break it down. For each port we want a 0 or a 1, to turn it off or on. We have all the correct values stored in our dictionary num. We have our four digits stored in variable s, and our current loop counter digit tells us which of the four digits we want. So num[s[digit]] looks in our dictionary num for the nth figure in s, which will be a string value 0-9. With me so far? A Python dictionary works as a key:value pair. So num['9'] will output the values corresponding to '9'. We don't want them all, though, so we add [loop] to cut it down to just the specific segment we want right now in this loop iteration. So the output from num[s[digit]][loop] will be a single integer value, 0 or 1, which is used to switch the corresponding GPIO port for the current segment off or on.
Lines 42-45 control the flashing 'dot', which remains on for even seconds and off for odd seconds, giving an indication of the passage of seconds.
Lines 46-48, as already mentioned, cause the correct digit to be enabled for a millisecond so that its value can be displayed, before going on to the next digit.
Lines 49-50 ensure a clean exit when the program terminates for any reason (in conjunction with line 35's try:).

Hi Alex, fantastic tutorial! It took me about 2 hours to wire it all up; the last thing I was expecting it to do was work first time, BUT IT DID! Thanks a lot and keep up the good work.

It's always great when that happens. And you know what? It happens more and more as time goes by, but it never stops feeling great. The Joy of STEM ;-)

time.ctime() returns a string, like this: 'Mon Nov 16 22:08:00 2015'. So time.ctime()[11:13] would be the hour, i.e. '22', and time.ctime()[14:16] would be the minute, i.e. '08'. All the best, Julian N.

Nice article Alex. I'm sure I have one of these in my bits box somewhere. I was going to attempt to use it with a MAX7219 following a previous article you wrote, but this looks like a simpler way of doing things.

They're lots of fun. I'm refactoring the code and am going to do something else with it soon :) I realised, after David Meiklejohn tweeted at me this morning, that Bertwert's code could be somewhat condensed, which also makes it easier to understand.

You know what it's like with your robot series. Once you get into something fun/interesting, you have to keep tinkering until you can make it dance how you want :) I've got a number of suggestions I could make to the Python code here, but rather than wade in I'm going to wait until my 7-segments kit turns up so that I can check my improvements actually work on real hardware first ;-) For now, I'll just suggest that the seconds-indicator code

    if (int(time.ctime()[18:19])%2 == 0) and (digit == 1):
        GPIO.output(25, 1)
    else:
        GPIO.output(25, 0)

could be placed above the for loop in range(0,7): line. No need to run the same thing 7 times!

Haha Andrew. I'll be making some changes myself in tomorrow's blog.
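The slicing discussed in the walkthrough is easy to verify in a Python session; this snippet just mirrors the expressions described above:

```python
import time

now = time.ctime()          # e.g. 'Mon Nov 16 14:11:05 2015'
hour = now[11:13]           # characters 11-12: the zero-padded hour
minute = now[14:16]         # characters 14-15: the zero-padded minute
n = hour + minute           # e.g. '1411'
s = n.rjust(4)              # pad to four characters (already four here)

# The flashing dot: the last digit of the seconds decides even/odd
dot_on = int(now[18:19]) % 2 == 0
print(s, dot_on)
```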
I've made several tweaks. (Yours is in the post now, so it should be with you in a day or two.)

Looks like what I did last year:

Great minds think alike; we both described the resistors with a chart: "Pin -> Resistor -> Pi". Great job! It was largely based on your forum thread, bertwert. Thanks for posting it :)

Didn't notice you mentioned me in this post :D I got one and it worked first time – it's painful connecting all the wires though!

Great idea. Is it possible to get a white display though, instead of the red?

Glad it worked. :) It's possible to get hold of different colours but I don't have them. I got some here:

Hi Alex, love the kit and I've placed an order for it. I'm a horticulture student and looking at Raspberry Pis for temp/humidity/EC sensors. I'm hoping I can tie in the display with a hygrometer sensor I have on its way to me. My idea is to have the sensor connected to a Pi with a display to show current readings, but also have it internet connected so real-time readings can be taken, and also have alerts sent by email if certain parameters are breached (e.g. soil gets too dry). My only problem is I have no clue when it comes to coding. Can you help or guide me at all? Regards, Rob.

I'll offer my usual tips: start small, work on one small piece at a time, gradually work your way up; don't dive in at the deep end and try to tackle everything at once. The old "learn to walk before you try running" ;-) IMHO the Raspberry Pi forums are a much better place to ask for the kind of help you're looking for.

Hi, would this kit work for the Raspberry Pi 2? Best, Austin

Yes, it works on all models of Pi.

Hey, nice tutorial. One question: what kind of resistors are you using (in terms of ohms)?

Around 100 Ohm. You could get away with lower as long as you didn't leave the segments illuminated 100% of the time. No need to use MegaOhm or GigaOhm – Ohm M G ! ;-)

Hello again, I've got some C code which provides me with a variable containing the room temperature (measured by a sensor connected to the RPi). Now I want that value to be displayed on the 7-segment 4-digit display. Is there any possibility to implement Python in C or to compile Python into C?

What sensor are you using? If it's a common one, there's a good chance there's already been a Python module or sample code written to read it. Alternatively, if your sensor code is *only* available in C, what I'd do in your situation would be to write a small 'wrapper' C program (i.e. compile it to a standalone executable) which, when called, simply prints out the current sensor value. And then from within your Python code, call the external executable using and read its output. Or you could go the other way round, and use to control the GPIOs driving the 7-segment display directly from your C code…

Dear Alex, I recently bought the kit from you. Do you know if I can show a Python string on the segments? I have now something like this:

    myFile = 'youtubesubs.txt'        # the text file to read
    myFile = open(myFile, mode='r')   # open in read mode
    # read all file contents
    lastLine = myFile.readlines()[-1] # read the last line
    n = 6                             # split the string every 6 characters
    splitted = [lastLine[i:i+n] for i in range(0, len(lastLine), n)]
    print(splitted[0])

That prints this string on the console: 2.183 Thank you!!
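The 'wrapper executable' suggestion a few comments up relies on reading the output of an external program from Python. The library name was stripped from the original comment, but subprocess is the usual choice, so here is a sketch under that assumption; the sensor command is a hypothetical stand-in:

```python
import subprocess

def read_external_value(cmd):
    """Run an external program and parse its single-line numeric output.

    In the scenario above, cmd would be the path to the compiled C
    wrapper; here any command that prints a number will do.
    """
    out = subprocess.check_output(cmd)
    return float(out.decode().strip())

# Stand-in for the hypothetical compiled sensor wrapper: echo just prints a value
print(read_external_value(['echo', '21.5']))  # 21.5
```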
Yes, in this case I'll do it separately.

I've written a little program (which you're free to adapt for your own purposes) which can display any number (integer or decimal) between -999 and 9999. It automatically displays the minus sign and/or decimal point where necessary :-)

Thank you Alex. Right now it is working without the decimal point, but I'm going to check this other code to improve mine. Bye!

Sorry, Andrew, not Alex; everybody here starts with A… :-D

It's so we can get our alliterations done right. Another alphabetically astute answer allowing Andrew, Adrian and Alex astonishing amusement :)

Absolutely awesome and amazing ;)

It's fairly easy to convert this to a common anode display like the one I bought. Essentially, you put 1's where all the 0's are :-) My swapped version of the software is here: If you have any questions or want other help, reply here.

The file has been renamed to clock-ca.py and it's here:

Here is mine finished!! Counting YouTube subscribers. I'm looking to make another one but with 8 digits; any ideas how to start? I'm not sure if I have enough room in the GPIO for another 4 digits :-S

The way that multiplexing works means that to add another 4 digits (to expand it from 4 digits to 8 digits) only needs another 4 GPIO lines :-) Using the pin numbers from Alex's photo above (and assuming your 4-digit 7-segment displays are identical), you'd simply connect each of the LED pins 1,2,3,4,5,7,10,11 together in parallel, and then the LED pins 6,8,9,12 from each display would connect separately to different GPIO pins on the Pi (using 8 + 4 + 4 = 16 GPIOs in total). And then you'd simply modify Alex's code above to make the Pi think it's talking to a single 8-digit 7-segment display.

Thank you Andrew! I'll give it a try and comment here :-)

Is there a way to output a 12-hour as opposed to a 24-hour clock?

Yes – without actually looking at the code (I'm using my phone at the moment), find the bit which deals with hours and then do a simple if statement, something like this… (I've given a method here without spoon-feeding the code – having now looked at the code, it is a bit more involved.)

Thanks. I'll try my best to make it work.

I'm building a controller for large 12 V LED digits. Each one is in its own 18″x24″ frame and I'm using seconds also, for a total of 6 digits. So there will have to be a lot of modifying of this design, but I think it'll work. My biggest concern is running the 12 V LEDs from a second power source with a common ground. I don't think the Raspberry Pi can handle that much current on its own.

Hi guys, I'm trying to follow your tutorial and I'm stuck at this point. It prints the last number sent on each digit. I can decide whether or not to print on each digit, but I can't control the displayed number. For example:

    string_display = "1234"; digits to print = [0, 2]

will display: | 4 | | 4 | |

Any ideas?

Without seeing your full code, it's impossible to know what's going wrong. But what you _could_ try is: which would give you a new_string of '1 3 ', which you can then display using Alex's code above.

Hi AndrewS, and thank you for your time! :) I tried the Python clock and the Python ticker (just inverting 0 and 1 in the library). As is, it will display numbers faster than my eyes can read, so I've just added a small sleep between each draw, and in both cases it always displays the same number on each digit. My 7-segment's ref is "3641BS" (actually not the same as in the tutorial). Any ideas?
Here's my code (sorry for this, I didn't reach the "code" button):

    num = {' ': (1,1,1,1,1,1,1,1),
           '0': (0,0,0,0,0,0,1,1),
           '1': (1,0,0,1,1,1,1,1),
           '2': (0,0,1,0,0,1,0,1),
           '3': (0,0,0,0,1,1,0,1),
           '4': (1,0,0,1,1,0,0,1),
           '5': (0,1,0,0,1,0,0,1),
           '6': (0,1,0,0,0,0,0,1),
           '7': (0,0,0,1,1,1,1,1),
           '8': (0,0,0,0,0,0,0,1),
           '9': (0,0,0,0,1,0,0,1),
           'B': (1,1,0,0,0,0,0,1),
           'y': (1,0,0,0,1,0,0,1),
           'E': (0,1,1,0,0,0,0,1),
           '1': (0,0,0,1,0,0,0,1),
           'L': (1,1,1,0,0,0,1,1),
           'X': (1,0,0,1,0,0,0
        time.sleep(0.3)
    finally:
        GPIO.cleanup()

Hi Alex, for my own clock project, which I'll document when it's done, I'm using this display, which has a colon (to flash the seconds). It has 14 pins – the two extra ones are for the colon – and the other twelve match exactly the display you have above, which makes life easy.

Hi Alex, great post, thank you – success! Having banged my head against it for a while last night, I got up this morning and made the link between the BCM numbers not being the same as the pin numbers on the Pi! Schoolboy error, I know, but alas, success. Would it be possible to include an if statement to remove the leading zero, so that for anything less than 10:00 the leading zero is blank?
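On the leading-zero question just above: one possible approach (not from the original code; a hypothetical helper) is to swap a leading '0' for a space before the display loop, since the num dictionary already has an all-segments-off entry for ' ':

```python
def blank_leading_zero(s):
    """Replace a leading '0' with a space: '0905' becomes ' 905', '1205' is unchanged.

    The space maps to the all-off entry in the num dictionary, so the
    first digit simply stays dark before 10:00.
    """
    if s.startswith('0'):
        return ' ' + s[1:]
    return s

print(blank_leading_zero('0905'))  # prints ' 905' (with a leading space)
```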
If you want a single segment to display your Raspberry IP address, I modified it slightly: # code modified, tweaked and tailored from code by bertwert # on RPi forum thread topic 91796 import RPi.GPIO as GPIO from time import sleep import socket import commands GPIO.setmode(GPIO.BCM) cmd=”(hostname -I) || true” output= commands.getstatusoutput(cmd) MyIp=output[1] #return eth0 IP address #MyIP=MyIp.split(“.”) #split by subnet #MyIPstr=MyIP[3].zfill(4) #Get last number with leading zeros # GPIO ports for the 7seg pins segments = (11,4,23,8,7,10,18,25) #GPIO numbers of 7 segment display for segment in segments: #define output and turn off GPIO.setup(segment, GPIO.OUT) GPIO.output(segment, 0) # GPIO ports for the digit 0-3 pins # digits = (22,27,17,24) digits = (22) #only 1 digit # 7seg_digit_pins (12,9,8,6) digits 0-3 respectively #for digit in digits: #set display base pin to ground (turn digit on) GPIO.setup(digits, GPIO.OUT) GPIO.output(digits, 0) num = {‘ ‘:(0,0,0,0,0,0,0,0), ‘0’:(1,1,1,1,1,1,0,0), ‘1’:(0,1,1,0,0,0,0,0), ‘2’:(1,1,0,1,1,0,1,0), ‘3’:(1,1,1,1,0,0,1,0), ‘4’:(0,1,1,0,0,1,1,0), ‘5’:(1,0,1,1,0,1,1,0), ‘6’:(1,0,1,1,1,1,1,0), ‘7’:(1,1,1,0,0,0,0,0), ‘8’:(1,1,1,1,1,1,1,0), ‘9’:(1,1,1,1,0,1,1,0), ‘.’:(0,0,0,0,0,0,0,1)} try: while True: for digit in range(len(MyIp)): for loop1 in range(75): #keep looping for about 1 second for loop2 in range(0,8): GPIO.output(int(segments[loop2]), num[MyIp[digit]] [loop2]) sleep(0.001) GPIO.output(22, 1) #sleep(0.05) #sleep after digit are printed GPIO.output(22, 0) sleep(1) finally: GPIO.cleanup() Better to move below section into the while True loop. Then it updates whenever the connection breaks or establishes. cmd=”(hostname -I) || true” output= commands.getstatusoutput(cmd) MyIp=output[1] #return eth0 IP address Hi! I have a question about the code. In the loop “for loop in range(0,7):” you’re stating each GPIO to True for every segment needing light. 
Okay but you don’t light them now, you light them a bit further with “GPIO.output(digits[digit], 0)” right? What I don’t understand is this: if I need to display an “8.”, with your code I will power on 8 leds simultaneously for about 0.001s okay? But… every led consume about 13mA and 8*13 = 104. It doesn’t respect the 50mA total rule! Even for a very short perdiod, are you sure it doesn’t damage GPIO controller ? What about limiting two or tree segments for a very short time and then light another bunch for leds but not all leds if needed simultaneously? Wouldn’t be safer? Where did you get the 13 mA figure from? I don’t think each segment pulls that much current in this setup. In any case, nobody trying this has reported a failure, so I don’t think it’s an issue. I haven’t specifically measured the current draw of each segment, but also remember that each digit is only lit 1/4 of the time in the above script. import os os.environ[‘TZ’] = ‘US/Eastern’ then change that to your time zone. else the clock will display in UTC If you have the Pi set up for the proper timezone, it will show properly. I set my pi to my local timezone. The code is just pulling the current time from the OS. So whatever it reports is displayed. I appreciate the blog. I had some 2.25″ 7-segments that I wasn’t using that I multiplexed together and I’m applying your concept to them. My prototype worked great with smaller versions. Working on the big one now. Hi, Alex. I am VERY new to using the Raspberry Pi, however I was thinking of building a digital clock. Initially I bought a clock kit using a Raspberry Pi Zero W. I’ve learnt a little of the programming from making that kit. However it isn’t possible to do what I wanted with the kit and it uses a MAX6951 chip to interface to the Pi so is different to program from your clock. I’ve seen your clock and wondered about expanding it. What I want to do is build a clock using larger LED displays (50mm) for the main display i.e. 
HH:MM:SS, and smaller displays (25mm) for DD:MM:YY below. I know I will need to interface these displays with transistors to allow them to be driven with a higher voltage, probably 12VDC. My question revolves around the GPIO ports: by my reckoning, to drive all those displays I would need 12 ports (one per digit) plus a further 7 for the segments, a total of 19 ports. I'm not planning on using the DP; I will create the colons by using fixed LEDs. As a further add-on I was thinking of a DAY display using two alphanumeric 11-segment displays to give MO, TU, WE, TH, FR, SA & SU, so a further 2 ports, plus these displays have 11 segments, so a further 4 ports for the additional segments, giving a total of 25 GPIO ports used. I was puzzled in your Python code why you did not use the port numbers in order, i.e. 2,3,4,5 etc. You used 11,4,23,8,7,10,18,25; why was that? The Raspberry Pi that I am planning on using is a Pi 3 Model B, which has 26 GPIO ports. Are some of those reserved, i.e. not usable? Sorry this is such a long explanation, but I hope it makes clear what I am planning to do. Any advice would be appreciated. Anthony

Good question. And the answer may not be obvious - until you know! When I publish documentation I work kind of backwards. If you look at the wiring diagram/photo, you will see exactly why I chose the ports the way I did. I avoid the I2C ports (because they have hardware pull-ups), but apart from that, the port order was chosen exclusively so that the diagram and photo would be clear to the users. So essentially it's determined by the hardware layout. Not many people work this way though.

That's great to know, and I've also learnt a little bit more about the Raspberry Pi. If I avoid the I2C ports I will be 1 port short; maybe I'll do without the day indicator. Anyway, small steps: I plan to start with the 4-digit clock and work up. It's great to start learning about a new product; it's really taken me back to the days of self-assembly computers, i.e.
the Sinclair ZX80 and learning Z80 machine code to make the most use of that 1K of memory. Thank you for your explanation, and yes, it does make sense now you've explained it.

AIUI the hardware pull-ups on the I2C ports just mean that if you configure them as 'inputs', then anything externally reading those pins would read a 'high' value instead of a floating value. But as you're going to be using them as 'outputs', it's fine to drive them high or low without any problems. When in output mode, the Pi's GPIO pins are "strong enough" to override any external pull-up resistors.

Thanks for that info, very useful. However, I have decided to take a different path now. After reading a lot and building a kit using a Pi Zero W, I have decided to use a MAX7221 to drive the displays. Thanks to a lot of quick learning to program in C by reading someone else's work, I think I can achieve what I want. I do know about the different voltage levels, although it seems that in some cases the Pi will drive the MAX7221 directly; there are a few examples on this very site. If it's a problem, I think a MAX3379 voltage translator should work. Let's just hope my soldering skills are up to that, as it only comes in a 14-TSSOP surface-mount package.

I'm a child and I got it working perfectly with no supervision or help. Thanks for the post.

Well done :)

Hi Alex, I have just bought your kit, which I hope to use with an RPi I set up several years ago to monitor and log temperatures in a room (link below). That has been running successfully for several years now. I intend to use the display to show the current temp instead of having to run the web server my setup offers. Despite having set that temp logger up, I am quite a noob when it comes to this, so forgive me if I ask a stupid question: where would I put your .py file, and how do I get it to automatically run when the RPi boots?
The temp logger software currently running uses a cron command to get the data and write to an SQL database, so I have no idea how to do that.

The answer depends on what version of Raspbian you're using. In the old days, before Raspbian Jessie, all you had to do was add a line with "python path/to/file/name.py" in rc.local and it would start on boot. These days it's a bit more involved since the switch to systemd. Matt Hawkins wrote a great post about it which I refer to each time I do this…

Thanks for your reply. The setup is running Wheezy, and I intend to clone it and start adding the display from there. I have been reading up a little and realise it is more complex. I would not only need to run your .py file to drive the display, but also to query the temperature probe, say every minute, all without locking the Apache web server and other cron jobs that are running. I currently don't know if the display loop in your .py file would lock me out of everything else.

I guess a crude solution would be to modify the temperature logger to also periodically write out the current temperature to a file, e.g. /tmp/temperature, and then you could have a separate script (the Pi is quite capable of running multiple separate scripts at once) reading the value from the /tmp/temperature file and displaying the number on the display.

Ah ha! I was wondering how to have a global value for the temp; writing it to a file that can be accessed is a great idea! Crude? Hey, if it works, it works. Also, rather than get the temp logger part to do that, since its primary role is to add to a database every 15 mins, a separate logger fired with a cron job every minute could provide the required temp value. I have some experience with Visual Basic for Windows but am pretty much new to both Linux and Python. Thanks a bunch Alex, looking forward to getting your kit and making a start on this.

TypeError: tuple indices must be integers, not tuple

Perhaps you've made a typo in your code?
Cruel ;p

Sorry, didn't mean to sound cruel. The more polite version of my answer would be: I'm sure Alex would have tested his code before posting it online, and nobody else has complained about a TypeError, so perhaps you accidentally made a mistake when typing it in. Could you double-check that what you've typed in matches the code Alex has in the article? Alternatively, maybe you could copy'n'paste your code to somewhere like pastebin.com to see if somebody can spot your mistake?

I thought it was a "witty cruel pun", as in TypeError ~ Typo.

Hi - I just purchased your kit and am looking forward to using your code to display a Strava API (via this project idea:) Do you have any tips on directing the 7-segment LED code for this use? Thanks!

Ok, let me get this straight. You're going to use the Strava scroller Python program as an example showing you how to use the Strava API, but instead of using it with a Scroll pHAT, you're using one of these 7-segs? It should be doable, but you're going to have to rip out all the Scroll pHAT code and plumb in the 7-seg code instead. Try to take it in smallish bites. Personally, I'd figure out the API first and see if you can get the information you want out of your account and print it on the Pi's screen. As a separate thing, wire up the 7-seg kit and run the demo program to make sure it's working. Once you've got your head around both sides, start to bring them together. That's how I'd tackle it, anyway. Good luck and have fun. :)

Thanks Alex - I'll give it a go and will let you know how it comes out :)

Hi - I bought a couple of your excellent kits, but have a minor problem. How do I get the brightness of the segments the same? (They differ depending on how many segments are lit - i.e. a '1' (with two segments lit) is much brighter than an '8' (with seven segments lit), etc.) (I've used the parts of the two kits to extend the display to show HH:MM:SS:FF, where FF are frames (25 per second).
That all works and was fun learning Python from my 14-year-old kid - brought back memories of coding 35 years ago!) Cheers

I'm really not sure how you could do this using direct GPIO drive of the segments. Sounds like you might be hitting the limits of the 3V3 bus on the Pi?

If you've mis-wired the display to have a single (shared) resistor on the 'common' pin, instead of having separate resistors for each of the segments, then that would explain the behaviour you're seeing (I made the same mistake many years ago when I used a 7-segment display for the first time). But if the display is wired correctly (i.e. you've followed Alex's instructions above) and you are indeed hitting the current limits of the Pi's 3V3 bus, then perhaps a workaround might be to use (the same) higher-value per-segment resistors, which would reduce the per-segment current enough that they'd be the same brightness regardless of whether one segment is lit or all segments are lit?

Thanks Alex & Andrew. I'm fairly sure I haven't mis-soldered but will check again, then try the resistors. It's also a very old Pi, so it could well be low current from the board. Will check and let you know! Cheers, Steven

Alex! I have one question. I am new to this Raspberry Pi thing, and when I entered your code, after I type finally: it shows me I have a syntax error. Can you help me? The full sentence it says is: SyntaxError: unindent does not match any outer indentation level. It's hard for me to understand this sentence, because English is my second language. Look forward to hearing from you.

Python is whitespace-sensitive, which means you need to have the correct number of spaces at the start of each line, or Python won't be able to understand your code. Double-check how many space characters you have at the start of each line, and make sure you're not mixing space and tab characters.

I tried every possible space and tab solution. I have solved the whitespace problem. Now I have another one.
The display is showing 6.6.6.6, and after a minute it goes to 7.7.7.7.

"I tried every possible space and tab solution. … I have solved the whitespace problem." LOL. Perseverance gets there in the end :)

If all the digits are displaying the same number, that sounds like a wiring problem. Perhaps two connections are touching when they shouldn't be?

No, there are no connections touching. I checked multiple times. I also tried to re-enter the code multiple times and it's still showing the same number.

There's definitely a problem somewhere… could it be a faulty breadboard? *shrug*

I have just changed the breadboard. It still doesn't work. You are supposed to work in a Python shell, right? And are you supposed to enter your local time on line 37?

No, you're supposed to type (or copy) the code into a separate file (e.g. myfile.py), and run it from the command line with e.g. 'python myfile.py'.

I struggled over the same problem, but finally I found the solution. The problem is simple: our displays work in reverse! This means that high and low have to be switched. On lines 13 and 21, change 0 to 1. On lines 23-33, swap all 0s and 1s. And finally, on lines 46 and 48, change 0 to 1 and 1 to 0. Hope it helps someone. Greets

Hi Alex, thanks for sharing this. I'm super new to electronics, and I built my first 7-segment display by following your article. I updated your code to display a string of any length in marquee fashion. Here's the updated code.
You may run it like `python marquee.py "hello there"`

# Modified
# by oebilgen@gmail.com
import RPi.GPIO as GPIO
import time
import sys

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)

# pin setup (GPIO numbers as in the article)
segments = (11, 4, 23, 8, 7, 10, 18, 25)
digits = (22, 27, 17, 24)
for segment in segments:
    GPIO.setup(segment, GPIO.OUT)
    GPIO.output(segment, 0)
for digit in digits:
    GPIO.setup(digit, GPIO.OUT)
    GPIO.output(digit, 1)

# 0: top
# 1: top right
# 2: bottom right
# 3: bottom
# 4: bottom left
# 5: top left
# 6: middle
num = {
    ' ': (0, 0, 0, 0, 0, 0, 0),
    '0': (1, 1, 1, 1, 1, 1, 0),
    '1': (0, 1, 1, 0, 0, 0, 0),
    '2': (1, 1, 0, 1, 1, 0, 1),
    '3': (1, 1, 1, 1, 0, 0, 1),
    '4': (0, 1, 1, 0, 0, 1, 1),
    '5': (1, 0, 1, 1, 0, 1, 1),
    '6': (1, 0, 1, 1, 1, 1, 1),
    '7': (1, 1, 1, 0, 0, 0, 0),
    '8': (1, 1, 1, 1, 1, 1, 1),
    '9': (1, 1, 1, 0, 0, 1, 1),
    'A': (1, 1, 1, 0, 1, 1, 1),
    'B': (0, 0, 1, 1, 1, 1, 1),
    'C': (1, 0, 0, 1, 1, 1, 0),
    'D': (0, 1, 1, 1, 1, 0, 1),
    'E': (1, 0, 0, 1, 1, 1, 1),
    'F': (1, 0, 0, 0, 1, 1, 1),
    'G': (1, 0, 1, 1, 1, 1, 1),
    'H': (0, 1, 1, 0, 1, 1, 1),
    'I': (0, 1, 1, 0, 0, 0, 0),
    'J': (0, 1, 1, 1, 0, 0, 0),
    'K': (1, 0, 1, 0, 1, 1, 1),
    'L': (0, 0, 0, 1, 1, 1, 0),
    'M': (1, 1, 1, 1, 0, 0, 1),
    'N': (1, 1, 1, 0, 1, 1, 0),
    'O': (1, 1, 1, 1, 1, 1, 0),
    'P': (1, 1, 0, 0, 1, 1, 1),
    'Q': (1, 1, 1, 0, 0, 1, 1),
    'R': (0, 0, 0, 0, 1, 0, 1),
    'S': (1, 0, 1, 1, 0, 1, 1),
    'T': (0, 0, 0, 1, 1, 1, 1),
    'U': (0, 1, 1, 1, 1, 1, 0),
    'V': (0, 1, 0, 0, 1, 1, 1),
    'W': (1, 0, 1, 1, 1, 0, 0),
    'X': (0, 0, 1, 0, 0, 1, 1),
    'Y': (0, 1, 1, 1, 0, 1, 1),
    'Z': (1, 1, 0, 1, 1, 0, 1),
    '!': (0, 1, 1, 0, 0, 0, 0),
    '?': (1, 1, 0, 0, 1, 0, 1),
    '@': (1, 1, 1, 1, 0, 1, 1),
    '-': (0, 0, 0, 0, 0, 0, 1),
    '_': (0, 0, 0, 1, 0, 0, 0),
}

if len(sys.argv) != 2:
    print "Please type the text as parameter (e.g. python marquee.py \"hello there\")"
    sys.exit(1)

PADDING = " " * 4
text = PADDING + sys.argv[1].upper() + PADDING
i = -1
old_binary = None

try:
    while i < len(text) - 4:
        new_binary = (int(time.ctime()[18:19]) % 2 == 0)
        if old_binary != new_binary:
            i = i + 1
            old_binary = new_binary
        s = text[i:i + 4]
        for digit in range(4):
            for loop in range(0, 7):
                d = num[s[digit]][loop]
                GPIO.output(segments[loop], d)
            GPIO.output(25, 0)
            GPIO.output(digits[digit], 0)
            time.sleep(0.001)
            GPIO.output(digits[digit], 1)
finally:
    GPIO.cleanup()

Love your clock tutorial. I am looking for a driver for bigger displays, type Kingbright SC40-19EWA. I am running this program still on a Raspberry Pi 2.

Alex, have you ever tried to display 6 digits? I want to include seconds also.
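To answer the six-digit question above: the display-independent part of the idea can be sketched without touching the GPIO at all. This is a hedged sketch, not code from the article; the function names are invented, and only the digit segment patterns are taken from the code above (with the decimal-point column dropped). On the Pi, you would then multiplex these six patterns across six digit-select pins the same way the four-digit loop does, which needs two more free GPIOs.

```python
import time

# Digit segment patterns from the article's num table (DP column dropped).
NUM = {
    '0': (1, 1, 1, 1, 1, 1, 0),
    '1': (0, 1, 1, 0, 0, 0, 0),
    '2': (1, 1, 0, 1, 1, 0, 1),
    '3': (1, 1, 1, 1, 0, 0, 1),
    '4': (0, 1, 1, 0, 0, 1, 1),
    '5': (1, 0, 1, 1, 0, 1, 1),
    '6': (1, 0, 1, 1, 1, 1, 1),
    '7': (1, 1, 1, 0, 0, 0, 0),
    '8': (1, 1, 1, 1, 1, 1, 1),
    '9': (1, 1, 1, 1, 0, 1, 1),
}

def six_digit_time(t=None):
    """Return the HHMMSS string to show across six digits."""
    t = time.localtime() if t is None else t
    return time.strftime("%H%M%S", t)

def segment_states(s):
    """Per-digit on/off segment tuples for a string of digits."""
    return [NUM[ch] for ch in s]
```

Because the time formatting and segment lookup are separated from the GPIO writes, this part can be tested on any machine, not just a Pi.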
https://raspi.tv/2015/how-to-drive-a-7-segment-display-directly-on-raspberry-pi-in-python?replytocom=60580
Convert List to Map in Java

In this tutorial, we will learn the logic for converting a List into a Map in Java. We will also implement a Java program that demonstrates the conversion: converting a List representing the job salaries of employees into a Map of key-value pairs.

What is a List? A List represents an ordered sequence of elements. List is an interface in Java's java.util package. Elements can be inserted into, traversed through, and deleted from a List. Because the elements are ordered sequentially, this data structure is called a List. Each element in a List has an index, which begins from zero; if an element has index i, the next element has index i+1, and so on. We can add any Java object to a List. If the List is declared without a specific element type (a raw List), the programmer can mix objects of different types (classes) in the same List; Java Generics let you restrict a List to a single element type. Mixing objects of different class types in the same List is not good programming practice.

What is a Map? A Map is an object that stores key-value pairs of data. A Map cannot contain duplicate keys: each key in a Map can map to at most one value. Three common Map implementations are HashMap, TreeMap, and LinkedHashMap. The basic operations of Map are put(), get(), containsKey(), containsValue(), size() and isEmpty(). In Java, a Map doesn't allow duplicate keys, but it does allow duplicate values. HashMap and LinkedHashMap allow null keys and values, but TreeMap doesn't allow a null key or value.
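The basic Map operations just listed can be seen in a minimal standalone example (this class is illustrative, not part of the tutorial's program):

```java
import java.util.HashMap;
import java.util.Map;

public class MapBasics {
    public static void main(String[] args) {
        Map<String, Integer> salaries = new HashMap<>();
        salaries.put("HR", 150000);                         // insert a key-value pair
        System.out.println(salaries.get("HR"));             // look up the value by key
        System.out.println(salaries.containsKey("HR"));     // true
        System.out.println(salaries.containsValue(150000)); // true
        System.out.println(salaries.size());                // number of entries
        System.out.println(salaries.isEmpty());             // false once populated
    }
}
```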
Following is the Java code implementation of List to Map conversion:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class jobsalaries {
    public String employee;
    public int salary;

    public jobsalaries(String employee, int salary) {
        this.employee = employee;
        this.salary = salary;
    }

    @Override
    public String toString() {
        return employee + "=" + salary;
    }

    public String getemployee() {
        return employee;
    }

    public int getsalary() {
        return salary;
    }

    public static void main(String[] args) {
        // input list of objects
        List<jobsalaries> js = new ArrayList<jobsalaries>();
        js.add(new jobsalaries("Software_Engineer", 80000));
        js.add(new jobsalaries("Business_Analyst", 120000));
        js.add(new jobsalaries("HR", 150000));

        Map<String, Integer> map = new HashMap<>();
        // construct key-value pairs from the employee and salary fields of each job
        for (jobsalaries obj : js) {
            map.put(obj.getemployee(), obj.getsalary());
        }

        System.out.println("List : " + js);
        System.out.println("Map : " + map);
    }
}

Output:

Explanation: In the above Java code, I have demonstrated converting a List representing jobs and salaries into a Map. Beginning in the main method, I declared a list of objects, js, and added a few entries to it. Then I declared a resultant Map with String keys and Integer values. Then, using a loop, the key-value pairs are formed by reading each entry in the list of jobs and salaries. The getemployee() and getsalary() getter methods return an entry's employee position and its respective salary.
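As an aside (not part of the original tutorial): on Java 8 and later, the same loop can be replaced by a stream collector. The class below is an illustrative sketch; a LinkedHashMap supplier is used so the iteration order matches the list order.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class JobSalaryStreams {
    static class Job {
        final String title;
        final int salary;
        Job(String title, int salary) { this.title = title; this.salary = salary; }
    }

    static Map<String, Integer> toMap(List<Job> jobs) {
        // Collectors.toMap builds the key-value pairs in one pass;
        // the merge function keeps the later salary if a title repeats.
        return jobs.stream().collect(Collectors.toMap(
                j -> j.title, j -> j.salary, (a, b) -> b, LinkedHashMap::new));
    }

    public static void main(String[] args) {
        List<Job> jobs = Arrays.asList(
                new Job("Software_Engineer", 80000),
                new Job("Business_Analyst", 120000),
                new Job("HR", 150000));
        System.out.println(toMap(jobs));
    }
}
```

The merge function matters because Collectors.toMap throws an IllegalStateException on duplicate keys unless one is supplied.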
https://www.codespeedy.com/convert-list-to-map-in-java/
On Fri, 23 Sep 2011, Serge E. Hallyn wrote:

> (re-sending to Cc: Greg and linux-usb)
>
> Add to the dev_state and alloc_async structures the user namespace
> corresponding to the uid and euid. Pass these to kill_pid_info_as_uid(),
> which can then implement a proper, user-namespace-aware uid check.
>
> Changelog:
> Sep 20: Per Oleg's suggestion: Instead of caching and passing user namespace,
> uid, and euid each separately, pass a struct cred.

This should be broken up into two separate patches: one to add
kill_pid_info_as_cred() and the other to modify the usbfs driver.

> --- a/drivers/usb/core/devio.c
> +++ b/drivers/usb/core/devio.c
> @@ -393,9 +395,8 @@ static void async_completed(struct urb *urb)
>  	struct dev_state *ps = as->ps;
>  	struct siginfo sinfo;
>  	struct pid *pid = NULL;
> -	uid_t uid = 0;
> -	uid_t euid = 0;
>  	u32 secid = 0;
> +	const struct cred *cred = NULL;
>  	int signr;
>
>  	spin_lock(&ps->lock);
> @@ -408,8 +409,7 @@ static void async_completed(struct urb *urb)
>  		sinfo.si_code = SI_ASYNCIO;
>  		sinfo.si_addr = as->userurb;
>  		pid = as->pid;
> -		uid = as->uid;
> -		euid = as->euid;
> +		cred = as->cred;
>  		secid = as->secid;
>  	}
>  	snoop(&urb->dev->dev, "urb complete\n");
> @@ -423,8 +423,7 @@ static void async_completed(struct urb *urb)
>  	spin_unlock(&ps->lock);
>
>  	if (signr)
> -		kill_pid_info_as_uid(sinfo.si_signo, &sinfo, pid, uid,
> -			euid, secid);
> +		kill_pid_info_as_cred(sinfo.si_signo, &sinfo, pid, cred, secid);

This continues a bug that already exists in the current code. Once
ps->lock is released, there is no guarantee that the async structure
will still exist. It may already have been freed, and the reference to
as->cred may already have been dropped. That's why the local copies
have to be made above.
cred shouldn't be a simple copy of as->cred; it should also increment
the reference count.

> @@ -706,8 +705,7 @@ static int usbdev_open(struct inode *inode, struct file *file)
>  	init_waitqueue_head(&ps->wait);
>  	ps->discsignr = 0;
>  	ps->disc_pid = get_pid(task_pid(current));
> -	ps->disc_uid = cred->uid;
> -	ps->disc_euid = cred->euid;
> +	ps->cred = get_cred(cred);

You might as well get rid of the "cred" local variable. It isn't used
for anything except this assignment.

Alan Stern
http://lkml.org/lkml/2011/9/23/188
Wouter's), but I like this one mainly because it's integrated in VS (where I spend much of my time these days). Andrew Whitechapel and the VSTO folks got a shipping vehicle for Tom's OpenXML Package Editor. Along with a number of other VSTO and VSTA Powertools, the VS-integrated Package Editor is available here. Here's a list of features you'll find in the VSTO Powertool Package Editor; Andrew has all the details on his blog.

Gary Depue from Summit Software presented a webcast last Wednesday on How to Integrate VSTA 2.0. I'm still getting up to speed here in the new team, so I missed it, but I'll be sure to go back and watch in the next couple of days. Cheers, Kevin.

BillG will be there to give a keynote, and since he's stepping down from his day-to-day tasks next year, you don't want to miss that. You can find all the details on.

Charles Torre sat down with Art Leonard, Brian Jones, Doug Mahugh and me to discuss Open XML on Channel9. It's a two-part session; Part 1 is here. Brian and Doug did most of the talking and focused broadly on the background around the file format. Art and I talked a little bit about our efforts in designing and prototyping the new Open XML SDK in Part 2, which should be up soon. Art and I had a lot of fun working on the Open XML API; I hope that shows in this video. And I welcome feedback on my whiteboard skills.

I've been quiet on the blog these past few months. I've been juggling work, a new baby (and a toddler) and some remodel plans, so extra-curricular activities like blogging have taken a back seat. But today at TechEd 2007, we released a Community Technology Preview (CTP) of a managed-code Open XML API for use with .NET Framework 3.0! We learned a lot about working with Open XML files via the Packaging API while working with Ken Getz building the code snippets last year.
With this API, we want to reduce the number of lines of code that are required to perform simple Open XML tasks. The API does this by abstracting the Packaging API and by strongly typing the individual parts of the Open XML file. By exposing parts of the package as strongly-typed objects, searching for and discovering those parts requires less code. All of the structure of the Open XML file is exposed, so you can quickly move from one part to the next within the graph and discover the part you need.

Take the WDDeleteComments() snippet. In it, we have to walk through the graph, starting at the package, find the main document part, then find the comments part, and delete it and the relationships to it from the main document part. But with the new API, the same code is reduced to this:

using (WordprocessingDocument wordDoc = WordprocessingDocument.Open(docName, true))
{
    MainDocumentPart mainPart = wordDoc.MainDocumentPart;
    mainPart.DeletePart(mainPart.CommentsPart);
}

As in the snippet, you still need to go through the XML of the MainDocumentPart and remove the references to the Comments part, but I think you'll find that working with the package structure of Open XML files is much easier.
I recorded a screencast for Channel9 last week in which I walk you through a few of the code snippets we shipped for working with Office Open XML files using .NET 3.0. I take you through three snippets:

1. How to find the officeDocument part, or "start part" - This snippet is particularly useful. The officeDocument part is related from the package and represents the start of the tree of parts (graph, really) that represents the Office document. Once you've found this part, you can start working through the relationships and parts to find what you need. This is the only snippet that works with all three file types (documents, workbooks and presentations).

UPDATE: We got the screencast posted late today. Sorry about the inconvenience. Doug was out when we noticed the bad link. His email had an OOF message pointing to another person, who had an OOF message that pointed to another person, who..... By the time I got through all the cascading OOFs, Doug had arrived at his destination and updated the link. Thanks Doug!

I mentioned that we were expecting a baby girl anytime. Well, Molly Ryan Boske arrived Thursday morning; mother and baby are fine. She weighed in at a whopping 10lbs 1oz, which reminds me of the They Might Be Giants song, She's Actual Size. Microsoft provides great benefits, and I really have to thank the company for taking such good care of us all. I'll be OOF for the rest of the year (4 weeks parental leave).

Okay, I've been out for a couple of days, taking some time after shipping Office 2007 and getting ready for another product delivery (our second child, a girl, due anytime). While I was busy getting baby stuff out of storage, the rest of the company kept busy shipping great products for 2007.
Here's the rollup with links of everything that came out over the past couple of days:

Visual Studio 2005 Tools for Office Second Edition. VSTO 2005 SE is a fully-supported free add-on to Visual Studio 2005 that enables developers to build applications on the 2007 Office System. This is really great; the folks in VSTO stepped up and got this Second Edition out just in time for Office 2007. Here are some highlights of the functionality. Go get it and start building some great CTP solutions!

.NET Framework 3.0. In this released build of .NET 3.0, you will find the System.IO.Packaging namespace in the windowsbase.dll assembly. This is Microsoft's offering to enable developers to write applications against the Office Open XML file formats.

Windows Vista. Yes, you heard it right, Vista has shipped; it's finally here.

We signed off on the final build of Office 2007!
http://blogs.msdn.com/kevinboske/Default.aspx
Let me ask something. Rule: Don't use a variable that has a side effect applied to it more than once in a given statement. It's clear. Let's talk a bit about those two examples of "risky" code: is the first one "risky", while using the second one instead may be a good solution? And likewise, is one less trustworthy than the other? Am I right?

In the latter case, yes. But in the former case, if you're expecting x to be incremented, your substitution won't do that. The replacement would be:

Hi Alex, thanks, I've learnt one new thing today. Thanks!

I believe you meant to use "versions" in the following: "There are actually two version of each operator"

Maybe you were trying to tease on the next one and I'm too literal to get it, but you used "any" twice in: "These problems can generally all be avoided by ensuring that any any variable that has a side-effect applied is used no more than once in a given statement." tx!

Typos fixed! Thanks!

Hi Alex, I am confused about this section below; I just can't seem to get my head around it. Probably I'm completely misunderstanding it… but wouldn't the ++ be applied to x before the assignment, due to the precedence of ++ being higher than =? Would you please be able to explain what the compiler is doing in each situation, i.e. when ++ is applied to x before or after the assignment? Hope this makes sense. Cheers! This is the part I'm confused about:

What value does this program print? The answer is: it's undefined. If the ++ is applied to x before the assignment, the answer will be 1. If the ++ is applied to x after the assignment, the answer will be 2.

Operator++ does have higher precedence than operator=, which is why it's evaluated first. So first, x++ evaluates to 1.
At this point, one of two things can happen:

* x gets incremented, then the value of 1 from the evaluation of x++ is assigned to x (result: x=1)
* the value of 1 from the evaluation of x++ is assigned to x, then x gets incremented (result: x=2)

You don't really need to understand this to a high degree -- just know that you should never apply more than one side effect to a given variable in a single statement.

I am confused about the same part. "If the ++ is applied to x before the assignment, the answer will be 1. If the ++ is applied to x after the assignment, the answer will be 2."

int main()
{
    int x = 1;
    x = x++;
    std::cout << x;
    return 0;
}

I don't know if the example needs to be modified. If yes, could you please write out the new statements for "++ is applied to x before/after the assignment"? Thanks

The example doesn't need to be modified (it's a fine example of a statement you should never write). All you really need to know is that using a variable with side effects applied more than once in a given statement may lead to undefined results, because different compilers may interpret these differently.

The GNU C++ compiler issues the following warning:

main.cpp:35:27: warning: operation on 'x' may be undefined [-Wsequence-point]
    int value = add(x, ++x); // is this 5 + 6, or 6 + 6? It depends on what order your compiler evaluates the function arguments in

So you can't say you're not warned!

I don't get the postfix increment/decrement operators. From my understanding, x++ == x and x-- == x. What's the point?

x++ returns a copy of x, and then increments x. For example, given int x = 5; the statements std::cout << x++; std::cout << x; print 56. So this is useful when you want to increment x after using it.

Confusion in the evaluation sequence of the increment and decrement operators is one of the major causes of bugs in my programs. Hence, I started using brackets like (var++) and (++var).

Using a variable with side effects more than once in a given statement results in undefined behavior. The compiler could return 4, 5, or 6.
What's most likely happening in your case is that i++ returns its value first, and then increments i: 2 + 2 * i++ = 2 + 2 * 1 = 4, and 4 gets assigned to i. Then the ++ takes effect and increments i, to get 5.

int i = 1;
i = 2 + 2 * i++;

The answer is 5. Why??? It should be 6 or 4.

Great job Alex with these tutorials, although I didn't fully understand what a side effect was from reading this sub-chapter. So I decided to do a bit of research to help clarify what a side effect is. I've put in what I've collected below. I know one example below uses the function printf() (which is used in the C language) instead of cout, but as C++ supersedes C, this function also works in C++ and is still relevant. Here's the information I've found below:

A computer program stores data in variables, which represent storage locations in the computer's memory. The contents of these memory locations, at any given point in the program's execution, is called the program's state. Any operation which modifies the state of the computer or which interacts with the outside world is said to have a side effect. See Wikipedia on Side Effect.

For example, this function has no side effects. Its result depends only on its input arguments, and nothing about the state of the program or its environment changes when it is called:

In contrast, calling these functions will give you different results depending upon the order in which you call them, because they change something about the state of the computer:

This function has the side effect of writing data to output. You don't call the function because you want its return value; you call it because you want the effect that it has on the "outside world":
I just thought I’d mention that an evaluation of expression does not have a side effect if it does not change an observable state of the machine, and produces same values for same input. It also should be noted though that side effects can be much more than just modifications of the operands, it can also involve modifications of other objects, global data, I/O operations etc. Basically, anything that makes changes besides the return-value is a side effect. References: //Here’s another explanation for side effects complementary to the explanations I’ve mentioned thus far: What exactly is a ‘side-effect’ in C++? Is it a standard term which is well defined? c++11 draft - 1.9.12: Accessing an object designated by a volatile glvalue (3.10),. What is the significance of a ‘side effect’? The significance is that, as expressions are being evaluated they can modify the program state and/or perform I/O. Expressions are allowed in myriad places in C++: variable assignments, if/else/while conditions, for loop setup/test/modify steps, function parameters etc…. A couple examples: ++x and strcat(buffer, "append this"). In a C++ program, the Standard grants the optimiser the right to generate code representing the program operations, but requires that all the operations associated with steps before a sequence point appear before any operations related to steps after the sequence point. The reason C++ programmers tend to have to care about sequence points and side effects is that there aren’t as many sequence points as you might expect. For example, given you may expect a call to f(2, 3) but it’s actually undefined behaviour. This behaviour is left undefined so the compiler’s optimiser has more freedom to arrange operations with side effects to run in the most efficient order possible - perhaps even in parallel. It also avoid burdening compiler writers with detecting such conditions. 1. Is the comma operator free from side effects? 
Yes - a comma operator introduces a sequence point: the steps on the left must be complete before those on the right execute. There is a list of sequence points you should read! (If you have to ask about side effects, then be careful in interpreting this answer - the "comma operator" is NOT invoked between function arguments, array initialisation elements etc.) The comma operator is relatively rarely used and somewhat obscure.

2. Side effects when passing objects to a function in C++

When calling a function, all the parameters must have been completely evaluated - and their side effects triggered - before the function call takes place. BUT, there are no restrictions on the compiler related to evaluating specific parameter expressions before any other. They can be overlapping, in parallel etc. So, in f(expr1, expr2), some of the steps in evaluating expr2 might run before anything from expr1, but expr1 might still complete first - it’s undefined.

//One more explanation:

The term "side-effect" arises from the distinction between imperative languages and pure functional languages. A C++ expression can do three things:

1. compute a result (or compute "no result" in the case of a void expression),
2. raise an exception instead of evaluating to a result,
3. in addition to 1 or 2, otherwise alter the state of the abstract machine on which the program is nominally running.

(3) covers the side-effects, the "main effect" being to evaluate the result of the expression. Exceptions are a slightly awkward special case, in that altering the flow of control does change the state of the abstract machine (by changing the current point of execution), but isn’t a side-effect. The code to construct, handle and destroy the exception may have its own side-effects, of course.

The same principles apply to functions, with the return value in place of the result of the expression. So, this one just computes a return value; it doesn’t alter anything else.
Therefore it has no side-effects, which is sometimes an interesting property of a function when it comes to reasoning about your program (e.g. to prove that it is correct, or for the compiler when it optimizes). The following function does have a side-effect, since modifying the caller’s object "a" is an additional effect of the function beyond simply computing a return value. It would not be permitted in a pure functional language.

Alex, your definition of a side effect is: "a side effect is a result of an operator, expression, statement, or function that persists even after the operator, expression, statement, or function has finished being evaluated." The stuff in your quote about "has finished being evaluated" refers to the fact that the result of an expression (or return value of a function) can be a "temporary object", which is destroyed at the end of the full expression in which it occurs. So creating a temporary isn’t a "side-effect" by that definition; other changes are. References:

Thanks for the info and correction. I’ve updated the section on side effects slightly to make it more accurate and hopefully a little more comprehensible.

You’re welcome. It’s more accurate now.

Thank you for updating this! 🙂

@Alex I guess "… one of the parameters to x has a side effect." should be "… one of the parameters to add() has a side effect."

Yup, fixed. Thanks!

Please Alex, I’m looking forward to reading your answer to my question. I think "Tormentor" asked a question the same day as me and possibly you didn’t realize it. Thanks for learncpp.com, it is an incredibly well explained guide.

Yup, I just missed your comment. It’s answered now. Sorry!

What does this mean? Also, what is it used for? It was in my school assignment.

This increments the value of local variable g by the value of global variable g. We discuss the meaning of :: in lesson 4.3a -- Namespaces.

I don’t know whether to comment on this here or in the functions page. Using Visual Studio 2015 under Windows 7 x64.
The first line couts: X:3, Y:2, Z:1. Result of the side effect: 6. The second one couts: X:3, Y:3, Z:3. Result of the side effect: 9. But what I really didn’t expect was X:3 and Z:1. Why isn’t it the opposite (X=1, Y=2, Z=3)? Isn’t that the order of the call (the order the arguments are sent to the function)? Is this issue about how the function takes its arguments, or about how "++" works internally?

C++ does not guarantee whether function parameters will be evaluated from left to right or right to left. Furthermore, using a variable with side effects more than once in a single expression will yield indeterminate results.

Can someone explain why the output of the following is 11 and not 10? I was expecting the temporary value 10 to be assigned to ‘a’ after ‘a’ has been incremented from 10 to 11.

Don’t do this. You’re breaking the rule: Don’t use a variable that has a side effect applied to it more than once in a given statement. The statement a = a++ results in undefined behavior, and different compilers may produce different results (some will say the answer is 10, others 11).

a = a++

"Rule: Favor pre-increment and pre-decrement over post-increment and post-decrement. The prefix versions are not only more performant, you’re less likely to run into strange issues with them."

No wonder I keep running into strange issues. From now on I will refer to this language as ++C.

Hi Alex, So… if I’ve understood you correctly, would the following snippet be considered acceptable programming?

#include <iostream>
using namespace std; // "i’m a lazy typist."

int add(int x, int y){
    return x + y;
}

int main(){
    int x(5);
    int z(5);
    int value = add(x, ++z); /* or z++, however prefer preincrement/predecrement */
    cout << value;
    return 0;
}

I assume this will produce the result of 11 on all compilers. The point being, we’re avoiding reassigning a value to ‘x’ in the same statement, as in "add(x, ++x)", which gives: warning: unsequenced modification and access to ‘x’, as reported on my phone app, CCTools.
In comparison, your example in this lesson refused even to compile, hence your warning. I also use Code::Blocks on a laptop for comparison; it’s less fussy. Sorry if this text seems verbose. Thanks again Alex.

Yes, this is fine.

Hey Alex! Something is wrong. I am unable to write xminusminus (post decrement of a variable ‘x’). When I type xminusminus and look back at my posted comment, I only see a single minus after x (something like ‘x-’). That is not a big problem and you can ignore it. I am only trying to point out that you made a mistake in the table above. Sorry for this extra comment. It is okay when I use the [code] tag.

Take a look at the last row of the table (the table showing the two versions of the increment/decrement operators):

"Postfix decrement (pre-decrement)" | -- | x-- | Evaluate x, then decrement x

should be:

"Postfix decrement (post-decrement)" | -- | x-- | Evaluate x, then decrement x

One question. I’m not very good at math. Can I still be a good programmer? How is math related to programming?

What browser are you using? It looks okay to me on Chrome, Firefox, and IE. As for programming and math, it depends on what kind of programming you want to do. If you’re going to do complicated stuff with algorithms, 3d graphics, probability, or simulations, then yes, having a good math background is important. If you’re going to do business programming, or functionality for web pages, or user interfaces, then you probably won’t need much math. What’s more important is being good at logic.

I’m using Chrome. Thanks.

Looks like you didn’t notice what I said. Alex, I was talking about the table (the post-decrement, pre-decrement table). You made a typo, I think, in the last row of the table. Postfix-decrement (pre-decrement) should be: Postfix-decrement (post-decrement). Hope it is clear now.

Ahh, I see what you mean. All fixed now. Thanks!

Can anyone explain why the output is 11 9 11?

Honestly, I’m not sure.
operator<< evaluates from left to right, so I'd expect the following to happen:

a = 10
++a = 11, which gets printed.
a++ means 11 gets printed, and a is incremented to 12.
--a = 11, which gets printed.

So I'd expect 11 11 11. But Visual Studio 2010 and 2013 print 11 9 11, and Visual Studio 2008 prints 10 9 11. So even different compilers don't agree. It's possible that the compilers just aren't handling this case correctly. Which is more evidence not to use a variable that has a side effect applied to it more than once in a given expression.

Thanks, Alex.

Hi Alex! The program you have given above is displaying a blank output; what could be the possible problem? Please help!

This program doesn’t print anything, so there’s no output. I updated the sample program to print value, so you can see whether it equals 11 or 12 on your compiler.

Hi Alex, I am confused. Because "++" has higher precedence than "=", I think first x++, and then y = (x++), so y is 6. Please correct me if I am wrong!! Thanks.

Great question, and yes, you are wrong. 🙂 As you correctly note, ++ has higher precedence than =, so x++ evaluates first. What happens when x++ evaluates? The compiler makes a temporary copy of x’s value (5). Then the actual variable x is incremented to 6. Then the value of the copy of x (5) is used for evaluation. Therefore, x++ evaluates to 5, even though the variable x is now set to 6! And thus “int y = x++” becomes “int y = 5”, and thus y is assigned the value of 5. You can validate that this is true by putting this line in your compiler and then printing the value of x and y afterward.

Hey, this code gives the output 14.
int t = 1;
t = ++t + ++t + t++ + t++;

That is, t = 2+3, and with the post-increment operator, 3+4:

t = 2+3 + 3+4;
t = 3+3 + 4+4;
t = 6 + 8;
t = 14;

If the number of pre-increments is greater than the number of post-increments, then the value used for the post-increment operands is the biggest value reached by the pre-increments. For example:

t = 1;
t = ++t + ++t + ++t + t++ + t++;

The output is 20, because:

t = 2+3+4 + 4+4;
t = 4+4+4 + 4+4;
t = 20;

It’s really confusing. How come t = 3+3 + 4+4?

Given int t = 1;, since cout << ++t + ++t + t++ + t++; outputs 12 (as in 3 + 3 + 3 + 3), but t = ++t + ++t + t++ + t++; cout << t; outputs 14, I think what happens is:

1. t is declared to be 1.
2. t is preincremented. t is now 2.
3. t is preincremented. t is now 3.
4. t = t + t + t + t is evaluated. From 3 + 3 + 3 + 3, t is now 12.
5. t is postincremented. t is now 13.
6. t is postincremented. t is now 14.
7. t is printed.

int t = 1;
cout << ++t + ++t + t++ + t++;
t = ++t + ++t + t++ + t++;
cout << t;

Actually I’m probably wrong.

//But wait! There's more!

int t = 1;
cout << t++ + t++ + t++ + t++ << endl;
cout << t << endl << endl;
// Should evaluate 1 + 1 + 1 + 1 as 4,
// then output 4,
// then increment t four times to 5,
// then output t as 5.
// Actually outputs 4 and 5 OK.

t = 1;
t = t++ + t++ + t++ + t++;
cout << t << endl << endl;
// Should evaluate 1 + 1 + 1 + 1 as 4,
// then set t to 4,
// then increment t four times to 8,
// then output t as 8.
// Actually outputs 8 OK.

t = 1;
cout << ++t + ++t + ++t + ++t << endl;
cout << t << endl << endl;
// Should increment t four times to 5,
// then evaluate 5 + 5 + 5 + 5 as 20,
// then output 20,
// then output t as 5.
// Actually outputs 15 and 5 WTF?

t = 1;
t = ++t + ++t + ++t + ++t;
cout << t << endl;
// Should increment t four times to 5,
// then evaluate 5 + 5 + 5 + 5 as 20,
// then set t to 20,
// then output t as 20.
// Actually outputs 15 WTF?

Seems like your compiler may not be doing something quite correctly. It should evaluate as 20 and 5, and it does in Visual Studio 2008.
That’s one of the many problems with including multiple expressions with side effects in a single statement. Even if the C++ language guarantees that it will evaluate a certain way (and it may not), your compiler may not do it correctly.

int t;
t = ++t + ++t + t++ + t++;
printf("%d", t); // result = 14. How??

Hi all,

int i = 2;
i = ++i * i++ * i++;          // the answer I am getting is 29
n = ++i * i++ * i++;          // the answer I am getting is 27
printf("%d", ++i * i++ * i++); // the answer I am getting is 36

For the same expression I am getting different values. Please sort out how the compiler is going to evaluate this. Please reply…

printf for some reason seems to evaluate the i++ once it’s done, so it’s 3 (you incremented it before using it - ++i) * 3 * 4. The second case (n=) simply evaluates the i++, and since the other additions are done only after the statement is done, you get 3*3*3. The first case is the same as the second one, except it adds the +2 gained from the increments (i++) before.

Hi all,

int i = 2;
i = ++i * i++ * i++;          // the answer I am getting is 29
n = ++i + i++ * i++;          // the answer I am getting is 27
printf("%d", ++i + i++ * i++); // the answer I am getting is 36

Hi, I teach C++ to 12th graders. I encountered this problem in the lab: for the statement I would expect the value 13, but my students got 14 as the output. Any pointers? What is funny is that the following code gives the value 13; the only difference is in the way x is initialized. I am baffled!!!

The answer to the second code snippet is also 14. Remember that operator precedence requires that ++ be evaluated before + and =. So what happens is that ++y is evaluated twice, with y becoming 6 then 7. The addition 7+7 is then performed before finally assigning the result 14 to x.

I fear I also got two different answers, using the Turbo C IDE. Now, the answer might lie in the order of precedence, though I cannot satisfactorily explain it. Here, according to the table Alex gave, pre-increment and dynamic initialization fall in the third box, with evaluation going right to left.
Therefore in the second scenario, the case might be (going left to right):

int x (this evaluates third, giving x=13) = ++y (this evaluates second, giving y=7) + ++y (this evaluates first, giving y=6);

In the first case there is no dynamic initialization, and hence x is evaluated pretty much the same way:

x = ++y + ++y;

Both ++y, having higher precedence than any other operand in the statement, are evaluated to 7 apiece, giving the answer 14. Though I think this can explain it, I would like confirmation… I will post it on the forums sometime and see if anyone agrees with me, or provides a better explanation.

Hello buddy, don’t panic, your students are right. Try to understand this:

int y = 5, x;
x = (++y) + (++y); // x = 6+7 = 13?

When the compiler encounters the first expression ++y, it increments the value of y to 6; now y is 6. When it encounters the other ++y in the statement, the value of y becomes 7. Notice that the compiler now assigns the value 7 to y and treats all occurrences of y in the statement as 7 before assigning the value to x. So the incremented (or say highest) value of y becomes the value of y for the compiler before assigning to x, and the expression becomes:

x = 7 + 7;
x = 14;

Or consider another example for better understanding:

x = (++y) + (++y) + (++y);
x = 6+7+8; // the incremented or highest value of y becomes the value of y before assigning to x, so it becomes:
x = 8+8+8;
x = 32;

Hope you can understand now. Thanks.

It’s as technorat said. Consider this expression:

x = (a++) + (a++);

Now what happens is it takes the value of a, adds the value of a to it, and then does the incrementing (if a = 5, then x will be 10, and a will be 7 in the end). It’s the same thing when having ++ in front. For example:

x = (++a) + (++a);

It takes the value, increments it, then it sees another ++ in the statement, so it increments ‘a’ again, and then it does the adding, meaning that if a was 5, then x will be 14, and a will be 7 in the end.
Also - you don’t really need the brackets in this case, as ++ has higher priority than the standard + sign (though it helps the readability).

Oh no, complicated. I need to learn the easiest way.

Hi Alex! I tried to test the precedence of the ++ operator, both prefix and postfix. The code I typed in is: Why is ++x++ illegal? Thanks for the nice tutorial.

++x++ evaluates as ++(x++) due to post-increment having a higher precedence than pre-increment. Note that both versions of ++ need to operate on an lvalue (a variable that has an address). With pre-increment, because the increment is done before the value is evaluated, the value is returned as an lvalue. However, with post-increment, the value gets evaluated before the increment, so the value is returned as an rvalue. So in your expression, x++ is returning an rvalue, and pre-increment can’t work with an rvalue, so the compiler throws an error. Note that if you write the expression as (++x)++, then it does work, because (++x) returns an lvalue that can be operated on by post-increment.

Thanks for explaining. I was struggling to understand this: ++x++.

It says above that… “C++ does not define the order in which function parameters are evaluated.” But in the section “Precedence and associativity” it shows that () “Parenthesis”, () “Function calls”, () “Implicit assignments”, and , “comma” have an associativity of left to right… What is the difference?

No, what he said is that function parameters could be done left to right, or right to left. E.g. it’s not defined whether you do a + b or b + a (which makes no difference in this case, but with increment/decrement operators it could get messy).

I can see how this would be confusing. The comma used inside a function call parameter list to separate the parameters isn’t the comma operator. So while the comma operator does have a left to right association, the order that parameters inside a function call are processed could be either direction.
CAN YOU EXPLAIN THE BELOW ASSIGNMENT OPERATOR, PLEASE:

INT SUM, CTR;
SUM = 12;
CTR = 4;
SUM = SUM + (++CTR) THE VALUE IS 15 ….. HOW?
SUM = SUM + (-CTR)
SUM = SUM + (CTR++) THE VALUE IS 16 ….. HOW?
SUM = SUM + (CTR-) THE VALUE IS 16 ….. HOW?

Thanks in advance.

I tried to evaluate the first expression and I get the expected result, SUM = 17. I have not tried the other examples but assume I would get the expected results as well. No idea how you end up with your results?!

At face value this looks strange… I will go over them in order.

1. This one should evaluate to 17, like Peter P said. There is no way it can evaluate to 15, since it will become: (SUM = 12 + 5).
2. Again, something is wrong. This one should be (I think) SUM = 12 + (-4) = 8.
3. This one should be correct, as the postfix ++ will increment CTR AFTER the expression is evaluated (I think).
4. I have never seen a single - postfixed like this; I don’t believe it should even compile.

Use your compiler to try and compile each statement, and see what happens. I have just written my observations here; I did not compile the code.

You have written the statement for the postfix increment as: "The postfix increment/decrement operators are a little more tricky. The compiler makes a temporary copy of x, increments x, and then evaluates the temporary copy of x." But the compiler should first evaluate the temporary copy of x, and then it should increment x. Please correct me if I am wrong!! Thanks, Som Shekhar.

I believe what I wrote is correct. If you find something that seems to indicate otherwise, let me know.

I don’t know the internals of the C++ compilers, but I can see how this is confusing. May I suggest that, unless you know the compiler actually does it in that order, you change it to: "The compiler makes a temporary copy of x, evaluates the temporary copy of x, and then increments x." This will make it clearer that the increment of x does not affect the copy of x.
If I’m not mistaken, Mr. Shekhar understood the copy as a pointer-like thing, whereas I assume you meant a complete clone of the memory for x.

Another suggestion: what about motivating people to use the debugger for code like this? This seems like the perfect place to get some simple and safe experience, where the debugger may make it more clear what actually happens. I guess the challenge here is what debuggers people will use though :/ but yeah, just a thought 🙂

You should remember it’s a tutorial for beginners. If you tell them about the internal functioning of the compiler when they’ve just learned something new, they won’t focus on anything and will get lost.

I’m okay with discussion of related, more advanced topics in the comments section. Users who are curious can gain additional knowledge. Users who do not have the foundational knowledge to understand the comments can simply skip them without worrying about whether that will impact their understanding of future tutorial material.

The most reliable reference materials I’ve found indicate that the increment of the original object happens before the evaluation of the copy.
http://www.learncpp.com/cpp-tutorial/33-incrementdecrement-operators-and-side-effects/comment-page-1/
> So which of xor-popcount and add-up-trailing-zero-counts is faster may well depend on platform.

I ran some timings for comb(k, 67) on my macOS / Intel MacBook Pro, using timeit to time calls to a function that looked like this:

def f(comb):
    for k in range(68):
        for _ in range(256):
            comb(k, 67)
            comb(k, 67)
            ...  # 64 repetitions of comb(k, 67) in all

Based on 200 timings of this script with each of the popcount approach and the uint8_t-table-of-trailing-zero-counts approach (interleaved), the popcount approach won, but just barely, at around 1.3% faster. The result was statistically significant (SciPy gave me a result of Ttest_indResult(statistic=19.929941828072433, pvalue=8.570975609117687e-62)).

Interestingly, the default build on macOS/Intel is _not_ using the dedicated POPCNT instruction that arrived with the Nehalem architecture, presumably because it wants to produce builds that will still be useable on pre-Nehalem machines. It uses Clang's __builtin_popcount, but that gets translated to the same SIMD-within-a-register approach that we have already in pycore_bitutils.h. If I recompile with -msse4.2, then the POPCNT instruction *is* used, and I get an even more marginal improvement: a 1.7% speedup over the lookup-table-based version.
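For context on what the two counting strategies compute (my sketch, not from the report): the exponent of 2 in comb(n, k) can be obtained either from popcounts, via Legendre's formula (the exponent of 2 in m! is m - popcount(m), so the binomial's exponent telescopes), or by tallying factors of 2 directly; the two must agree.

```python
from math import comb

def nu2_popcount(n, k):
    # Exponent of 2 in comb(n, k) via Legendre's formula:
    # v2(m!) = m - popcount(m), which telescopes to
    # popcount(k) + popcount(n - k) - popcount(n).
    pc = lambda m: bin(m).count("1")
    return pc(k) + pc(n - k) - pc(n)

def nu2_direct(n, k):
    # Count factors of 2 in the computed value directly.
    c, e = comb(n, k), 0
    while c % 2 == 0:
        c //= 2
        e += 1
    return e

# The two strategies agree across the benchmarked range of k.
assert all(nu2_popcount(67, k) == nu2_direct(67, k) for k in range(68))
print("ok")
```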
https://bugs.python.org/msg409360
Emilio G. Cota <address@hidden> writes:

> On Tue, Dec 04, 2018 at 13:52:16 +0000, Alex Bennée wrote:
>> > We could always
>> >
>> > #ifdef __FAST_MATH__
>> > #error "Silliness like this will get you nowhere"
>> > #endif
>>
>> Emilio, are you happy to add that guard with a suitable pithy comment?
>
> Isn't it better to just disable hardfloat then?
>
> --- a/fpu/softfloat.c
> +++ b/fpu/softfloat.c
> @@ -220,7 +220,7 @@ GEN_INPUT_FLUSH3(float64_input_flush3, float64)
>   * the use of hardfloat, since hardfloat relies on the inexact flag being
>   * already set.
>   */
> -#if defined(TARGET_PPC)
> +#if defined(TARGET_PPC) || defined(__FAST_MATH__)
>  # define QEMU_NO_HARDFLOAT 1
>  # define QEMU_SOFTFLOAT_ATTR QEMU_FLATTEN
>  #else
>
> Or perhaps disable it, as well as issue a #warning?

Issuing the warning is only to tell the user they are being stupid but yeah certainly disable. Maybe we'll be around when someone comes asking why maths didn't get faster ;-)

> E.

--
Alex Bennée
https://lists.gnu.org/archive/html/qemu-devel/2018-12/msg00630.html
From: o.r.c@p.e.l.l.p.o.r.t.l.a.n.d.o.r.u.s (david parsons)
Date: 17 Jun 1999 17:41:19 -0700

> The namespace for /dev was dictated a long time ago, and you vary from that namespace at your own risk.

I think herein lies some of the confusion about devfs vs. procfs, et al. The files in /dev fall into two categories.

Some were indeed dictated a long time ago, and indeed you can't change them without causing all sorts of problems with programs and shell scripts which have these names hard-coded into them. Examples of such names include /dev/tty, /dev/null, and so on. I would argue that procfs falls in the same category, since most (but certainly not all) of the usage of the proc filesystem, especially the non-process related aspects of the proc filesystem, is by programs. (For example: free, ps, etc.)

There is, however, a second class of files which live in /dev, and for those files the names are really up to the system administrator, and are a matter between the whims of the system administrator and the various configuration files which list those devices: /etc/inittab, /etc/fstab, etc. Examples of such device names include names for disk drives, tapes, cd-roms, ttys, and so on. Indeed, this is why devfs can use device names such as /dev/dsk/... instead of /dev/hda1 without causing all hell to break loose.

While there are some naming conventions which most people subscribe to for this second class of device files/names in /dev, they are by no means fixed. And it's for this class of device names that I believe hardcoding a new naming convention into the kernel is an especially bad idea.

And so again, *yes*, devfs is different from procfs. One can be in favour of the latter without being in favour of the former, and this is why.

- Ted
https://lkml.org/lkml/1999/6/18/233
Created on 2012-01-19 12:25 by David.Layton, last changed 2014-04-01 07:36 by paul.j3.

argparse.FileType.__call__ opens the specified file and returns it. This is well documented as an anti-idiom. Disregarding the above, handling a file which may or may not have been opened, depending on the user's input, requires a bit of boilerplate code compared to the usual with-open idiom. Additionally, there is no way to prevent FileType from clobbering an existing file when used with write mode. Given these issues and others, it seems to me that the usefulness of FileType is outweighed by its propensity to encourage bad coding. Perhaps it would be best if FileType (or some replacement) simply checked that the file exists (when such a check is appropriate) and that it can be opened in the specified mode, and curried the call to open (i.e. return lambda: open(string, self._mode, self._bufsize)).

> Additionally, there is no way to prevent FileType from clobbering an existing file
> when used with write mode.

I think this deserves its own feature request: now that Python 3.3 has an “exclusive create” mode, argparse.FileType could gain support for it. Would you open that request?

> Given these issues and others,

We have one issue and one missing feature; what are the other issues?

> it seems to me that the usefulness of FileType is outweighed.

In packaging/distutils2, for example, we have similar functions that return an open file object and never close it: the responsibility is at a higher level. Other packaging code calling these functions does so in a with statement. It is not evil by nature. The problem here is that FileType may return stdin or stdout, so we can’t just always close the file object (or maybe we can, say using an atexit handler?).

> Perhaps, it would be best if FileType (or some replacement) simply checked that the file exists

But then you’d run into race conditions. The only sure was to say if a file can be opened is to open it.
s/sure was/sure way/

Eric,

> checked that the file exists
> But then you’d run into race conditions. The only sure way to say if a file can be opened is to open it.

I think you misunderstand me. I am NOT suggesting that you open and close the file. I am saying that you should not open it in the first place. If I cannot open the file at the point in my program where I actually want to open it, then fine, I can decide, in my own code, what to do. Causing a problem on some OSes and not others is worse than causing a problem on all OSes, as it increases the likelihood of buggy code passing tests and moving to production.

I think argparse is wonderful; I just think that by having FileType not open the file, the number of its use cases is increased. As it stands now, I would prefer to just pass the argument as a string and handle the opening myself unless:

1. I wanted my program to open the file at the very beginning of the program (where one traditionally handles arg parsing)
2. I wanted to exit on the first, and only, attempt to open the file
3. I really did not care if the file closed properly -- which, granted, is often the case with tiny scripts

The moment any of these is not true due to a change in requirements, I will have to refactor my code to use a filename arg. Whereas if I start out with a bog-standard filename and open it myself, I can easily add the behaviour I want. I just don't see FileType as a big convenience. However, I do see that changing this would break backwards compatibility and would not want to see that happen. Perhaps a new FileNameType that does some basic, perhaps optional, checks would have wider use cases. I hope this helps.
David Layton

On Fri, Feb 3, 2012 at 2:37 PM, Éric Araujo <report@bugs.python.org> wrote:
> Éric Araujo <merwok@netwok.org> added the comment:
> s/sure was/sure way/

So I generally agree that FileType is not what you want for anything but quick scripts that can afford to either leave a file open or to close stdin and/or stdout. However, quick scripts are an important use case for argparse, so I don't think we should get rid of FileType. What should definitely happen:

* Someone should add some documentation explaining the risks of using FileType (e.g. forgetting to close the file, or closing stdin/stdout when you didn't mean to).

What could potentially happen if someone really wanted it:

* Someone could create a "safe" replacement, e.g. a FileOpenerType that returns an object with open() and close() methods that do the right things (or whatever API makes sense).

In this patch I implement a FileContext class. It differs from FileType in 2 key areas:

- it returns a 'partial(open, filename, ...)'
- it wraps '-' (stdin/out) in a dummy context protecting the file from closure.

The resulting argument is meant to be used as:

with args.input() as f:
    f.read()  # etc.

The file is not opened until the value is called, and it will be closed after the block. stdin/out can also be used in this context, but without closing. The signature for this class is the same as for FileType, with one added parameter, 'style' (alternative name suggestions welcomed):

class argparse.FileContext(mode='r', bufsize=-1, encoding=None, errors=None, style='delayed')

The default behavior, "style='delayed'", is as described above. "style='evaluate'" immediately calls the partial, returning an opened file. This is essentially the same as FileType, except for the stdin/out context wrapping.
"style='osaccess'", adds os.acccess testing to determine whether the 'delayed' file can be read or written. It attempts to catch the same sort of OS errors that FileType would, but without actually opening or creating the file. Most of the added tests in test_argparse.py copy the FileType tests. I had to make some modifications to the testing framework to handle the added levels of indirection. I have not written the documentation changes yet. A sample use case is: import argparse, sys p = argparse.ArgumentParser() p.add_argument('-d','--delayed', type=argparse.FileContext('r')) p.add_argument('-e','--evaluated', type=argparse.FileContext('r', style='evaluate')) p.add_argument('-t','--test', dest='delayed', type=argparse.FileContext('r', style='osaccess')) p.add_argument('-o','--output', type=argparse.FileContext('w', style='osaccess'), default='-') p.add_argument('--unused', type=argparse.FileContext('w', style='osaccess'),help='unused write file') args = p.parse_args() with args.output() as o: if args.delayed: with args.delayed() as f: print(f.read(), file=o) if args.evaluated: with args.evaluated as f: print(f.read(), file=o) # f and o will be closed if regular files # but not if stdin/out # the file referenced by args.unused will not be created An alternative way to delay the file opening, is to return an instance that has a `filename` attribute, and has an `open` method. This can be compactly added to the `FileContext` that I defined in the previous patch. The `FileContext` provides the `_ostest` function (testing using os.access), and wrapper for stdin/out. 
    class FileOpener(argparse.FileContext):
        # delayed FileType; alt to use of partial()
        # sample use:
        #     with args.input.open() as f: f.read()
        def __call__(self, string):
            string = self._ostest(string)
            self.filename = string
            return self
        def open(self):
            return self.__delay_call__(self.filename)()
        file = property(open, None, None, 'open file property')

From "argparse.FileType for '-' doesn't work for a mode of 'rb'" I learned that stdin/out can be embedded in an 'open' by using 'fileno()' and 'closefd=False' (so it isn't closed at the end of open). With this, the dummy file context that I implemented in the previous patch could be replaced with:

    partial(open, sys.stdin.fileno(), mode=self._mode, ..., closefd=False)

However, as Steven Bethard wrote in the earlier issue, setting up tests when stdin/out will be redirected is not a trivial problem, so I don't yet have a testable patch.

---------------

And just for the record, the 'osaccess' testing that I wrote earlier probably should also test that the proposed output file string is not already a directory.
https://bugs.python.org/issue13824
Network assignment HELP NEEDED

jambeard lewistrix (Greenhorn, Joined: Aug 16, 2005, Posts: 14) posted Aug 16, 2005 03:22:00

Hi, I have an assignment due in a couple of weeks and really need some help with it. I have started it already and all the code I have written is below. I'm very new to Java and without the help of my lecturers I am finding this assignment quite hard, so ANY help would be appreciated. This is the assignment:

The assignment is about delivering an on-line (airline JavaJet) booking system. The customer should be able to request a departures time table (2 planes leave per day), make reservations, cancel reservations (only the client who has made the reservation), buy tickets, and enquire about free places on a flight. The system will make use of datagram socket technology and will consist of a server and multiple clients. You are to implement a simple information server and several client objects. You need not worry about producing a graphical user interface. (If you do it will be worth extra marks.)

The functionality of the Client is that it should:
- Be able to connect to the server and register its IP address as a valid user, along with the name of the user, in order to join the service.
- Be able to remove the user from the service (e.g. when disconnecting, to execute a log-off command).
- Be able to send a message to the server specifying the information required: request the departures time table per day (2 planes leave per day), make a reservation, cancel a reservation, buy tickets, enquire about free places on a flight.
- Be able to asynchronously receive and display messages from the information server that a plane has just become fully booked. The second urgent message is that a fully booked plane has just had a cancellation and now has seats available (when somebody else cancels their booking). This requirement can only be properly achieved through use of multithreading.

The functionality of the Server is that it should:
- Keep a record (in some form) of all on-line registered participants and their addresses.
- Be able to accept new participant registrations and add them to the active participants list.
- Be able to accept requests from a client to remove itself from its records (log-off only from the computer it registered from first).
- Read the pre-prepared airline time table from a text file consisting of multiple lines with the following format: JavaJet <flight Number> <day.month.year> <hh.mm> <total number of seats> <reserved seats> <seats available>
- Receive messages and perform the actions required (book, un-book, sell tickets, send timetable information to the client).
- Handle multiple client connections concurrently.
- Correctly maintain the list of seats available in the booking system.

At the moment my code doesn't really work. When I connect to the server it starts a thread and then instantly ends it, then the second time it starts it, runs it for one iteration and then ends it, then starts it, runs it for 2 iterations, etc. I can't figure out why. I'm unsure of how to read the timetable from a file, and the thing I am most clueless about is how to asynchronously receive and display messages from server to clients. I also don't understand how to register the client's IP / name with the server and then store it. Any other general info and help on datagrams and Java would REALLY be appreciated. I'm not asking anyone to do this for me!! Just need a little help. This is the code I have so far. I've only been working on this for about a day.
This is what I have so far for the server:

    /* reciever.java */

    import java.net.*;  // for network
    import java.util.*; // for utilities
    import java.io.*;   // for streams

    public class reciever {

        public static void main( String[] argv ) {
            setAvaliableSeats( 10 );
            senderThread thread = null;
            InetAddress clientHostName;
            int clientHostPort;
            try {
                int portNumber = 3000;
                // assume that port to send to is in arg vector
                if( argv.length == 1 ) {
                    portNumber = Integer.parseInt( argv[ 0 ] );
                }
                DatagramSocket socket = new DatagramSocket( portNumber );
                //socket.setSoTimeout( 0 );
                System.out.println( "ready to receive on " + socket.getLocalAddress().toString() + " port: " + socket.getLocalPort() );
                while( true ) {
                    byte[] buffer = new byte[1024];
                    DatagramPacket packet = new DatagramPacket( buffer, buffer.length );
                    // assume first packet sent is connection request from client
                    socket.receive( packet );
                    // get client's host name and port number
                    clientHostName = packet.getAddress();
                    clientHostPort = packet.getPort();
                    thread = new senderThread( socket, packet, clientHostName, clientHostPort );
                    thread.start();
                }
            }
            catch (IOException ioe) {
                System.out.println("ERROR: " + ioe);
            }
        }

        public static void setAvaliableSeats( int avaliableSeats ) {
            try {
                FileWriter fout = new FileWriter("avaliableSeats.txt", false);
                PrintWriter pout = new PrintWriter(fout, true);
                pout.println(avaliableSeats);
                pout.close();
            }
            catch( IOException e ) {}
        }
    }

This is what I have so far for the client:

    /* sender.java */

    import java.net.*;  // for network
    import java.util.*; // for utilities
    import java.io.*;   // for streams

    public class sender {

        static DatagramSocket socket;

        public static void main( String[] argv ) {
            try {
                InetAddress addr = InetAddress.getLocalHost();
                int sendPortNumber = 4000;
                int portNumber = 3000;
                if (argv.length == 2) {
                    try {
                        addr = InetAddress.getByName(argv[0]);
                        portNumber = Integer.parseInt(argv[1]);
                    }
                    catch (UnknownHostException uh) {
                        addr = InetAddress.getLocalHost();
                    }
                }
                socket = new DatagramSocket( sendPortNumber );
                while( true ) {
                    sendMessages( addr, portNumber );
                    receiveMessages();
                }
            }
            catch (UnknownHostException uh) {
                System.out.println("ERROR: " + uh);
                System.exit(0);
            }
            catch (IOException ioe) {
                System.out.println("ERROR: " + ioe);
            }
        }

        public static void sendMessages( InetAddress addr, int portNumber ) {
            try {
                String request;
                // set up buffer for reading user input
                BufferedReader kbd = new BufferedReader(new InputStreamReader(System.in));
                // read in string from user
                System.out.print( "Enter Message: " );
                request = kbd.readLine();
                // turn string into array of bytes
                byte[] buffer = request.getBytes();
                // now create and send packet
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length, addr, portNumber);
                socket.send(packet);
                // tell user packet sent successfully
                System.out.println("Packet data sent");
                System.out.println();
            }
            catch( IOException e ) {}
        }

        public static void receiveMessages() {
            try {
                byte[] buffer = new byte[1024];
                DatagramPacket receivePacket = new DatagramPacket( buffer, buffer.length );
                socket.receive( receivePacket );
                byte[] receivedMessage = receivePacket.getData();
                System.out.println();
                System.out.println( "Message received from server: " + receivedMessage );
            }
            catch( IOException e ) {}
        }
    }

This is what I have so far for my thread:

    /* senderThread.java */

    import java.net.*;  // for network
    import java.util.*; // for utilities
    import java.io.*;   // for streams

    public class senderThread extends Thread {

        DatagramSocket socket = null;
        DatagramPacket packet = null;
        InetAddress clientHostName;
        int clientHostPort;

        public senderThread( DatagramSocket socket, DatagramPacket packet, InetAddress clientHostName, int clientHostPort ) {
            this.socket = socket;
            this.packet = packet;
            this.clientHostName = clientHostName;
            this.clientHostPort = clientHostPort;
        }

        public void run() {
            try {
                // Just a little test I used to see if program was reading and
                // writing number of seats to a file...
                int integer = readAvaliable();
                System.out.println( integer );
                writeAvaliable( 1 );
                integer = readAvaliable();
                System.out.println( integer );

                System.out.println( "Connected to: " + socket );
                String connected = new String( "Connected to: " + socket );
                byte[] buffer = connected.getBytes();
                DatagramPacket sendPacket = new DatagramPacket(buffer, buffer.length, clientHostName, clientHostPort);
                socket.send( sendPacket );
                // For each input client sends, print it along with address + port number
                while( true ) {
                    receiveMessages();
                    sendMessages();
                }
            }
            catch (EOFException e) {
                // socket unexpectedly closed
                System.err.println("Connection to client unexpectedly closed");
            }
            catch (IOException e) {
                // error reading client input
                System.err.println("Error reading from socket " + socket);
            }
            finally {
                // close the socket used to connect / send packets to client
            }
        }

        public void receiveMessages() {
            try {
                System.out.println( "RECEIVING MESSAGES" );
                socket.receive( packet );
                byte[] request = packet.getData();
                System.out.println();
                System.out.println( "Received " + request);
                System.out.println( "From IP Address: " + packet.getAddress() );
                System.out.println( "Using Port: " + packet.getPort() );
            }
            catch( IOException e ) {}
        }

        public void sendMessages() {
            try {
                System.out.println( "SENDING MESSAGES" );
                String messageString = new String( "Some message" );
                byte[] buffer = messageString.getBytes();
                DatagramPacket sendPacket = new DatagramPacket(buffer, buffer.length, clientHostName, clientHostPort);
                socket.send( sendPacket );
            }
            catch( IOException e ) {}
        }

        // read from file number of seats avaliable
        public int readAvaliable() {
            int avaliableSeats = 0;
            try {
                FileReader fin = new FileReader("avaliableSeats.txt");
                BufferedReader din = new BufferedReader(fin);
                String line = din.readLine();
                avaliableSeats = Integer.parseInt(line);
                din.close();
                return avaliableSeats;
            }
            catch( IOException e ) {}
            return avaliableSeats;
        }

        // writes to file new number of seats avaliable
        public void writeAvaliable( int amount ) {
            try {
                int newAmount = ( readAvaliable() - amount );
                FileWriter fout = new FileWriter("avaliableSeats.txt", false);
                PrintWriter pout = new PrintWriter(fout, true);
                pout.println(newAmount);
                pout.close();
            }
            catch( IOException e ) {}
        }
    }

Thanks a lot, J.

Norm Radder (Ranch Hand, Joined: Aug 10, 2005, Posts: 684) posted Aug 16, 2005 05:49:00

Rather large question. I suggest you break it up into smaller questions/problems and solve them one at a time. As an aid to finding problems, add println() statements to your code to show where it is executing and what the contents of key variables are. Then you can post them so we can see how the program is working. Good luck.

Layne Lund (Ranch Hand, Joined: Dec 06, 2001, Posts: 3061) posted Aug 16, 2005 13:21:00

It will also help you be more organized if you format your code using commonly accepted standards. For example, it is a common convention to indent the group of statements that are enclosed in { and }. This will help you and us both to be able to read your code more easily.
Layne
Java API Documentation
The Java Tutorial

jambeard lewistrix (Greenhorn, Joined: Aug 16, 2005, Posts: 14) posted Aug 16, 2005 15:40:00

I do indent my code, it's just that when I cut and pasted the code from something it all went a bit wrong! Sorry, I agree.
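On the "asynchronously receive and display messages" requirement: the usual pattern is a dedicated listener thread that blocks on receive() in a loop and hands each datagram to a callback, while the main thread stays free for user input. A minimal, self-contained sketch of that one idea (class and method names are my own invention, not part of the assignment):

```java
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;
import java.util.function.Consumer;

public class UdpListenerSketch {

    // Starts a daemon thread that blocks on receive() and hands every
    // incoming message to the handler: this is the asynchronous part.
    public static Thread startListener(DatagramSocket socket, Consumer<String> handler) {
        Thread t = new Thread(() -> {
            byte[] buf = new byte[1024];
            while (!socket.isClosed()) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                try {
                    socket.receive(packet); // blocks until a datagram arrives
                    handler.accept(new String(packet.getData(), 0,
                            packet.getLength(), StandardCharsets.UTF_8));
                } catch (IOException e) {
                    break; // socket closed, stop listening
                }
            }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }

    // Sends one datagram to the given address and port.
    public static void send(DatagramSocket socket, String message,
                            InetAddress addr, int port) throws IOException {
        byte[] data = message.getBytes(StandardCharsets.UTF_8);
        socket.send(new DatagramPacket(data, data.length, addr, port));
    }
}
```

The server would call startListener once per socket and push urgent notices ("flight full", "seat freed") to each registered client address with send(); the client does the mirror image, printing whatever the handler receives while its main loop reads the keyboard.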
http://www.coderanch.com/t/377591/java/java/Network-assignment-NEEDED
Type.GetTypeFromProgID Method (String)

.NET Framework (current version)
Namespace: System
Assembly: mscorlib (in mscorlib.dll)

Gets the type associated with the specified program identifier (ProgID), returning null if an error is encountered while loading the Type.

Parameters
- progID
  Type: System.String
  The ProgID of the type to get.

Return Value
Type: System.Type
The type associated with the specified ProgID, if progID is a valid entry in the registry and a type is associated with it; otherwise, null.

Remarks
This method is provided for COM support. ProgIDs are not used in the Microsoft .NET Framework because they have been superseded by the concept of namespace.

Security
SecurityCriticalAttribute: Requires full trust for the immediate caller. This member cannot be used by partially trusted or transparent code.

Version Information
.NET Framework: Available since 1.1
https://msdn.microsoft.com/en-us/library/hss5hw09.aspx?cs-save-lang=1&cs-lang=vb
Opened 7 years ago
Closed 7 years ago
Last modified 7 years ago

#14508 closed (fixed): Test suite silences warnings

Description

The documentation encourages use of python -Wall manage.py test to find PendingDeprecationWarning. However, Django's test suite in many places silences warnings (e.g. DeprecationWarning), and then 'restores' the state of warning filters to something other than what it was originally. So, for example, if you do:

    python -Wall manage.py test auth myapp

the warnings from myapp will not be shown as they ought. Also, many warnings that Django's test suite itself emits are silenced by the current way of handling warnings - such as all the PendingDeprecationWarnings emitted by unittest2.failUnless etc., and some from cgi, depending on which tests are run and in what order.

Two things need to be done:
- Correctly save and restore the original filters. warnings.catch_warnings from Python 2.5 and greater shows how to do this.
- Use more specific filters so that only warnings generated by Django are silenced, not those generated by unittest2. This is slightly tricky, since we are bundling unittest2 in the Django namespace.

Change History (5)

comment:1 Changed 7 years ago by
If anyone can propose a way to restore old warning state that is Python 2.4 compatible, I'm all in favor of using it. I've tried (as best I can) to catch all the places that we need to reset warnings, but evidently I've missed some. Specific reports are welcome; they're not easy to find, because they depend on specific combinations of test ordering.

comment:3 Changed 7 years ago by

Copying warnings.filters and assigning back again seems to work on Python 2.4 -> 2.6:

    _saved = warnings.filters[:]
    ...
    warnings.filters = _saved

This works fine in a script, but something strange seems to happen at the interactive prompt when I do this. In fact, even without this I get very strange behaviour - for example if I add an 'ignore' simplefilter, followed by an 'always' simplefilter, and then do a warning, I don't get the same results at the interactive prompt compared to a script - typing exactly the same thing. Very strange. I'm seeing that behaviour with Python 2.4 - 2.6.

comment:4 Changed 7 years ago by

comment:5 Changed 7 years ago by

(In [14527]) [1.2.X] Fixed #14508 - test suite silences warnings. Utility functions get_warnings_state and save_warnings_state have been added to django.test.utils, and methods to django.test.TestCase for convenience. The implementation is based on the catch_warnings context manager from Python 2.6. Backport of [14526] from trunk.

Correction - the test suite does seem to use specific enough filters for the things it silences. However, it often doesn't silence PendingDeprecationWarnings that it ought to anticipate, so running with python -Wall produces far too much output - it should only produce output for things that need to be fixed. Also, it is easy to avoid silencing the warnings from the bundled unittest2 if we use fully qualified module regexes for the things we are expecting to silence, e.g. django\.contrib\.auth\.models rather than django\..*
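The copy-and-assign trick from comment:3 is the whole mechanism; wrapped in a pair of helpers it looks roughly like this (illustrative names, patterned after, but not necessarily identical to, what django.test.utils gained):

```python
import warnings


def get_warnings_state():
    """Snapshot the current warning filters.

    A shallow copy is enough because each filter entry is an immutable tuple.
    """
    return warnings.filters[:]


def restore_warnings_state(state):
    """Put the filter list back exactly as it was snapshotted."""
    warnings.filters = state[:]
```

A test that needs to silence something calls get_warnings_state() in setUp, adds its filter with warnings.simplefilter(...), and calls restore_warnings_state() in tearDown, instead of guessing at a "reset" state afterwards.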
https://code.djangoproject.com/ticket/14508
Common client-side web technologies "Websites should look good from the inside and out." - Paul Cookson ASP.NET Core applications are web applications and they typically rely on client-side web technologies like HTML, CSS, and JavaScript. By separating the content of the page (the HTML) from its layout and styling (the CSS), and its behavior (via JavaScript), complex web apps can leverage the Separation of Concerns principle. Future changes to the structure, design, or behavior of the application can be made more easily when these concerns are not intertwined. While HTML and CSS are relatively stable, JavaScript, by means of the application frameworks and utilities developers work with to build web-based applications, is evolving at breakneck speed. This chapter looks at a few ways that JavaScript is used by web developers and provides a high-level overview of the Angular and React client-side libraries. Note Blazor provides an alternative to JavaScript frameworks for building rich, interactive client user interfaces. Client-side Blazor support is still in preview, so for now it's out of scope for this chapter. HTML HTML is the standard markup language used to create web pages and web applications. Its elements form the building blocks of pages, representing formatted text, images, form inputs, and other structures. When a browser makes a request to a URL, whether fetching a page or an application, the first thing that is returned is an HTML document. This HTML document may reference or include additional information about its look and layout in the form of CSS, or behavior in the form of JavaScript. CSS CSS (Cascading Style Sheets) is used to control the look and layout of HTML elements. CSS styles can be applied directly to an HTML element, defined separately on the same page, or defined in a separate file and referenced by the page. Styles cascade based on how they are used to select a given HTML element. 
For instance, a style might apply to an entire document, but would be overridden by a style that applied to a particular element. Likewise, an element-specific style would be overridden by a style that applied to a CSS class that was applied to the element, which in turn would be overridden by a style targeting a specific instance of that element (via its ID).

Figure 6-1. CSS Specificity rules, in order.

It's best to keep styles in their own separate stylesheet files, and to use selection-based cascading to implement consistent and reusable styles within the application. Placing style rules within HTML should be avoided, and applying styles to specific individual elements (rather than whole classes of elements, or elements that have had a particular CSS class applied to them) should be the exception, not the rule.

CSS preprocessors

CSS stylesheets lack support for conditional logic, variables, and other programming language features. Thus, large stylesheets often include quite a bit of repetition, as the same color, font, or other setting is applied to many different variations of HTML elements and CSS classes. CSS preprocessors can help your stylesheets follow the DRY principle by adding support for variables and logic.

The most popular CSS preprocessors are Sass and LESS. Both extend CSS and are backward compatible with it, meaning that a plain CSS file is a valid Sass or LESS file. Sass is Ruby-based and LESS is JavaScript-based, and both typically run as part of your local development process. Both have command-line tools available, as well as built-in support in Visual Studio for running them using Gulp or Grunt tasks.

JavaScript

JavaScript is a dynamic, interpreted programming language that has been standardized in the ECMAScript language specification. It is the programming language of the web. Like CSS, JavaScript can be defined as attributes within HTML elements, as blocks of script within a page, or in separate files.
Just like CSS, it's recommended to organize JavaScript into separate files, keeping it separated as much as possible from the HTML found on individual web pages or application views.

When working with JavaScript in your web application, there are a few tasks that you'll commonly need to perform:
- Selecting an HTML element and retrieving and/or updating its value.
- Querying a Web API for data.
- Sending a command to a Web API (and responding to a callback with its result).
- Performing validation.

You can perform all of these tasks with JavaScript alone, but many libraries exist to make these tasks easier. One of the first and most successful of these libraries is jQuery, which continues to be a popular choice for simplifying these tasks on web pages. For Single Page Applications (SPAs), jQuery doesn't provide many of the desired features that Angular and React offer.

Legacy web apps with jQuery

Although ancient by JavaScript framework standards, jQuery continues to be a commonly used library for working with HTML/CSS and building applications that make AJAX calls to web APIs. However, jQuery operates at the level of the browser document object model (DOM), and by default offers only an imperative, rather than declarative, model.

For example, imagine that if a textbox's value exceeds 10, an element on the page should be made visible. In jQuery, this would typically be implemented by writing an event handler with code that would inspect the textbox's value and set the visibility of the target element based on that value. This is an imperative, code-based approach. Another framework might instead use databinding to bind the visibility of the element to the value of the textbox declaratively. This would not require writing any code, but instead only requires decorating the elements involved with data binding attributes. As client-side behaviors grow more complex, data binding approaches frequently result in simpler solutions with less code and conditional complexity.
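The textbox example can be made concrete without any framework at all. Below is a toy viewmodel in plain JavaScript that shows the declarative shape: the visibility rule is bound once, and every later change flows through automatically. The DOM is replaced by a plain object so the sketch runs anywhere, and all names are invented for illustration:

```javascript
// Minimal observable viewmodel: observers register once, and every
// subsequent set() re-runs them with the new value.
function makeViewModel(initial) {
  const observers = {};
  const state = Object.assign({}, initial);
  return {
    bind(prop, observer) {
      (observers[prop] = observers[prop] || []).push(observer);
      observer(state[prop]); // render once with the current value
    },
    set(prop, value) {
      state[prop] = value;
      (observers[prop] || []).forEach(fn => fn(value));
    },
  };
}

// Stand-in for a DOM element's visibility flag.
const view = { hintVisible: false };

const vm = makeViewModel({ quantity: 0 });

// The one-time "declaration": visibility follows the textbox value.
vm.bind("quantity", v => { view.hintVisible = v > 10; });

vm.set("quantity", 12); // view.hintVisible becomes true, no handler code here
```

In jQuery, the `v > 10` rule would instead live inside an event handler that both reads the input and pokes the element; here the rule is stated once and the update plumbing is generic.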
jQuery vs a SPA Framework

Most of the features jQuery lacks intrinsically can be added with the addition of other libraries. However, a SPA framework like Angular provides these features in a more integrated fashion, since it's been designed with all of them in mind from the start. Also, jQuery is an imperative library, meaning that you need to call jQuery functions in order to do anything with jQuery. Much of the work and functionality that SPA frameworks provide can be done declaratively, requiring no actual code to be written.

Data binding is a great example of this. In jQuery, it usually only takes one line of code to get the value of a DOM element or to set an element's value. However, you have to write this code anytime you need to change the value of the element, and sometimes this will occur in multiple functions on a page. Another common example is element visibility. In jQuery, there might be many different places where you'd write code to control whether certain elements were visible. In each of these cases, when using data binding, no code would need to be written. You'd simply bind the value or visibility of the elements in question to a viewmodel on the page, and changes to that viewmodel would automatically be reflected in the bound elements.

Angular SPAs

Angular remains one of the world's most popular JavaScript frameworks. Since Angular 2, the team rebuilt the framework from the ground up (using TypeScript) and rebranded from the original AngularJS name to simply Angular. Now several years old, the redesigned Angular continues to be a robust framework for building Single Page Applications. Angular applications are built from components. Components combine HTML templates with special objects and control a portion of the page.
A simple component from Angular's docs is shown here:

    import { Component } from '@angular/core';

    @Component({
      selector: 'my-app',
      template: `<h1>Hello {{name}}</h1>`
    })
    export class AppComponent { name = 'Angular'; }

Components are defined using the @Component decorator function, which takes in metadata about the component. The selector property identifies the ID of the element on the page where this component will be displayed. The template property is a simple HTML template that includes a placeholder that corresponds to the component's name property, defined on the last line.

By working with components and templates, instead of DOM elements, Angular apps can operate at a higher level of abstraction and with less overall code than apps written using just JavaScript (also called "vanilla JS") or with jQuery. Angular also imposes some order on how you organize your client-side script files. By convention, Angular apps use a common folder structure, with module and component script files located in an app folder. Angular scripts concerned with building, deploying, and testing the app are typically located in a higher-level folder.

You can develop Angular apps by using a CLI. Getting started with Angular development locally (assuming you already have git and npm installed) consists of simply cloning a repo from GitHub and running npm install and npm start. Beyond this, Angular ships its own CLI, which can create projects, add files, and assist with testing, bundling, and deployment tasks. This CLI friendliness makes Angular especially compatible with ASP.NET Core, which also features great CLI support.

Microsoft has developed a reference application, eShopOnContainers, which includes an Angular SPA implementation. This app includes Angular modules to manage the online store's shopping basket, load and display items from its catalog, and handle order creation. You can view and download the sample application from GitHub.
React

Unlike Angular, which offers a full Model-View-Controller pattern implementation, React is only concerned with views. It's not a framework, just a library, so to build a SPA you'll need to leverage additional libraries. There are a number of libraries that are designed to be used with React to produce rich single page applications.

One of React's most important features is its use of a virtual DOM. The virtual DOM provides React with several advantages, including performance (the virtual DOM can optimize which parts of the actual DOM need to be updated) and testability (no need to have a browser to test React and its interactions with its virtual DOM).

React is also unusual in how it works with HTML. Rather than having a strict separation between code and markup (with references to JavaScript appearing in HTML attributes perhaps), React adds HTML directly within its JavaScript code as JSX. JSX is HTML-like syntax that can compile down to pure JavaScript. For example:

    <ul>
    { authors.map(author =>
      <li key={author.id}>{author.name}</li>
    )}
    </ul>

If you already know JavaScript, learning React should be easy. There isn't nearly as much learning curve or special syntax involved as with Angular or other popular libraries. Because React isn't a full framework, you'll typically want other libraries to handle things like routing, web API calls, and dependency management. The nice thing is, you can pick the best library for each of these, but the disadvantage is that you need to make all of these decisions and verify all of your chosen libraries work well together when you're done. If you want a good starting point, you can use a starter kit like React Slingshot, which prepackages a set of compatible libraries together with React.

Vue

From its getting started guide, Vue is "perfectly capable of powering sophisticated Single-Page Applications when used in combination with modern tooling and supporting libraries."
Getting started with Vue simply requires including its script within an HTML file:

    <!-- development version, includes helpful console warnings -->
    <script src=""></script>

With the framework added, you're then able to declaratively render data to the DOM using Vue's straightforward templating syntax:

    <div id="app">
      {{ message }}
    </div>

and then adding the following script:

    var app = new Vue({
      el: '#app',
      data: {
        message: 'Hello Vue!'
      }
    })

This is enough to render "Hello Vue!" on the page. Note, however, that Vue isn't simply rendering the message to the div once. It supports databinding and dynamic updates such that if the value of message changes, the value in the <div> is immediately updated to reflect it.

Of course, this only scratches the surface of what Vue is capable of. It's gained a great deal of popularity in the last several years and has a large community. There's a huge and growing list of supporting components and libraries that work with Vue to extend it as well. If you're looking to add client-side behavior to your web application or considering building a full SPA, Vue is worth investigating.

Choosing a SPA Framework

When considering which JavaScript framework will work best to support your SPA, keep in mind the following considerations:
- Is your team familiar with the framework and its dependencies (including TypeScript in some cases)?
- How opinionated is the framework, and do you agree with its default way of doing things?
- Does it (or a companion library) include all of the features your app requires?
- Is it well documented?
- How active is its community? Are new projects being built with it?
- How active is its core team? Are issues being resolved and new versions shipped regularly?

JavaScript frameworks continue to evolve with breakneck speed. Use the considerations listed above to help mitigate the risk of choosing a framework you'll later regret being dependent upon.
If you're particularly risk-averse, consider a framework that offers commercial support and/or is being developed by a large enterprise.

References – Client Web Technologies
- HTML and CSS
- Sass vs. LESS
- Styling ASP.NET Core Apps with LESS, Sass, and Font Awesome
- Client-Side Development in ASP.NET Core
- jQuery
- jQuery vs AngularJS
- Angular
- React
- Vue
- Angular vs React vs Vue: Which Framework to Choose in 2020
- The Top JavaScript Frameworks for Front-End Development in 2020
https://docs.microsoft.com/en-us/dotnet/architecture/modern-web-apps-azure/common-client-side-web-technologies?cid=kerryherger
Odoo Help

This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.

Generate Custom Xml file on button click

How can we generate a custom xml file the same way we generate a pdf report on button click in OpenERP? Please, can anyone guide me on the custom xml file generation process?

The steps I followed, python code:

    class xmlFile_Creation(osv.osv):
        # this function "generate_xml" is called from a button click on view.xml
        # and stores the encoded xml data into a database column
        def generate_xml(self, cr, uid, ids, context=None):
            root = ET.Element("root")
            doc = ET.SubElement(root, "doc")
            field1 = ET.SubElement(doc, "field1")
            field1.set("name", "blah")
            field1.text = "some value1"
            field2 = ET.SubElement(doc, "field2")
            field2.set("name", "asdfasd")
            field2.text = "some value2"
            xmlstr = ET.tostring(root)
            file = base64.encodestring(xmlstr)
            return self.write(cr, uid, ids, {'filedata': file, 'name': "filenew"}, context=context)

        _name = "sample.xmlfile.creation"
        _columns = {
            'trader': fields.many2one('sample.traders', 'SelectTrader'),
            'name': fields.char('Filename', 16, readonly=True),
            'filedata': fields.binary('File', readonly=True),
        }

    xmlFile_Creation()

Hello, you can do something like below, using lxml:

    from lxml import etree

    # create XML
    root = etree.Element('root')
    root.append(etree.Element('child'))

    # another child with text
    child = etree.Element('child')
    child.text = 'some text'
    root.append(child)

    # pretty string
    s = etree.tostring(root, pretty_print=True)
    print s

Output:

    <root>
      <child/>
      <child>some text</child>
    </root>

I too have this same question. Can OpenERP directly serve up a .txt or .html or .xml or really any other plain text format? Seems odd that we can do pdf and not raw text...

Hello, #sai, #tim: there is one community module, Jasper report, that has the same functionality; you can refer to that module, that may help!
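For anyone who wants to try the ElementTree half outside Odoo, here is the same construction stripped of the ORM parts so it runs standalone (on Python 3, base64.b64encode replaces the long-deprecated base64.encodestring used above; the function name is my own):

```python
import base64
import xml.etree.ElementTree as ET


def build_xml_attachment():
    """Build a small XML document and return it base64-encoded,
    which is the form a binary field expects."""
    root = ET.Element("root")
    doc = ET.SubElement(root, "doc")
    field1 = ET.SubElement(doc, "field1", name="blah")
    field1.text = "some value1"
    field2 = ET.SubElement(doc, "field2", name="asdfasd")
    field2.text = "some value2"
    xml_bytes = ET.tostring(root)
    return base64.b64encode(xml_bytes)
```

In the button handler you would then store the return value in the binary column, along the lines of self.write(cr, uid, ids, {'filedata': build_xml_attachment(), 'name': 'filenew'}).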
https://www.odoo.com/forum/help-1/question/generate-custom-xml-file-on-button-click-31667
CC-MAIN-2016-50
refinedweb
300
55.61
In this section, you learn how to use dialog fragments to present a simple alert dialog and a custom dialog that is used to collect prompt text. Before we show you working examples of a prompt dialog and an alert dialog, we would like to cover the high-level idea of dialog fragments. Dialog-related functionality uses a class called DialogFragment. A DialogFragment is derived from the class Fragment and behaves much like a fragment. You will then use the DialogFragment as the base class for your dialogs. Once you have a derived dialog from this class such as public class MyDialogFragment extends DialogFragment { ... } you can then show this dialog fragment MyDialogFragment as a dialog using ...
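To make the idea concrete, here is a hedged sketch of what such a subclass typically looks like (class name and messages are invented for illustration; the exact overrides depend on whether you build the dialog in onCreateDialog or inflate a custom view):

```java
import android.app.AlertDialog;
import android.app.Dialog;
import android.app.DialogFragment;
import android.os.Bundle;

// Minimal illustrative DialogFragment subclass that builds a
// simple alert dialog in onCreateDialog.
public class MyDialogFragment extends DialogFragment {
    @Override
    public Dialog onCreateDialog(Bundle savedInstanceState) {
        return new AlertDialog.Builder(getActivity())
                .setTitle("Alert")
                .setMessage("Something happened")
                .setPositiveButton("OK", null)
                .create();
    }
}
```

Showing it is then typically a call like new MyDialogFragment().show(getFragmentManager(), "dialog");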
https://www.oreilly.com/library/view/pro-android-4/9781430239307/s0177-0177.html
CC-MAIN-2019-47
refinedweb
120
56.45
Tue, 2003-01-21 at 03:09, Jak wrote: > Tony, > thanks for your patches, they have fixed most of the problems I had. > > > > Try running stty to fix your display. > > > I can't convince stty to do anything useful for me : > do I need a specific/patched version ? No. > do I need to specify anything except rows cols values ? No. > how would I change the colour depth ? You can still use fbset :-), except that as it currently stands, it might corrupt the display if panning is enabled. rivafb does not support panning at bootup so fbset is safe to use. > I have 7 different lines in /etc/fb.modes related to 1024x768@..., > how can I tell stty that I want to use the equivalent of the timing values of > any particular one of these 7 modes ( assuming timings are still relevant ) ? Bullseye :-) This is one of the reasons I submitted the GTF implementation patch (It's already in fbmon.c). The patch will compute the modelines for you given xres, yres and your monitor's operational limits. It has the advantage of computing the maximum refresh rate your monitor is capable of, and it can theoretically calculate any mode without the use of additional entries in /etc/fb.modes. This is ideal for VESA GTF compliant monitors but I have tested this with old monitors as well. Once you have changed to an appropriate window size, you can then use fbset to fine-tune your display timings. The only problem right now is how do you pass the monitor info to the driver? The best way is to parse the EDID block and use I2C/DDC. Personally, I think passing the monitor info as a boot/module option is the simplest and safest method. Once the above is done, adding support for GTF to a driver is just a 10-liner code. I already did this for some of the drivers including rivafb. Of course, proprietary displays will need their own modeline formula which, if this is the case, the driver has to add its own algorithm or use fbset tricks. 
You can do something like this: fbset 1600x1200-85 && stty cols 200 rows 75 > what was wrong with fbset which I have to keep for the forseeable > future anyway until 2.6 approaches 2.4 reliability ? Nothing's wrong with fbset, because if fbset becomes unusable, then so will most fbdev-based applications. We don't want that to happen. The main difference is instead of fbdev telling the console to change the window size, it's now the console telling fbdev to change the window size. As the console is blind to color depth, pixelformat, accel, etc, you can still use fbset to change most of the above. > > > >) > > { > > - int err; > > - err = pci_module_init(&rivafb_driver); > > - if (err) > > - return err; > > - pci_register_driver(&rivafb_driver); > > - return 0; > > + return pci_module_init(&rivafb_driver); > > } > > > This will normally return 1, not 0 as before. Is this intended ? The above should return 0 if successful. Tony On Sun, 1 Dec 2002, Jon Smirl wrote: > This patch fixes the syntax in fbtest so that gcc 3.2 Thanks!
I applied most of them. > is happy. I also adjusted the libs for the current > version of libnetpbm. > > I went searching for libnetpnm which doesn't seem to > exist standalone any more. It is part of libnetpbm > now. Hmm, this one is more problematic. It seems to depend on your distribution (I use Debian). ---------- Forwarded message ---------- Date: 18 Jan 2003 16:10:12 +0100 From: Fredrik Noring <noring@...> To: Geert Uytterhoeven <geert@...> Cc: Petr Vandrovec <vandrove@...> Subject: [PATCH] fbset.c - fixes mode info Hi, Please apply this fix to report proper negative sync polarity. Thanks, Fredrik --- fbset-2.1.old/fbset.c 1999-06-23 16:11:46.000000000 +0200 +++ fbset-2.1.new/fbset.c 2003-01-18 15:55:02.000000000 +0100 @@ -629,8 +629,12 @@ vmode->hslen, vmode->vslen); if (vmode->hsync) puts(" hsync high"); + else + puts(" hsync low"); if (vmode->vsync) puts(" vsync high"); + else + puts(" vsync low"); if (vmode->csync) puts(" csync high"); if (vmode->gsync) On Mon, 2003-01-20 at 02:56, Alexander Kern wrote: [...] > --- linux-2.5.orig/drivers/video/console/fbcon.c 2003-01-17 16:13:53.000000000 +0100 > +++ linux/drivers/video/console/fbcon.c 2003-01-19 19:17:23.000000000 +0100 > @@ -1876,17 +1876,23 @@ > struct display *p = &fb_display[vc->vc_num]; > struct fb_info *info = p->fb_info; > struct fb_var_screeninfo var = info->var; > - int err; > + int err; int x_diff, y_diff; > > var.xres = width * vc->vc_font.width; > var.yres = height * vc->vc_font.height; > var.activate = FB_ACTIVATE_NOW; > - > + x_diff = info->var.xres - var.xres; > + y_diff = info->var.yres - var.yres; > + if(x_diff < 0 || x_diff > vc->vc_font.width || > + (y_diff < 0 || y_diff > vc->vc_font.height)) { > + DPRINTK("resize now %ix%i\n", var.xres, var.yres); > err = fb_set_var(&var, info); > return (err || var.xres != info->var.xres || > - var.yres != info->var.yres) ? > - -EINVAL : 0; > - > + var.yres != info->var.yres) ? 
-EINVAL : 0; > + } else { > + DPRINTK("prevent resize\n"); > + return 0; > + } > } > > static int fbcon_switch(struct vc_data *vc) Yes, that will work, only if all your console have the same window size. If the size of one of your console is different, then these tests (x_diff > vc->vc_font.width || y_diff > vc->vc_font.height) will become true each time you switch consoles, so you'll be back with a yres of 1040 instead of 1050. The best solution is for the driver to round up to 1050 if 1040 is not acceptable. Still, its much better than the old one :-). If you don't mind, I'll add a few things to your patch: a. We do not need to activate the hardware immediately if there is a chance of failure. b. The xres/yres returned from fb_set_var() will be acceptable as long as the value is within a fontwidth/fontheight. This should fix hardware that only has a limited set of video modes. BTW, I'm also attaching a diff to fix vc_resize() in vt.c. In vc_resize(), if con_resize() exits with an error, the new console dimensions are not reset to the original, and memory from kmalloc() is not freed. Tony PATCH 1: fbcon_resize << begin >> diff -Naur linux-2.5.59/drivers/video/console/fbcon.c linux/drivers/video/console/fbcon.c --- linux-2.5.59/drivers/video/console/fbcon.c 2003-01-20 08:19:50.000000000 +0000 +++ linux/drivers/video/console/fbcon.c 2003-01-20 08:37:06.000000000 +0000 @@ -1870,23 +1870,35 @@ } -static int fbcon_resize(struct vc_data *vc, unsigned int width, - unsigned int height) + static int fbcon_resize(struct vc_data *vc, unsigned int width, + unsigned int height) { struct display *p = &fb_display[vc->vc_num]; struct fb_info *info = p->fb_info; struct fb_var_screeninfo var = info->var; - int err; - - var.xres = width * vc->vc_font.width; - var.yres = height * vc->vc_font.height; - var.activate = FB_ACTIVATE_NOW; - - err = fb_set_var(&var, info); - return (err || var.xres != info->var.xres || - var.yres != info->var.yres) ? 
- -EINVAL : 0; - + int err; int x_diff, y_diff; + int fw = vc->vc_font.width; + int fh = vc->vc_font.height; + + var.xres = width * fw; + var.yres = height * fh; + x_diff = info->var.xres - var.xres; + y_diff = info->var.yres - var.yres; + if (x_diff < 0 || x_diff > fw || + (y_diff < 0 || y_diff > fh)) { + var.activate = FB_ACTIVATE_TEST; + err = fb_set_var(&var, info); + if (err || width != var.xres/fw || + height != var.yres/fh) + return -EINVAL; + DPRINTK("resize now %ix%i\n", var.xres, var.yres); + var.activate = FB_ACTIVATE_NOW; + fb_set_var(&var, info); + p->vrows = info->var.yres_virtual/fh; + } else { + DPRINTK("prevent resize\n"); + } + return 0; } static int fbcon_switch(struct vc_data *vc) << end >> PATCH 2: vc_resize << begin >> diff -Naur linux-2.5.59/drivers/char/vt.c linux/drivers/char/vt.c --- linux-2.5.59/drivers/char/vt.c 2003-01-20 08:18:11.000000000 +0000 +++ linux/drivers/char/vt.c 2003-01-20 08:17:37.000000000 +0000 @@ -732,6 +732,10 @@ if (new_cols == video_num_columns && new_rows == video_num_lines) return 0; + err = resize_screen(currcons, new_cols, new_rows); + if (err) + return err; + newscreen = (unsigned short *) kmalloc(new_screen_size, GFP_USER); if (!newscreen) return -ENOMEM; @@ -746,9 +750,6 @@ video_size_row = new_row_size; screenbuf_size = new_screen_size; - err = resize_screen(currcons, new_cols, new_rows); - if (err) - return err; rlth = min(old_row_size, new_row_size); rrem = new_row_size - rlth; << end >>
https://sourceforge.net/p/linux-fbdev/mailman/linux-fbdev-devel/?viewmonth=200301&viewday=20
CC-MAIN-2017-04
refinedweb
1,546
68.26
I too have wanted support for writing function declarations in tables more compactly. However, there's probably an interesting question as to whether it should auto translate into the form that takes self as a first argument. One alternative would be: Box = { function notAMethod() print "I'm just a function in the Box namespace" end, function :method() print( string.format( "I'm a method of: %s", tostring( self ) ) ) end } I agree with the suggestion that finding a way to get rid of the commas would definitely be desirable. This all leads to the broader subject of whether there should be a syntax that supports more control constructs and arbitrary code inline while building tables. It would feel more "data description"-ish. No formal proposals at this time... Mark
https://lua-users.org/lists/lua-l/2008-01/msg00522.html
CC-MAIN-2021-49
refinedweb
141
63.59
Mapping composite primary keys in JPA, and how to work around a bug in Hibernate Annotations. On the AMIS SOA training program we got a nice assignment. The instructors gave us a WSDL, a database and some instructions. We had to create something with the SOA Suite based on the WSDL and the instructors would connect to our service to test whether we succeeded. Our team (Alex, Patrick B. and me) decided to make a 'something' with BPEL and PL/SQL. When I came home after the training I thought: why not also do it in Java, just to refresh my knowledge. I generated some Java classes based on the WSDL with XFire; that went quite smoothly (thanks to earlier experience with XFire [AM1] :). The next step was connecting to the database with Hibernate Annotations. I used all the standard JPA annotations, so this should also work with Toplink Essentials or any other JPA implementation. I forgot that some tools exist to generate the mapping, so I did it by hand. These are the database tables: After some simple tables, a table with a composite Id was in my way. I usually avoid those tables, but now I had no control over the tables (well, actually I have, but that's considered cheating the assignment 😉 ). After searching on Google and dzone I didn't get any further; the closest thing to a solution was a piece of information in the Hibernate manual [HI1]. That link got me confused and it isn't really clear what you have to do. I guess I'm also a bit rusty with Hibernate. To clean off rust you can use cola, so I opened a can, drank it and I almost immediately had a result. I remembered I wrote a blog about generating annotated classes for JPA [AM2]. So let's give that a shot. My idea was to create a mapping for the annoying table and see what was generated. After that I can probably remove some annotations and attributes on existing annotations. The table in the way was the CC_REGISTRATIONS table.
To create a solution I just created mappings for the primary key columns (att_id and ssn_id) and a random normal column (status). This way you don’t have to scroll through pages of code and it’s easier for me to see a solution. The result of the generating process was two classes (yes, I also expected one): CcRegistrationsEntity and CcRegistrationsEntityPK. The CcRegistrationsEntity is just like your normal entity. The only differences are an extra annotation on the class definition: @IdClass(CcRegistrationsEntityPK.class) and two @Id annotations (on attId and ssnId) instead of one. package nl.amis.csi; import javax.persistence.*; import java.math.BigInteger; @Entity @IdClass(CcRegistrationsEntityPK.class) @Table(schema = "CCSI", name = "CC_REGISTRATIONS") public class CcRegistrationsEntity { private BigInteger ssnId; private BigInteger attId; private String status; @Id @Column(name = "SSN_ID", nullable = false, length = 10) public BigInteger getSsnId() { ... } public void setSsnId(BigInteger ssnId) { ... } @Id @Column(name = "ATT_ID", nullable = false, length = 10) public BigInteger getAttId() { ... } public void setAttId(BigInteger attId) { ... } @Basic @Column(name = "STATUS", length = 1) public String getStatus() { ... } public void setStatus(String status) { ... } public boolean equals(Object o) { ... } public int hashCode() { ... } } The CcRegistrationsEntityPK class has the primary key fields of the table and -very important- an overridden equals and hashCode method. package nl.amis.csi; import java.io.Serializable; public class CcRegistrationsEntityPK implements Serializable { private java.math.BigInteger ssnId; private java.math.BigInteger attId; public java.math.BigInteger getSsnId() { ... } public void setSsnId(java.math.BigInteger ssnId) { ... } public java.math.BigInteger getAttId() { ... } public void setAttId(java.math.BigInteger attId) { ... } public boolean equals(Object o) {... } public int hashCode() { ... 
} } Now this all looks quite reasonable, but I got an error: Caused by: java.sql.SQLException: ORA-00904: “CSIREGISTR0_”.”SSNID”: invalid identifier That’s strange, there should be an underscore. Renaming the getSsnId to getSsn_id is a solution, but a very ugly one. After some searching I found out this actually is a bug in Hibernate Annotations [HI2]. Luckily there is a comment on that bug that is very helpful. When you move the @Column annotations for the Id columns to the IdClass everything works fine. ** Update 20-05 – How you really should do it ** After the comment of p3t0r I contacted him and the word ‘prefer’ was a real understatement. With the @IdClass annotation you have redundant information in your classes and there is a much cleaner solution (which I didn’t get at the time). But here it is (I actually did it myself, p3t0r just gave me a lot of directions). First the CsiRegistrations2 class: package nl.amis.hibernate.tables; import javax.persistence.*; @Entity @Table(name = "CC_REGISTRATIONS") public class CsiRegistrations2 { private CsiRegistrations2PK pk; private String status; @EmbeddedId public CsiRegistrations2PK getPk() {...} public void setPk(CsiRegistrations2PK pk) {...} @Column public String getStatus() {...} public void setStatus(String status) {...} } As you can see the id’s are replaced with a CsiRegistrations2PK object and annotated with an @EmbeddedId. 
The CsiRegistrations2PK class: package nl.amis.hibernate.tables; import javax.persistence.Column; import javax.persistence.Embeddable; import java.io.Serializable; import java.math.BigInteger; @Embeddable public class CsiRegistrations2PK implements Serializable { private BigInteger sessionId; private BigInteger attendeeId; @Column(name = "SSN_ID", nullable = false, length = 10) public BigInteger getSessionId() {...} public void setSessionId(BigInteger sessionId) {...} @Column(name = "ATT_ID", nullable = false, length = 10) public BigInteger getAttendeeId() {...} public void setAttendeeId(BigInteger attendeeId) {...} } This is almost the same as the previous example, but with an @Embeddable. The reason why the bug on Hibernate Annotations has been open for so long is, I guess, because there is a better solution. And again a confirmation of my theory: blaming a bug is just a sign of a bad developer 😉 ** end of update ** Conclusion It's very educational to write JPA annotations yourself, but when you're stuck don't try for too long; let a generator do the work for you and try to figure out why a piece of code is generated. It's a pity that there is a bug in Hibernate Annotations, but on the other hand it's the first real bug I found. I found it very hard to find a blog/article that explains how to map composite primary keys; it isn't that difficult after all, so I probably used the wrong keywords. Sources [HI1] [HI2] [AM1] [AM2] I just saw one article which explains the implementation of composite primary keys using annotations with a simple example. Personally I prefer to use javax.persistence.EmbeddedId / javax.persistence.Embeddable to nicely group the attributes of the key in a single object without having to duplicate them in the parent object. Also, having the embeddable annotation on the Id object instantly tells other developers how the object should be used…
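As an aside, the elided equals/hashCode bodies ("...") are where composite keys most often go wrong. A plain-Java sketch of such a key class, with illustrative names and the JPA annotations left out so the contract itself is visible: both methods must use exactly the fields that make up the key, or Hibernate's identity handling and collections will misbehave.

```java
import java.io.Serializable;
import java.math.BigInteger;
import java.util.Objects;

// Hypothetical composite-key class; only equals/hashCode matter here.
public class RegistrationPK implements Serializable {
    private final BigInteger ssnId;
    private final BigInteger attId;

    public RegistrationPK(BigInteger ssnId, BigInteger attId) {
        this.ssnId = ssnId;
        this.attId = attId;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof RegistrationPK)) return false;
        RegistrationPK other = (RegistrationPK) o;
        // Compare exactly the key fields, nothing more.
        return Objects.equals(ssnId, other.ssnId)
            && Objects.equals(attId, other.attId);
    }

    @Override
    public int hashCode() {
        // Hash exactly the same fields that equals compares.
        return Objects.hash(ssnId, attId);
    }
}
```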
https://technology.amis.nl/2008/05/18/mapping-composite-primary-keys-in-jpa-how-to-work-around-a-bug-in-hibernate-annotations/
CC-MAIN-2016-22
refinedweb
1,110
50.23
I'm hoping this is a solved problem and I'm just not familiar with what's out there.

    // Public API released to clients
    public interface DataObject {}

    // Internal API
    public interface DataObjectInternal extends DataObject {}

    // Implementation classes
    public class DataObjectImpl implements DataObjectInternal {}
    public class DataObjectMock implements DataObjectInternal {}
I've got about five million lines of code directly accessing the data model implementation classes, including static factory methods, public access to member variables, etc. I'm trying to throw the first cut of an API in front of it. The problem is that there are two layers of API for these data classes: one public, which clients have access to, and one with additional functionality for internal use only. The public (for clients) API exists in the form of a set of interfaces, but the internal API is basically "cast DataObject to DataObjectImpl, and then call the methods you need on it". jschell wrote:I agree, and I should emphasise I'm not trying to stub out major parts of a complex API. I just need 2 tiers of API for the data layer, and I was wondering if my implementation implements interface X which extends interface Y was the right way to go. Although I am not a great fan of mocking complex APIs. The reason of course is that if it is complex enough that it seems difficult then getting the mock behavior correct, and keeping it up to date, requires that someone must not only do a lot of work but must verify it and get it correct. Which almost gets to the point where someone must test the mock. Bachchu wrote:Mocking complex APIs is not easy. i guess its better to go for power mocking since its an interface....... its very easy...
https://community.oracle.com/message/9209625
CC-MAIN-2014-15
refinedweb
579
71.75
I have written a program that will allow a user to input the number of stock prices that they want to enter and then their closing prices. From here the program will give the average price, and the high and low prices. I can get the program to work with the variables declared as int, but I cannot work with decimals, which I need to, and am asking for help on doing this. I have tried various float and double types, but keep getting hung up with illegal or incompatible types. Here is my code:

    #include <stdio.h>
    #include <stdlib.h>

    void stockPrice ( double *stockArray, int qty );
    void average( double *stockArray, int qty);
    void high( double stockArray[], int qty);
    void low( double stockArray[], int qty);

    int main()
    {
        double * stockArray;
        int quantity;
        printf("Please tell me how many stock prices that you will input? : ");
        scanf("%d", &quantity);
        stockArray = (double *) malloc( quantity *sizeof( int ) );
        stockPrice ( stockArray, quantity );
        average( stockArray, quantity);
        high ( stockArray, quantity);
        low ( stockArray, quantity);
        free( (void *) stockArray );
        return 0;
    }

    void stockPrice ( double *stockArray, int qty )
    {
        int i;
        printf( "Enter the stock's closing price now:\n");
        for ( i = 0; i < qty; i ++ )
        {
            printf("Stock Price %d:$ ", i + 1 );
            scanf("%d", &stockArray[i] );
        }
    }

    void average( double *stockArray, int qty)
    {
        int i;
        double sum = 0;
        double average = 0;
        for ( i = 0; i < qty; i++)
            sum = sum + stockArray[i];
        average = sum / qty;
        printf("Average $%5.2d\n", average);
    }

    void high ( double stockArray[], int qty)
    {
        int i;
        double highest = stockArray[0];
        for ( i = 1; i < qty; i++)
        {
            if( stockArray[i] > highest )
                highest = stockArray[i];
        }
        printf( "Largest $%d\n", highest );
    }

    void low ( double stockArray[], int qty)
    {
        int i;
        double lowest = stockArray[0];
        for ( i = 1; i < qty; i++)
        {
            if( stockArray[i] < lowest )
                lowest = stockArray[i];
        }
        printf( "Largest $%5.2d\n", lowest );
    }

Thank you, and help is much appreciated! JC
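A note for anyone hitting the same wall: the hang-up is not the double type itself but three mismatches. Reading a double needs scanf("%lf", ...), printing one needs a "%f"-family specifier (so "%5.2f", never "%5.2d"), and the buffer must be allocated with malloc(quantity * sizeof(double)) rather than sizeof(int). A sketch of the three computation helpers reworked to return double (names shortened here for illustration):

```c
#include <stdio.h>
#include <stdlib.h>

/* Corrected helpers: values come back as double so the caller can
   print them with printf("... $%5.2f\n", value). Input must be read
   with scanf("%lf", &arr[i]) and the array allocated with
   malloc(qty * sizeof(double)). */

double average_price(const double *prices, int qty)
{
    double sum = 0.0;
    int i;
    for (i = 0; i < qty; i++)
        sum += prices[i];
    return sum / qty;
}

double high_price(const double *prices, int qty)
{
    double highest = prices[0];
    int i;
    for (i = 1; i < qty; i++)
        if (prices[i] > highest)
            highest = prices[i];
    return highest;
}

double low_price(const double *prices, int qty)
{
    double lowest = prices[0];
    int i;
    for (i = 1; i < qty; i++)
        if (prices[i] < lowest)
            lowest = prices[i];
    return lowest;
}
```

In main the calls then become, for example, printf("Average $%5.2f\n", average_price(stockArray, quantity)); also note the message in low() says "Largest" where "Smallest" is presumably intended.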
http://www.dreamincode.net/forums/topic/66389-need-to-use-decimals-in-program-float-or-double/
CC-MAIN-2016-40
refinedweb
303
59.87
I know everyone is tired of seeing those how-tos on brute forcing Instagram and other social media. Sorry to break this to you, but most of them are poorly engineered or do not work at all and claim false results. I am writing this post because I really wanted to shout out to everyone (at least to those who really want to learn hacking...) what the authentic way to hack Instagram is. If you tried all the other posts on hacking Instagram (which are mostly fake), I request you to try this once. I promise you that you will learn what real hacking is. Okay, how exactly are we gonna do it? Simple, with a Linux-only Python script called Instagram-Py! I created Instagram-Py after seeing all those poorly implemented or fake brute force scripts that people actually used. When I first saw the reviews on these tools I was shocked, because why would people even use these tools which do not even follow a good code base! Well, turns out a lot of people who used these tools were just noobs (no offense); I would have done the same when I was just starting to learn hacking (trust me, I was sooo dumb and trusted these tools! Some worked though). There is a common misconception that I am not the original author, but I am the original author of Instagram-Py. Why did people think I was fake? Because I deleted my old username which I worked and committed under on GitHub. Yes, if you analyze the git logs you will know that I am the author, but no one does that (because they don't know how to do that!), and there is a new account who is pretending to be me, which annoys me a lot. Okay, no more boring stories, time to break some stuff like a pro! Here is the new official GitHub project for instagram-py. Use this repo to support me (by starring it and watching it) and raise issues to get them fixed asap! (Do not use any other mirrors as they may have backdoors!) There is a video embedded at the end of this post; if you are a visual learner then see that to get started right away!
Thank You sh4dy for making a video out of my tool!

Step 1: Get a Linux Distro of Your Choice!

I recommend using Kali Linux. Why? Because it's a Linux distro which is dedicated to hacking stuff; I learned about Linux at the very start from Kali Linux (because of the urge to learn how to hack, and crazy CLI animations). So thank you Offensive Security for your awesome distros. If your default operating system is Windows then you can simply live boot Kali Linux to use it without installing it on your computer. To live boot, you need a flash drive which has at least 8GB of storage space (to run smoothly!). Follow the instructions below to create a bootable flash drive to live boot Kali Linux; if you are already on Linux then skip this step. (You can also google how to live boot Kali Linux from Windows...)

Create a Bootable Flash Drive on Windows!
- Download the official Kali Linux ISO image here; download the version that corresponds to your CPU architecture.
- Download Rufus from here; it will be used to make the bootable flash drive.
- Run Rufus. It should look something like this...
- In Rufus, select your flash drive as the device.
- Select MBR (if not, select the default) for the partition scheme and target system type.
- Give your new volume any label you like.
- Select ISO Image from the drop down list that is close to the disk icon.
- Now click the disk icon to browse for the Kali Linux ISO image file on your computer.
- After selecting the ISO image, click on the Start button to start the process!
- On finish, you should now have a bootable flash drive that can live boot Kali Linux.

Now safely remove the flash drive and shut down your PC. Plug in your flash drive and then turn on your PC again, and select Live Boot from the menu once you see the Kali Linux splash screen. Now you should have Kali Linux booted and ready to do some stuff.
Note: On some computers you must enable Boot from USB to use a bootable flash drive, so please make sure to enable it from your Boot Menu.

Step 2: Install Instagram-Py from the Python Package Index.

For this step I will be using the apt-get package manager because that's the default for Kali Linux, but you can do the same with all other package managers, like pacman and dnf. Open a new Terminal in Kali Linux and execute these commands. Now we have successfully installed Instagram-Py in Kali Linux; now it's time to configure it.

Step 3: Configuring Instagram-Py and the Tor Server.

First let's configure the Tor server. Tor is a project which aims for privacy; it completely hides your network traffic (not the data usage) from your ISP. It's not just a proxy because it can be used for anything, even for selling drugs (heard of the Dark Web? The Tor network is known as the Dark Web, and the Tor server helps us connect to it). Execute this command to edit the Tor configuration in Kali Linux. Search for the line which looks like this, #ControlPort 9051, and only delete the # from that line. It should look something like this... Let's start our Tor server with our new configuration. Now let's configure instagram-py; execute this command and everything will be done for you! Please note that the default configuration will only work if you use Tor's default configuration; if you use a custom configuration for Tor then please execute the above command without the --default-configuration argument and answer some simple questions. Now everything is set; the only thing left to do is to find a victim and start the attack.

Step 4: Executing the Attack!

Here is the usage for Instagram-Py from my console... For a simple attack with medium verbosity, execute this command in a terminal! That's it, you actually launched an authentic attack on an Instagram account using Instagram-Py!

Are You a Visual Learner? I saw this youtube video from a random guy, it's really awesome. Thank You sh4dy for making a video out of my tool!
Please go to this repo to know more about Instagram-Py! Thank you for your time, see ya!

50 Responses

Features of Instagram-Py!

As of Instagram-Py v2.0.6, Instagram-Py is bundled in a single package, which does not require you to install tor or python in your Linux distro! Only for Linux 64 bit. Craft your own Python script which will embed into Instagram-Py for maximum customization of your brute force attack; for example: what if you want a message sent to your phone when an account is hacked?

How to Get Instagram-Py with Zero Setup (Instagram-Py Portable)

You just need to execute a single command to get instagram-py; you don't even need python and tor. It works for any 64 bit Linux distro (Kali Linux, Ubuntu, Fedora or Arch Linux, etc...) HERE IS THE PROJECT PAGE -> You can only get Instagram-Py Portable at the project page!

My script started running, but it is testing the same password a minimum of 30 times. I got this >>> fatal error:: Connection to host failed , check your connection and tor configuration. This happens with the last update, but in the previous version (when it was still on the deathsec repository) I had no problem at all. I've set it up properly: created the default configuration, started the tor service, and opened the control port in the torrc file. Can you help? Sorry for my English..

Did you restart your Tor server every time you made changes to the torrc? You can always open an issue on GitHub here. Always report errors on GitHub; they will be solved asap. I'm planning to release an easy to use AppImage for the tool and tor to avoid these types of errors.

Hey, refer to this issue, it has been resolved; it might help you. If not, raise a new issue on the project page.

Hey, I really wanna learn how to hack someone's Instagram account. That's what he was saying this whole time. If your victim is unaware about anything you can do phishing, which is easy.
Step 1: The Script Is Working, but How Do I Bypass Instagram's Security?

Hello Anthony, as soon as I get the password cracked, Instagram opens this suspicious-login page. Is there any way I can bypass it?

As I said earlier, just stay cool and wait a bit until the original user verifies him/herself. Most of the time nobody will change the password. Even me. (But I don't use Instagram though!) Hacking is always like a magic trick: your whole plan can be destroyed by a single change of a bit. Hey, sorry for the late response; see the issue for more information.

Yeah, Instagram raises this suspicion if you use a VPN or the Tor network, or are geographically far from the real account holder. I always receive this when I try to log in on a fresh PC, but when I try it on my phone there is no suspicion page.

I managed to do everything, but instagram-py did not get the account password. What could I do?

Hey there, brute-force attacks have only a 50% chance to hack an account, so don't get your hopes up...

In hacking, brute force is the safest way to hack social media accounts, since it avoids interacting with the user. With the best wordlist you could crack any account...

The best wordlist is rockyou.txt. Or do you recommend another one?

root@kali:~# sudo pip3 install stem
Collecting stem
Downloading stem-1.6.0.tar.gz (2.0MB)
Building wheels for collected packages: stem
Running setup.py bdist_wheel for stem ...
done
Stored in directory: /root/.cache/pip/wheels/a8/14/32/c7d3560a2ccf8ddb5e5be92b4c11015712d88299b13c19968e
Successfully built stem
Installing collected packages: stem
Successfully installed stem-1.6.0
root@kali:~# sudo pip3 install instagram-py
Collecting instagram-py
Downloading instagram-py-2.0.4.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-build-UyeWMG/instagram-py/setup.py", line 13, in <module>
    from InstagramPy import __version__
  File "InstagramPy/__init__.py", line 10, in <module>
    from .InstagramPyCLI import InstagramPyCLI
  File "InstagramPy/InstagramPyCLI.py", line 133
    end='')
SyntaxError: invalid syntax
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-UyeWMG/instagram-py/
root@kali:~#

I am having this problem.

I think you need to have python3-setuptools. Make sure you are running the script with Python v3.6; Instagram-Py only runs with Python 3.6 or newer.

I downloaded Python v3.6 but it's still using v3.7. How can I execute it with Python v3.6?

instagram-py needs >= 3.6; Python 3.7 is fine.

Does the first step "apt-get update && apt-get upgrade" seriously need 950 MB?

Hi Antony, everything went smoothly up to the last step, then it says fatal error: can't find the password list. Do we have to use anything else in place of password_list.txt? Can you please help me and rewrite the last line of the program for me? Please? Also, can you please tell me what I should do if I want to use some other wordlist document? Like, if you release an updated document, what changes should be made to the program in the future? Thanks in advance. I really need help. Please reply ASAP.
Here is the same. Also, the line sudo easy_install3 -U pip is not working.

Hey Antony, I think Instagram has blocked my IP address even when changing my IP address through Tor. Please check the verbose output I am getting below:
b'{"message": "Sorry, there was a problem with your request.", "status": "fail", "errortype": "sentryblock"}'

Normally that happens if the server is part of an IP network/host that has had serious spammers on it, and Instagram has blacklisted its network/IP. How many tries did you make? In the Tor network we have about 300 IPs; with each IP we could check about 10 times, and we can check again with the same IP if we wait a minute or two (the wait increases exponentially). So we have at least 30,000 tries, and by the time we reach the last server IP we have another 30,000 tries. The maximum time taken to try 10 passwords with one IP is about 1 minute (6 seconds x 10). Time taken to finish all 300 servers = 1 x 300 = 300 minutes = 5 hours. Thus every IP has a waiting time of 5 hours. The moment Instagram blocks us into waiting 5 hours is where the attack fails! Thus the maximum number of tries before we have to wait another 5 hours is 196,608,000, which is roughly 196 million. It's working as I promised. But I really did not think about the blacklisted IPs, since Tor changes its IP frequently.

Lol, that's sick haha. +1.

The tool is working really well, bro (September 6, 2020), but the only problem is Instagram's suspicious-login page afterwards (which is not much of a problem). Anyway, the tool is good.

Hey everyone! Download the new Instagram-Py Portable. All you need to do is execute a single command to get Instagram-Py; you don't even need Python or a Tor server!

I'm getting a fatal error:: password list not found at /home/ubuntu/Developer/.exploits/facebook-phished.txt. Can you help please??

OK so... I don't think that will work, since Instagram has captcha verification. It would work if it were an offline attack, but since it is an online one, I am not so sure about it.

Hello Anthony, I want to ask you something.
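The rate-limit arithmetic in the long reply above can be sanity-checked in a few lines. Note that the figures used (roughly 300 Tor exit IPs, about 10 attempts per IP, about 6 seconds per attempt) are that commenter's rough assumptions, not measured values:

```python
# Back-of-the-envelope check of the commenter's rate-limit figures
# (all numbers are that comment's rough assumptions, not measurements).

ips = 300              # approximate pool of Tor exit IPs
tries_per_ip = 10      # attempts per IP before a cooldown kicks in
seconds_per_try = 6

tries_per_cycle = ips * tries_per_ip                  # attempts per full pass
minutes_per_ip = tries_per_ip * seconds_per_try / 60  # time spent on one IP
cycle_minutes = ips * minutes_per_ip                  # one pass over every IP

print(tries_per_cycle)     # 3000 attempts per cycle
print(cycle_minutes / 60)  # 5.0 hours per cycle
```

So one full pass over the IP pool yields 3,000 attempts in about 5 hours, which is where the 5-hour figure in the comment comes from; the larger totals quoted there are extrapolations over many such cycles.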
Firstly, I'm not a hacker at all, but I understand this. If a brute-force attack only has a 50% chance to crack the password, based on its wordlist, then a unique password will never have a chance of being cracked, right? Here's a simpler thought: if I'm trying to hack the Instagram account of people who don't even speak English well, let's say the password is something like bangsattelolet (which is Indonesian). Even though we have enough letters to cover the password (I mean, we have a-z in the list), that doesn't mean we can crack it, right? Is there any explanation around this topic you could give the readers in the first sentence, so they don't even have to read to the end? Thanks; by the way, I appreciate your work.

Please reply: how do I rectify the fatal error:: password list not found at /home/david/Pentest/wordlists/wordlist.txt (on Ubuntu)? Can you help me? And I'm sorry for my English.

Hi, how would I do this if my account is inactive? Somebody took it and isn't using it, so can this method still work?

Hi Antony, I was able to follow all the steps with good results. However, in my case one password is attempted every second, which I think is very slow. How can I speed up the process and try, let's say, 1000 passwords per second? Is that even possible? Thanks.

Ok, listen... I installed Kali with Rufus and had my USB ready, and so I rebooted my PC, but in the boot menu my device did not allow me to boot from the USB. The reason was that it was blocked by my device's security protocol... How do I disable it, in order to boot from the USB?

I actually know the answer to this one. You have to turn Secure Boot off in your boot settings, while staying in UEFI mode. You can also bump the USB drive up in the boot order, so that if there's a USB drive it will boot there, and if not it will boot to your normal MBR. To get to your boot settings, boot your computer and, when the logo comes up, press F2 and it should come up.
If not, just google how to get to the boot settings on your particular model. In fact, Google is your best go-to for any of these answers, because whatever you want to ask has been asked by someone before. Also, if you want to hack anything, ever, you're going to have to learn to search for and effectively gather information.

Hey Antony, I am not a hacker, so is there any way I can learn all of the above from the start? Please reply ASAP.

Amazing! It works perfectly. I've tried this on my own account and it's cool!

Can you help me access my temporarily disabled account if I have forgotten my password?

How do I fix the problem with the password list?

Can you still hack any Instagram account? How much time did it take?

Can we use this in Termux, sir?

The script says that the password list was not found at path/to/password_list.txt.

Hey, I'm getting this error: easy_install3 command is not found. Please, can anyone help me?

password_list.txt not found, please help.

You need to use your own password list.

Dear, can you please make a video of this whole process?

I have this type of error: fatal error:: password list not found at /path/to/password_list.txt

This worked for me. I also saw someone explain it better in a post titled How to Create a Fake Facebook Account, and it also worked for me. Thanks for this post.
https://null-byte.wonderhowto.com/forum/only-authentic-way-hack-instagram-0182776/
"Hi everybody! Hi dr. Nick!"

I'm really not a fan of JFileChooser - it's as anti-Mac-looking as possible. So I messed around with FileDialog. I gave up on that because it's ancient compared to Swing and seems to have been abandoned by the Coffee Gods. I shoveled my way even deeper down by trying to figure out how to use a Finder/Explorer window instead of the given Java options (FileDialog, JFileChooser). I found this neat thread at JavaRanch: Open a folder in finder in Java (Mac OS forum at JavaRanch).

"Houston, we have a problem", though. I'm too dimwitted to understand the sample code given by "Andrew Monkhouse". I thank him sincerely for lending the internet a hand, but apparently he doesn't explain his solution. I'm talking about this part:

Code :

    public class ColinMcTaggart {

        private static final String[] prefix = {
            "osascript",
            "-e", "tell application \"Finder\"",
            "-e", "activate",
            "-e", "<openCmdGoesHere>",
            "-e", "end tell"
        };

        public static void main(String[] args) throws Exception {
            prefix[6] = buildFolderCommand(System.getProperty("user.dir"));
            Runtime.getRuntime().exec(prefix).waitFor();
        }

        private static String buildFolderCommand(String folderPath) {
            StringBuilder openCommand = new StringBuilder("open ");
            String[] pathParts = folderPath.split("/");
            for (int i = pathParts.length - 1; i > 0; i--) {
                openCommand.append("folder \"").append(pathParts[i]).append("\" of ");
            }
            return openCommand.append("startup disk").toString();
        }
    }

I ran the code and it worked perfectly. I just don't understand it. The first part was some osascript that opened the Finder. The rest doesn't make sense to me. Could you decode this piece of information for me and future researchers? Also, is it possible to interact with the Finder window? Can you set filters, get selected files, etc.? It's probably possible with osascript. What do you think? If it's not possible, I'd rather create my very own file dialog, which I think is the best choice of all the above anyway: hardest, but most rewarding.
Got any tips on that? Thanks!
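To decode what the snippet is doing: the prefix array is simply the argument vector for the command line osascript -e 'tell application "Finder"' -e 'activate' -e '<open ...>' -e 'end tell', where each -e supplies one line of an AppleScript program. buildFolderCommand then fills in the <openCmdGoesHere> slot (index 6 of the array) by translating a POSIX path into Finder's object syntax, which names a folder from the inside out ("folder X of folder Y of ... of startup disk"); that is why the loop walks the path parts backwards. Here is a small sketch of the same string-building logic (illustrative only; the function name is mine, and absolute paths are assumed):

```python
def build_folder_command(folder_path):
    """Mirror buildFolderCommand() from the Java snippet: turn an absolute
    POSIX path into Finder's inside-out folder reference."""
    parts = [p for p in folder_path.split("/") if p]  # drop the empty lead piece
    command = "open "
    for part in reversed(parts):                      # deepest folder first
        command += 'folder "%s" of ' % part
    return command + "startup disk"

print(build_folder_command("/Users/alice/Documents"))
# open folder "Documents" of folder "alice" of folder "Users" of startup disk
```

As for interacting with the window: Finder's AppleScript dictionary does expose things like the current selection, so richer scripting is plausible, but it offers nothing like JFileChooser's file filters, so a custom dialog may indeed be the better long-term route.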
http://www.javaprogrammingforums.com/%20java-theory-questions/30500-decode-sample-code-printingthethread.html
matherr - Man Page

SVID math library exception handling

Synopsis

    #include <math.h>

    int matherr(struct exception *exc);

    extern _LIB_VERSION_TYPE _LIB_VERSION;

Link with -lm.

Description

Note: the mechanism described in this page is no longer supported by glibc. Before glibc 2.27, it had been marked as obsolete. Since glibc 2.27, the mechanism has been removed altogether. New applications should use the techniques described in math_error(7) and fenv(3). This page documents the matherr() mechanism as an aid for maintaining and porting older applications.

The System V Interface Definition (SVID) specifies that various math functions should invoke a function called matherr() if a math exception is detected. This function is called before the math function returns; after matherr() returns, the system then returns to the math function, which in turn returns to the caller.

To employ matherr(), the programmer must define the _SVID_SOURCE feature test macro (before including any header files), and assign the value _SVID_ to the external variable _LIB_VERSION.

The system provides a default version of matherr(). This version does nothing, and returns zero (see below for the significance of this). The default matherr() can be overridden by a programmer-defined version, which will be invoked when an exception occurs. The function is invoked with one argument, a pointer to an exception structure, defined as follows:

    struct exception {
        int    type;    /* Exception type */
        char  *name;    /* Name of function causing exception */
        double arg1;    /* 1st argument to function */
        double arg2;    /* 2nd argument to function */
        double retval;  /* Function return value */
    }

The type field has one of the following values:

- DOMAIN A domain error occurred (the function argument was outside the range for which the function is defined). The return value depends on the function; errno is set to EDOM.

- SING A pole error occurred (the function result is an infinity).
The return value in most cases is HUGE (the largest single precision floating-point number), appropriately signed. In most cases, errno is set to EDOM.

- OVERFLOW An overflow occurred. In most cases, the value HUGE is returned, and errno is set to ERANGE.

- UNDERFLOW An underflow occurred. 0.0 is returned, and errno is set to ERANGE.

- TLOSS Total loss of significance. 0.0 is returned, and errno is set to ERANGE.

- PLOSS Partial loss of significance. This value is unused on glibc (and many other systems).

The arg1 and arg2 fields are the arguments supplied to the function (arg2 is undefined for functions that take only one argument). The retval field specifies the return value that the math function will return to its caller. The programmer-defined matherr() can modify this field to change the return value of the math function.

If the matherr() function returns zero, then the system sets errno as described above, and may print an error message on standard error (see below). If the matherr() function returns a nonzero value, then the system does not set errno, and doesn't print an error message.

Math functions that employ matherr()

The table below lists the functions and circumstances in which matherr() is called. The "Type" column indicates the value assigned to exc->type when calling matherr(). The "Result" column is the default return value assigned to exc->retval. The "Msg?" and "errno" columns describe the default behavior if matherr() returns zero. If the "Msg?" column contains "y", then the system prints an error message on standard error. The table uses the following notations and abbreviations:

Attributes

For an explanation of the terms used in this section, see attributes(7).

Examples

The example program demonstrates the use of matherr() when calling log(3). The program takes up to three command-line arguments. The first argument is the floating-point number to be given to log(3).
If the optional second argument is provided, then _LIB_VERSION is set to _SVID_ so that matherr() is called, and the integer supplied in the command-line argument is used as the return value from matherr(). If the optional third command-line argument is supplied, then it specifies an alternative return value that matherr() should assign as the return value of the math function.

The following example run, where log(3) is given an argument of 0.0, does not use matherr():

    $ ./a.out 0.0
    errno: Numerical result out of range
    x=-inf

In the following run, matherr() is called, and returns 0:

    $ ./a.out 0.0 0
    matherr SING exception in log() function
        args:   0.000000, 0.000000
        retval: -340282346638528859811704183484516925440.000000
    log: SING error
    errno: Numerical argument out of domain
    x=-340282346638528859811704183484516925440.000000

The message "log: SING error" was printed by the C library.

In the following run, matherr() is called, and returns a nonzero value:

    $ ./a.out 0.0 1
    matherr SING exception in log() function
        args:   0.000000, 0.000000
        retval: -340282346638528859811704183484516925440.000000
    x=-340282346638528859811704183484516925440.000000

In this case, the C library did not print a message, and errno was not set.

In the following run, matherr() is called, changes the return value of the math function, and returns a nonzero value:

    $ ./a.out 0.0 1 12345.0
    matherr SING exception in log() function
        args:   0.000000, 0.000000
        retval: -340282346638528859811704183484516925440.000000
    x=12345.000000

Program source

    #define _SVID_SOURCE
    #include <errno.h>
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int matherr_ret = 0;    /* Value that matherr() should return */
    static int change_retval = 0;  /* Should matherr() change function's
                                      return value? */
    static double new_retval;      /* New function return value */

    int
    matherr(struct exception *exc)
    {
        fprintf(stderr, "matherr %s exception in %s() function\n",
                (exc->type == DOMAIN) ?    "DOMAIN" :
                (exc->type == OVERFLOW) ?  "OVERFLOW" :
                (exc->type == UNDERFLOW) ? "UNDERFLOW" :
                (exc->type == SING) ?      "SING" :
                (exc->type == TLOSS) ?     "TLOSS" :
                (exc->type == PLOSS) ?     "PLOSS" : "???",
                exc->name);
        fprintf(stderr, "    args:   %f, %f\n", exc->arg1, exc->arg2);
        fprintf(stderr, "    retval: %f\n", exc->retval);

        if (change_retval)
            exc->retval = new_retval;

        return matherr_ret;
    }

    int
    main(int argc, char *argv[])
    {
        double x;

        if (argc < 2) {
            fprintf(stderr, "Usage: %s <argval>"
                    " [<matherr-ret> [<new-func-retval>]]\n", argv[0]);
            exit(EXIT_FAILURE);
        }

        if (argc > 2) {
            _LIB_VERSION = _SVID_;
            matherr_ret = atoi(argv[2]);
        }

        if (argc > 3) {
            change_retval = 1;
            new_retval = atof(argv[3]);
        }

        x = log(atof(argv[1]));
        if (errno != 0)
            perror("errno");

        printf("x=%f\n", x);
        exit(EXIT_SUCCESS);
    }

See Also

fenv(3), math_error(7), standards(7)

Colophon

This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.

Referenced By

math_error(7).
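For comparison only (this is not part of the man page, and Python has no matherr() hook): the same SVID exception categories surface in Python's math module as exceptions, with a pole/domain error raising ValueError and an overflow raising OverflowError.

```python
import math

# log(0) is a pole error (SING in SVID terms); Python raises ValueError.
try:
    math.log(0.0)
except ValueError as err:
    print("pole/domain error:", err)

# exp() of a huge argument overflows (OVERFLOW in SVID terms);
# Python raises OverflowError.
try:
    math.exp(1e6)
except OverflowError as err:
    print("overflow:", err)
```

This mirrors the modern glibc advice above: rather than hooking a global callback, check or catch errors at the call site.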
https://www.mankier.com/3/matherr
On Wed, Jul 11, 2007 at 11:14:20AM +0100, Tony Finch wrote:
> registers or on the stack) and a case analysis branch, but a normal
> function return (predictable by the CPU) is replaced by a less-predictable
> indirect jump. Does anyone have references to a paper that discusses an

GHC does not use calls and returns. To quote a recent paper on this subject:

7.1 Using call/return instructions

As we mentioned in Section 2, GHC generates code that manages the Haskell stack entirely separately from the system-supported C stack. As a result, a case expression must explicitly push a return address, or continuation, onto the Haskell stack; and the "return" takes the form of an indirect jump to this address. There is a lost opportunity here, because every processor has built-in CALL and RET instructions that help the branch-prediction hardware make good predictions: a RET instruction conveys much more information than an arbitrary indirect jump. Nevertheless, for several tiresome reasons, GHC cannot readily make use of these instructions:

* The Haskell stack is allocated in the heap. GHC generates code to check for stack overflow, and relocates the stack if necessary. In this way GHC can support zillions of little stacks (one per thread), each of which may be only a few hundred bytes long. However, operating systems typically take signals on the user stack, and do no limit checking. It is often possible to arrange that signals are executed on a separate stack, however.

* The code for a case continuation is normally preceded by an info table that describes its stack frame layout. This arrangement is convenient because the stack frame looks just like a heap closure, which we described in Section 2. The garbage collector can now use the info table to distinguish the pointers from non-pointers in the stack frame closure. This changes if the scrutinee is evaluated using a CALL instruction: when the called procedure is done, it RETurns to the instruction right after the call.
This means that the info table can no longer be placed before a continuation. Thus the possible benefits of a CALL/RET scheme must outweigh the performance penalty of abandoning the current (efficient) info table layout.

Stefan
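The scheme the quoted paper describes can be caricatured in a few lines. This is an illustrative toy model, not GHC's actual code generation: a case expression pushes its continuation onto an explicit stack, the scrutinee is evaluated, and the "return" is an indirect jump through whatever sits on top of that stack rather than a native RET.

```python
# Toy model of an explicit continuation stack (illustrative; not real GHC output).

def evaluate(scrutinee_thunk, stack):
    """Evaluate a thunk, then 'return' by invoking the continuation on top
    of the explicit stack: an indirect jump, not a CPU RET."""
    value = scrutinee_thunk()
    continuation = stack.pop()        # the explicitly pushed return address
    return continuation(value, stack)

def case_double_or_negate(x, stack):
    # The case expression pushes a continuation holding its alternatives...
    def continuation(value, stack):
        return value * 2 if value >= 0 else -value
    stack.append(continuation)
    # ...then jumps off to evaluate the scrutinee (x - 10 stands in for a thunk).
    return evaluate(lambda: x - 10, stack)

print(case_double_or_negate(25, []))  # scrutinee 15 is non-negative: doubled to 30
print(case_double_or_negate(3, []))   # scrutinee -7 is negative: negated to 7
```

Because the jump target comes off a data stack rather than the machine's return-address stack, the hardware return predictor gets no help, which is exactly the lost opportunity the paper laments.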
http://www.haskell.org/pipermail/haskell-cafe/2007-July/028435.html